The whole Optimization Server can be deployed from a simple docker-compose file.
Make sure you have already installed:
You should already know docker and docker-compose basics.
You must have access to both the Docker Hub registry and the DecisionBrain registry.
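Registry access can be verified up front. A minimal sketch, using the registry hostnames from the sample .env file below (adapt the credentials to your own account):

```
docker login dbos-registry.decisionbrain.cloud
docker login cplex-registry.decisionbrain.cloud
```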
The docker-compose.yml file must contain at least:
You can optionally add a web console to monitor and execute jobs, and a local documentation service.
Ready-to-use docker-compose files can be downloaded here: Each of these files can be written or modified manually, and a step-by-step procedure is given below.
Create a .env file. While this file is not mandatory, it is a convenient place to centralize and tune the image versions used. Here is sample content:
DECISIONBRAIN_REGISTRY=dbos-registry.decisionbrain.cloud
DECISIONBRAIN_CPLEX_REGISTRY=cplex-registry.decisionbrain.cloud/dbos
DBOS_VERSION=3.2.2
DBOS_RABBITMQ_VERSION=3.8.12
DBOS_MONGO_VERSION=4.2.1
DBOS_KEYCLOAK_VERSION=8.0.1
AUTH_PROFILE=basicAuth
KEYCLOAK_AUTH_SERVER_URL=http://localhost:8081/
CLIENT_URL=http://localhost:4200
MASTER_JWT_KEY=ohxeiTiv2vaiGeichoceiChee9siweiL
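Once the .env file is in place, docker-compose picks it up automatically from the current directory and substitutes the `${...}` references in docker-compose.yml. A quick way to check the substitution (a standard docker-compose command, shown as a sketch):

```
# Print the effective configuration with all .env variables resolved
docker-compose config
```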
This file is the main description of the different docker containers and their architecture.
Here is a sample docker-compose.yml file:
version: '2.4'
services:
  mongo:
    image: optim-server/mongo-non-root:${DBOS_MONGO_VERSION}
    build:
      context: './mongo'
      args:
        DBOS_MONGO_VERSION: $DBOS_MONGO_VERSION
    restart: always
    ports:
      - 27017:27017
    networks:
      - optimserver # This internal docker network can be configured at the end of the docker-compose.yml file.
    volumes:
      - mongovolume:/data/db # Use volumes if you want your data to be persisted.
      - mongovolume:/logs
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
      - MONGODB_USER=optimserver
      - MONGODB_PASSWORD=optimserver
      - MONGODB_DATABASE=optimserver-master-db
  rabbitmq:
    image: optim-server/rabbitmq-stomp:${DBOS_RABBITMQ_VERSION}
    build:
      context: './rabbitmq'
      args:
        DBOS_RABBITMQ_VERSION: $DBOS_RABBITMQ_VERSION
    restart: always
    ports:
      - 5671:5671
      - 5672:5672 # AMQP port
      - 15671:15671
      - 15672:15672 # management web console
      - 61613:61613 # STOMP port
    networks:
      - optimserver
    environment:
      - RABBITMQ_DEFAULT_USER=decisionbrain # Adapt the credentials to your needs
      - RABBITMQ_DEFAULT_PASS=decisionbrain
  master:
    image: ${DECISIONBRAIN_REGISTRY}/dbos-master:${DBOS_VERSION} # The Optimization Server master application
    ports:
      - 8080:8080
    networks:
      - optimserver
    volumes:
      - ./data/logs:/logs
    environment:
      - SPRING_DATA_MONGODB_HOST=mongo # The different parameters are explained in a later step.
      - SPRING_DATA_MONGODB_USERNAME=optimserver
      - SPRING_DATA_MONGODB_PASSWORD=optimserver
      - SPRING_RABBITMQ_HOST=rabbitmq
      - JAVA_OPTS=-Xmx500m -Xms10m
      - KEYCLOAK_AUTHSERVERURL=${KEYCLOAK_AUTH_SERVER_URL}auth/
      - SPRING_PROFILES_ACTIVE=${AUTH_PROFILE}
      - OPTIMSERVER_JWTKEY=${MASTER_JWT_KEY}
    cpu_percent: 75
    mem_limit: 500M
    links:
      - rabbitmq
      - mongo
  web-console: # The web console client is optional.
    image: ${DECISIONBRAIN_REGISTRY}/dbos-web-ui-dashboard:${DBOS_VERSION}
    user: "1000:0"
    ports:
      - 80:8080
    networks:
      - optimserver
    volumes:
      - ./data/logs:/logs
    environment:
      - OPTIMSERVER_MASTER_URL=http://master:8080/ # The URL refers to the master service
      - KEYCLOAK_URL=${KEYCLOAK_AUTH_SERVER_URL} # This should be the public URL of the Keycloak instance
      - OPTIMSERVER_MASTER_DOC_URL=http://documentation:8080/ # The documentation URL
      - SPRING_PROFILES_ACTIVE=${AUTH_PROFILE}
    links:
      - master
  documentation: # The documentation is optional.
    image: ${DECISIONBRAIN_REGISTRY}/dbos-documentation:${DBOS_VERSION}
    user: "1000:0"
    ports:
      - 1313:8080
    networks:
      - optimserver
volumes:
  mongovolume:
networks:
  optimserver:
    name: "optimserver"
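The Keycloak and worker compose files shown below declare the optimserver network as external, so it must already exist. Starting this main stack creates it; alternatively, it can be pre-created by hand (a standard docker command, shown as a sketch):

```
# Create the shared network manually if the main stack is not up yet
docker network create optimserver
```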
The Optimization Server Keycloak image is based on the jboss/keycloak image and contains:
You can use this image and configure it as you wish via the admin interface, or use any Keycloak system you may already have. Here is a sample docker-compose file for Keycloak:
version: '2.4'
services:
  keycloak:
    image: optim-server/dbos-keycloak:${DBOS_VERSION}
    build:
      context: './keycloak'
      args:
        DBOS_KEYCLOAK_VERSION: $DBOS_KEYCLOAK_VERSION
    networks:
      - optimserver
    restart: always
    environment:
      - DB_VENDOR=H2
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - KEYCLOAK_IMPORT=/tmp/realm.json
      - KEYCLOAK_FRONTEND_URL=${KEYCLOAK_AUTH_SERVER_URL}auth # Enables backend-service access to Keycloak; this sets forceBackendUrlToFrontendUrl to false
      - CLIENT_URL=${CLIENT_URL}
    ports:
      - 8081:8080
networks:
  optimserver:
    external: true
    name: "optimserver"
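Assuming the service above is saved as docker-compose-keycloak.yml (the filename here is only an illustration), Keycloak can then be started alongside the main stack with:

```
docker-compose -f docker-compose-keycloak.yml up -d
```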
Moreover, to use Keycloak, the .env file given above has to be modified so that:
If you do not want to use a Keycloak image, you can deploy the master with a profile that grants access through simple basic authentication.
To activate this mode, just pass the following environment variables to the master image:
This is the default mode in the provided .env and docker-compose files, to ease simple deployments. In production, however, Keycloak is the recommended authentication mode.
You can now start the whole Optimization Server environment with the following command line:
docker-compose up -d
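To check that the containers came up correctly, the usual docker-compose commands apply, for instance:

```
# List container states, then follow the master logs
docker-compose ps
docker-compose logs -f master
```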
Optimization Server capabilities depend on which workers are deployed and on the tasks those workers can perform. To start the pre-packaged CPLEX / CP Optimizer / OPL workers, use the following sample docker-compose-workers.yml file:
version: '2.4'
services:
  cplex-cpo-worker:
    image: ${DECISIONBRAIN_CPLEX_REGISTRY}/dbos-cplex-cpo-worker:${DBOS_VERSION}
    expose:
      - 8080
    networks:
      - optimserver
    volumes:
      - ./data/logs:/logs
    environment:
      - SPRING_RABBITMQ_HOST=rabbitmq
      - SPRING_RABBITMQ_USERNAME=decisionbrain
      - SPRING_RABBITMQ_PASSWORD=decisionbrain
      - JAVA_OPTS=-Xmx4000m -Xms500m -XX:+CrashOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/logs/cplex-cpo-worker-heap-dump.hprof
      - MASTER_JWTKEY=${MASTER_JWT_KEY}
      - MASTER_URL=http://master:8080/
    cpu_percent: 75
    mem_limit: 4256M
    healthcheck: # This health check ensures a RabbitMQ instance is already reachable
      test: ["CMD", "curl", "-f", "http://rabbitmq:5672"]
      interval: 1m30s
      timeout: 10s
      retries: 3
  opl-worker:
    image: ${DECISIONBRAIN_CPLEX_REGISTRY}/dbos-opl-worker:${DBOS_VERSION}
    expose:
      - 8080
    networks:
      - optimserver
    volumes:
      - ./data/logs:/logs
    environment:
      - SPRING_RABBITMQ_HOST=rabbitmq
      - SPRING_RABBITMQ_USERNAME=decisionbrain
      - SPRING_RABBITMQ_PASSWORD=decisionbrain
      - JAVA_OPTS=-Xmx4000m -Xms500m -XX:+CrashOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/logs/opl-worker-heap-dump.hprof
      - MASTER_JWTKEY=${MASTER_JWT_KEY}
      - MASTER_URL=http://master:8080/
    cpu_percent: 75
    mem_limit: 4256M
    healthcheck: # This health check ensures a RabbitMQ instance is already reachable
      test: ["CMD", "curl", "-f", "http://rabbitmq:5672"]
      interval: 1m30s
      timeout: 10s
      retries: 3
networks:
  optimserver:
    external: true
    name: "optimserver"
One container for each pre-packaged worker can be started, using the above docker-compose file, with:
docker-compose -f docker-compose-workers.yml up -d
The workers can be scaled with a simple docker-compose parameter:
docker-compose -f docker-compose-workers.yml up -d --scale cplex-cpo-worker=3
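After scaling, the replicas can be listed with the same compose file, for instance:

```
# Three cplex-cpo-worker containers should now be listed
docker-compose -f docker-compose-workers.yml ps
```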
The Optimization Server master application is a Spring Boot application: you can use the standard Spring Boot parameters as environment variables in the docker container. Java workers (pre-packaged or custom ones) are also Spring Boot applications, so the same parameters apply.
Please note that a parameter is written in upper case, with '_' characters instead of '.' (for example, spring.rabbitmq.host becomes SPRING_RABBITMQ_HOST in docker).
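This naming rule can be sketched as a small helper (`to_docker_env` is a hypothetical function used only for illustration, not part of Optimization Server):

```python
def to_docker_env(prop: str) -> str:
    """Convert a Spring property name to its docker environment form:
    upper case, with '.' replaced by '_'."""
    return prop.upper().replace(".", "_")

# Examples of the mapping described above:
print(to_docker_env("spring.rabbitmq.host"))      # SPRING_RABBITMQ_HOST
print(to_docker_env("spring.data.mongodb.port"))  # SPRING_DATA_MONGODB_PORT
```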
This is a list of commonly useful parameters:
Parameter name | Description | Default |
---|---|---|
SPRING_RABBITMQ_USERNAME | RabbitMQ user name. | decisionbrain |
SPRING_RABBITMQ_PASSWORD | RabbitMQ password. | decisionbrain |
SPRING_RABBITMQ_HOST | RabbitMQ host: this should be a valid service name in the docker-compose network. | localhost |
SPRING_RABBITMQ_PORT | RabbitMQ port. | 5672 |
SPRING_RABBITMQ_WSPORT | RabbitMQ STOMP port, used for websockets. | 61613 |
SPRING_RABBITMQ_MANAGEMENTPORT | RabbitMQ management console port. | 15672 |
SPRING_DATA_MONGODB_HOST | MongoDB host (this should be a valid service name in the docker-compose network). | localhost |
SPRING_DATA_MONGODB_PORT | MongoDB port. | 27017 |
SPRING_DATA_MONGODB_DATABASE | MongoDB database. | optimserver-master-db |
LOGGING_LEVEL_COM_DECISIONBRAIN | Global logging level (WARN, INFO, ERROR, DEBUG). | INFO |
JAVA_OPTS | JVM options (example: -Xmx500m -Xms10m). | |
OPTIMSERVER_MONGO_ENTITIES_TTL | The Mongo entities' time to live (in seconds). | -1 |
OPTIMSERVER_STARTJOBEXECUTION_DEFAULTASYNCJOBTTL | Used when a job is created without any timeout value: a job is abandoned if no worker can resolve it within this delay (in seconds). | 3600 * 24 * 7 |
OPTIMSERVER_JWTKEY | The secret key used to sign JWT tokens (a 32-character string). | ohxeiTiv2vaiGeichoceiChee9siweiL |
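For example, some of these parameters could be overridden on the master service like so (the values here are purely illustrative):

```yaml
master:
  environment:
    - SPRING_DATA_MONGODB_HOST=mongo
    - SPRING_DATA_MONGODB_PORT=27017
    - LOGGING_LEVEL_COM_DECISIONBRAIN=DEBUG # more verbose logging
    - OPTIMSERVER_MONGO_ENTITIES_TTL=86400  # expire Mongo entities after one day
```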
The containers can also be tuned with the CPU and memory parameters (see the full docker-compose documentation).
For example, the following parameters limit CPU usage and set the maximum available memory:
cpu_percent: 75
mem_limit: 500M
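As a rule of thumb, when mem_limit is combined with JAVA_OPTS, the JVM heap (-Xmx) should be set somewhat below the container limit to leave headroom for non-heap JVM memory. A sketch with illustrative values:

```yaml
environment:
  - JAVA_OPTS=-Xmx500m -Xms10m # heap capped below the container limit
cpu_percent: 75
mem_limit: 768M # headroom above -Xmx for JVM non-heap memory
```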