The situation is as follows: I have a Spring Boot application that runs in a Docker swarm but fails to start because it is not properly configured (a property is missing). Docker swarm keeps trying to restart the container, and every attempt fails because of the missing property. These restarts make no sense, because Docker will never be able to start the application until I fix the missing property, so the swarm ends up in an endless restart loop.
Regarding this problem I have already read the Docker documentation (https://docs.docker.com/config/containers/start-containers-automatically/) and several Stack Overflow posts (https://stackoverflow.com/search?q=Docker+restart).
My "setup":
The Dockerfile:
ARG nexus_docker_registry=mynexus.com:10099
FROM ${nexus_docker_registry}/openjdk:8-jdk-alpine
ADD myjar.jar myjar.jar
ENV JAVA_OPTS=""
ENTRYPOINT [ "java", "-jar", "/myjar.jar" ]
My YAML file (an Ansible playbook) that creates the Docker service:
---
- hosts: docker_manager
  become: false
  vars:
    servicename: 'myservice'
    imageurl: "mynexus.com:10099/myjar:{{version}}"
    extraoptions:
      - "--with-registry-auth"
      - "--detach=true"
      - "--log-driver gelf"
      - "--log-opt 'gelf-address=udp://{{ groups['logstash'][0] }}:10001'"
      - "--hostname 'myhost.com'"
      - "--mount 'type=bind,source=/etc/localtime,destination=/etc/localtime:ro'"
      - "--mount 'type=volume,source=mykeys,destination=/mykeys'"
      - "--env 'spring.profiles.active=docker'"
      - "--publish 8000:6666"
  tasks:
    - name: Include vault
      include_vars: "myvault.yml"
    - name: "delete service '{{ servicename }}'"
      command: sudo docker service rm "{{ servicename }}"
      args:
        warn: false
      ignore_errors: true
      run_once: true
    - name: "create service {{ servicename }}"
      command: sudo docker service create {{ extraoptions | join(' ') }} --name "{{ servicename }}" "{{ imageurl }}"
      args:
        warn: false
      run_once: true
What I want to achieve:
If the Spring Boot application is not able to start because of, for example, a BeanCreationException or something similar, then I don't want the Docker service to restart endlessly.
But if I restart the swarm or the daemon, the Docker service should come back up automatically.
The Docker documentation says:
If you manually stop a container, its restart policy is ignored until the Docker daemon restarts or the container is manually restarted. This is another attempt to prevent a restart loop.
So I guess that what I want to achieve is not possible with a restart policy alone.
Questions:
Can I perhaps write something into my Dockerfile to achieve my goals?
Or am I totally wrong here and misinterpreting the documentation?
I am unfortunately not a Docker expert and am still learning to handle 'the swarm'.
There are 4 different restart policies in Docker:
no - Do not automatically restart the container (the default).
on-failure - Restart the container if it exits due to an error, which manifests as a non-zero exit code.
always - Always restart the container if it stops.
unless-stopped - Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted, even after the Docker daemon restarts.
There is no way for Docker to "detect" the type of error inside an application and decide whether to restart based on it.
One way to achieve this is to run supervisord within your container and let it handle restarts based on a list of exit codes that you define. But this means Docker will only restart the container when supervisord itself crashes, not when your application does, and you'll have to change your code to return different exit codes for the errors that should trigger a restart and the ones that shouldn't.
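As a rough illustration, a supervisord.conf along these lines could express that idea (a sketch only; the program name and the exit code 3 are placeholders, and autorestart=unexpected restarts the program only when it exits with a code not listed in exitcodes):

[supervisord]
nodaemon=true

[program:myjar]
command=java -jar /myjar.jar
; treat 0 and the placeholder code 3 as "expected" exits: no restart;
; any other exit code makes supervisord restart the program
autorestart=unexpected
exitcodes=0,3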
Because what I wanted to achieve does not seem possible, I read the documentation again (https://docs.docker.com/engine/reference/commandline/service_create/) and found the option --restart-max-attempts, which solves my problem with the endless loop.
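A sketch of what that could look like in my playbook's extraoptions (the attempt count of 3 is an arbitrary choice):

extraoptions:
  - "--restart-condition on-failure"
  - "--restart-max-attempts 3"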
You may want to try creating a Docker stack based on a docker-compose file.
In this scenario, as the Compose v3 documentation indicates, you have full control over the service restart policy.
The next example won't allow any restart:
version: "3.9"
services:
python:
image: my_user/my_repo:my_container
volumes:
- /home/python:/home
deploy:
restart_policy:
condition: none
You can adjust the restart_policy block with condition: [none | on-failure | any] and with max_attempts: [your_int], as sketched below.
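For instance, a bounded restart policy could look like this (the delay and attempt count are arbitrary values, not recommendations):

version: "3.9"
services:
  python:
    image: my_user/my_repo:my_container
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3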
Related
When I run the command docker-compose -f docker-compose.yml up, my container starts normally.
In IntelliJ, a run button appears next to the services when the docker-compose.yml file is open. When I try to bring the containers up directly through the .yml file, I get the error below:
Failed to deploy 'Compose: docker-compose': Sorry but parent: com.intellij.execution.impl.ConsoleViewImpl[,0,0,1188x368,invalid,layout=java.awt.BorderLayout,alignmentX=0.0,alignmentY=0.0,border=,flags=9,maximumSize=,minimumSize=,preferredSize=] has already been disposed (see the cause for stacktrace) so the child: com.intellij.util.Alarm#7566093f will never be disposed.
My docker-compose.yml file:
version: 3.4
services:
  api.logistics-service:
    container_name: logistics-service
    build: ./docker
    ports:
      - "8080:8080"
I had the same problem. A wrong version in docker-compose.yaml caused the error on the first startup.
After fixing it, I was still not able to start any docker-compose services.
That looks like an IntelliJ bug; in this situation, just restart IntelliJ.
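For reference, if the wrong version was the unquoted value in the file above: Compose expects the version as a string, so the header would need to read:

version: "3.4"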
I want two Docker containers to be able to communicate with each other on a Windows machine running Docker Toolbox. I am able to link the containers using the --link option; however, if I run the containers on a custom bridge network that I created, they are unable to communicate with each other.
Here are the steps I followed:
docker network create web-application-mysql-network
docker run --detach --env MYSQL_ROOT_PASSWORD=somepassword --env MYSQL_USER=some-user --env MYSQL_PASSWORD=pass --env MYSQL_DATABASE=mydb --name mysql --publish 3306:3306 --network=web-application-mysql-network mysql:5.7
docker run -p 8080:8080 -d --network=web-application-mysql-network myrepo/mywebapp:0.0.1-SNAPSHOT
The image used in the last command contains the Tomcat web server as its base image plus a WAR (web archive) file that is hosted in Tomcat. When I check the logs of the container started by the last command, I see the following errors:
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
I am able to link the two containers without any issues if I use the --link option instead of running them on my custom bridge network.
Additional info: I am using localhost in my web app code for the MySQL URL. This seemed to work fine when using --link.
What configuration/command parameters am I missing to make this work?
When you're using a user-defined network, you should put the name of the container you want to reach in the URL. In other words, you have to use mysql instead of localhost in mywebapp to reach the DB.
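Assuming a standard Spring Boot datasource setup (the property name is Spring Boot's; the database name mirrors your MYSQL_DATABASE), that would look something like:

spring.datasource.url=jdbc:mysql://mysql:3306/mydb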
I'd also suggest you take a look at docker-compose, since it lets you avoid creating the network manually.
Here's an example:
version: "3"
services:
mysql:
image: mysql:5.7
env_file:
- db.env
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER:-user}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_DATABASE: "mydb"
volumes:
- dbdata:/var/lib/mysql
mywebapp:
image: myrepo/mywebapp:${TAG_VERSION:-0.0.1-SNAPSHOT}
build:
context: ./mywebapp_location
dockerfile: Dockerfile
ports:
- "8080:8080"
volumes:
dbdata:
db.env:
MYSQL_ROOT_PASSWORD=mysql_root_password
MYSQL_USER=the_user
MYSQL_PASSWORD=the_user_password
To build, you can simply execute:
docker-compose build
and to start:
docker-compose up
For the rest you can use the normal docker commands.
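For example (standard Compose commands, using the service name from the file above):

docker-compose logs -f mywebapp   # follow the web app's logs
docker-compose down               # stop and remove the containers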
I have Eureka from Spring Cloud running inside a Docker container. This is my Dockerfile for building and exposing Eureka:
FROM maven:3.5-jdk-8 AS build
COPY src /home/eureka/src
COPY pom.xml /home/eureka
RUN mvn -f /home/eureka/pom.xml clean package
FROM openjdk:8-jdk-alpine
COPY --from=build /home/eureka/target/service-registry-1.0-SNAPSHOT.jar /usr/app/service-registry-1.0-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","/usr/app/service-registry-1.0-SNAPSHOT.jar"]
EXPOSE 8761
This is my docker-compose file:
version: '2.1'
services:
  eureka-service-registry-app:
    build: eureka-service-registry-app
    ports:
      - "8761-8761"
There will be more apps in the infrastructure, but right now they are commented out.
I run docker-compose up and the process looks OK, but when I try to open the Eureka web dashboard at localhost:8761, the host is unavailable. Hm, OK. In the list of my containers I see the following:
0.0.0.0:32772->8761/tcp
and localhost:32772 is available and Eureka is alive. Moreover, if I run docker-compose up again, this port is incremented, and the new port where Eureka is available becomes 32773. So there is some scheme to it, but I don't understand how to make this port stable and fixed, the way Eureka runs on 8761 when started without Docker.
You define a port range with
ports:
  - "8761-8761"
Please change it to
ports:
  - "8761:8761"
As others have already pointed out, the port mapping in the docker-compose.yml should be changed to "8761:8761".
However, I see more points to consider here.
Double-check which port Eureka actually listens on; the Spring Cloud convention is 8761, but your configuration may set something else. Are you exposing the correct port?
Furthermore, be careful when using Eureka in combination with Docker: services might register themselves with localhost or with their internal IP address from inside the Docker container, which might not be reachable from the other Docker containers.
Consider having a look at the following application properties (or their environment-variable equivalents):
eureka.instance.prefer-ip-address=false
eureka.instance.ip-address=$HOST_IP_ADDRESS
eureka.instance.hostname=localhost
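For instance, a sketch of setting one of them through compose (service name taken from your file; the value is a placeholder, and Spring Boot's relaxed binding maps the environment name onto the dotted property):

services:
  eureka-service-registry-app:
    build: eureka-service-registry-app
    ports:
      - "8761:8761"
    environment:
      # maps to eureka.instance.hostname via relaxed binding
      EUREKA_INSTANCE_HOSTNAME: eureka-service-registry-app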
I want to configure the GitLab pipeline to run my integration tests against a Postgres DB using Maven. I tried following this documentation, but then noticed that it only works with the shared GitLab runners, while I am using my own GitLab runner, which runs in Kubernetes.
My gitlab-ci.yml:
cache:
  key: "$CI_COMMIT_REF_NAME"
  untracked: true
  paths:
    - .m2/repository/

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository/"
  POSTGRES_DB: postgres
  POSTGRES_USER: runner
  POSTGRES_PASSWORD: runner

stages:
  - build
  - verify

build:
  image: maven:3.6.0-jdk-8
  stage: build
  script:
    - "mvn $MAVEN_CLI_OPTS --quiet clean package -Dmaven.test.skip=true"
  artifacts:
    paths:
      - "target/*"

test:
  image: maven:3.6.0-jdk-8
  services:
    - postgres:latest
  stage: verify
  script:
    - "mvn $MAVEN_CLI_OPTS --quiet -Dspring.profiles.active=dev clean test"
Using a shared runner, this configuration works fine, but I have to use the runner in Kubernetes. Is there any way to execute my tests against a Postgres DB without using the shared runner?
You're hitting a difference in the way networking is handled by the Docker executor and by the Kubernetes executor.
The Docker executor works pretty much like docker-compose, bringing up all your containers on the same network. Each container gets its own IP, and DNS entries are created: if your service is named postgres, the command nc postgres will resolve the postgres container's IP and contact it (172.17.0.15:5432, for example).
The Kubernetes executor creates a runner pod. All your containers start in the same pod and share a single IP address; containers in the same pod reach each other via 127.0.0.1. If you want to contact the postgres container, you'll want 127.0.0.1:5432, so using 127.0.0.1 instead of postgres should work.
To get your pipeline working on both executors, you can either:
detect which kind of runner you are on using the runner tags ($CI_RUNNER_TAGS),
define a custom variable such as $POSTGRES_URL on all your runners (see the sketch below), or
try to resolve postgres and fall back to 127.0.0.1.
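A sketch of the second option, assuming a runner-level variable named POSTGRES_HOST (the name is an assumption; set it to postgres on Docker runners and to 127.0.0.1 on Kubernetes runners):

test:
  image: maven:3.6.0-jdk-8
  services:
    - postgres:latest
  stage: verify
  script:
    # POSTGRES_HOST is a custom per-runner variable, not built into GitLab
    - "mvn $MAVEN_CLI_OPTS --quiet -Dspring.profiles.active=dev -Dspring.datasource.url=jdbc:postgresql://$POSTGRES_HOST:5432/$POSTGRES_DB clean test"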
I am using a docker-compose command to create and start my containers.
My Docker Version
docker --version
Docker version 17.09.0-ce, build afdb6d4
My Docker-Compose version
docker-compose --version
docker-compose version 1.16.1, build 6d1ac21
The .yml file that I'm using looks something like this (note that I've shortened it and taken sensitive things out):
---
services:
  zookeeper:
    image: "zookeeper"
  server-1:
    cap_add:
      - "NET_ADMIN"
  server-0:
    cap_add:
      - "NET_ADMIN"
    dns:
      - 8.8.8.8
      - 9.9.9.9
    environment:
      SERVER_ID: 0
      NETEM_HOSTS: ""
      LOSS_VALUES: ""
      MAX_RATE_VALUES: ""
      DELAY_VALUES: ""
    image: "cloud.mycompany.com:5000/server-0:latest"
  fakedns:
    image: "cloud.mycompany.com:5000/fakedns:latest"
version: "3.3"
Then I start using:
docker-compose --file compose.yml up -d
My questions are:
1) After the containers come up and I go into a container, e.g. server-0, I don't see the /etc/resolv.conf file updated to use these nameservers. Instead it uses Docker's embedded DNS, 127.0.0.11.
2) How do I make sure that it uses what I specify in the file used by docker-compose?
3) I tried to do this with the following command and it seems to work, but I need to do it from the compose file:
docker run -p 4000:53 --dns=8.8.8.8 cloud.mycompany.com:5000/server-0:latest
4) Ideally, I want to point it at the IP address of the 'fakedns' container so that it uses that one instead of the embedded one at 127.0.0.11.
You won't see custom DNS servers in /etc/resolv.conf, but Docker's resolver will forward DNS requests to them.
User Defined Networks and DNS
Docker Compose definitions at v2+ create a user-defined network by default.
On a user-defined network, Docker uses an embedded DNS server so that it can answer requests for local containers (service discovery).
Any DNS name Docker cannot resolve itself is forwarded on to a DNS server: either the system's default server, the server configured in dockerd, or the DNS server configured for the container at run time.
Docker DNS
Be careful when using internal DNS servers. Things in the Docker daemon will break if you point the system's DNS at a container, because you create a chicken-and-egg problem: Docker needs DNS to start, but it cannot start the container that provides the DNS.
As your example config only sets the DNS for one app container, it should be OK, but make sure the DNS container is up and healthy before your application starts.
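A sketch of that ordering, assuming a Compose version that supports the long depends_on form and that the fakedns image can answer a probe (the healthcheck command is a guess; adapt it to whatever the image actually ships):

services:
  fakedns:
    image: "cloud.mycompany.com:5000/fakedns:latest"
    healthcheck:
      # placeholder probe: ask the local resolver for any name
      test: ["CMD", "nslookup", "example.com", "127.0.0.1"]
      interval: 5s
      retries: 5
  server-0:
    image: "cloud.mycompany.com:5000/server-0:latest"
    depends_on:
      fakedns:
        condition: service_healthy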