I'm new to Docker. I am making a Java sockets music server and I have two files, Client.java and Server.java, each in its own container. To mention, I run both services from the command line.
These are my two Dockerfiles. The server:
FROM java:8
COPY Server.java /
RUN javac Server.java
EXPOSE 25000
ENTRYPOINT ["java"]
CMD ["Server"]
FROM java:8
COPY Client.java /
RUN javac Client.java
EXPOSE 25000
ENTRYPOINT ["java"]
CMD ["Client"]
I also create a network for these two to communicate:
docker network create client_server_network
and I run the images as follows:
docker run --env SERVER_HOST_ENV=server --network-alias server --network client_server_network -it server
docker run --network client_server_network -it clientimage
Now I want to create a docker-compose file with those two Dockerfiles and the network. This is what I have done so far:
version:'3'
services:
  client:
    image: java:8
    ports:
      -25000:25000
    network:
      default:
        name: client_server_network
  server:
    image: java:8
    ports:
      -25000:25000
    environment:
      -SERVER_HOST_ENV=server
    network:
My question is how to add the common network to both services. Also, is the way I wrote my docker-compose file correct?
Below is the corrected file. Both containers are added to the network named client_server_network, which must also be declared at the end of the file:
version: '3'
services:
  client:
    image: java:8
    ports:
      - 25000:25000
    networks:
      - client_server_network
  server:
    image: java:8
    ports:
      - 25000:25000
    environment:
      - "SERVER_HOST_ENV=server"
    networks:
      - client_server_network
networks:
  client_server_network:
Note that the shared network is declared once at the end of the file, at the same level as services and version. Indentation is a main factor in docker-compose; if something is incorrectly indented it may show an error.
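With this file in place you no longer need the docker network create step; a single command creates the network and starts both services:
docker-compose up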
Delete all of the networks: blocks in the entire file. client and server will be usable as host names by these two containers.
This behavior is described further in Networking in Compose in the Docker documentation. If you don't declare top-level networks:, Compose creates a network called default, and if you don't declare networks: on a given service, Compose attaches it to that default network.
A functional translation of your two docker run commands would look like:
version: '3.8'
# Compose creates `networks: { default: }` on its own; you do not need
# top-level `networks:`
services:
  server: # <-- this name is usable as a host name
    image: server
    environment:
      - SERVER_HOST_ENV=server
    # Automatically on `networks: [default]`
  client:
    image: clientimage
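To verify the name resolution from the client side, you could run something like this once the stack is up (the getent lookup assumes the image includes it, which Debian-based images such as java:8 do):
docker-compose up -d
docker-compose exec client getent hosts server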
I was asked to externalize the properties file from my WAR file and give it an external path, because I have a Docker image with Tomcat inside and I was asked to manage those files outside of my Docker image.
How can I do that?
I already know how to modify the POM to exclude the file from the build.
You can mount a volume in the Docker container to a path on the host machine. Then, when you create an application.properties file on that host path, the same file will be visible and accessible inside the Docker container as well.
Below is the plain docker run command to achieve it:
docker run -it --rm -v /home/k/myDocker:/k busybox sh
Below is the docker-compose.yml approach:
version: '3'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - prometheus-data:/prometheus
volumes:
  prometheus-data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /disk1/prometheus-data
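For the Tomcat case in the question, the same idea is a short bind-mount sketch; the host path /opt/app/config and the container path /usr/local/tomcat/conf/app are assumptions to adjust to wherever your application reads its properties:
version: '3'
services:
  tomcat:
    image: tomcat:9
    volumes:
      # host directory holding application.properties : path inside the container
      - /opt/app/config:/usr/local/tomcat/conf/app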
Hi, I need to dockerize a system. The way I have to do it is like below.
steps:
1. Bring up a local DynamoDB instance (just bring it up).
2. Run a custom script to create the tables (I have to go through this script to create the tables).
3. Then run the system.
I also wrote a compose file. The way I did it is like below:
version: "3"
services:
dynamodb:
image: amazon/dynamodb-local
ports:
- "8000:8000"
networks:
- custom-network
volumes:
- "db-data:/home/dynamodblocal/data"
app:
container_name: my-app
build:
context: .
dockerfile: Dockerfile
args:
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
URL: ${URL}
env_file:
- docker.env
depends_on:
- dynamodb
networks:
- custom-network
volumes:
db-data:
networks:
custom-network:
The Dockerfile is below (sorry, I had to hide sensitive details):
FROM debian:buster
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG URL
RUN echo "deb http://ftp.us.debian.org/debian sid main" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install openjdk-8-jdk maven awscli -y
RUN aws s3 cp ${URL} db-updater.jar
RUN echo local > input
# there are few lines of configs that wrote to input file
RUN cat input | java -jar db-updater.jar http://dynamodb:8000
WORKDIR /opt/app
COPY . .
RUN mvn package
EXPOSE 8080
CMD ["java","-cp","./app/target/app-1.0.0.jar:./app/target/lib/*"]
My problem is that DynamoDB does not seem to start before the script runs, so the script throws an error that it can't connect to the server.
If I could build a custom DynamoDB image that runs the script, that would also be great. Please help.
Commands in a Dockerfile can never interact with other Docker containers. The general pattern is that an image is built once and reused, so you could delete and recreate your DynamoDB container, or run the same image on a different system, and the database setup wouldn't have happened. Mechanically, the Dockerfile runs in an environment where it's not connected to the Compose networking system, so attempts at connecting between containers will generally fail with a "no such host" error.
A typical pattern is to use an entrypoint script to do first-time setup when the container launches. For example, you could write a simple shell script:
#!/bin/sh
# Seed the database
java -jar db-updater.jar http://dynamodb:8000 < input
# Run whatever the main container command is
exec "$@"
You can then include this in your Dockerfile:
# probably included in the `COPY . .` line
COPY entrypoint.sh .
# must be JSON-array syntax; replaces `RUN java -jar db-updater.jar`
ENTRYPOINT ["./entrypoint.sh"]
# the same CMD as in the current Dockerfile
CMD ["java", "-cp", ...]
If you only need this to run once when you first set up the container stack, you could also seed the data on your host.
# Outside Docker
aws s3 cp ... db-updater.jar
./make-seed-data.sh > input
# Start the DynamoDB container (only)
docker-compose up -d dynamodb
# Load the seed data
java -jar db-updater.jar -url http://localhost:8000 < input
# Now start the rest of the application
docker-compose up -d
This would let you remove the code to build the input file and download the updater tool from your Dockerfile. It would also let you remove the AWS credentials from the build sequence (very important: it may be possible to find them in plain text looking at the image's docker history).
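For example, anyone who can pull the image could inspect the build steps with something like the following (the image name here is hypothetical):
docker history --no-trunc myrepo/my-app | grep AWS_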
I'm trying to create a Docker image with docker-compose. How can I do that?
I can start my project (Spring Boot + Cassandra) with docker-compose up, and everything works great. As the next step I want to build an image from this project, push it to Docker Hub, and then pull it from Docker Hub to test on another computer. I tried docker-compose build, docker-compose push and then docker-compose pull. After the pull I can see that cassandra and spring-boot-app were downloaded. But then, when I try to run the image with docker run, it runs only the Spring Boot app, without Cassandra.
This is my docker-compose.yaml file:
version: '3'
services:
  cassandra:
    build:
      context: ../
      dockerfile: docker/cassandra/Dockerfile
    ports:
      - "9042:9042"
    container_name: cassandra
  spring-boot-cassandra:
    build:
      context: ../
      dockerfile: docker/springbootsample/Dockerfile
    links:
      - cassandra
    ports:
      - "8080:8080"
    environment:
      SPRING_DATA_CASSANDRA_CONTACT_POINTS: cassandra
    container_name: springboot
    entrypoint: /wait-for-it.sh cassandra:9042 -- java -Djava.security.egd=file:/dev/./urandom -jar app.jar
    depends_on:
      - "cassandra"
    image: myrepo/springbootsample
networks:
  default:
    driver: bridge
docker-compose is used to handle multiple images. I guess you are expecting it to merge two images into one, which will not happen.
docker run runs only one image at a time; that limitation is why docker-compose was introduced.
docker-compose up performs the docker run, docker network create, and related commands to build the environment for you. For your use case:
docker-compose build will build all your images present in the docker-compose.yml file.
docker-compose push will push all your images to the hub.
docker-compose pull will download those images from the hub if present.
and finally, if you want to run those images, use docker-compose up. If the images are not present, it will download them from the hub first.
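On the other computer, assuming the docker-compose.yml has been copied over and both images were pushed under the names the file references, the whole stack comes back up with:
docker-compose pull
docker-compose up -d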
I need to control the order in which Docker containers are instantiated. I want to build a JAR file with the Maven Docker container, pass that JAR to an OpenJDK Docker container to build an image, and then instantiate a MongoDB container and a Java app container (from the OpenJDK image generated before) that communicate with each other via docker-compose.
The problem is that the build always fails, because some of the unit tests talk to the database before it is initialized, and since the tests fail the build fails too.
This is my Dockerfile:
FROM maven:3.5-alpine
COPY ./ /app
RUN cd /app && mvn package
FROM openjdk:8
COPY spring-rest-iw-exam.jar /tmp/spring-rest-iw-exam.jar
EXPOSE 8087
ENTRYPOINT ["java", "-jar", "/tmp/spring-rest-iw-exam.jar"]
This is my docker-compose.yml:
version: '2'
services:
  mongodb:
    image: mongo
    container_name: iw_exam_mongo
    restart: always
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_DATABASE=fizz_buzz_collection
    volumes:
      - /opt/iw-exam/data:/data/db
  spring-app:
    container_name: iw_exam_java_rest_api
    build: ./
    restart: always
    ports:
      - "8087:8087"
    depends_on:
      - mongodb
I tried depends_on and did some other tests with a tool called dockerize, but none of it works: the Maven build always starts before docker-compose even begins to instantiate MongoDB.
This is the GitHub repository of the project:
https://github.com/dsalasboscan/exam
I need to instantiate MongoDB first and THEN start the Maven build and Java image generation.
I came across a similar problem before and would like to share my experience.
Basically, we need to wait for a while to make sure MongoDB has completely booted up; here is the tool you can leverage. It's fairly easy to use.
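A sketch of that pattern applied to the spring-app service of the compose file above, assuming a wait-for-it.sh script has been copied into the image (the JAR path matches the Dockerfile above):
spring-app:
  container_name: iw_exam_java_rest_api
  build: ./
  depends_on:
    - mongodb
  # wait until MongoDB accepts connections, then start the app
  entrypoint: ["/wait-for-it.sh", "mongodb:27017", "--", "java", "-jar", "/tmp/spring-rest-iw-exam.jar"]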
I'm building a postgres + java container setup, and I'd like to open a shell into the java "service". That service exits immediately after starting; how can I open a shell into it?
I see it in docker ps -a but it has already exited.
This is the .yaml file I'm using with docker-compose:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - datavolume:/var/lib/postgresql
  java:
    image: openjdk:8
volumes:
  datavolume:
A Docker container generally runs a single process. In the same way that just running a JVM without an application attached to it isn't really meaningful, running a Docker container with a JVM but no actual application added to it isn't that useful.
You should write a Dockerfile that adds your application's jar file to a base Java image; for instance
FROM openjdk:8
COPY app.jar /
CMD ["java", "-jar", "/app.jar"]
and then your docker-compose.yml file can have instructions to build and run this image
services:
  java:
    build: .
If you just want a shell in a copy of the image to poke around and see what's there, you can generally run
docker run --rm -it openjdk:8 sh
The standard openjdk Dockerfile doesn't explicitly declare any specific ENTRYPOINT or CMD, so it will exit immediately when run. (It probably inherits a default /bin/sh, but with no command to run, that will also exit immediately.) You can declare some other command: in the docker-compose.yml to cause the "service" to not exit, but it's not really doing anything useful for you.
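If you just need the java service to stay up so that you can shell into it, one workaround is to give it a do-nothing command; this keeps the container alive but, as noted, it is not doing anything useful:
services:
  java:
    image: openjdk:8
    command: ["sleep", "infinity"]   # keeps the container running
Then docker-compose exec java sh opens a shell inside it.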