I was asked to externalize my properties file from my war file and give it an external path, because I have a Docker image with Tomcat inside and I was asked to maintain the file outside my Docker image.
How can I do that?
I already know how to modify the pom to exclude my file from the build.
You can mount a path on the host machine as a volume in the Docker container. When you then create an application.properties file under that host path, the same file will be visible and accessible inside the Docker container as well.
Below is the plain docker run command to achieve it.
docker run -it --rm -v /home/k/myDocker:/k busybox sh
Below is the docker-compose.yml approach
version: '3'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - prometheus-data:/prometheus
volumes:
  prometheus-data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /disk1/prometheus-data
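For your Tomcat case, a minimal sketch along the same lines might look like the following; the image name, host directory, and container path are assumptions that you would replace with the location your webapp actually reads its properties from:
version: '3'
services:
  tomcat-app:
    image: my-tomcat-app        # assumed image name
    ports:
      - "8080:8080"
    volumes:
      # application.properties lives in ./config on the host,
      # mounted read-only into the path the webapp loads it from
      - ./config:/usr/local/tomcat/conf/app-config:ro
Any edit you make to the file in ./config on the host is then immediately visible inside the container.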
Hi, I need to dockerize a system. The way I have to do it is like below.
Steps:
bring up a local DynamoDB instance (just bring it up).
run a custom script to create the tables (I have to go through this to create the tables).
then run the system.
I also wrote a compose file. The way I did it is like below:
version: "3"
services:
dynamodb:
image: amazon/dynamodb-local
ports:
- "8000:8000"
networks:
- custom-network
volumes:
- "db-data:/home/dynamodblocal/data"
app:
container_name: my-app
build:
context: .
dockerfile: Dockerfile
args:
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
URL: ${URL}
env_file:
- docker.env
depends_on:
- dynamodb
networks:
- custom-network
volumes:
db-data:
networks:
custom-network:
The Dockerfile is as below (sorry, I had to hide sensitive details):
FROM debian:buster
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG URL
RUN echo "deb http://ftp.us.debian.org/debian sid main" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install openjdk-8-jdk maven awscli -y
RUN aws s3 cp ${URL} db-updater.jar
RUN echo local > input
# there are few lines of configs that wrote to input file
RUN cat input | java -jar db-updater.jar http://dynamodb:8000
WORKDIR /opt/app
COPY . .
RUN mvn package
EXPOSE 8080
CMD ["java","-cp","./app/target/app-1.0.0.jar:./app/target/lib/*"]
My problem is that it looks like DynamoDB does not start before the script runs, so the script throws an error saying it can't connect to the server.
If I could build a custom DynamoDB image with the script already executed, that would also be great. Please help.
Commands in a Dockerfile can never interact with other Docker containers. The general pattern is that an image is built once and reused, so you could delete and recreate your DynamoDB container, or run the same image on a different system, and the database setup wouldn't have happened. Mechanically, the Dockerfile runs in an environment where it's not connected to the Compose networking system, so attempts at connecting between containers will generally fail with a "no such host" error.
A typical pattern is to use an entrypoint script to do first-time setup when the container launches. For example, you could write a simple shell script:
#!/bin/sh
# Seed the database
java -jar db-updater.jar http://dynamodb:8000 < input
# Run whatever the main container command is
exec "$#"
You can then include this in your Dockerfile:
# probably included in the `COPY . .` line
COPY entrypoint.sh .
# must be JSON-array syntax
ENTRYPOINT ["./entrypoint.sh"]
# replaces `RUN java -jar db-updater.jar`
# CMD as in the current Dockerfile
CMD ["java", "-cp", ...]
If you only need this to run once when you first set up the container stack, you could also seed the data on your host.
# Outside Docker
aws s3 cp ... db-updater.jar
./make-seed-data.sh > input
# Start the DynamoDB container (only)
docker-compose up -d dynamodb
# Load the seed data
java -jar db-updater.jar -url http://localhost:8000 < input
# Now start the rest of the application
docker-compose up -d
This would let you remove the code to build the input file and download the updater tool from your Dockerfile. It would also let you remove the AWS credentials from the build sequence (very important: it may be possible to find them in plain text looking at the image's docker history).
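If you go that route, a slimmed-down Dockerfile could look roughly like the sketch below; it simply drops the AWS CLI, the build arguments, and the seeding steps, and keeps the rest of your original file (the CMD is left as in your post, with the main class hidden):
FROM debian:buster
RUN echo "deb http://ftp.us.debian.org/debian sid main" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install openjdk-8-jdk maven -y
WORKDIR /opt/app
COPY . .
RUN mvn package
EXPOSE 8080
# CMD unchanged from your current Dockerfile
CMD ["java","-cp","./app/target/app-1.0.0.jar:./app/target/lib/*"]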
I have a problem connecting to the config-server and I am not sure what I am doing wrong. I have the config server running in a Docker container named "config-server" on port 8888.
http://config-server:8888. Will be trying the next url if available
2020-08-10 17:38:35.196 ERROR 11052 --- [ main] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Could not locate PropertySource and the fail fast property is set, failing
at org.springframework.cloud.config.client.ConfigServicePropertySourceLocator.locate(ConfigServicePropertySourceLocator.java:148) ~[spring-cloud-config-client-2.2.3.RELEASE.jar:2.2.3.RELEASE]
discovery-server bootstrap.yml
spring:
  application:
    name: discovery-server
  cloud:
    config:
      uri: http://config-server:8888
      fail-fast: true
      retry:
        max-attempts: 20
EDIT
config-server Dockerfile
FROM openjdk:11.0-jre
ADD ./target/config-server-0.0.1-SNAPSHOT.jar config-server-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "/config-server-0.0.1-SNAPSHOT.jar"]
EXPOSE 8888
docker run -p 8888:8888 --name config-server 3deb982c96fe
Discovery-server is not running in Docker. First I want to create its .jar file.
The original question is already answered in the comments; I'm answering the last point here for better formatting:
The jar file is built in the /target folder of your application every time you run mvn clean install (or gradle build). To run it in Docker you have to copy the jar file from your /target directory into the image's filesystem and then run it (java -jar nameOfYourJar.jar).
The name of your jar can be defined in the Maven/Gradle settings, but to keep your Dockerfile generic I suggest the following Dockerfile:
FROM openjdk:11.0-jre
ARG JAR_FILE=/target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","app.jar"]
With ARG JAR_FILE you save the path of any jar file found in /target as the JAR_FILE build argument, and then you can copy it into the image's filesystem, where it is stored under the name app.jar.
ENTRYPOINT is the command that will be run on container start.
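If the default glob does not match what you want (for example, there are several jars under /target), you can override the build argument on the command line; a hypothetical example, with the jar name and image tag assumed:
docker build --build-arg JAR_FILE=target/discovery-server-0.0.1-SNAPSHOT.jar -t discovery-server .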
Place the Dockerfile next to the /target directory (so in the root folder of your app) and run the following command in a terminal:
docker build -t springapp . && docker run --rm -d -p 8080:8080 springapp
Hope this clarifies everything.
I want to know how to configure a Docker setup (docker-compose) in order to supply a configuration file which is consumed by my Spring Boot application.
The configuration file is called services.xml and resides in the application's /lib/conf directory. The file is deployed with the default configuration, but I want the file on the host so that whenever I need to change the configuration, I can edit it on the host and the container will read the updated file.
The docker-compose.yml
version: '3.1'
services:
  my-app:
    image: my-app
    container_name: my-app
    # restart: always
    ports:
      - 8443:8443
    volumes:
      - ./my-app/conf:/opt/lib/my-app/lib/conf:rw
Expected results after running: docker-compose up
I expect this should create the directory, copy the default services.xml (along with all other files in /opt/lib/my-app/lib/conf) in container into this directory to make it available for me to edit.
Actual results
After running docker-compose, it creates an empty directory inside the my-app directory. The my-app container fails to read the services.xml file and the app doesn't start (as it depends on this file).
I expect this should create the directory, copy the default services.xml (along with all other files in /opt/lib/my-app/lib/conf) in container into this directory to make it available for me to edit.
From what you said above, if your aim is to have the container's contents populated out to the host and have the chance to modify them there, then I suggest you use named volumes. But the folder on the host will be managed by Docker itself, so you need to find out where it is located.
A minimal example for your reference:
docker-compose.yaml (in my example it is located in the folder 77):
version: '3'
services:
  frontend:
    image: alpine
    command: "tail -f /dev/null"
    volumes:
      - my_data:/etc
volumes:
  my_data:
Start the service:
shubuntu1@shubuntu1:~/77$ docker-compose up -d
Creating network "77_default" with the default driver
Creating volume "77_my_data" with default driver
Creating 77_frontend_1 ... done
Check the location of the named volume on the host:
shubuntu1@shubuntu1:~/77$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6635aba545c9 alpine "tail -f /dev/null" 14 minutes ago Up 14 minutes 77_frontend_1
shubuntu1@shubuntu1:~/77$ docker inspect 77_frontend_1 | grep Source
"Source": "/var/lib/docker/volumes/77_my_data/_data",
Check the content of the original /etc/profile in the container:
shubuntu1@shubuntu1:~/77$ docker exec 77_frontend_1 cat /etc/profile
export CHARSET=UTF-8
export LANG=C.UTF-8
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PAGER=less
export PS1='\h:\w\$ '
umask 022
for script in /etc/profile.d/*.sh ; do
if [ -r $script ] ; then
. $script
fi
done
Modify the script from the host:
shubuntu1@shubuntu1:~/77$ sudo -s -H
root@shubuntu1:/home/shubuntu1/77# cd /var/lib/docker/volumes/77_my_data/_data
root@shubuntu1:/var/lib/docker/volumes/77_my_data/_data# echo 'echo "hello"' >> profile
Check the /etc/profile in the container again after we made changes on the host:
shubuntu1@shubuntu1:~/77$ docker exec 77_frontend_1 cat /etc/profile
export CHARSET=UTF-8
export LANG=C.UTF-8
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PAGER=less
export PS1='\h:\w\$ '
umask 022
for script in /etc/profile.d/*.sh ; do
if [ -r $script ] ; then
. $script
fi
done
echo "hello"
We can see that the echo "hello" we added on the host is already visible in the container.
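Applied to your my-app compose file, a sketch with a named volume in place of the host-path bind mount could look like this (the container path is taken from your original file; the volume name my-app-conf is just an assumption):
version: '3.1'
services:
  my-app:
    image: my-app
    container_name: my-app
    ports:
      - 8443:8443
    volumes:
      # named volume: Docker copies the image's default conf into it on first use
      - my-app-conf:/opt/lib/my-app/lib/conf
volumes:
  my-app-conf:
You can then find services.xml under the volume's Source path (docker inspect my-app | grep Source) and edit it there.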
I have a simple Dockerfile in my Spring Boot project, shown below. I am able to build the image successfully locally, and can push it using my credentials.
But my build keeps failing on every attempt to build automatically.
FROM openjdk:8-jdk-alpine
LABEL maintainer="xxxxx#xxx.com"
VOLUME /tmp
EXPOSE 8080
ARG JAR_FILE=target/jollof.jar
ADD ${JAR_FILE} jollof.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-
jar","/jollof.jar"]
From Docker Hub, I got this from the build log.
Building in Docker Cloud's infrastructure...
Cloning into '.'...
Warning: Permanently added the RSA host key for IP address 'xxx.xx.xxx.xxx' to the list of known hosts.
....
....
Step 6/7 : ADD ${JAR_FILE} jollof.jar
ADD failed: stat /var/lib/docker/tmp/docker-builder674045875/target/jollof.jar: no such file or directory
Unlike your local environment, Docker Hub fetches and then builds your project in a fresh environment, so the file target/jollof.jar that is intended to be copied is not available in the Docker build context. Hence the error you observe.
So I'd suggest refactoring your Dockerfile so that mvn package (or similar) is run in the Dockerfile itself, which is a best practice to adopt for the sake of reproducibility. Note that this configuration will work for Docker Hub's automated builds as well as the builds in your local environment.
For example, below is an example Dockerfile inspired by that of the SO answer "How to convert a Spring-Boot web service into a Docker image?" as well as the Dockerfile in your post:
FROM maven:3.6-jdk-8 as maven
WORKDIR /app
COPY ./pom.xml ./pom.xml
RUN mvn dependency:go-offline -B
COPY ./src ./src
# TODO: jollof-* should be replaced with the proper prefix
RUN mvn package && cp target/jollof-*.jar app.jar
# Rely on Docker's multi-stage build to get a smaller image based on JRE
FROM openjdk:8-jre-alpine
LABEL maintainer="xxxxx#xxx.com"
WORKDIR /app
COPY --from=maven /app/app.jar ./app.jar
# VOLUME /tmp # optional
# EXPOSE is also optional
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app/app.jar"]
I'm building a postgres+java container, and I'd like to open a shell into the java "service". That service exits immediately after starting; how can I open a shell into it?
I see it in docker ps -a but it has already exited.
The file I'm using with docker-compose is this .yaml:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - datavolume:/var/lib/postgresql
  java:
    image: openjdk:8
volumes:
  datavolume:
A Docker container generally runs a single process. In the same way that just running a JVM without an application attached to it isn't really meaningful, running a Docker container with a JVM but no actual application added to it isn't that useful.
You should write a Dockerfile that adds your application's jar file to a base Java image; for instance
FROM openjdk:8
COPY app.jar /
CMD ["java", "-jar", "/app.jar"]
and then your docker-compose.yml file can have instructions to build and run this image
services:
  java:
    build: .
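Putting that together with your existing db service, a rough sketch of the whole compose file might be (depends_on is an addition here, just to make the startup order explicit):
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - datavolume:/var/lib/postgresql
  java:
    build: .        # builds the Dockerfile above, which runs your app.jar
    depends_on:
      - db
volumes:
  datavolume: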
If you just want a shell in a copy of the image to poke around and see what's there, you can generally run
docker run --rm -it openjdk:8 sh
The standard openjdk Dockerfile doesn't explicitly declare any specific ENTRYPOINT or CMD, so it will exit immediately when run. (It probably inherits a default /bin/sh, but with no command to run, that will also exit immediately.) You can declare some other command: in the docker-compose.yml (or a CMD in the Dockerfile) to keep the "service" from exiting, but it's not really doing anything useful for you.