I'm building a postgres + java stack with docker-compose, and I'd like to open a shell into the java "service". That service exits immediately after starting; how can I open a shell into it?
I see it in docker ps -a but it has already exited.
This is the docker-compose .yaml file I'm using:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - datavolume:/var/lib/postgresql
  java:
    image: openjdk:8
volumes:
  datavolume:
A Docker container generally runs a single process. In the same way that just running a JVM without an application attached to it isn't really meaningful, running a Docker container with a JVM but no actual application added to it isn't that useful.
You should write a Dockerfile that adds your application's jar file to a base Java image; for instance
FROM openjdk:8
COPY app.jar /
CMD ["java", "-jar", "/app.jar"]
and then your docker-compose.yml file can have instructions to build and run this image
services:
  java:
    build: .
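For completeness, a minimal sketch of building and starting that service with Compose (assuming the Dockerfile above sits next to the docker-compose.yml):
docker-compose build java
docker-compose up java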
If you just want a shell in a copy of the image to poke around and see what's there, you can generally run
docker run --rm -it openjdk:8 sh
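Since the service is defined via Compose here, a roughly equivalent one-off command (a sketch assuming the service name java from the compose file above) is:
docker-compose run --rm java sh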
The standard openjdk Dockerfile doesn't explicitly declare any specific ENTRYPOINT or CMD, so the container will exit immediately when run. (It probably inherits a default /bin/sh, but with no command to run, that will also exit immediately.) You can declare some other command: in the docker-compose.yml (or a CMD in the Dockerfile) to keep the "service" from exiting, but it's not really doing anything useful for you.
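If you really do want the bare JVM "service" to stay up just so you can shell into it, one common trick (a sketch, not something the openjdk image provides on its own) is to give it a command that never exits:
services:
  java:
    image: openjdk:8
    command: ["tail", "-f", "/dev/null"]
With that in place, docker-compose exec java sh will open a shell in the running container; but again, the container isn't doing any useful work.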
Related
I'm using a docker container to build and run my java application and I want to see the test results that would usually be available from build/reports/tests/test/index.html.
Here's my Dockerfile:
FROM alpine:latest
RUN apk add gradle openjdk17
WORKDIR /home/proj
COPY . .
RUN gradle build
ENTRYPOINT [ "java", "-jar", "./app/build/libs/app.jar" ]
Here's my docker-compose.yml:
version: "3.8"
services:
  app:
    build: .
I typically build my container with docker-compose build and run it with docker-compose up and would like this to stay as it is.
EDIT
I've tried changing my docker-compose.yml to:
version: "3.8"
services:
  app:
    build: .
    volumes:
      - ./tests:/home/proj/app/build/reports/tests/test
But this just creates an empty directory called tests in my project's root directory. I am 100% sure that /home/proj/app/build/reports/tests/test is the right path inside the container.
I've hit this same issue. Unfortunately, volumes are only available to docker run; they aren't part of the docker build portion of the docker-compose process.
I even looked into the new RUN --mount=type=bind,source=.,target=./build/reports,rw gradle build syntax. While in the container I can see the reports being generated (with a RUN ls ./build/reports), but that mount (even as rw) only puts the files into the container as a layer; they never appear on the host machine.
There is a hacky way to recover those test results. In the docker output, just above the failure, you'll see a line like ---> Running in f98e14dd1ee4. That is the ID of the intermediate container; with it you can copy files from the failed layer to the local machine:
$ docker cp f98e14dd1ee4:/tmp/build/reports ./
$ ls reports/
checkstyle/ jacoco/ tests/
It shouldn't be too difficult to automate this kind of recovery, but it feels like it would be fragile if automated.
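A rough sketch of what that automation might look like (assuming the classic-builder output format and the /tmp/build/reports path from this example; as noted above, it would be fragile):
# run the build and keep the full log
docker build . 2>&1 | tee build.log
# grab the ID of the last intermediate container the builder reported
LAYER_ID=$(grep -o 'Running in [0-9a-f]*' build.log | tail -1 | awk '{print $3}')
# copy the reports out of that (now stopped) container
docker cp "$LAYER_ID:/tmp/build/reports" ./reports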
I'm very interested if anyone has a better solution to this, even with an alternative to Docker. I know I can build the container with Gradle, but I'd rather everything happen inside the container build process to keep the build environment defined as code.
Hi, I need to dockerize a system. The way I have to do this is like below.
steps:
- bring up a local DynamoDB instance (just bring it up)
- run a custom script to create the tables (I have to go through this script to create them)
- then run the system
I also wrote a compose file; the way I did that is like below:
version: "3"
services:
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - "8000:8000"
    networks:
      - custom-network
    volumes:
      - "db-data:/home/dynamodblocal/data"
  app:
    container_name: my-app
    build:
      context: .
      dockerfile: Dockerfile
      args:
        AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
        AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
        URL: ${URL}
    env_file:
      - docker.env
    depends_on:
      - dynamodb
    networks:
      - custom-network
volumes:
  db-data:
networks:
  custom-network:
Dockerfile as below (sorry, I had to hide sensitive details):
FROM debian:buster
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG URL
RUN echo "deb http://ftp.us.debian.org/debian sid main" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install openjdk-8-jdk maven awscli -y
RUN aws s3 cp ${URL} db-updater.jar
RUN echo local > input
# there are a few lines of config that get written to the input file
RUN cat input | java -jar db-updater.jar http://dynamodb:8000
WORKDIR /opt/app
COPY . .
RUN mvn package
EXPOSE 8080
CMD ["java","-cp","./app/target/app-1.0.0.jar:./app/target/lib/*"]
My problem is that it looks like DynamoDB does not start before the script runs, so the script throws an error saying it can't connect to the server.
If I could build a custom DynamoDB image with the table-creation script already executed, that would also be great. Please help.
Commands in a Dockerfile can never interact with other Docker containers. The general pattern is that an image is built once and reused, so you could delete and recreate your DynamoDB container, or run the same image on a different system, and the database setup wouldn't have happened. Mechanically, the Dockerfile runs in an environment where it's not connected to the Compose networking system, so attempts at connecting between containers will generally fail with a "no such host" error.
A typical pattern is to use an entrypoint script to do first-time setup when the container launches. For example, you could write a simple shell script:
#!/bin/sh
# Seed the database
java -jar db-updater.jar http://dynamodb:8000 < input
# Run whatever the main container command is
exec "$@"
You can then include this in your Dockerfile:
COPY entrypoint.sh . # probably included in the `COPY . .` line
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
# replaces `RUN java -jar db-updater.jar`
CMD ["java", "-cp", ...] # as in the current Dockerfile
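One caveat worth noting (my addition, not something the answer above guarantees): depends_on only controls start order, not readiness, so if the seeding step occasionally races ahead of DynamoDB you can put a small wait loop at the top of the entrypoint script, for example (assuming curl or a similar client is available in the image):
# wait until DynamoDB Local accepts connections before seeding
until curl -s http://dynamodb:8000 > /dev/null; do
  echo "waiting for dynamodb..."
  sleep 1
done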
If you only need this to run once when you first set up the container stack, you could also seed the data on your host.
# Outside Docker
aws s3 cp ... db-updater.jar
./make-seed-data.sh > input
# Start the DynamoDB container (only)
docker-compose up -d dynamodb
# Load the seed data
java -jar db-updater.jar http://localhost:8000 < input
# Now start the rest of the application
docker-compose up -d
This would let you remove the code that builds the input file and downloads the updater tool from your Dockerfile. It would also let you remove the AWS credentials from the build sequence (very important: it may be possible to find them in plain text by looking at the image's docker history).
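For illustration (a sketch; my-app stands in for whatever tag your Compose build produces), this is roughly how someone could dig the credentials back out of a built image:
docker history --no-trunc my-app | grep AWS_SECRET_ACCESS_KEY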
I am trying to set up a very simple Docker container that will execute a JAR file. I want an image that will run a simple JAR; I don't need anything else special.
So far my docker-compose.yml looks like this, but it doesn't start correctly:
version: "3.3"
services:
  myapp:
    image: openjdk:8
    container_name: "myapp"
    restart: always
    ports:
      - 8091:8091
    volumes:
      - "./meanwhile-in-hell.jar:/app.jar"
    command: ['java', '-jar', '/app.jar']
STATUS
Restarting (1) 2 seconds ago
Using the command: ['java', '-jar', '/app.jar'] option, I see this in the Docker logs:
Error: Invalid or corrupt jarfile /app.jar
If I change to use entrypoint: [ "sh", "-c", "java -jar /app.jar" ], I see the same error.
The Jar file is absolutely fine and not corrupt. I have run it manually on another tomcat:8-alpine container I have running and it starts up successfully.
I have a Dockerfile which looks like this:
FROM alpine:3.9
RUN apk add --update openjdk8
RUN mkdir /var/generator/
COPY generator.jar /var/generator
EXPOSE 8080
ENTRYPOINT [ "/bin/sh" ]
The Dockerfile is inside the generator/ folder. I am building it using:
docker build -t generator generator/
It builds successfully:
Successfully built 878e81f622cc
Successfully tagged generator:latest
but when I try to run this image with
docker run -d -p 8080:8080 generator
it dies immediately, and docker logs gives no output.
What is wrong with my Dockerfile? Why is the container dying?
Try running the JAR. Currently the container just runs the sh command and exits. Make it something like the below to run the JAR in the foreground:
FROM alpine:3.9
RUN apk add --update openjdk8
RUN mkdir /var/generator/
COPY generator.jar /var/generator
EXPOSE 8080
ENTRYPOINT ["java","-jar","/var/generator/generator.jar"]
Besides your entrypoint being wrong (sh exits immediately), I would also recommend starting from an appropriate base image instead of starting with alpine and installing the openjdk package. Since you only want to run a Java application, use the JRE rather than a full JDK, and start the application as a foreground process.
Here's a minimal version which is also more efficient in disk size, as the image will be smaller.
FROM openjdk:8-jre-alpine
COPY generator.jar /opt/generator.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/opt/generator.jar"]
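After rebuilding, the build and run commands from the question should then leave the container running, since java now holds the foreground (a sketch reusing the tag and port from above):
docker build -t generator generator/
docker run -d -p 8080:8080 generator
# the container should stay up this time; verify with
docker ps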
I have the following Dockerfile to run my Java application:
FROM gidikern/rhel-oracle-jre
RUN mkdir /application
WORKDIR /application
CMD "java -Dspring.profiles.active=sprofileName -jar my.war --spring.config.location=./application.properties > app.log > 2>&1"
and I'm running it using docker-compose:
backend_app:
  restart: always
  image: my-app-runner:latest
  ports:
    - "8080:8080"
  volumes:
    - ./app/my.war:/application/my.war:Z
    - ./app/application.properties:/application/application.properties:Z
    - /srv/docker/backend_app/logs:/application/my.log:Z
  tty: true
However, when I start it, my app constantly exits with code 0.
I can't find what is wrong.
The problem was actually with my base image, gidikern/rhel-oracle-jre.
I tested it and even basic commands like ls didn't work.
I switched to openjdk for now and it is fine.
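For reference, a minimal sketch of what that switch might look like, keeping the same layout as the original Dockerfile (the profile name and file names are taken from the question, and the redirection is tidied to > app.log 2>&1):
FROM openjdk:8-jre
RUN mkdir /application
WORKDIR /application
# my.war and application.properties are bind-mounted in by docker-compose
CMD java -Dspring.profiles.active=sprofileName -jar my.war --spring.config.location=./application.properties > app.log 2>&1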
Most probably there's an error in your application. To debug, I suggest running the container with a shell and executing the java command manually to see what's happening.
In other words:
docker run -it --name app-debug my-app-runner:latest /bin/bash
java -Dspring.profiles.active=sprofileName -jar my.war --spring.config.location=./application.properties