Heroku process crashing as soon as the process starts - java

I have a simple Micronaut server that I am trying to launch on Heroku by building it with a heroku.yml, but when I check the logs the process exits as soon as it starts. The Docker image runs just fine locally, and nothing else prints out in the Heroku logs, so I can't figure out why.
Here is my Dockerfile
FROM node as build-frontend
WORKDIR /app
COPY frontend/myfrontend/package.json .
COPY frontend/myfrontend/public ./public
COPY frontend/myfrontend/src ./src
RUN npm install .
RUN npm run build
FROM gradle:6.8.2-jre11 AS build-env
# Set the working directory to /home
WORKDIR /home
COPY --chown=gradle:gradle backend ./
COPY --from=build-frontend /app/build /home/src/main/resources/public
# Compile the application.
RUN ./gradlew assemble
FROM openjdk:11.0.10-jre-slim-buster
# Set the working directory to /home
WORKDIR /home
# Copy the compiled files over.
COPY --from=build-env /home/build/libs/myjar-0.1-all.jar /home/myjar.jar
# Starts the java app
# Using ENTRYPOINT to pass env. variables as described in this article
ENTRYPOINT exec java -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -noverify -XX:+AlwaysPreTouch -jar myjar.jar
Here is my heroku.yml
setup:
  addons:
    - plan: heroku-postgresql
      as: DATABASE
build:
  docker:
    web: Dockerfile
run:
  web: "exec java -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -noverify -XX:+AlwaysPreTouch -jar myjar.jar"
and finally my Micronaut application.yml which just sets some configs
micronaut:
  application:
    name: mypackage
  router:
    static-resources:
      default:
        enabled: true
        paths:
          - classpath:public
        mapping: /**
  server:
    port: ${PORT:8080}
    cors:
      enabled: true
datasources:
  default:
    driverClassName: org.postgresql.Driver
    dbUrl: jdbc:postgresql://mydbhost.com:port/dbname?sslmode=require
    dbUsername: dbuser
    dbPassword: dbpasswd
scan:
  packages: mypackage
netty:
  default:
    allocator:
      max-order: 3
When I just do docker build -t test-image:latest . and docker run -p 80:8080 test-image:latest, I can connect just fine on localhost and the Docker container runs the Micronaut server. If it fails for some reason, I see output in the container logs detailing why. When I upload this to Heroku (through a GitHub deployment), all I see in the logs is
2023-01-09T22:25:50.145942+00:00 heroku[web.1]: Starting process with command `/bin/sh -c exec\ java\ -XX:\+UseContainerSupport\ -XX:MaxRAMPercentage\=80.0\ -noverify\ -XX:\+AlwaysPreTouch\ -jar\ myjar.jar`
2023-01-09T22:25:51.381760+00:00 heroku[web.1]: Process exited with status 0
2023-01-09T22:25:51.439962+00:00 heroku[web.1]: State changed from starting to crashed
I have tried:
Running it locally connected to the Heroku addon Postgres database (works just fine)
Simplifying the build as much as possible
Removing the default on the port (to ensure it picks up $PORT) and running it locally with export PORT=8080 set (works just fine; the Docker container picks up the env port as we expect on Heroku)
And I cannot figure out why it just quits immediately on Heroku. Edited: I originally thought it had to do with the port value, but I figured out how to give Micronaut a port through the command line and it still doesn't work on Heroku (works locally).
edit:
I tried changing my heroku.yml to this (with the DB hardcoded in application.yml) and still nothing. It just does not seem to work. The app still crashes and there is nothing in the logs to indicate why.
setup:
  config:
    PORT: $PORT
    MICRONAUT_SERVER_PORT: $PORT
build:
  docker:
    web: Dockerfile
run:
  web: "exec java -Dmicronaut.server.port=$PORT -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -noverify -XX:+AlwaysPreTouch -jar myjar.jar"
I still have no extra output from Heroku. No stacktrace or stderr or stdout at all.

Well, it isn't much of an answer, and nothing in the Heroku docs points me to why this works, but removing ENTRYPOINT exec java -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -noverify -XX:+AlwaysPreTouch -jar myjar.jar from my Dockerfile works. With the ENTRYPOINT removed I see logs again and can see my Micronaut server starting. With ENTRYPOINT defined in my Dockerfile I just get what I posted above: "Starting process" followed immediately by the process exiting and the app crashing.
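For reference, here is roughly what the working version looks like: the final stage of the Dockerfile with the ENTRYPOINT line simply deleted, so Heroku only uses the start command from heroku.yml (trimmed to the relevant parts):
# Final stage: no ENTRYPOINT; Heroku starts the process from the run: section of heroku.yml
FROM openjdk:11.0.10-jre-slim-buster
WORKDIR /home
COPY --from=build-env /home/build/libs/myjar-0.1-all.jar /home/myjar.jar
and in heroku.yml:
run:
  web: "exec java -Dmicronaut.server.port=$PORT -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -noverify -XX:+AlwaysPreTouch -jar myjar.jar"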

Related

docker image to build another image

Hi, I need to dockerize a system. The way I have to do this is like below:
steps:
start a local DynamoDB instance (just to have it up).
run a custom script to create the tables (I have to go through this to create them).
then run the system.
I also wrote a compose file. The way I did that is like below:
version: "3"
services:
dynamodb:
image: amazon/dynamodb-local
ports:
- "8000:8000"
networks:
- custom-network
volumes:
- "db-data:/home/dynamodblocal/data"
app:
container_name: my-app
build:
context: .
dockerfile: Dockerfile
args:
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
URL: ${URL}
env_file:
- docker.env
depends_on:
- dynamodb
networks:
- custom-network
volumes:
db-data:
networks:
custom-network:
The Dockerfile is as below (sorry, I had to hide sensitive details).
FROM debian:buster
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG URL
RUN echo "deb http://ftp.us.debian.org/debian sid main" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install openjdk-8-jdk maven awscli -y
RUN aws s3 cp ${URL} db-updater.jar
RUN echo local > input
# a few lines of config are written to the input file here
RUN cat input | java -jar db-updater.jar http://dynamodb:8000
WORKDIR /opt/app
COPY . .
RUN mvn package
EXPOSE 8080
CMD ["java","-cp","./app/target/app-1.0.0.jar:./app/target/lib/*"]
My problem is that it looks like DynamoDB does not start before the script runs, so the script throws an error saying it can't connect to the server.
If I could build a custom DynamoDB image with the script already executed, that would also be great. Please help.
Commands in a Dockerfile can never interact with other Docker containers. The general pattern is that an image is built once and reused, so you could delete and recreate your DynamoDB container, or run the same image on a different system, and the database setup wouldn't have happened. Mechanically, the Dockerfile runs in an environment where it's not connected to the Compose networking system, so attempts at connecting between containers will generally fail with a "no such host" error.
A typical pattern is to use an entrypoint script to do first-time setup when the container launches. For example, you could write a simple shell script:
#!/bin/sh
# Seed the database
java -jar db-updater.jar http://dynamodb:8000 < input
# Run whatever the main container command is
exec "$#"
You can then include this in your Dockerfile:
# probably included in the `COPY . .` line
COPY entrypoint.sh .
# must be JSON-array syntax
ENTRYPOINT ["./entrypoint.sh"]
# replaces `RUN java -jar db-updater.jar`; same command as in the current Dockerfile
CMD ["java", "-cp", ...]
If you only need this to run once when you first set up the container stack, you could also seed the data on your host.
# Outside Docker
aws s3 cp ... db-updater.jar
./make-seed-data.sh > input
# Start the DynamoDB container (only)
docker-compose up -d dynamodb
# Load the seed data
java -jar db-updater.jar -url http://localhost:8000 < input
# Now start the rest of the application
docker-compose up -d
This would let you remove the code to build the input file and download the updater tool from your Dockerfile. It would also let you remove the AWS credentials from the build sequence (very important: it may be possible to find them in plain text looking at the image's docker history).
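As an illustration of that last point, anyone who can pull the image can usually recover build arguments from the layer metadata; the image name here is a placeholder:
# Build args used by RUN steps are recorded in the image history
docker history --no-trunc my-app:latest | grep AWS_SECRET_ACCESS_KEY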

Running test with testcontainers as part of a Dockerfile

My dockerfile looks something like this:
FROM maven:3-jdk-11-slim
COPY pom.xml .
COPY src src
RUN mvn clean install
That means that part of the build is the execution of the unit tests. Some of the unit tests use a testcontainer. Running mvn clean install on my local machine works fine, but running docker build . -t my-app doesn't because the testcontainers won't start.
(...)
15:54:38.793 [ducttape-0] DEBUG org.testcontainers.dockerclient.DockerClientProviderStrategy - Pinging docker daemon...
15:54:38.794 [ducttape-0] DEBUG com.github.dockerjava.core.command.AbstrDockerCmd - Cmd: org.testcontainers.dockerclient.transport.okhttp.OkHttpDockerCmdExecFactory$1#355cb260
15:54:39.301 [ducttape-0] DEBUG org.testcontainers.dockerclient.DockerClientProviderStrategy - Pinging docker daemon...
15:54:39.301 [ducttape-0] DEBUG com.github.dockerjava.core.command.AbstrDockerCmd - Cmd: org.testcontainers.dockerclient.transport.okhttp.OkHttpDockerCmdExecFactory$1#1c1a1359
15:54:39.469 [main] ERROR org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy - ping failed with configuration Environment variables, system properties and defaults. Resolved dockerHost=unix:///var/run/docker.sock due to org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
(...)
I've seen examples of running docker run with working testcontainers, but how do I make my docker build work?
Help is much appreciated.
For future reference: I believe this is simply not possible.
docker run allows you to mount the Docker socket (and thus access the host's Docker daemon) with -v /var/run/docker.sock:/var/run/docker.sock.
docker build doesn't support such an argument.
My workaround will be to modify my Dockerfile to
RUN mvn clean install -Dmaven.test.skip=true and run the unit tests separately.
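Concretely, the build stays roughly as before with the tests skipped (a sketch):
FROM maven:3-jdk-11-slim
COPY pom.xml .
COPY src src
# Skip the tests here; there is no Docker daemon available during `docker build`
RUN mvn clean install -Dmaven.test.skip=true
and the Testcontainers-based tests then run outside the image build, either directly on the host with mvn test, or in a container that mounts the host's Docker socket (the mounted paths below are assumptions):
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/app -w /app \
  maven:3-jdk-11-slim mvn test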
In the Dockerfile, you need to have Docker installed to run Testcontainers.
I learned this from one of the existing projects at my workplace; hopefully it can give you some hints to solve your problem.
RUN curl -fsSL get.docker.com | sh
RUN curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && \
chmod +x /usr/local/bin/docker-compose
In the docker-compose file,
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  - /mnt/cachedata/.m2:/root/.m2
  - .:/app

ADD failed: stat /var/lib/docker/tmp/docker-builderXYZ/myapp.jar: no such file or directory

I put the Dockerfile in the project root, meaning there's a ./target directory in which Maven generates a spring-boot-web-0.0.1-SNAPSHOT.jar file.
I now want to add that to a Docker image:
FROM centos
RUN yum install -y java # `-y` defaults questions to 'yes'
VOLUME /tmp # where Spring Boot will store temporary files
WORKDIR / # self-explanatory
ADD /target/spring-boot-web-0.0.1-SNAPSHOT.jar myapp.jar # add fat jar as "myapp.jar"
RUN sh -c 'touch ./myapp.jar' # updates dates on the application (important for caching)
EXPOSE 8080 # provide a hook into the webapp
# run the application; the `urandom` gets tomcat to start faster
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/myapp.jar"]
This ADD fails:
ADD failed: stat /var/lib/docker/tmp/docker-builder119635304/myapp.jar: no such file or directory
The solution seems to be moving the comments onto their own lines, as they break the commands when they share a line with them.
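As far as I can tell, the reason is that Docker only treats # as a comment marker at the start of a line; anywhere else it is passed through as part of the instruction, so the trailing comment becomes extra arguments. The failing line is effectively parsed as something like:
ADD /target/spring-boot-web-0.0.1-SNAPSHOT.jar myapp.jar "#" add fat jar as "myapp.jar"
With several sources, the last argument is taken as the destination and everything before it as sources, and there is no myapp.jar in the build context, which matches the stat error above.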
This Dockerfile works perfectly fine:
# a linux runtime environment
FROM centos
# install java; `-y` defaults questions to 'yes'
RUN yum install -y java
# where Spring Boot will store temporary files
VOLUME /tmp
# self-explanatory
WORKDIR /
# add fat jar as "myapp.jar"
ADD /target/spring-boot-web-0.0.1-SNAPSHOT.jar myapp.jar
# updates dates on the application (important for caching)
RUN sh -c 'touch ./myapp.jar'
# provide a hook into the webapp
EXPOSE 8080
# run the application; the `urandom` gets tomcat to start faster
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/myapp.jar"]

Connecting a Spring Boot Application with a Dockerized MySQL database

First, I have dockerized a datasource and linked it to a network, following the guideline here: https://github.com/SimonGirardSfeir/MySQL-Docker
Create a network
docker network create -d bridge mybridge
Link the image of the database to the network
docker run -d --net=mybridge --name=db docker-mysql
I go into the database in the shell to add the appropriate properties (username, password, name of the database), and it all works fine.
Now, I am trying to link a Spring Boot application with the following properties:
spring.jpa.hibernate.ddl-auto=update
spring.datasource.url=jdbc:mysql://db:3306/db_example
spring.datasource.username=springuser
spring.datasource.password=ThePassword
# Keep the connection alive if idle for a long time (needed in production)
spring.datasource.testWhileIdle = true
spring.datasource.validationQuery = SELECT 1
And the Dockerfile is the following:
# Start with a base image containing Java runtime
FROM java:8-jre
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
# The application's jar file
ARG JAR_FILE=build/libs/mysql-0.0.1-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} mysql.jar
RUN apt-get update
RUN apt-get install mysql-client -y
RUN apt-get install libmysql-java -y
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/mysql.jar"]
I build from the Dockerfile, and that works fine.
Then I launch the following command:
docker run --net=mybridge --name=app myappimage
But, the program is unable to connect to the database and the logs show:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure
I don't understand where the problem is.
My sources are:
https://docs.docker.com/v17.03/engine/tutorials/networkingcontainers/#add-containers-to-a-network
https://spring.io/guides/gs/accessing-data-mysql/
https://github.com/moby/moby/issues/25562

Dockerized Java app dies with no error message but runs fine standalone

Please note: I know this question is very similar to this one; however, you'll note that the solution in that case was to EXPOSE the port, which I am already doing. Hence, although this question sounds similar, I think it's simply a different problem altogether with similar symptoms to the other question.
Docker Version 17.12.0-ce-mac49 (21995) here. I am experimenting with Docker for the first time and have built my first Docker image. My Dockerfile is:
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
ADD application.yml /opt/myapp
ADD logback.groovy /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT java -Dspring.config=. -jar myapp.jar
I build it via:
docker build -t myapp .
Everything succeeds. I then tag it as if I'm going to push it to Quay:
docker tag <imageId> quay.io/myregistry/myapp:0.1.0-SNAPSHOT
However before I publish to Quay I want to run it locally to make sure it works:
docker run -it -p 9200:9200 -d --env-file /Users/myuser/myapp-local.env --name myapp myapp
When I run this I get an indication that the container is running, and I can even see it for a few seconds via docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3fa8f7a4288 myapp "/bin/sh -c 'java -D…" Less than a second ago Up 7 seconds 0.0.0.0:9200->9200/tcp myapp
However after a few seconds it stops running and disappears from docker ps altogether:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Furthermore I'm not able to SSH into the container:
docker exec -it f3fa8f7a4288 bash
Error: No such container: f3fa8f7a4288
...or see any logs/console output.
When I run myapp.jar outside of Docker (as a typical Spring Boot app), it starts up and runs beautifully without exceptions. How can I troubleshoot what is going on?
The docker logs command will show you the output a container is generating when you run it detached (with -d). This is likely to include the error message.
docker logs --tail 50 --follow --timestamps container
You can run the image in the foreground without the -d to see the output like when you run myapp.jar outside of Docker.
docker run my/image
So in this specific case:
docker run -it -p 9200:9200 --env-file /Users/myuser/myapp-local.env --name myapp myapp
If I am not mistaken, the issue you are experiencing is because you are using the shell form of ENTRYPOINT. Change it to use the exec version, as follows:
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
The shell form launches Java as a child of /bin/sh, so PID 1 is the shell rather than the Java process; when that shell exits, Docker considers the container finished, and signals sent to the container never reach Java. With the exec form, the Java process itself runs as PID 1 and the container keeps running as long as it does.
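If you want to double-check which form an image ended up with, the configured entrypoint is stored in the image metadata (myapp here is the image tag built above):
# Shell form shows up as ["/bin/sh","-c","java ..."]; exec form as ["java","-Dspring.config=.","-jar","myapp.jar"]
docker inspect --format '{{json .Config.Entrypoint}}' myapp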
