I'm attempting to build an image for my application using a multi-stage Dockerfile with Gradle and OpenJDK as base images.
Here's the Dockerfile:
ARG JAVA_VERSION
FROM gradle:7.5.1 as build
# Change the work directory to the app root
WORKDIR /app
# Copy all source contents to the build directory
COPY . .
# Execute the build
RUN gradle build
# Create a production stage
FROM openjdk:${JAVA_VERSION}-slim as app
# Change the work directory to the app root
WORKDIR /app
# Declare the version for use in the next step
ARG APP_VERSION
# Copy the JAR from the build directory in the previous stage
COPY --from=build /app/build/libs/app-jvm-$APP_VERSION.jar app-$APP_VERSION.jar
# Execute the startup command
CMD [ "java", "-jar", "app-$APP_VERSION.jar" ]
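Note that the exec-form CMD above is not run through a shell, so $APP_VERSION is not expanded at run time, and build ARGs are not available in the running container in any case. A minimal sketch of one workaround, assuming a fixed jar name is acceptable, is to rename the JAR during the copy:
# hedged sketch: copy to a fixed name so CMD needs no variable expansion
COPY --from=build /app/build/libs/app-jvm-${APP_VERSION}.jar app.jar
CMD [ "java", "-jar", "app.jar" ]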
The script I'm using to export environment variables and subsequently run the buildx bake command is:
#!/bin/bash
# shellcheck disable=SC2046
#
# This script is intended to deploy all Docker images to the registry
# 1. Read all build environment variables from .env
# 2. Login to the Docker registry
# 3. Build and push all images with Docker buildx
#
# Export all the environment variables that are present in .env
echo "Exporting all variables from .env"
export $(grep -v '^#' .env | xargs -d '\n')
# Login to the registry if it ends in .com
if [[ $REGISTRY == *.com ]]; then
  echo "Logging in to https://$REGISTRY Docker registry"
  docker login https://"$REGISTRY"
fi
# Build and push with buildx bake using the docker compose file
echo "Building and pushing images"
docker buildx bake -f docker-compose.yml --push
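As a side note, a useful way to see what bake actually resolves from the compose file (build args, platforms, tags) without building anything is the --print flag:
# dry run: print the resolved bake configuration as JSON, no build or push
docker buildx bake -f docker-compose.yml --print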
When building the image with docker compose build, everything works.
The issue is that when running the script (or the command directly after exporting the variables), errors are produced in unexpected places.
With the Dockerfile above, running the command produces the following error (for what it's worth, making the variable explicit changes nothing either):
--------------------
14 |
15 | # Create a production stage
16 | >>> FROM openjdk:${JAVA_VERSION}-slim as app
17 |
18 | # Change the work directory to the app root
--------------------
error: failed to solve: openjdk:17.0.2-slim: no match for platform in manifest sha256:aaa3b3cb27e3e520b8f116863d0580c438ed55ecfa0bc126b41f68c3f62f9774: not found
If I change the base image to some other Java-based image such as gradle:7.5.1, then I get another error:
------
> [app linux/arm/v7 build 4/4] RUN gradle build:
#0 1.199 org.gradle.api.internal.classpath.UnknownModuleException: Cannot locate JAR for module 'ant' in distribution directory '/opt/gradle'.
#0 1.202 at org.gradle.api.internal.classpath.DefaultModuleRegistry.loadExternalModule(DefaultModuleRegistry.java:109)
#0 1.202 at org.gradle.api.internal.classpath.DefaultModuleRegistry.getExternalModule(DefaultModuleRegistry.java:97)
#0 1.202 at org.gradle.api.internal.DefaultClassPathProvider.findClassPath(DefaultClassPathProvider.java:70)
#0 1.202 at org.gradle.api.internal.DefaultClassPathRegistry.getClassPath(DefaultClassPathRegistry.java:35)
#0 1.202 at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:48)
#0 1.203 at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:37)
#0 1.203 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
#0 1.203 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
#0 1.203 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
#0 1.203 at java.base/java.lang.reflect.Method.invoke(Method.java:568)
#0 1.204 at org.gradle.launcher.GradleMain.main(GradleMain.java:34)
------
--------------------
9 |
10 | # Execute the build
11 | >>> RUN gradle build
12 |
13 | # Create a production stage
--------------------
error: failed to solve: process "/bin/sh -c gradle build" did not complete successfully: exit code: 1
Further examination of the gradle:7.5.1 image reveals that /opt/gradle does indeed have the Ant modules available.
Since the build succeeds and none of these issues appear when running docker compose build, it seems this has to do with buildx.
I'm running Windows 10 with WSL Debian. Note that I've also tried restarting Docker, clearing caches, enabling experimental mode, and even running it on a different machine (an Ubuntu server).
If anyone has any insight or possible solution I'd appreciate it!
EDIT:
I've realized that I completely missed the platform error on arm/v7 for OpenJDK. It looks like there isn't an OpenJDK image for arm/v7, which leaves me having to use eclipse-temurin instead.
The Gradle error is similar in that the ant module may be missing from the specific arm/v7 image. However, I'm unsure how to get past that particular issue, other than possibly installing Ant on the Gradle image if there is an arm/v7 version.
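For what it's worth, a sketch of the direction this points in; the tag format and service name below are assumptions. Swap the runtime base image, assuming eclipse-temurin publishes a matching tag for the Java version in use:
FROM eclipse-temurin:${JAVA_VERSION}-jre as app
and, if the arm/v7 Gradle issue can't be worked around, restrict the platforms that bake builds from the compose file (recent buildx versions read the platforms list; older ones use the x-bake extension instead):
services:
  app:
    build:
      context: .
      platforms:
        - linux/amd64
        - linux/arm64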
Related
I have a simple Micronaut server that I am trying to launch on Heroku by building it with a heroku.yml, but for some reason, when I check the logs, the process exits as soon as it starts. The Docker image runs just fine locally, and nothing else prints out in the logs, so I can't find out why.
Here is my Dockerfile
FROM node as build-frontend
WORKDIR /app
COPY frontend/myfrontend/package.json .
COPY frontend/myfrontend/public ./public
COPY frontend/myfrontend/src ./src
RUN npm install .
RUN npm run build
FROM gradle:6.8.2-jre11 AS build-env
# Set the working directory to /home
WORKDIR /home
COPY --chown=gradle:gradle backend ./
COPY --from=build-frontend /app/build /home/src/main/resources/public
# Compile the application.
RUN ./gradlew assemble
FROM openjdk:11.0.10-jre-slim-buster
# Set the working directory to /home
WORKDIR /home
# Copy the compiled files over.
COPY --from=build-env /home/build/libs/myjar-0.1-all.jar /home/myjar.jar
# Starts the java app
# Using ENTRYPOINT to pass env variables as described in this article
ENTRYPOINT exec java -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -noverify -XX:+AlwaysPreTouch -jar myjar.jar
Here is my heroku.yml
setup:
  addons:
    - plan: heroku-postgresql
      as: DATABASE
build:
  docker:
    web: Dockerfile
run:
  web: "exec java -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -noverify -XX:+AlwaysPreTouch -jar myjar.jar"
and finally my Micronaut application.yml which just sets some configs
micronaut:
  application:
    name: mypackage
  router:
    static-resources:
      default:
        enabled: true
        paths:
          - classpath:public
        mapping: /**
  server:
    port: ${PORT:8080}
    cors:
      enabled: true
datasources:
  default:
    driverClassName: org.postgresql.Driver
    dbUrl: jdbc:postgresql://mydbhost.com:port/dbname?sslmode=require
    dbUsername: dbuser
    dbPassword: dbpasswd
    scan:
      packages: mypackage
netty:
  default:
    allocator:
      max-order: 3
When I just do docker build -t test-image:latest . and docker run -p 80:8080 test-image:latest, I can connect just fine on localhost and the container runs the Micronaut server. If it fails for some reason, I see output in the container logs detailing why. When I upload this to Heroku (through a GitHub deployment), all I see in the logs is:
2023-01-09T22:25:50.145942+00:00 heroku[web.1]: Starting process with command `/bin/sh -c exec\ java\ -XX:\+UseContainerSupport\ -XX:MaxRAMPercentage\=80.0\ -noverify\ -XX:\+AlwaysPreTouch\ -jar\ myjar.jar`
2023-01-09T22:25:51.381760+00:00 heroku[web.1]: Process exited with status 0
2023-01-09T22:25:51.439962+00:00 heroku[web.1]: State changed from starting to crashed
I have tried:
- Running it locally connected to the Heroku addon Postgres database; it works just fine
- Simplifying the build as much as possible
- Removing the default on the port (to ensure it picks up $PORT) and running it locally with export PORT=8080 set; it works just fine, and the container picks up the env port as we expect on Heroku
And I cannot figure out why it just quits immediately on Heroku. Edit: I originally thought it had to do with the port value, but I figured out how to give Micronaut a port through the command line and it still doesn't work on Heroku (it works locally).
edit:
I tried hardcoding the DB in my application.yml and changing my heroku.yml to this, and still nothing. It just does not seem to work; the app still crashes and there is nothing in the logs to indicate why.
setup:
  config:
    PORT: $PORT
    MICRONAUT_SERVER_PORT: $PORT
build:
  docker:
    web: Dockerfile
run:
  web: "exec java -Dmicronaut.server.port=$PORT -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -noverify -XX:+AlwaysPreTouch -jar myjar.jar"
I still have no extra output from Heroku. No stacktrace or stderr or stdout at all.
Well, it isn't much of an answer, and nothing in the Heroku docs points me to why this works, but removing ENTRYPOINT exec java -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -noverify -XX:+AlwaysPreTouch -jar myjar.jar from my Dockerfile works. With the ENTRYPOINT removed, I see logs again and can see my Micronaut server starting. With ENTRYPOINT defined in my Dockerfile, I just get what I posted above: "Starting process" followed immediately by the process crashing.
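For reference, a minimal sketch of the final runtime stage this leaves me with, assuming the run command in heroku.yml above supplies the start command:
FROM openjdk:11.0.10-jre-slim-buster
# Set the working directory to /home
WORKDIR /home
# Copy the compiled files over.
COPY --from=build-env /home/build/libs/myjar-0.1-all.jar /home/myjar.jar
# No ENTRYPOINT here; Heroku uses the run command from heroku.yml to start the app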
I'm trying to build a CI/CD pipeline with Bitbucket Pipelines.
My dockerized Spring Boot application should be pushed to my AWS ECR, but when the pipeline executes the docker commands it throws an error:
COPY failed: file not found in build context or excluded by .dockerignore: stat target/app.jar: file does not exist
Has anyone set up a similar pipeline who can help me out?
Here is my Dockerfile:
FROM openjdk:11
ARG JAR_FILE
COPY target/app.jar app.jar
EXPOSE 80
ENTRYPOINT ["java", "-jar", "app.jar"]
Here is my pipeline yaml config:
image: maven:3.6.3
pipelines:
  default:
    - step:
        name: Clean install
        script:
          - cd path.to.file
          - mvn clean install -DskipTests
    - step:
        name: Push to ECR
        services:
          - docker # Enable Docker for your repository
        script:
          # build the image
          - cd path.to.dockerfile
          - docker build -t app-devtest .
          # use the pipe to push the image to AWS ECR
          - pipe: atlassian/aws-ecr-push-image:1.4.2
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              IMAGE_NAME: app-devtest
              TAGS: 'latest'
The issue is with your Dockerfile. COPY target/app.jar app.jar works in your local workspace because the jar already exists there (it is created when you build the app). In the pipeline, each step runs in a fresh container, and the target/ directory produced by the first step is not carried over to the second step unless it is declared as an artifact, so app.jar does not exist when docker build runs.
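Purely for illustration, the other way to make the jar from the first step visible to the second would be to declare it as a step artifact in bitbucket-pipelines.yml; a minimal sketch reusing the placeholder path from your config, not the approach taken below:
- step:
    name: Clean install
    script:
      - cd path.to.file
      - mvn clean install -DskipTests
    artifacts:
      # files listed here are carried over to the following steps
      - path.to.file/target/*.jar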
To fix the issue within the Docker build itself, first tell Maven to name the jar "app" by adding the following to your project's pom.xml:
<build>
    <finalName>app</finalName>
</build>
Then build the project inside the docker build process and copy the jar from that stage. The Dockerfile should look something like this:
# first stage: use a Maven image to clean package the application
FROM maven:3.8.1-openjdk-11-slim AS MAVEN_BUILD
COPY . /home/myapp/
# build the jar file using Maven
RUN mvn -f /home/myapp/pom.xml clean package
# second stage: use an OpenJDK 11 JRE image for the runtime image
FROM adoptopenjdk/openjdk11:jre-11.0.11_9-alpine
# copy only the jar needed from the first stage and discard the rest
COPY --from=MAVEN_BUILD /home/myapp/target/app.jar /usr/local/lib/app.jar
EXPOSE 80
# set the startup command to execute the jar
CMD ["java", "-jar", "/usr/local/lib/app.jar"]
The above uses a two-stage Docker build, which keeps the final image lightweight.
This should be enough to solve the issue.
I put the Dockerfile into the project root, meaning there's a ./target directory in which Maven generates a spring-boot-web-0.0.1-SNAPSHOT.jar file.
I now want to add that to a docker image:
FROM centos
RUN yum install -y java # `-y` defaults questions to 'yes'
VOLUME /tmp # where Spring Boot will store temporary files
WORKDIR / # self-explanatory
ADD /target/spring-boot-web-0.0.1-SNAPSHOT.jar myapp.jar # add fat jar as "myapp.jar"
RUN sh -c 'touch ./myapp.jar' # updates dates on the application (important for caching)
EXPOSE 8080 # provide a hook into the webapp
# run the application; the `urandom` gets tomcat to start faster
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/myapp.jar"]
This ADD fails:
ADD failed: stat /var/lib/docker/tmp/docker-builder119635304/myapp.jar: no such file or directory
The solution seems to be moving the comments onto their own lines, as they break the instructions when they share a line with them.
This Dockerfile works perfectly fine:
# a linux runtime environment
FROM centos
# install java; `-y` defaults questions to 'yes'
RUN yum install -y java
# where Spring Boot will store temporary files
VOLUME /tmp
# self-explanatory
WORKDIR /
# add fat jar as "myapp.jar"
ADD /target/spring-boot-web-0.0.1-SNAPSHOT.jar myapp.jar
# updates dates on the application (important for caching)
RUN sh -c 'touch ./myapp.jar'
# provide a hook into the webapp
EXPOSE 8080
# run the application; the `urandom` gets tomcat to start faster
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/myapp.jar"]
I'm running a build of a Maven-based project, but it fails.
The reason is a "no such file or directory" error when it tries to find the jar.
Dockerfile:
FROM frolvlad/alpine-oraclejdk8:slim
FROM maven:3.5.2-jdk-8-slim
VOLUME /tmp
CMD ['mvn package']
ADD target/app-0.1.0-SNAPSHOT.jar app.jar <-- fails there
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
The log output:
...
Removing intermediate container 60da937dde8a
Step 4/8 : CMD ['mvn package']
---> Running in 8ba364ba9d98
---> 4a722569d1a7
Removing intermediate container 8ba364ba9d98
Step 5/8 : ADD target/app-0.1.0-SNAPSHOT.jar app.jar
ADD failed: stat /var/lib/docker/tmp/docker-builder1534563/target/app-0.1.0-SNAPSHOT.jar: no such file or directory
ERROR: Build failed: ADD failed: stat /var/lib/docker/tmp/docker-builder1534563/target/app-0.1.0-SNAPSHOT.jar: no such file or directory
ERROR: Build failed with exit code 2
I have played with different settings but it still doesn't work, even though the app name matches the built jar.
How can I fix the issue?
This question IMO has nothing to do with Spring Boot; it's Docker-related.
In general, please share more information about how exactly you run the docker build command, from which directory, and where exactly your Dockerfile resides. Without this information we can only speculate and provide generic answers.
To provide some points for consideration: Docker knows nothing about the Maven structure of your project, so you can just maintain the following layout:
<some_dir>
|____ Dockerfile
|____ app-0.1.0-SNAPSHOT.jar
Then you can run the docker build command from this directory and it should work. After that, experiment with the target directory; once you understand when it works and when it doesn't, go back to your current folder layout. With this exercise, chances are you'll find the answer very quickly.
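For example, assuming the layout above (the image tag below is just an illustration), the build is run from <some_dir>, and "." becomes the build context that ADD/COPY source paths are resolved against:
docker build -t my-app:test .
Inside the Dockerfile, the jar is then referenced relative to that context root:
ADD app-0.1.0-SNAPSHOT.jar app.jar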
The Bazel remote worker guide (here) explains how to start the remote worker locally and then run Bazel against it.
I tried it, and it did indeed work (with bugs that I reported on GitHub).
Another attempt was to run the remote worker on a separate virtual machine by running it inside a Docker container and pointing Bazel at it. But it failed in a different way, and I think this time I'm using it wrong.
Here's my docker file:
FROM openjdk:8
# install release bazel from apt
RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list
RUN curl https://bazel.build/bazel-release.pub.gpg | apt-key add -
RUN apt-get update && apt-get install -y zip bazel
# compile dev bazel from sources
RUN mkdir -p /usr/src/bazel
# "bazel" has the latest development code of bazel from github
COPY bazel /usr/src/bazel
WORKDIR /usr/src/bazel
RUN bazel build src/bazel
# compile remote_worker using latest development bazel
RUN bazel-bin/src/bazel build //src/tools/remote_worker
# prepare cache folder
RUN mkdir -p /tmp/test
# Run remote-worker
CMD ["bazel-bin/src/tools/remote_worker/remote_worker","--work_path=/tmp/test","--listen_port=3030"]
After building it, I simply ran the container, binding the port to localhost:
$ docker build -t bazel-worker .
$ docker run -p 3030:3030 bazel-worker
Then I ran a Bazel Java test against the remote worker (you can check out my test repo here):
$ bazel --host_jvm_args=-Dbazel.DigestFunction=SHA1 test \
--spawn_strategy=remote \
--remote_executor=localhost:3030 \
--remote_cache=localhost:3030 \
--strategy=Javac=remote \
--remote_local_fallback=false \
--remote_timeout=600 \
//src/main/java/com/example/...
But I got this weird error message:
____Loading package: src/main/java/com/example
____Loading package: @bazel_tools//tools/cpp
____Loading package: @local_jdk//
____Loading package: @local_config_xcode//
____Loading package: @local_config_cc//
____Loading complete. Analyzing...
____Loading package: tools/defaults
____Loading package: @bazel_tools//third_party/java/jdk/langtools
____Loading package: @junit//jar
____Found 1 test target...
____Building...
____[0 / 2] BazelWorkspaceStatusAction stable-status.txt
____[2 / 4] Creating source manifest for //src/main/java/com/example:my_test
____From Extracting interface @junit//jar:jar:
/tmp/test/build-80057300-ffd2-49ea-a20b-3f234d9963db/external/bazel_tools/tools/jdk/ijar/ijar: 1: /tmp/test/build-80057300-ffd2-49ea-a20b-3f234d9963db/external/bazel_tools/tools/jdk/ijar/ijar: �����0��!H__PAGEZEROx__TEXTpp__text__TEXT/��__stubs__TEXT0p�__stub_helper__TEXT���__gcc_except_tab__TEXT�: not found
/tmp/test/build-80057300-ffd2-49ea-a20b-3f234d9963db/external/bazel_tools/tools/jdk/ijar/ijar: 2: /tmp/test/build-80057300-ffd2-49ea-a20b-3f234d9963db/external/bazel_tools/tools/jdk/ijar/ijar: Syntax error: word unexpected (expecting ")")
ERROR: /private/var/tmp/_bazel_ors/719f891d5db9fd5e73ade25b0c847fd1/external/junit/jar/BUILD.bazel:2:1: output 'external/junit/jar/_ijar/jar/external/junit/jar/junit-4.12-ijar.jar' was not created.
ERROR: /private/var/tmp/_bazel_ors/719f891d5db9fd5e73ade25b0c847fd1/external/junit/jar/BUILD.bazel:2:1: not all outputs were created or valid.
____Building complete.
Target //src/main/java/com/example:my_test failed to build
Use --verbose_failures to see the command lines of failed build steps.
____Elapsed time: 13.614s, Critical Path: 0.21s
Am I doing anything wrong? Do I need to run it differently when running the remote worker on an actual (or virtual) remote machine (vs. just running it locally)?
Important to mention: my machine is a Mac running macOS Sierra. I believe the Docker openjdk:8 image is Ubuntu-based, and I'm running a local development version of Bazel (sha 956810b6ee24289e457a4b8d0a84ff56eb32c264).
Running the remote worker on a different architecture / OS combination than Bazel itself isn't working yet. We still have a couple of places in Bazel where we inspect the local machine - they were added as temporary measures, but haven't been fixed yet.
Edit: It may work in some cases, especially for platform-independent code (e.g., Java or Scala).
If your build is test-heavy, you could try only running tests remotely with --test_strategy=remote; I'm not sure if the default Jvm configuration will work, though.
If you want to run the entire build remotely, then you need to tell Bazel what kind of machines / OS it's executing on. Right now, that'd require setting --host_cpu, and probably --crosstool_top / --host_crosstool_top to configure a C++ compiler for that platform.
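For illustration only, assuming the worker container is x86-64 Linux, that might look something like the following; the flag values are assumptions and depend on your toolchain (add --crosstool_top / --host_crosstool_top if C++ actions are involved):
bazel --host_jvm_args=-Dbazel.DigestFunction=SHA1 test \
  --spawn_strategy=remote \
  --remote_executor=localhost:3030 \
  --remote_cache=localhost:3030 \
  --host_cpu=k8 \
  --cpu=k8 \
  //src/main/java/com/example/...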
Also, some combinations of platforms are more likely to work than others. In particular, combining macOS and Linux, or different flavors of Linux, is much more likely to work than Windows in any combination.