We have a Java Spring Boot application that runs in a Docker container. It is based on openjdk:13-jdk-alpine. We deploy it to Linux machines, but we are also able to run it locally on Windows machines, as well as on an Intel-based iMac.
We have found, though, that it cannot run properly on an ARM-based MacBook Pro. The exceptions we get are basic Java errors like "Can't find symbol Java.class[]," and other errors that make it look like the JVM itself is broken.
Is there a way to build a Docker image that will work on all these platforms, including the M1 MacBook Pro?
I have had a lot of problems with Java containers on my M1 MacBook as well. For your problem, you may need to build your own Docker image:
Dockerfile
FROM --platform=linux/arm64/v8 ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
EXPOSE 8080
RUN apt update \
&& apt upgrade -y \
&& apt install -y openjdk-13-jre git \
&& apt clean
RUN mkdir -pv /app && cd /app && \
git clone https://github.com/spring-guides/gs-spring-boot.git && \
cd /app/gs-spring-boot/initial && ./gradlew build
WORKDIR /app/gs-spring-boot/initial
ENTRYPOINT [ "./gradlew", "bootRun" ]
Build image
docker build -t test .
Run container
docker run --rm -p 8080:8080 test
Go to http://localhost:8080/ in your browser and your Spring Boot application is running without Rosetta 2.
Disclaimer: I'm not a Java developer and my Dockerfile is for proof-of-concept purposes only.
Remember that this Docker image is built for the ARM64 architecture. If you want to run the container on an Intel/AMD processor, change the first line of the Dockerfile to FROM --platform=linux/amd64 ubuntu:20.04.
I made it work with the following image.
I pulled the image with
docker pull bellsoft/liberica-openjdk-alpine-musl:17
My Dockerfile:
FROM bellsoft/liberica-openjdk-alpine-musl:17
ADD build/libs/app-0.0.1-SNAPSHOT-plain.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
Now the docker build command works.
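For completeness, the image can then be built and run in the usual way (the image tag and the 8080 port mapping here are placeholders, assuming a typical Spring Boot service):
docker build -t myapp .
docker run --rm -p 8080:8080 myapp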
Build your images with multi-arch support to avoid any architecture failures in the future. To do this cleanly, avoid anything platform-specific in your Dockerfile; plain old-school Dockerfiles are fine.
If you are using GitHub and GitHub Actions, you may check this to build your images and push them to your image repository. The same approach can also be used for building images that work on Raspberry Pi-like SBCs.
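As a rough sketch of such a multi-arch build with Docker Buildx (the builder, image, and registry names are placeholders, and pushing to a registry is assumed):
# create and select a builder instance that can build for multiple platforms
docker buildx create --name multiarch --use
# build for amd64 and arm64 in one go and push the resulting manifest list
docker buildx build --platform linux/amd64,linux/arm64 -t myregistry/myapp:latest --push .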
It's because the image is not available for the M1 yet; you can cross-build it for that platform yourself and run it:
docker build --platform=linux/arm64 -t image:latest .
Related
I am playing around with a docker project that builds and starts with
docker run -p 8888:8888 -v /$(pwd)/example/proto:/proto <image-name>
Inside is a Gradle-based Java application that I would like to get to know a bit better, so I started modifying its source, adding some logs, etc.
I tried to rebuild and rerun the Docker image the same way, but my modifications don't seem to be visible; the logs aren't printed, etc.
I removed the image with docker rmi, but after every rebuild the same image seems to be created: docker images always shows it was created 3 weeks ago, and the image ID is always the same.
At the application level, the build directory contains the newly compiled Java classes, so my changes are in effect there, but Docker still seems to use the old code.
Any help would be appreciated
Updated: Dockerfile
FROM gradle:7.0.0-jdk11 as cache
RUN mkdir -p /home/gradle/cache_home
RUN mkdir -p /proto
RUN touch /proto/any.proto
ENV GRADLE_USER_HOME /home/gradle/cache_home
COPY build.gradle /home/gradle/java-code/
COPY gradle.properties /home/gradle/java-code/
WORKDIR /home/gradle/java-code
RUN gradle build -i --no-daemon || return 0
FROM gradle:7.0.0-jdk11 as runner
COPY --from=cache /home/gradle/cache_home /home/gradle/.gradle
COPY . /usr/src/java-code/
WORKDIR /usr/src/java-code
EXPOSE 8888
ENTRYPOINT ["gradle", "bootRun", "-i"]
A docker build will send your local changes to your local Docker daemon to be built into an image.
cd projectWithDockerfile
docker build -f ./Dockerfile -t me/gradlethingy .
docker run -p 8888:8888 -v /$(pwd)/example/proto:/proto me/gradlethingy
Without the build step, I'm guessing you are pulling their <image-name> from the net each time.
I'm trying to figure out how and when to run the MyBatis schema migrations from a Docker container deployed inside a Docker Swarm; in other words, I'm looking for the most correct way to do that.
At the moment we build a Docker image from this Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y \
openjdk-11-jre \
openjdk-11-jdk \
maven
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
COPY start.sh start.sh
RUN chmod +x start.sh
ENTRYPOINT ["/bin/sh","start.sh"]
then the start.sh script contains
mvn resources:resources migration:up -Dmigration.path="target/classes/migrations" -Dmigration.env=development -Papply_migrations
java -jar /app.jar
But this way we have to build an image from Ubuntu, install Maven, and launch the migrations with the environment "hardcoded" in the start.sh file, so we need different files for different environments.
What do you think is the most correct method to run these scheme migrations during the build/deployment process?
Thanks in advance.
EDIT:
I've found the solution posted by @h3adache, using the MyBatis Migrations Docker image from Docker Hub, useful, but I still have an issue trying to execute it on Docker Swarm: the issue is related to the volume mounted between the host folder containing the MyBatis migration files and the container folder /migration:
-v $PWD:/migration
My docker-compose.yml is
mybatis-migration:
image: mybatis/migrations
volumes:
- ./mybatis-migrations:/migration
command:
- up
It works fine locally against a dockerized MySQL but fails during the deploy with a GitLab pipeline.
The ./mybatis-migrations folder is, obviously, on my local host when I check out the code, and it is in the build path when the GitLab runner builds everything, but it is not on the Docker Swarm host, so Docker is unable to find that directory.
This is the error message:
invalid mount config for type "bind": bind source path does not exist
How can I fix this?
Let's look at the problem with Maven first. I understand that you (quite rightfully) don't want to install Maven (and probably not the JDK either).
There are two ways to achieve what you need.
Runtime Schema Upgrade
You can run the migration right from your application when it starts. It can be run from the main method, or from a custom javax.servlet.ServletContextListener if you deploy a web application.
Here's how it may look (the imports are added for completeness, and dataSource is assumed to be your application's javax.sql.DataSource):
import org.apache.ibatis.migration.DataSourceConnectionProvider;
import org.apache.ibatis.migration.JavaMigrationLoader;
import org.apache.ibatis.migration.operations.UpOperation;
new UpOperation().operate(
    new DataSourceConnectionProvider(dataSource),
    new JavaMigrationLoader("mycompany.migration.script"), null, null);
Check the documentation for details on how to configure this.
This only requires adding the mybatis-migrations library to the dependencies of your project (which you might already have).
Using mybatis migrations library directly
The other way is to run mybatis migrations directly, that is, without Maven. This can be done by installing the library inside Docker as described in the documentation. Note that you only need the library itself and a JRE, so no JDK or Maven is required.
Then you can run the migration using the migrate script that is part of the distribution archive.
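For example (the repository path and environment name below are assumptions), once the distribution archive is unpacked into the image, the script can be invoked like this:
# apply all pending migrations from the given repository against the chosen environment
./bin/migrate up --path=/app/migrations --env=development
# check which changes have already been applied
./bin/migrate status --path=/app/migrations --env=development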
Environment
To fix this, you can pass the environment as a parameter to the Docker container that runs start.sh. One option is to use an environment variable via the --env option of docker service create or docker run. A variable passed this way can be accessed as a regular Linux environment variable in your start.sh.
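A minimal sketch of this approach (the variable name MIGRATION_ENV and the image name are assumptions, not something MyBatis defines):
# pass the target environment when creating the service (or with docker run)
docker service create --env MIGRATION_ENV=development --name myapp myregistry/myapp:latest
# start.sh then reads the variable instead of hardcoding the environment
mvn resources:resources migration:up -Dmigration.path="target/classes/migrations" -Dmigration.env="${MIGRATION_ENV}" -Papply_migrations
java -jar /app.jar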
I suggest you follow the guide I posted on Medium, which uses the official MyBatis Migrations Docker Hub image.
It gives you an 'out of the box' Docker experience and allows you to target different environments (as mentioned in my post).
tl;dr
Use https://hub.docker.com/r/mybatis/migrations for your base image.
This gives you migration commands out of the box
Instead of using the .gitlab-ci.yml from the post, you can add the action (e.g. migrate up) as your Docker image entrypoint or command.
You can control the environment or directly affect the parameters used via docker --env,
e.g.:
docker run \
--rm \
--env "MIGRATIONS_URL=jdbc:mysql://$(hostname):3306/mb_migration" \
-v $PWD:/migration \
-it mybatis/migrations status
I have a Dockerfile which looks like this:
FROM alpine:3.9
RUN apk add --update openjdk8
RUN mkdir /var/generator/
COPY generator.jar /var/generator
EXPOSE 8080
ENTRYPOINT [ "/bin/sh" ]
The Dockerfile is inside the generator/ folder. I am building it using:
docker build -t generator generator/
It builds successfully:
Successfully built 878e81f622cc
Successfully tagged generator:latest
but when I try to run this image with
docker run -d -p 8080:8080 generator
it dies immediately. docker logs gives no output.
What is wrong with my Dockerfile? Why is the container dying?
Try running the JAR. Currently, the container just runs the sh command and exits. Make it something like the following to run the JAR in the foreground:
FROM alpine:3.9
RUN apk add --update openjdk8
RUN mkdir /var/generator/
COPY generator.jar /var/generator
EXPOSE 8080
ENTRYPOINT ["java","-jar","/var/generator/generator.jar"]
Besides your entrypoint being wrong (sh exits immediately), I would also recommend starting from an appropriate base image instead of starting with Alpine and installing the openjdk package. Since you want to run a Java application, use just the JRE rather than a full JDK, and start the application as a foreground process.
Here's a minimal version that is also more efficient in disk size, as the image will be smaller.
FROM openjdk:8-jre-alpine
COPY generator.jar /opt/generator.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/opt/generator.jar"]
We have Java code that runs the curl command to fetch some results.
We have built a JAR file, and the JAR executes fine.
Now, when we try to dockerize the Java program (using the JAR) and run the application in Docker, we get this error:
errorjava.io.IOException: Cannot run program "curl": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at com.ps.api.common.CoreAPI_Spec.executeCoreAPI(CoreAPI_Spec.java:295)
at com.ps.api.common.CoreAPI_Spec.getAccessTokens(CoreAPI_Spec.java:319)
Dockerfile used:
FROM ubuntu:16.04
MAINTAINER niro;
# Install prerequisites
RUN apt-get update && apt-get install -y \
curl
FROM java:8-jdk-alpine
# Set the working directory to /app
WORKDIR /Users/******/Desktop/CoreAPI_Jar
# Copy the current directory contents into the container at /app
ADD *******_Automation-0.0.1-SNAPSHOT-jar-with-dependencies.jar ******_Automation-0.0.1-SNAPSHOT-jar-with-dependencies.jar
# Run app.py when the container launches
CMD ["java", "-jar", "******-0.0.1-SNAPSHOT-jar-with-dependencies.jar"]
The Java base image you are using is an Alpine Linux one, so the curl package also needs to be installed from Alpine's repositories. Here is a Dockerfile I have used for production deployments:
FROM openjdk:8-jre-alpine
RUN apk add --update \
curl \
&& rm -rf /var/cache/apk/*
Update 05/2019
As of Alpine Linux 3.3 there exists a new --no-cache option for apk. It allows users to install packages with an index that is updated and used on-the-fly and not cached locally:
FROM openjdk:8-jre-alpine
RUN apk --no-cache add curl
This avoids the need to use --update and remove /var/cache/apk/* when done installing packages.
Reference: https://github.com/gliderlabs/docker-alpine/blob/master/docs/usage.md. Thank you @Daniel for the comment.
Your example Dockerfile contains multiple FROM statements. This is valid, but as the documentation says, each FROM clears any state created by the previous instructions, so the freshly installed curl is gone after the second FROM.
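If curl really is required in the final image, it has to be installed in the last (and here, only) stage. A sketch combining the question's JAR with the Alpine-based image from the answer above (the base image choice and target paths are assumptions):
FROM openjdk:8-jre-alpine
RUN apk --no-cache add curl
WORKDIR /app
ADD *******_Automation-0.0.1-SNAPSHOT-jar-with-dependencies.jar app.jar
CMD ["java", "-jar", "app.jar"]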
Most languages have readily available HTTP clients these days; you should almost never be calling out to curl from a program in a language more sophisticated than a shell script. java.net.URLConnection has been a part of Java since Java 1.0 and (without knowing why you're trying to shell out for this) it's almost definitely the right tool here.
Assuming you control the executeCoreAPI method from your backtrace, you should change it to use the built-in Java HTTP client, and just delete all of the Dockerfile parts that try to install curl.
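As a rough sketch of what that could look like with the HttpURLConnection API available on Java 8 (the class name, URL handling, and headers here are illustrative placeholders, not your actual code):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

public class HttpExample {
    // Performs a simple GET request and returns the response body as a string.
    static String get(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining("\n"));
        } finally {
            conn.disconnect();
        }
    }
}
This keeps the HTTP call inside the JVM, so the image no longer needs curl (or the extra FROM stage) at all.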
Mac 10.10.5 here, using docker-machine to create a VirtualBox host VM for my local Docker. I have a project that builds an executable JAR located at build/libs/myapp-SNAPSHOT.jar. My Dockerfile, which is located in the root of the project, looks like:
FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD build/libs/myapp-SNAPSHOT.jar myapp.jar
RUN sh -c 'touch /myapp.jar'
ENTRYPOINT ["java","-jar","/myapp.jar"]
Please note, I don't wish to push my images to any registry, just keep/run them locally (for now). When I run:
docker build -t myorg/myapp .
I get the following console output:
myuser#mymachine:~/sandbox/myapp$docker build -t myorg/myapp .
Sending build context to Docker daemon 42.69 MB
Step 1 : FROM frolvlad/alpine-oraclejdk8:slim
slim: Pulling from frolvlad/alpine-oraclejdk8
d0ca440e8637: Downloading [=================================================> ] 2.295 MB/2.32 MB
0f86278f6be1: Downloading [=================================================> ] 3.149 MB/3.172 MB
c704a6161dca: Download complete
And then the command-line just hangs after printing that "Download complete" message. I've waited for as long as 30 minutes (!!!) and nothing happens.
Any ideas where I'm going awry?
The VM is probably hanging. Try the following: https://github.com/docker/machine/issues/1819#issuecomment-138981139
docker-machine rm -f default
rm -fv ~/.docker/machine
docker-machine -D create -d virtualbox default
There are more issues reported about this on OS X.
I think the best practice is to set up a native Linux build box if you are doing any serious development. That way you can run Docker without any VM overhead (which is, ironically, one of the major pain points Docker is trying to solve).
There's also a Docker Beta program which runs on libcontainer natively on OSX and Windows.