I built a Docker container for my JBoss application. The container runs in AWS EKS Fargate. I found that an OutOfMemory error occurred after the application had been running for several minutes. However, everything is fine if the container runs as the root user rather than as the created user.
My Dockerfile:
FROM openjdk:8u342-oraclelinux8
ENV WILDFLY_VERSION 11.0.0.Final
ENV JBOSS_HOME /usr/local/jboss_api
ENV HOME /usr/local
RUN groupadd -r jboss && useradd -ms /bin/bash -l -r -g jboss jboss \
&& chown jboss /usr/local
USER jboss
WORKDIR ${HOME}
COPY ./wildfly .
RUN tar -xvf wildfly-$WILDFLY_VERSION.tar.gz
RUN rm ./wildfly-$WILDFLY_VERSION.tar.gz
RUN mv ./wildfly-$WILDFLY_VERSION ./jboss_api \
&& mkdir ${JBOSS_HOME}/standalone/configuration/properties
WORKDIR ${JBOSS_HOME}
COPY ./script ./bin
COPY ./properties ./standalone/configuration
COPY ./configurations/standalone.conf ./bin/standalone.conf
COPY ./ROOT.war ./standalone/deployments/ROOT.war
WORKDIR ${JBOSS_HOME}
CMD ./bin/server.sh start
I figured out there is a memory issue with the JVM, so -Xmx2500m is set in the Java config.
-XX:+UnlockExperimentalVMOptions and -XX:+UseCGroupMemoryLimitForHeap were also added after reading some reference articles, but the problem still isn't solved.
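The memory-related part of bin/standalone.conf is roughly this (a sketch reconstructed from the flags above, not the exact file):
JAVA_OPTS="-Xmx2500m -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap $JAVA_OPTS"
Since the base image is 8u342, where container support (backported in 8u191) supersedes the experimental cgroup flag, an alternative worth trying is:
JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 $JAVA_OPTS"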
Does anyone have an idea about this?
I am copying a working JRE directory into the Docker image and trying to run /JRE/bin/java.
But it throws an "ash: java not found" error. I do the same thing in a Linux VM, just copying the JRE folder and executing the java command, and it works fine there. I don't want to download a JRE from anywhere.
I want this specific JRE bundled. How can I resolve this?
I entered the shell console, navigated to the JRE/bin/ directory, and executed "java" directly. Even then it fails with the same "ash: java not found" error.
Dockerfile:
FROM alpine:latest
ENV HOME=/root \
DEBIAN_FRONTEND=noninteractive \
LANG=en_US.UTF-8 \
LANGUAGE=en_US.UTF-8 \
LC_ALL=C.UTF-8 \
DISPLAY=:0.0 \
DISPLAY_WIDTH=1024 \
DISPLAY_HEIGHT=768
RUN apk --update --upgrade add \
bash \
fluxbox \
x11vnc \
xterm \
xvfb
COPY MyJavaApp MyJavaApp/
WORKDIR /MyJavaApp
ENV PATH="./JRE/bin:${PATH}"
When are you copying the JRE directory into the image, i.e. at Docker build time or after spinning up the container?
It looks like you are correctly copying the local Java directory into the image, but the java binaries are not reachable from the current location, so make sure to set the PATH. Note that a plain
RUN export PATH=/JRE/bin:${PATH}
only affects that single build step and does not persist into the final image, so set it through ENV in the Dockerfile instead:
ENV PATH="/JRE/bin:${PATH}"
According to the documentation, the Docker image definition below does not contain a SQL Server driver.
How can I install it?
Documentation:
https://github.com/camunda/docker-camunda-bpm-platform
Dockerfile:
FROM alpine:3.10 as builder
ARG VERSION=7.12.0
ARG DISTRO=tomcat
ARG SNAPSHOT=true
ARG EE=false
ARG USER
ARG PASSWORD
RUN apk add --no-cache \
ca-certificates \
maven \
tar \
wget \
xmlstarlet
COPY settings.xml download.sh camunda-tomcat.sh camunda-wildfly.sh /tmp/
RUN /tmp/download.sh
##### FINAL IMAGE #####
FROM alpine:3.10
ARG VERSION=7.12.0
ENV CAMUNDA_VERSION=${VERSION}
ENV DB_DRIVER=org.h2.Driver
ENV DB_URL=jdbc:h2:./camunda-h2-dbs/process-engine;MVCC=TRUE;TRACE_LEVEL_FILE=0;DB_CLOSE_ON_EXIT=FALSE
ENV DB_USERNAME=sa
ENV DB_PASSWORD=
ENV DB_CONN_MAXACTIVE=20
ENV DB_CONN_MINIDLE=5
ENV DB_CONN_MAXIDLE=20
ENV DB_VALIDATE_ON_BORROW=false
ENV DB_VALIDATION_QUERY="SELECT 1"
ENV SKIP_DB_CONFIG=
ENV WAIT_FOR=
ENV WAIT_FOR_TIMEOUT=30
ENV TZ=UTC
ENV DEBUG=false
ENV JAVA_OPTS="-Xmx768m -XX:MaxMetaspaceSize=256m"
EXPOSE 8080 8000
# Downgrading wait-for-it is necessary until this PR is merged
# https://github.com/vishnubob/wait-for-it/pull/68
RUN apk add --no-cache \
bash \
ca-certificates \
openjdk11-jre-headless \
tzdata \
tini \
xmlstarlet \
&& wget -O /usr/local/bin/wait-for-it.sh \
"https://raw.githubusercontent.com/vishnubob/wait-for-it/a454892f3c2ebbc22bd15e446415b8fcb7c1cfa4/wait-for-it.sh" \
&& chmod +x /usr/local/bin/wait-for-it.sh
RUN addgroup -g 1000 -S camunda && \
adduser -u 1000 -S camunda -G camunda -h /camunda -s /bin/bash -D camunda
WORKDIR /camunda
USER camunda
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["./camunda.sh"]
COPY --chown=camunda:camunda --from=builder /camunda .
I was able to make it work after several days.
Steps
Download the JDBC driver, version 7.2, from the Microsoft site; it includes 2 JAR files.
Uncompress it and copy the driver into your Docker build folder.
Copy the driver into the Camunda lib folder. This is not explained anywhere, but after a short chat with the people behind the Camunda Docker git repo, they advised me to do that.
The only line you need to add to the Dockerfile is:
#MSSQL SERVER JDBC DRIVER INSTALL
COPY mssql-jdbc-7.2.2.jre11.jar /camunda/lib/
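Putting it together, a minimal sketch of a custom image, assuming you extend the official camunda/camunda-bpm-platform image rather than rebuilding the Dockerfile above:
FROM camunda/camunda-bpm-platform:7.12.0
#MSSQL SERVER JDBC DRIVER INSTALL
COPY mssql-jdbc-7.2.2.jre11.jar /camunda/lib/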
According to the documentation you refer to, Microsoft SQL Server is not supported.
So while you could try the exercise of downloading the JDBC driver (https://learn.microsoft.com/en-us/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-2017) and then adding it to the Docker image and the classpath:
COPY name_of_jdbc_driver.jar /camunda/mssqlserver.jdbc
ENV CLASSPATH=/camunda/mssqlserver.jdbc
it's more than likely that this won't work, because the Camunda software does not support MS SQL Server.
So you should consider simply using one of the other databases that they explicitly support. I'd recommend PostgreSQL, for example: it's free (as in beer and speech) and you can use it in production if you want to.
If you're just looking to do some testing and don't need this in a production environment, the instructions you point to have a decent explanation of how to start PostgreSQL in a Docker container and then start the Camunda container against that database.
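A sketch of that setup; the container names and credentials here are illustrative, not taken from the Camunda docs:
docker run -d --name camunda-db -e POSTGRES_USER=camunda -e POSTGRES_PASSWORD=camunda postgres:11
docker run -d --name camunda --link camunda-db:db -p 8080:8080 \
  -e DB_DRIVER=org.postgresql.Driver \
  -e DB_URL=jdbc:postgresql://db:5432/camunda \
  -e DB_USERNAME=camunda -e DB_PASSWORD=camunda \
  camunda/camunda-bpm-platform:7.12.0
The DB_* variables are the same ones declared as ENV defaults in the Dockerfile above, overridden at run time.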
I have two docker images: imageA and imageB.
ImageA Dockerfile
FROM openjdk:11-jre-slim
COPY ./target/java-app.jar /java-application/
ImageB Dockerfile
FROM imageA
# Install Python.
RUN \
apt-get update && \
apt-get install -y python python-dev python-pip python-virtualenv && \
rm -rf /var/lib/apt/lists/*
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
ENTRYPOINT ./startPythonServiceAndJavaApp.sh
startPythonServiceAndJavaApp.sh is the script that starts both the Java app and the Python app:
java -XX:+UseContainerSupport $JAVA_OPTIONS -jar ./java-application/java-app.jar & python app.py;
Then I build imageA with docker build -t imageA . and it builds successfully.
Then I build imageB and start the container. The Python app starts successfully, but I get this error:
Error: Unable to access jarfile ./java-application/java-app.jar
When I exec into the running container (note, it is running) and go into the app directory, ls shows these files:
C:\Users\user>docker exec -it 12345 bash
root@12345:/app# ls
Dockerfile  app.py  deploy.sh  requirements.txt  java-app.jar  startPythonServiceAndJavaApp.sh
My question: why did java-app.jar end up in the app directory? In the Dockerfile of imageA I told it to go into the java-application directory:
COPY ./target/java-app.jar /java-application/
My question could be rephrased as: how do COPY and WORKDIR interact? As a quick solution, I put the jar file into the root of the Docker context and then started both applications from there.
Here are the changes:
ImageA Dockerfile: just copy the jar to the root of the container.
COPY ./target/java-app.jar /
startPythonServiceAndJavaApp.sh
java -XX:+UseContainerSupport $JAVA_OPTIONS -jar ./java-app.jar & python app.py;
Both applications are now running in a single container. Hope this helps others. Please correct me if I am wrong, or share your ideas.
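A likely explanation of the original symptom: COPY ./target/java-app.jar /java-application/ placed the jar at the absolute path /java-application/java-app.jar, while the script resolved ./java-application/java-app.jar relative to WORKDIR /app, where no such directory exists (the java-app.jar visible in /app was brought in separately by COPY . /app from imageB's build context). A sketch that keeps the original layout by referencing the jar absolutely in the script:
java -XX:+UseContainerSupport $JAVA_OPTIONS -jar /java-application/java-app.jar & python app.py;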
I am trying to run a Kafka Streams application in Kubernetes. When I launch the pod I get the following exception:
Exception in thread "streams-pipe-e19c2d9a-d403-4944-8d26-0ef27ed5c057-StreamThread-1"
java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.4-5cec5405-2ce7-4046-a8bd-922ce96534a0-libsnappyjava.so:
Error loading shared library ld-linux-x86-64.so.2: No such file or directory
(needed by /tmp/snappy-1.1.4-5cec5405-2ce7-4046-a8bd-922ce96534a0-libsnappyjava.so)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:179)
at org.xerial.snappy.SnappyLoader.loadSnappyApi(SnappyLoader.java:154)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:435)
at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:466)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.apache.kafka.common.utils.ByteUtils.readVarint(ByteUtils.java:168)
at org.apache.kafka.common.record.DefaultRecord.readFrom(DefaultRecord.java:292)
at org.apache.kafka.common.record.DefaultRecordBatch$1.readNext(DefaultRecordBatch.java:264)
at org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next(DefaultRecordBatch.java:563)
at org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next(DefaultRecordBatch.java:532)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.nextFetchedRecord(Fetcher.java:1060)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1095)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:949)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:570)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:531)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1146)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1103)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:851)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:808)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Previously I tried launching Kafka and the Kafka Streams app in Docker containers and they worked perfectly fine. This is the first time I am trying Kubernetes.
This is my Dockerfile for the streams app:
FROM openjdk:8u151-jdk-alpine3.7
COPY /target/streams-examples-0.1.jar /streamsApp/
COPY /target/libs /streamsApp/libs
CMD ["java", "-jar", "/streamsApp/streams-examples-0.1.jar"]
What can I do to get past this issue? Kindly help me out.
EDIT:
/ # ldd /usr/bin/java
/lib/ld-musl-x86_64.so.1 (0x7f03f279a000)
Error loading shared library libjli.so: No such file or directory (needed by /usr/bin/java)
libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f03f279a000)
Error relocating /usr/bin/java: JLI_Launch: symbol not found
In my case, installing the missing libc6-compat didn't work; the application still threw java.lang.UnsatisfiedLinkError.
Then I found that in the container /lib64/ld-linux-x86-64.so.2 exists and is a link to /lib/libc.musl-x86_64.so.1, but /lib only contains ld-musl-x86_64.so.1, not ld-linux-x86-64.so.2.
So I added a file named ld-linux-x86-64.so.2 in the /lib dir, linked to ld-musl-x86_64.so.1, and that solved the problem.
The Dockerfile I use:
FROM openjdk:8-jre-alpine
COPY entrypoint.sh /entrypoint.sh
RUN apk update && \
apk add --no-cache libc6-compat && \
ln -s /lib/libc.musl-x86_64.so.1 /lib/ld-linux-x86-64.so.2 && \
mkdir /app && \
chmod a+x /entrypoint.sh
COPY build/libs/*.jar /app
ENTRYPOINT ["/entrypoint.sh"]
In conclusion:
RUN apk update && apk add --no-cache libc6-compat
ln -s /lib/libc.musl-x86_64.so.1 /lib/ld-linux-x86-64.so.2
The error message states that *libsnappyjava.so cannot find ld-linux-x86-64.so.2. This is the glibc dynamic loader, and the Alpine image doesn't run with glibc. You may be able to get it running by installing the libc6-compat package in your Dockerfile, e.g.:
RUN apk update && apk add --no-cache libc6-compat
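Applied to the Dockerfile from the question, a sketch:
FROM openjdk:8u151-jdk-alpine3.7
# glibc compatibility layer so the bundled libsnappyjava.so can find its loader
RUN apk update && apk add --no-cache libc6-compat
COPY /target/streams-examples-0.1.jar /streamsApp/
COPY /target/libs /streamsApp/libs
CMD ["java", "-jar", "/streamsApp/streams-examples-0.1.jar"]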
There are two solutions to this problem:
You may use some other base image where the snappy-java native library works out of the box. For example, openjdk:8-jre-slim works fine for me.
The other solution is to keep using the openjdk:8-jdk-alpine image as the base, but then install the glibc compatibility package (gcompat) manually:
FROM openjdk:8-jdk-alpine
RUN apk update && apk add --no-cache gcompat
...
In a Docker image based on Alpine, running
apk update && apk add --no-cache libc6-compat gcompat
saved my life.
If you are generating the Dockerfile through build.sbt (e.g. with the sbt-docker plugin), the correct way to do it is:
dockerfile in docker := {
  val artifact: File = assembly.value
  val artifactTargetPath = s"/app/${artifact.name}"
  new Dockerfile {
    from("openjdk:8-jre-alpine")
    copy(artifact, artifactTargetPath)
    run("apk", "add", "--no-cache", "gcompat")
    entryPoint("java", "-jar", artifactTargetPath)
  }
}
Installing gcompat will serve your purpose.
It seems strange, but it looks like the Docker image you use, openjdk:8u151-jdk-alpine3.7, is inconsistent: either some dynamically loaded objects are not included in the package, or you need to run "ldconfig -v" in this image to update the map of shared objects, or, finally, there is an /etc/ld.so.conf with the paths where the OS looks for .so objects. Please consider using another Docker image providing a java binary if you do not want to lose time debugging this. Last but not least, ask for a remedy on the Alpine forum.
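For example, a sketch of the question's Dockerfile on a glibc-based image (openjdk:8-jre-slim, which another answer here reports working):
FROM openjdk:8-jre-slim
COPY /target/streams-examples-0.1.jar /streamsApp/
COPY /target/libs /streamsApp/libs
CMD ["java", "-jar", "/streamsApp/streams-examples-0.1.jar"]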
I have implemented a Docker image in which I run a Spring Boot microservice with a Kafka Streams topology, and it works perfectly.
Here is the Dockerfile:
FROM openjdk:8-jdk-alpine
# Add Maintainer Info
LABEL description="Spring Boot Kafka Stream IoT Processor"
# Args for image
ARG PORT=8080
RUN apk update && apk upgrade && apk add --no-cache gcompat
RUN ln -s /bin/bash /usr/bin
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY resources/wait-for-it.sh wait-for-it.sh
COPY target/iot_processor.jar app.jar
RUN dos2unix wait-for-it.sh
RUN chmod +x wait-for-it.sh
RUN uname -a
RUN pwd
RUN ls -al
EXPOSE ${PORT}
CMD ["sh", "-c", "echo 'waiting for 300 seconds for kafka:9092 to be accessable before
starting application' && ./wait-for-it.sh -t 300 kafka:9092 -- java -jar app.jar"]
Hope it can help someone.
I didn't need to add libc6-compat in my Dockerfile, because the file /lib/libc.musl-x86_64.so.1 already exists in my container.
In the Dockerfile I add only:
RUN ln -s /lib/libc.musl-x86_64.so.1 /lib/ld-linux-x86-64.so.2
My container no longer errors when consuming Snappy-compressed messages.
I have a Spring Boot project and I want to automatically redeploy my jar in the container. How do I do this correctly?
So far, this is the only way I see. Is it the right way?
# cd /home/jdev;
# sudo docker stop ca_spring_boot;
# sudo docker rm ca_spring_boot;
# sudo docker rmi ca_app_image;
# sudo docker build -t ca_app_image .;
# sudo docker run -d -p 8888:8080 --name ca_spring_boot ca_app_image
And my Dockerfile
FROM java:8
VOLUME /tmp
EXPOSE 8080
ADD docker-storage/jenkins/workspace/CA/build/libs/ca-1.0.jar app.jar
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-Dspring.profiles.active=container","-jar","/app.jar"]
Thanks.
You could mount a volume and put your app.jar in there. Then you do not need to rebuild the image; you just restart the container.
Dockerfile
FROM java:8
ENTRYPOINT [ "sh", "-c", "java -jar /mnt/app.jar" ]
Put your app.jar in /docker/spring/
Build and run:
docker build -t spring_test .
docker run -d -v /docker/spring/:/mnt -p 12384:8080 --name spring_test_running spring_test
If you update your spring application you just do:
docker restart spring_test_running
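So a redeploy with the jar path from the question becomes, roughly:
cp docker-storage/jenkins/workspace/CA/build/libs/ca-1.0.jar /docker/spring/app.jar
docker restart spring_test_running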
The previous answer is good, but you still need to restart the container every time you want to test your code. You can avoid that problem: just use Spring Boot DevTools and mount the destination directory as described above.