Jenkins Slave can't read settings.xml - java

I have created a Jenkins slave image for Docker, which I want to use to build all of my Java projects; however, I can't work out how to reference the .m2/settings.xml file to tell Maven where to pull from.
My Dockerfile is:
FROM openjdk:8
MAINTAINER Chris Hudson <chudson#amelco.co.uk>
RUN apt-get -qqy update && \
    apt-get -y install openssh-server sudo
RUN useradd -m -u 1000 -s /bin/bash jenkins && \
    mkdir -p /home/jenkins/.ssh && \
    mkdir -p /home/jenkins/.m2 && \
    echo jenkins:jenkins | chpasswd && \
    mkdir -p /etc/sudoers.d/ && \
    echo "jenkins ALL=(root) NOPASSWD: ALL" > /etc/sudoers.d/jenkins && \
    chmod 440 /etc/sudoers.d/jenkins
COPY id_rsa.pub /home/jenkins/.ssh/authorized_keys
COPY settings.xml /home/jenkins/.m2/
RUN chown -R jenkins:jenkins /home/jenkins
RUN mkdir -p /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
But when I run the build, it attempts to pull from Maven Central, and not from our local Artifactory instance, which is configured in the settings file.
This works when I run it on the Jenkins master, but I want to offload the builds to the slaves, and I can't work out how to configure Maven correctly.

I think that your workspace is mounted from the slave, so Maven isn't reading the .m2 directory from your container.
You can try using the Config File Provider Plugin - https://wiki.jenkins-ci.org/display/JENKINS/Config+File+Provider+Plugin - to create the settings.xml and configure your Maven build step to use it.
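As a stop-gap, you can also point Maven at the file explicitly in the build step with `mvn -s /home/jenkins/.m2/settings.xml clean install`, which bypasses the home-directory lookup entirely. For reference, a minimal settings.xml that routes everything through Artifactory might look like this (the URL below is a placeholder, not taken from the question):

```xml
<!-- Minimal sketch: mirrorOf="*" sends every repository request,
     including Maven Central, through the mirror below.
     The URL is a placeholder - substitute your Artifactory address. -->
<settings>
  <mirrors>
    <mirror>
      <id>artifactory</id>
      <name>Local Artifactory</name>
      <mirrorOf>*</mirrorOf>
      <url>https://artifactory.example.com/artifactory/libs-release</url>
    </mirror>
  </mirrors>
</settings>
```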

Related

japp.jar is not found when building docker image

I'm very much new to Docker. I have a Node application that processes/reads a Java file. I'm trying to create a Docker image that runs Node and Java. My Node configs are working well, but I always get errors when building the Dockerfile because of Java - it shows
failed to compute cache key: "/japp.jar" not found: not found
My Dockerfile contains:
FROM node:14.18.1
RUN apt-get update && apt-get -y upgrade
WORKDIR /code
EXPOSE 8443
COPY package.json /code/package.json
RUN npm install
COPY . /code
# Java Config
FROM eclipse-temurin:11
RUN mkdir /opt/app
COPY japp.jar /opt/app
ENTRYPOINT ["java", "-jar", "/opt/app/japp.jar"]
CMD ["node","app.js"]
I was trying to follow the docs here but no luck.
In the past, I was using the following for the Java config and that was working fine, but it shows some vulnerabilities, so I need to switch to eclipse-temurin:11:
# RUN echo 'deb http://ftp.debian.org/debian stretch-backports main' | tee /etc/apt/sources.list.d/stretch-backports.list
# RUN apt-get update && apt-get -y upgrade && \
# apt-get install -y openjdk-11-jre-headless && \
# apt-get clean;
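A detail worth knowing about multi-stage builds: each FROM starts a new stage, only the last stage becomes the final image (so the Node stage above is discarded), and COPY pulls japp.jar from the build context, so the cache-key error usually means japp.jar isn't in the directory you run `docker build` from (or is excluded by .dockerignore). A single-stage sketch that keeps Node as the runtime and borrows the JDK from the Temurin image (paths assumed from the eclipse-temurin image layout, untested here) might look like:

```dockerfile
# Sketch (assumptions noted): copy the Temurin 11 JDK into the
# Node image so that both `node` and `java` exist at runtime.
FROM eclipse-temurin:11 AS jre
FROM node:14.18.1
# /opt/java/openjdk is where the eclipse-temurin images install Java
COPY --from=jre /opt/java/openjdk /opt/java/openjdk
ENV JAVA_HOME=/opt/java/openjdk
ENV PATH="$JAVA_HOME/bin:$PATH"
WORKDIR /code
COPY package.json /code/package.json
RUN npm install
COPY . /code
# japp.jar must exist in the build context (and not be listed
# in .dockerignore) for this COPY to succeed
COPY japp.jar /opt/app/japp.jar
EXPOSE 8443
CMD ["node", "app.js"]
```

Note also that in the original file both ENTRYPOINT and CMD end up in the final (Temurin) stage, so CMD's `node app.js` would just be appended as arguments to `java -jar`.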

Nifi stuck at "Launched Apache NiFi with Process ID 41"

I have created a Docker image for NiFi 1.14.0 with Alpine OS and JDK 8. I was able to build it successfully, but when I execute "./nifi.sh run" in the container to run NiFi, it gets stuck at "Launched Apache NiFi with Process ID 41" and doesn't move beyond this unless I press Ctrl+C, which shuts down NiFi. Checking nifi-app.log shows that the nar files have been unpacked and no errors are reported. nifi-bootstrap.log and nifi-user.log don't show any errors either. What could be the possible reason and solution for this, so that NiFi launches properly?
Thanks in advance!!
My Dockerfile is:
#getting base image Alpine
FROM alpine
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk/jre
ENV PATH $JAVA_HOME/bin:$PATH
ENV NIFI_VERSION 1.14.0
ENV NIFI_HOME /opt/nifi
RUN apk --update add bash git wget ca-certificates sudo openssh rsync openjdk8 zip && \
    rm -rf /var/cache/apk/* && \
    rm -rf /opt && \
    mkdir -p /opt
RUN wget https://dlcdn.apache.org/nifi/$NIFI_VERSION/nifi-$NIFI_VERSION-bin.tar.gz && \
    tar xzf nifi-$NIFI_VERSION-bin.tar.gz -C /opt/ && \
    ln -s /opt/nifi-$NIFI_VERSION $NIFI_HOME && \
    rm nifi-$NIFI_VERSION-bin.tar.gz
RUN apk update --no-cache && apk upgrade --no-cache && apk add --no-cache bash libstdc++ libc6-compat
VOLUME ["$NIFI_HOME/conf"]
EXPOSE 8080
WORKDIR $NIFI_HOME
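One possible factor, offered as a guess: ./nifi.sh run is NiFi's foreground mode, so "Launched Apache NiFi with Process ID 41" is its normal last console line - startup progress continues in logs/nifi-app.log, and the UI can take several minutes to come up on port 8080. Foreground mode is also exactly what a container wants as PID 1, so the Dockerfile could end with an ENTRYPOINT rather than relying on an interactive ./nifi.sh run:

```dockerfile
# Sketch: appended after the WORKDIR above. "run" keeps NiFi in the
# foreground, which is the right behaviour for a container's main
# process; follow startup via `docker logs` or logs/nifi-app.log.
ENTRYPOINT ["bin/nifi.sh", "run"]
```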

Run Maven project inside Docker Container with run time parameters

I have a Testng Selenium Project that is build using Maven. I am running this maven project using Maven Surefire plugin like this:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.0.0-M3</version>
  <configuration>
    <forkMode>never</forkMode>
    <useFile>true</useFile>
    <testFailureIgnore>true</testFailureIgnore>
    <!-- Suite testng xml file to consider for test execution -->
    <suiteXmlFiles>
      <suiteXmlFile>${suiteXmlFile}</suiteXmlFile>
    </suiteXmlFiles>
  </configuration>
</plugin>
What I need to do?
I need to run this Selenium project inside a Docker container. I need to move the complete source code to the container and run it from there. While running, I will pass the path of a TestNG XML file, and that particular test alone should run. After the run, I need to copy the results from the Docker container to my local system (which we can do using docker cp ...).
What I have done so far?
I have created a Docker image with Maven, Chrome and ChromeDriver. At run time, I pass the TestNG XML file path and, as expected, that particular test case alone runs. But....
Once the program completes, the Docker container exits. docker ps shows no running containers, so I am not able to see the report.
What I want?
So, I want a way to keep the container from exiting after the execution, so that I can go into the container and see the report.
My Dockerfile:
FROM kshivaprasad/java
RUN apt-get update
RUN apt-get upgrade --fix-missing -y
RUN apt-get install -y curl
RUN apt-get install -y p7zip \
    p7zip-full \
    unace \
    zip \
    unzip
# Install Chrome for Selenium
RUN curl http://dl.google.com/linux/chrome/deb/pool/main/g/google-chrome-stable/google-chrome-stable_83.0.4103.116-1_amd64.deb -o /chrome.deb
RUN dpkg -i /chrome.deb || apt-get install -yf
RUN rm /chrome.deb
# Install chromedriver for Selenium
RUN mkdir -p /app/bin
RUN curl https://chromedriver.storage.googleapis.com/83.0.4103.39/chromedriver_linux64.zip -o /tmp/chromedriver.zip \
    && unzip /tmp/chromedriver.zip -d /app/bin/ \
    && rm /tmp/chromedriver.zip
ARG MAVEN_VERSION=3.6.3
# 2- Define a constant with the working directory
ARG USER_HOME_DIR="/root"
# 3- Define the SHA key to validate the maven download
ARG SHA=c35a1803a6e70a126e80b2b3ae33eed961f83ed74d18fcd16909b2d44d7dada3203f1ffe726c17ef8dcca2dcaa9fca676987befeadc9b9f759967a8cb77181c0
# 4- Define the URL where maven can be downloaded from
ARG BASE_URL=http://apachemirror.wuchna.com/maven/maven-3/${MAVEN_VERSION}/binaries
# 5- Create the directories, download maven, validate the download, install it, remove downloaded file and set links
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
    && echo "Downloading maven" \
    && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
    && echo "Checking download hash" \
    && echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
    && echo "Unzipping maven" \
    && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
    && echo "Cleaning and setting links" \
    && rm -f /tmp/apache-maven.tar.gz \
    && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
# 6- Define environmental variables required by Maven, like Maven_Home directory and where the maven repo is located
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
COPY src /app/src
COPY pom.xml /app
COPY testng /app/testng
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /app/bin/chromedriver
#RUN mvn -f /app/pom.xml clean package
ENTRYPOINT ["/entrypoint.sh"]
That entrypoint.sh file is used to send the argument at run time. It consists of:
#!/bin/sh
mvn -f /app/pom.xml clean install -DsuiteXmlFile=$1
How I run this?
docker build -t my_image .
docker run -it my_image module/testng.xml
This process can be very time-consuming when quick time to market matters. I would suggest using Zalenium (https://opensource.zalando.com/zalenium/) to run your Selenium scripts in Docker containers; it has various useful features too, like viewing the ongoing execution and retrieving the reports.
I just needed to add a bash command at the end of the shell file (entrypoint.sh). It worked fine for me.
#!/bin/sh
mvn -f /app/pom.xml clean install -DsuiteXmlFile=$1
/bin/bash
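An alternative, if you don't need a shell inside the container: docker cp also works on stopped containers, so you can let the run finish and pull the report out afterwards. (The container name and report path below are assumptions; Surefire writes to target/surefire-reports by default.)

```shell
# Name the container so it can be referenced after it exits,
# run the suite to completion, then copy the report out of the
# stopped container before removing it.
docker run --name testrun my_image module/testng.xml
docker cp testrun:/app/target/surefire-reports ./surefire-reports
docker rm testrun
```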

create a dockerfile to run python and groovy app

I am working on a project that uses both Python and Groovy to scrape data from websites and do some engineering on that data.
I want to create a Dockerfile that uses Python (3.6.5) as the base image, with Java 8 and Groovy installed on it to run my code.
The Dockerfile I have right now works for all the Python code (image: FROM python:3.6.5) but fails for the Groovy script, and I can't find a solution for installing Groovy in the Dockerfile.
Does anyone have a Dockerfile that solves this part of the problem?
##########docker file below#############
FROM python:3.6.5
RUN sh -c "ls /usr/local/lib"
RUN sh -c "cat /etc/*-release"
# Contents of requirements.txt each on a separate line for incremental builds
RUN pip install SQLAlchemy==1.2.7
RUN pip install pandas==0.23.0
RUN pip uninstall bson
RUN pip install pymongo
RUN pip install openpyxl==2.5.3
RUN pip install joblib
RUN pip install impyla
RUN sh -c "mkdir -p /src/dateng"
ADD . /src/dateng
RUN sh -c "ls /src/dateng"
WORKDIR /src/dateng/
ENTRYPOINT ["python", "/src/dateng/_aws/trigger.py"]
You don't need the sh -c command, just RUN. Also, we should not use one RUN instruction per command; instead we should group them into a single RUN, because each RUN creates a separate layer in the Docker image, increasing its final size.
Possible Solution
Inspired by this Dockerfile I use for a Python demo:
FROM python:3.6.5
ARG CONTAINER_USER="python"
ARG CONTAINER_UID="1000"
# Will not prompt for questions
ENV DEBIAN_FRONTEND=noninteractive \
    CONTAINER_USER=python \
    CONTAINER_UID=1000
RUN apt update && \
    apt -y upgrade && \
    apt -y install \
        ca-certificates \
        locales \
        tzdata \
        inotify-tools \
        python3-pip \
        groovy && \
    locale-gen en_GB.UTF-8 && \
    dpkg-reconfigure locales && \
    # https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers
    printf "fs.inotify.max_user_watches=524288\n" >> /etc/sysctl.conf && \
    useradd -m -u ${CONTAINER_UID} -s /bin/bash ${CONTAINER_USER}
ENV LANG=en_GB.UTF-8 \
    LANGUAGE=en_GB:en \
    LC_ALL=en_GB.UTF-8
USER ${CONTAINER_USER}
RUN pip3 install \
    SQLAlchemy==1.2.7 \
    pandas==0.23.0 \
    pymongo \
    openpyxl==2.5.3 \
    joblib \
    impyla && \
    pip3 uninstall -y bson
# pip install will put the executables under ~/.local/bin
ENV PATH=/home/"${CONTAINER_USER}"/.local/bin:$PATH
WORKDIR /home/${CONTAINER_USER}/workspace
ADD . /home/${CONTAINER_USER}/dataeng
EXPOSE 5000
ENTRYPOINT ["python", "/home/python/dateng/_aws/trigger.py"]
NOTE: I am behind a corporate firewall, therefore I cannot test building this image as it is now, because I would need to add stuff to it that you don't need. Let me know if something doesn't work for you and I will work it out from home.

How to split docker file from monolithic deployment

In a single Dockerfile, I run Node and Java for the following:
run test script
run java executable (to bundle screenshots)
run my application
My understanding from the documentation is that this is bad practice, in the sense that it is a 'monolithic' deployment. I am supposed to split this into separate images based on the individual tasks. So, presumably, I would have 3 Dockerfiles.
Dockerfile 1: test script
FROM node:8
RUN node --version
RUN apt-get update && apt-get install -yq libgconf-2-4
# Note: this installs the necessary libs to make the bundled version of Chromium work
RUN apt-get update && apt-get install -y wget --no-install-recommends \
    && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
    && apt-get update \
    && apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst ttf-freefont \
       --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get purge --auto-remove -y curl \
    && rm -rf /src/*.deb
# project repo
RUN git clone ...
WORKDIR /myApp/
RUN npm install
RUN npm i puppeteer
CMD ["node", "test.ts"]
Dockerfile 2: Java executable (bundle images)
FROM openjdk
RUN git clone -b ...
WORKDIR /myApp/
CMD ["java", "-jar", "ImageTester.jar"]
Dockerfile 3: Run app
FROM node:8
RUN node --version
RUN git clone -b ...
WORKDIR /myApp/
RUN npm install
EXPOSE 9999
CMD ["npm", "start"]
The question is, how does one exactly do this? How is a non monolithic deployment implemented in my case? How does one run 3 docker images inside one project?
Use docker-compose. In your docker-compose.yml file, you'll have 3 services pointing to the 3 different dockerfiles.
Ex:
version: '3'
services:
  test:
    build:
      context: .
      dockerfile: Dockerfile.test
  images:
    build:
      context: .
      dockerfile: Dockerfile.images
  app:
    build:
      context: .
      dockerfile: Dockerfile.app
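Each task then builds and runs independently, e.g. (service names taken from the YAML above):

```shell
# Build all three images, then run each service on demand;
# --rm cleans up the one-shot containers when they exit.
docker-compose build
docker-compose run --rm test     # run the test script
docker-compose run --rm images   # bundle screenshots
docker-compose up app            # start the application
```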
