How to run xmlstarlet commands on Heroku - java

I have a build.xml in which a shell script fires xmlstarlet ed -L -s "/Package/types[name='$TYPENAME']" -t elem -n members -v "$ENTITY" $SCRIPTFILE to create XML nodes.
if [[ "$2" != *"destructive"* ]]
then
xmlstarlet ed -L -s /Package -t elem -n version -v "43.0" $SCRIPTFILE
fi
xmlstarlet ed -L -i /Package -t attr -n xmlns -v "http://soap.sforce.com/2006/04/metadata" $SCRIPTFILE
This works fine when running locally via a Spring Boot app, as I have xmlstarlet installed. But when I host this on Heroku it fails with generate_package_unix6199938543217323388.sh: line 193: xmlstarlet: command not found. Is there a workaround, or a way to install xmlstarlet on Heroku?

You can install the xmlstarlet apt package by using the apt buildpack. To do so, create an Aptfile in the root directory of your app with the following contents:
xmlstarlet
Make sure you add it to Git with git add and git commit.
Then add the apt buildpack to your app by running:
$ heroku buildpacks:add -i 1 heroku-community/apt
When you redeploy, the xmlstarlet command should be installed.
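For reference, the whole sequence might look roughly like this (just a sketch, assuming a plain git-based deploy whose deploy branch is main):
# Create the Aptfile in the app's root directory and commit it
echo "xmlstarlet" > Aptfile
git add Aptfile
git commit -m "Install xmlstarlet via the apt buildpack"
# Add the apt buildpack at index 1 so its packages are available to later buildpacks
heroku buildpacks:add -i 1 heroku-community/apt
# Redeploy; the new slug should have xmlstarlet on the PATH
git push heroku main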

Related

Update gitlab JDK to JDK17 (VM doesn't have internet connection)

I am using GitLab to build a Java tool with ant.
The tool requires JDK 17, but ant's JDK version is 11, and I'm trying to change it.
I tried a lot of solutions using a remote repository or a remote download site, but after a few attempts I found out that the VM used to build the tool is not connected to the internet (pinging Google or my own IP address doesn't work).
So I tried to upload JDK 17 (openjdk-17_linux-x64_bin.tar.gz) in the same package as the tool's source code and install it there.
Here is the problem: I'm not sure how to do this, since I don't work with Linux, but I have tried almost everything I could find on the internet.
All of these commands are used in a .gitlab-ci.yml file for the GitLab pipeline.
Here are some examples of what I've tried so far:
- sudo cp /builds/project/openjdk-17_linux-x64_bin.tar.gz /usr/lib/jvm
- sudo tar zxvf "/usr/lib/jvm/openjdk-17_linux-x64_bin.tar.gz" -C /usr/lib/jvm
- echo "JAVA_HOME=/usr/lib/jvm/jdk-17" | sudo tee -a /etc/profile
- echo "PATH=${PATH}:${HOME}/bin:${JAVA_HOME}/bin" | sudo tee -a /etc/profile
- echo "export JAVA_HOME" | sudo tee -a /etc/profile
- echo "export JRE_HOME" | sudo tee -a /etc/profile
- echo "export PATH" | sudo tee -a /etc/profile
- sudo cat /etc/profile
- echo "JAVA_HOME=/usr/lib/jvm/jdk-17" | sudo tee -a /.bashrc
- echo "PATH=${PATH}:${JAVA_HOME}/bin" | sudo tee -a /.bashrc
- echo "JAVA_HOME='/usr/lib/jvm/jdk-17' | sudo tee -a /etc/environment"
- export JAVA_HOME=/usr/lib/jvm/jdk-17
- export PATH=$PATH:$JAVA_HOME/bin
After a lot of combinations of these commands (including sudo update-alternatives --config java), the output of java -version is still:
openjdk version "11.0.12" 2021-07-20
OpenJDK Runtime Environment (build 11.0.12+7-post-Debian-2deb10u1)
OpenJDK 64-Bit Server VM (build 11.0.12+7-post-Debian-2deb10u1, mixed mode, sharing)
But if I try /usr/lib/jvm/jdk-17/bin/java -version it prints 17.
What would be the way to make the default Java version 17? (A solution that lets ant use JDK 17 without installing it would also be great, since I need JDK 17 for ant.)
Since you've already found a way to change the JDK on the fly, you may want to consider changing the base image of your CI instead, to save yourself a lot of time: it will speed up every pipeline run. The steps to do that are fairly simple too.
Compose your own Dockerfile
The following is just pseudocode; see the Dockerfile reference for the details.
FROM your-original-image   # the image currently set in the image: tag of your .gitlab-ci.yml
COPY jdk-17-linux-x64.tar.gz /usr/lib/jvm
RUN sudo tar zxvf "/usr/lib/jvm/jdk-17-linux-x64.tar.gz" -C /usr/lib/jvm \
&& sudo \cp -r /usr/lib/jvm/jdk-17 /usr/lib/jvm/java-1.11.0-openjdk-amd64 \
&& sudo \cp -r /usr/lib/jvm/jdk-17 /usr/lib/jvm/default-java \
&& sudo \cp -r /usr/lib/jvm/jdk-17 /usr/lib/jvm/java-11-openjdk-amd64 \
&& sudo \cp -r /usr/lib/jvm/jdk-17 /usr/lib/jvm/openjdk-11 \
&& sudo update-alternatives --remove-all java \
&& sudo update-alternatives --remove-all javac \
&& sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk-17/bin/java 1
Build the docker image
If you are using Docker Hub, you need to log in to Docker and use a Docker ID that matches the one in the snippet below.
If you are using a private registry such as Harbor or Artifactory, you may need permission to push to it.
docker build . -t your-docker-id/name-of-your-image:latest
Upload the Docker image using docker push:
docker push your-docker-id/name-of-your-image:latest
Change the image: tag in your .gitlab-ci.yml to your-docker-id/name-of-your-image:latest. (Note that Docker image names must be lowercase.)
I found a solution.
- sudo cp jdk-17-linux-x64.tar.gz /usr/lib/jvm
- sudo tar zxvf "/usr/lib/jvm/jdk-17-linux-x64.tar.gz" -C /usr/lib/jvm
- sudo \cp -r /usr/lib/jvm/jdk-17 /usr/lib/jvm/java-1.11.0-openjdk-amd64
- sudo \cp -r /usr/lib/jvm/jdk-17 /usr/lib/jvm/default-java
- sudo \cp -r /usr/lib/jvm/jdk-17 /usr/lib/jvm/java-11-openjdk-amd64
- sudo \cp -r /usr/lib/jvm/jdk-17 /usr/lib/jvm/openjdk-11
- sudo update-alternatives --remove-all java
- sudo update-alternatives --remove-all javac
- sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk-17/bin/java 1
What I did here was copy the JDK 17 contents into every folder under /usr/lib/jvm. So even though the Docker image ships with JDK 11, I'm overwriting it with the JDK 17 uploaded alongside the source code, and now the tool is built using JDK 17.
PS: I know this is slower and not professional, but in my case it's easier and more convenient than trying to get help from the people who set up the Docker container.

Run Maven project inside Docker Container with run time parameters

I have a TestNG Selenium project that is built using Maven. I am running this Maven project with the Maven Surefire plugin like this:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.0.0-M3</version>
<configuration>
<forkMode>never</forkMode>
<useFile>true</useFile>
<testFailureIgnore>true</testFailureIgnore>
<!-- Suite testng xml file to consider for test execution-->
<suiteXmlFiles>
<suiteXmlFile>${suiteXmlFile}</suiteXmlFile>
</suiteXmlFiles>
</configuration>
</plugin>
What do I need to do?
I need to run this Selenium project inside a Docker container: move the complete source code into the container and run it from there. At run time I will pass the path of a TestNG XML file, and only that particular test should run. After the run I need to get the results from the Docker container back to the local system (which can be done with docker cp ...).
What have I done so far?
I have created a Docker image with Maven, Chrome and ChromeDriver. At run time I pass the TestNG XML file path and, as expected, only that particular test case runs. But...
Once the program completes, the Docker container exits; docker ps shows no running containers, so I am not able to see the report.
What do I want?
I want a way to keep the container from closing after the execution, so that I can go into the container and see the report.
My Dockerfile:
FROM kshivaprasad/java
RUN apt-get update
RUN apt-get upgrade --fix-missing -y
RUN apt-get install -y curl
RUN apt-get install -y p7zip \
p7zip-full \
unace \
zip \
unzip
# Install Chrome for Selenium
RUN curl http://dl.google.com/linux/chrome/deb/pool/main/g/google-chrome-stable/google-chrome-stable_83.0.4103.116-1_amd64.deb -o /chrome.deb
RUN dpkg -i /chrome.deb || apt-get install -yf
RUN rm /chrome.deb
# Install chromedriver for Selenium
RUN mkdir -p /app/bin
RUN curl https://chromedriver.storage.googleapis.com/83.0.4103.39/chromedriver_linux64.zip -o /tmp/chromedriver.zip \
&& unzip /tmp/chromedriver.zip -d /app/bin/ \
&& rm /tmp/chromedriver.zip
ARG MAVEN_VERSION=3.6.3
# 2- Define a constant with the working directory
ARG USER_HOME_DIR="/root"
# 3- Define the SHA key to validate the maven download
ARG SHA=c35a1803a6e70a126e80b2b3ae33eed961f83ed74d18fcd16909b2d44d7dada3203f1ffe726c17ef8dcca2dcaa9fca676987befeadc9b9f759967a8cb77181c0
# 4- Define the URL where maven can be downloaded from
ARG BASE_URL=http://apachemirror.wuchna.com/maven/maven-3/${MAVEN_VERSION}/binaries
# 5- Create the directories, download maven, validate the download, install it, remove downloaded file and set links
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& echo "Downlaoding maven" \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
\
&& echo "Checking download hash" \
&& echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
\
&& echo "Unziping maven" \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
\
&& echo "Cleaning and setting links" \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
# 6- Define environmental variables required by Maven, like Maven_Home directory and where the maven repo is located
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
COPY src /app/src
COPY pom.xml /app
COPY testng /app/testng
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /app/bin/chromedriver
#RUN mvn -f /app/pom.xml clean package
ENTRYPOINT ["/entrypoint.sh"]
The entrypoint.sh file is used to pass the argument at run time. It consists of:
#!/bin/sh
mvn -f /app/pom.xml clean install -DsuiteXmlFile=$1
How do I run this?
docker build -t my_image .
docker run -it my_image module/testng.xml
This process would be very time-consuming when quick time to market matters. I would suggest using Zalenium (https://opensource.zalando.com/zalenium/) to run your Selenium scripts in Docker containers; it also has various features such as viewing the ongoing execution and retrieving the reports.
I just needed to add a bash command at the end of the shell file (entrypoint.sh). It worked fine for me.
#!/bin/sh
mvn -f /app/pom.xml clean install -DsuiteXmlFile=$1
/bin/bash
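If the only goal is to get at the report, another option is to leave the stopped container around (or mount a volume) instead of keeping it running; a rough sketch, assuming the Surefire reports end up in the default /app/target/surefire-reports directory inside the container:
# Run with a name and without --rm so the stopped container sticks around
docker run --name selenium-run my_image module/testng.xml
# docker cp also works on stopped containers
docker cp selenium-run:/app/target/surefire-reports ./surefire-reports
# Or mount the target directory so the report lands on the host directly
docker run --rm -v "$(pwd)/target:/app/target" my_image module/testng.xml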

When building a Maven project in Docker, all target files are rooted

I have a Maven project which I want to build in a Docker container, so that I don't have to install Java, Maven, ... on my Jenkins system.
I build the Maven project as following:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro
-v $(pwd):/opt/maven
-w /opt/maven
maven
mvn clean install
This works great and the build is made correctly. However, when doing this as my regular user, all files in the target/ directory end up owned by root and not by jester, which is my current user.
Is there an easy way to fix this?
You can check out the -u option, as in this gist, which wraps docker run in a script:
#!/bin/sh
set -e
test ":$DEBUG" != :true || set -x
# set image
set -- debian:jessie "$@"
# use current user and its groups at host
for v in /etc/group /etc/passwd; do
[ ! -r "$v" ] || set -- -v $v:$v:ro "$@"
done
set -- --user "`id -u`:`id -g`" "$@"
for g in `id -G`; do
set -- --group-add "$g" "$@"
done
set -- -v "$HOME":"$HOME" "$@"
exec docker run --rm -it "$@"
The goal is to make sure the container mounts the local user definition, and uses the same uid/gid.
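Applied directly to the command from the question, a minimal sketch without the wrapper script; since the mapped user has no home directory inside the official maven image, the local repository is redirected into the workspace here via -Dmaven.repo.local (adjust the path as needed):
# Run the build as the calling user so target/ is not owned by root
docker run -it --rm \
  -v "$(pwd)":/opt/maven \
  -w /opt/maven \
  -u "$(id -u):$(id -g)" \
  maven \
  mvn -Dmaven.repo.local=/opt/maven/.m2/repository clean install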

Unable to stop java process using SIGINT on Linux

I am building a virtual machine for some tests; among other things it needs to run the DynamoDB web service. The VM is based on stock Ubuntu 16.04 64-bit.
This is the part of the provisioning script that installs DynamoDB:
# Install local instance of DynamoDB
DARCHFILE='/tmp/dynamodb_local_latest.zip'
wget -P /tmp/ -nv https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip
sudo unzip -q "$DARCHFILE" -d /usr/local/lib/dynamodb
sudo mkdir -pv /var/lib/dynamodb/
sudo mkdir -pv /var/log/dynamodb/
sudo apt-get install -y openjdk-8-jdk
This is the script that starts DynamoDB:
#!/bin/bash
# -*- mode: sh -*-
# vi: set ft=sh :
java -Djava.library.path=/usr/local/lib/dynamodb/DynamoDBLocal_lib/ \
-jar /usr/local/lib/dynamodb/DynamoDBLocal.jar \
-dbPath /var/lib/dynamodb/ \
-optimizeDbBeforeStartup \
-port 8000 > /var/log/dynamodb/trace.log 2> /var/log/dynamodb/error.log &
echo $! > /var/run/dynamodb.pid
Now if I try to stop the process using the "nice" SIGINT (the equivalent of Ctrl+C), nothing happens. Only sending SIGTERM works.
So this does nothing:
sudo kill -s SIGINT $(< /var/run/dynamodb.pid)
As we can confirm by executing this:
sudo ps -x | grep $(< /var/run/dynamodb.pid)
While this works just fine:
sudo kill -s SIGTERM $(< /var/run/dynamodb.pid)
On the other hand, if I start DynamoDB manually and don't send it to the background, Ctrl+C does work.
So what gives? Is it my mistake or behavior by design?
Thanks
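One likely explanation worth verifying: a non-interactive shell with job control disabled starts background jobs with SIGINT (and SIGQUIT) set to ignore, and the JVM inherits that disposition, so kill -s SIGINT is silently dropped. A quick way to check, using the pid file written by the script above (SIGINT is signal 2, so look for the 0x2 bit in the SigIgn mask):
# Inspect the signal dispositions of the backgrounded JVM ...
grep -E 'Sig(Ign|Cgt)' /proc/$(< /var/run/dynamodb.pid)/status
# ... and compare them with those of a foreground run; if SIGINT shows up
# in SigIgn, the process never even sees the signal.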

Jenkins Slave can't read settings.xml

I have created a Jenkins slave image for Docker which I want to use to build all of my Java projects; however, I can't work out how to reference the .m2/settings.xml file to tell it where to pull from.
My Dockerfile is:
FROM openjdk:8
MAINTAINER Chris Hudson <chudson@amelco.co.uk>
RUN apt-get -qqy update && \
apt-get -y install openssh-server sudo
RUN useradd -m -u 1000 -s /bin/bash jenkins && \
mkdir -p /home/jenkins/.ssh && \
mkdir -p /home/jenkins/.m2 && \
echo jenkins:jenkins | chpasswd && \
mkdir -p /etc/sudoers.d/ && \
echo "jenkins ALL=(root) NOPASSWD: ALL" > /etc/sudoers.d/jenkins && \
chmod 440 /etc/sudoers.d/jenkins
COPY id_rsa.pub /home/jenkins/.ssh/authorized_keys
COPY settings.xml /home/jenkins/.m2/
RUN chown -R jenkins:jenkins /home/jenkins
RUN mkdir -p /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
But when I run the build, it attempts to pull from Maven Central and not from our local Artifactory instance, which is configured in the settings file.
This works when I run it on the Jenkins master, but I want to offload the builds to the slaves, and I can't work out how to configure Maven correctly.
I think your workspace is mounted from the slave, so the build doesn't read the .m2 directory from your container.
You can try using the Config File Provider Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Config+File+Provider+Plugin) to create the settings.xml and configure your Maven build step to use it.
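Alternatively, if the settings.xml baked into the image at /home/jenkins/.m2/settings.xml is the one that should be used, the Maven invocation can point at it explicitly (path as in the Dockerfile above):
# Tell Maven exactly which settings file to use, regardless of which
# home directory or workspace the build happens to run from
mvn -s /home/jenkins/.m2/settings.xml clean install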
