I'm trying to create a CI/CD pipeline for a simple Java/Maven project.
The runner I'm using is a Docker runner.
I'm using a Dockerfile to build an image that installs Maven, Java, etc., and the program should be tested inside this container.
Sorry for the question, but I am new to CI/CD pipelines in GitLab.
On GitHub it works just fine; have a look: https://github.com/ni920/CICD-Test
Thank you
Here are the CI logs:
...
Executing "step_script" stage of the job script
$ docker build --build-arg JAVA_VERSION=openjdk7
/bin/sh: eval: line 95: docker: not found
Cleaning up file based variables
ERROR: Job failed: exit code 127
That's the .gitlab-ci.yml:
stages:
  - java7
  # - java11
  # - deploy

java7:
  stage: java7
  script:
    - docker build --build-arg JAVA_VERSION=openjdk7
  # tags:
  #   - docker

#java11:
#  stage: java11
#  script:
#    - docker build --build-arg JAVA_VERSION=openjdk11
#  tags:
#    - docker
That's the Dockerfile:
# Pull base image.
FROM alpine as build
ARG MAVEN_VERSION=3.6.1
ARG USER_HOME_DIR="/root"
ARG JAVA_VERSION=openjdk7
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
ENV HTTP_PROXY=#comment
ENV HTTPS_PROXY=#comment
# Install Java.
RUN apk --update --no-cache add ${JAVA_VERSION} curl
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
# Define working directory.
WORKDIR /data
# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/default-jvm/
# Define default command.
CMD ["mvn", "--version"]
Running your pipelines using the Docker executor means that your jobs will run in a Docker container, but not that you will be able to execute docker commands.
If you need to run docker commands inside a GitLab CI job (read: inside a container), you will need Docker-in-Docker (often abbreviated DinD). It is a vast topic in itself, but you can get started with GitLab CI's documentation: Use Docker to build Docker images
I always use DinD and have a minimal setup in my gitlab-ci.yml.
Using a docker image as a default:
image: docker:19.03.13
Define a default variable for TLS certificates:
variables:
  DOCKER_TLS_CERTDIR: "/certs"
Then use a docker image as a service to enable DinD:
services:
  - name: docker:19.03.13-dind
    alias: docker
I wrote a few posts about using Docker-in-Docker on GitLab CI that you may find useful, but I still recommend reading GitLab's documentation extensively before reading them.
Related
I am quite new to the whole CI topic and I'm trying to run tests in my pipeline after my merge request, but when my command tries to run the container I get an error.
This is my gitlab-ci file:
image: docker:18

variables:
  REPO_URL: "registry.gitlab.com/xxxxx/xxxxxxx"
  PROD_SRV: "xxx"
  DEV_SRV: "xxx"

services:
  - docker:dind

stages:
  - build jar
  - build docker image
  - tests

maven-build:
  only:
    - work
    - test
  image: maven:3-jdk-8
  stage: build jar
  script: "mvn -Dmaven.test.skip=true package -B"
  artifacts:
    paths:
      - target/*.jar
    expire_in: 7 days

docker build:
  stage: build docker image
  only:
    - work
    - test
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login --username "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin
    - chmod a+x ./mvnw
  script:
    - docker info
    - docker build -t $REPO_URL:$CI_COMMIT_SHA .
    - docker images
    - docker push $REPO_URL:$CI_COMMIT_SHA
  after_script:
    - docker logout registry.gitlab.com 2>/dev/null
    - rm /root/.docker/config.json 2>/dev/null

tests job branch test:
  stage: tests
  only:
    - test
  variables:
    GIT_STRATEGY: none
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login --username "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin
  script:
    - docker pull $REPO_URL:$CI_COMMIT_SHA
    - docker run -t $REPO_URL:$CI_COMMIT_SHA .
    - docker push $REPO_URL:$CI_COMMIT_SHA
  after_script:
    - docker logout registry.gitlab.com 2>/dev/null
    - rm /root/.docker/config.json 2>/dev/null
and this is my Dockerfile:
# syntax=docker/dockerfile:1
FROM openjdk:16-alpine3.13 as base
WORKDIR /amsn-counter
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline
COPY src ./src
FROM base as test
RUN ["./mvnw", "test"]
FROM base as development
CMD ["./mvnw", "spring-boot:run", "-Dspring-boot.run.profiles=mysql", "-Dspring-boot.run.jvmArguments='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000'"]
FROM base as build
RUN ./mvnw package
FROM openjdk:11-jre-slim as production
EXPOSE 8080
ADD /target/linkedin-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
and the error in the pipeline as well:
If you need any more details I will gladly answer, and as I said I'm new, so don't hesitate to point out anything that looks weird in the files or in my description in general.
Edit: My branches are named work (the local branch) and test.
It looks as if there is no target specified when you are running docker build. Try this:
script:
  - docker info
  - image_name="$REPO_URL:$CI_COMMIT_SHA"
  - docker build
      --target test
      --tag $image_name
      .
  - docker push $image_name
The Docker build works on my laptop, but on GitLab I get:
Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain
I've tried a lot of different setups but nothing works; it fails at gradlew build. Any ideas are welcome.
My .gitlab-ci.yaml
....variables here
publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
    - export GRADLE_USER_HOME=`pwd`/gradle
    - export CLASSPATH=`pwd`/gradle/wrapper
  cache:
    paths:
      - gradle/wrapper
      - .gradle/wrapper
      - .gradle/caches
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID .
My Dockerfile
FROM openjdk:11
ENV wdir=code
ENV MY_SERVICE_PORT=8080
WORKDIR /$wdir
COPY . /code
RUN echo "Running build"
RUN ["/code/gradlew", "build"]
EXPOSE $MY_SERVICE_PORT
# Run the service
CMD ["java", "-jar", "build/libs/code-1.0-SNAPSHOT.jar"]
I created a workaround by installing Gradle; it works now:
FROM openjdk:11
ENV wdir=code
ENV MY_SERVICE_PORT=8080
WORKDIR /code
# Install Gradle
RUN wget -q https://services.gradle.org/distributions/gradle-6.5-bin.zip \
&& unzip gradle-6.5-bin.zip -d /opt \
&& rm gradle-6.5-bin.zip
ENV GRADLE_HOME /opt/gradle-6.5
ENV PATH $PATH:/opt/gradle-6.5/bin
# Prepare by downloading dependencies
ADD build.gradle /code/build.gradle
ADD src /code/src
RUN echo "Running build"
RUN cd /code
RUN gradle --no-daemon build
EXPOSE $MY_SERVICE_PORT
# Run the service
CMD ["java", "-jar", "build/libs/code-1.0-SNAPSHOT.jar"]
I have two docker images: imageA and imageB.
ImageA Dockerfile
FROM openjdk:11-jre-slim
COPY ./target/java-app.jar /java-application/
ImageB Dockerfile
FROM imageA
# Install Python.
RUN \
apt-get update && \
apt-get install -y python python-dev python-pip python-virtualenv && \
rm -rf /var/lib/apt/lists/*
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
ENTRYPOINT ./startPythonServiceAndJavaApp.sh
startPythonServiceAndJavaApp.sh is the script that starts both the Java app and the Python app:
java -XX:+UseContainerSupport $JAVA_OPTIONS -jar ./java-application/java-app.jar & python app.py;
Then I build imageA with docker build -t imageA .; it builds successfully.
Then I build imageB and start the container. The Python app starts successfully, but I get the error:
Error: Unable to access jarfile ./java-application/java-app.jar
When I shell into the running container (note, it is running) and go into the app directory, ls shows these files:
C:\Users\user>docker exec -it 12345 bash
root@12345:/app# ls
Dockerfile app.py deploy.sh requirements.txt java-app.jar startPythonServiceAndJavaApp.sh
My question: why did java-app.jar end up in the app directory? In the Dockerfile of imageA I told it to go to the java-application directory:
COPY ./target/java-app.jar /java-application/
My question can be rephrased as: how do COPY and WORKDIR interact? (The jar visible in /app came from imageB's COPY . /app, since the build context evidently contained a copy of it; the jar from imageA was still at /java-application, but the start script's relative path ./java-application/java-app.jar was resolved against WORKDIR /app, where no such directory exists.) As a quick solution, I put the jar file into the root of the container and started both applications from there.
Here are the changes:
ImageA Dockerfile: just copy the jar to the root of the container.
COPY ./target/java-app.jar /
startPythonServiceAndJavaApp.sh
java -XX:+UseContainerSupport $JAVA_OPTIONS -jar ./java-app.jar & python app.py;
Both applications now run in a single container. I hope this helps others; please correct me if I am wrong or share your ideas.
I have a Maven project which I want to build in a Docker container, so that I don't have to install Java, Maven, ... on my Jenkins system.
I build the Maven project as follows:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v $(pwd):/opt/maven \
  -w /opt/maven \
  maven \
  mvn clean install
This works great; the build is made correctly. However, when doing this as my regular user, all files in the target/ directory are owned by root and not by jester, which is my current user.
Is there an easy way to fix this?
You can check out the -u option, as in this gist, which wraps docker run in a script:
#!/bin/sh
set -e
test ":$DEBUG" != :true || set -x
# set image
set -- debian:jessie "$@"
# use current user and its groups at host
for v in /etc/group /etc/passwd; do
  [ ! -r "$v" ] || set -- -v $v:$v:ro "$@"
done
set -- --user "`id -u`:`id -g`" "$@"
for g in `id -G`; do
  set -- --group-add "$g" "$@"
done
set -- -v "$HOME":"$HOME" "$@"
exec docker run --rm -it "$@"
The goal is to make sure the container mounts the local user definition and uses the same uid/gid; in the simplest case, adding -u "$(id -u):$(id -g)" to your docker run command is enough for files created in the mounted volume to be owned by your user.
I have a project with two Maven modules.
One module is Java.
The other module is Angular 2.
On top of these two modules sits Docker.
TomEE and MySQL run in the Docker containers. I want to debug the Java backend together with the frontend, meaning that when I hit http://localhost:8080/mywebapp, execution should stop at the breakpoint I set in the backend (a Java file). I am using IntelliJ.
Does somebody know how to do it?
Since you are running your application in a Docker container, remote debugging is the only way. You could:
1. Attach the remote debugging agent by following these steps. This means you have to expose an additional port apart from Tomcat's 8080.
2. Expose the port in the Dockerfile and map it on the host. This can be done with the -p flag or like this.
3. In IntelliJ IDEA, set up remote debugging. Then debug while hitting http://localhost:8080/mywebapp.
With Gauraj's suggestions I have modified the 3 files:
1. docker-compose.yml:
version: '2.0'
services:
  db:
    build: 04.MySQL/
  tomee:
    build: .
    depends_on:
      - db
    command: >
      /bin/bash -c "
      while ! nc -z db 3306;
      do
        echo sleeping;
        sleep 5;
      done;
      echo Connected!;
      catalina.sh run;
      "
    links:
      - db:db
    ports:
      - "8080:8080"
      - "8000:8000"
  ssl:
    build: 05.ProxySSL/
    links:
      - tomee
    ports:
      - "443:443"
      - "80:80"
2. Dockerfile:
FROM java:8-jdk
MAINTAINER "Software Engineering, RWTH Aachen University"
ENV PATH /usr/local/tomee/bin:$PATH
RUN mkdir -p /usr/local/tomee
WORKDIR /usr/local/tomee
# curl -fsSL 'https://www.apache.org/dist/tomee/KEYS' | awk -F ' = ' '$1 ~ /^ +Key fingerprint$/ { gsub(" ", "", $2); print $2 }' | sort -u
ENV GPG_KEYS \
BDD0BBEB753192957EFC5F896A62FC8EF17D8FEF \
223D3A74B068ECA354DC385CE126833F9CF64915 \
7A2744A8A9AAF063C23EB7868EBE7DBE8D050EEF \
82D8419BA697F0E7FB85916EE91287822FDB81B1 \
9056B710F1E332780DE7AF34CBAEBE39A46C4CA1 \
A57DAF81C1B69921F4BA8723A8DE0A4DB863A7C1 \
B7574789F5018690043E6DD9C212662E12F3E1DD \
B8B301E6105DF628076BD92C5483E55897ABD9B9 \
DBCCD103B8B24F86FFAAB025C8BB472CD297D428 \
F067B8140F5DD80E1D3B5D92318242FE9A0B1183 \
FAA603D58B1BA4EDF65896D0ED340E0E6D545F97
RUN set -xe \
&& for key in $GPG_KEYS; do \
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
done
RUN set -x \
&& curl -fSL https://dist.apache.org/repos/dist/release/tomee/tomee-1.7.4/apache-tomee-1.7.4-plus.tar.gz.asc -o tomee.tar.gz.asc \
&& curl -fSL http://apache.rediris.es/tomee/tomee-1.7.4/apache-tomee-1.7.4-plus.tar.gz -o tomee.tar.gz \
&& gpg --batch --verify tomee.tar.gz.asc tomee.tar.gz \
&& tar -zxf tomee.tar.gz \
&& mv apache-tomee-plus-1.7.4/* /usr/local/tomee \
&& rm -Rf apache-tomee-plus-1.7.4 \
&& rm bin/*.bat \
&& rm tomee.tar.gz*
RUN apt update && apt install -y netcat-openbsd
RUN mkdir -p /usr/local/tomee/webapps
COPY 03.dockerconfig/tomcat-users.xml /usr/local/tomee/conf/
COPY 01.Backend/lib/ /usr/local/tomee/lib/
COPY 01.Backend/lib/hibernate-jpa-2.1-api-1.0.0.Final.jar /usr/local/tomee/lib/hibernate-jpa-2.1-api-1.0.0.Final.jar
COPY 01.Backend/target/macoco-be.war /usr/local/tomee/webapps/macoco-be.war
COPY 02.Frontend/MaCoCoLive/target/MaCoCoLive.war /usr/local/tomee/webapps/MaCoCoLive.war
COPY 01.Backend/target/apache-tomee/bin/catalina.sh /usr/local/tomee/bin/catalina.sh
EXPOSE 8080 4200 8000
CMD ["catalina.sh", "jpda run"]
3. catalina.sh:
JPDA_SUSPEND="y"
JDPA_OPTS -agentlib:jdwp=transport=dt_socket,
address=8000,server=y,suspend=$JPDA_SUSPEND
My remote debug configuration:
Then I run "docker-compose build" and "docker-compose up". But when I debug, IntelliJ shows the error:
"Unable to open debugger port (localhost:8000): java.io.IOException 'handshake failed - connection prematurely closed'"
Just now I figured out how to make it work in NetBeans!
I think remote debugging TomEE inside Docker from IntelliJ is impossible, because IntelliJ tries to access catalina.sh inside the Docker image, which Docker does not permit. In NetBeans the server is integrated with the IDE, so it doesn't need to access catalina.sh.
What I do is a little tricky: first I run "mvn clean install docker:stop docker:start -DskipTests tomee:run", which starts the TomEE server inside Docker. Then I add this TomEE server to NetBeans (each time I rerun the mvn command, I need to add the server again).
Then I just set a breakpoint and debug the Maven project.
After that, port 8080 is occupied by NetBeans, so I cannot access localhost:8080. But I can run "npm start" to serve the frontend on port 4200; then every action in the frontend stops at the breakpoint in the backend.
As far as I understand, you already found a solution for your remote debugging problem. If you are still interested in remote debugging from IntelliJ, feel free to use my GitHub demo. Just follow the instructions on my blog and it should work out of the box.