I am getting an error while running tests on gitlab CI using the command:
./gradlew clean test
I am using test containers to run my tests: https://www.testcontainers.org/modules/docker_compose/
Here is the code I am using to load the docker compose file which is at src/test/resources.
@ClassRule
public static DockerComposeContainer container =
        new DockerComposeContainer(new File(
                BaseServiceTest.class.getClassLoader().getResource("docker-compose.yml").getFile()));
It runs fine when I run it locally, but when running CI on GitLab, I get the following error:
{"timestamp":"2019-06-17T18:47:44.903Z","level":"ERROR","thread":"Test worker","logger":"docker[docker/compose:1.8.0]","message":"Log output from the failed container:\n.IOError: [Errno 2] No such file or directory: '/builds/xxxx/xxxx/xxxx-xxxx/src/test/resources/docker-compose.yml'\n","context":"default"}
Following is my gitlab-ci.yml file:
include:
  - project: 'xxxx/xxxx/sdlc'
    file: '/sdlc.yml'

variables:
  CONTAINER_NAME: xxxx-xxxx

test:
  stage: test
  image: registry.xxxx.com/xxxx-alpine-jdk8:v1_8_181
  script:
    - ls -la src/test/resources
    - ./gradlew clean test -i
In my script I run ls -la src/test/resources, and I can see the docker-compose.yml file listed when that step runs. I am not sure why it is not available when the code runs.
Based on the Testcontainers docs here, a DinD (Docker-in-Docker) service is required for Testcontainers on GitLab CI.
Here is an example:
# DinD service is required for Testcontainers
services:
  - docker:dind

variables:
  # Instruct Testcontainers to use the daemon of DinD.
  DOCKER_HOST: "tcp://docker:2375"
  # Improve performance with overlayfs.
  DOCKER_DRIVER: overlay2

test:
  image: gradle:5.0
  stage: test
  script: ./gradlew test
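On the Java side, the compose file can also be referenced by a relative path instead of through the class loader, which avoids baking the runner's absolute /builds path into the test. A minimal sketch, assuming the file stays at src/test/resources/docker-compose.yml; the service name "app" and port 8080 are hypothetical placeholders, and the DinD setup above is still required:

import java.io.File;

import org.junit.ClassRule;
import org.testcontainers.containers.DockerComposeContainer;

public class BaseServiceTest {

    // A relative path is resolved against the build's working directory,
    // so it points at src/test/resources/docker-compose.yml both locally
    // and inside the GitLab CI job.
    @ClassRule
    public static DockerComposeContainer container =
            new DockerComposeContainer(new File("src/test/resources/docker-compose.yml"))
                    .withExposedService("app", 8080); // hypothetical service name/port
}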
Related
I have a CI/CD pipeline in GitLab that uses GitLab secrets (CI/CD variables) and passes them to a Spring Boot application as follows:
docker-compose.yml:
services:
  my-app:
    image: my-app-image:latest
    environment:
      - SPRING_PROFILES_ACTIVE=test
      - app.welcome.secret=$APP_WELCOME_SECRET
Of course I defined $APP_WELCOME_SECRET as a variable on GitLab's CI/CD variables configuration page.
Problem: running the GitLab CI pipeline results in:
The APP_WELCOME_SECRET variable is not set. Defaulting to a blank string.
But why?
I could solve it by writing the secret value into a .env file as follows:
gitlab-ci.yml:
deploy:
  stage: deploy
  script:
    - touch .env
    - echo "APP_WELCOME_SECRET=$APP_WELCOME_SECRET" >> .env
    - scp -i $SSH_KEY docker-compose.yml .env user@$MY_SERVER:~/deployment/app
While that works, I would still be interested in a better approach.
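For context, here is a hedged sketch of how such a value would typically be consumed on the Spring side; the class and field names are hypothetical and not from the original question. If the variable is blank when docker-compose substitutes it, the application simply sees an empty string:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Hypothetical consumer of the app.welcome.secret property injected via the
// container environment in docker-compose.yml.
@Component
public class WelcomeSecretHolder {

    // Falls back to an empty string when the property is missing, which is
    // effectively what happens when $APP_WELCOME_SECRET is not set at
    // docker-compose time.
    @Value("${app.welcome.secret:}")
    private String welcomeSecret;

    public String getWelcomeSecret() {
        return welcomeSecret;
    }
}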
I am running Sonar on my Spring Boot based service. Previously coverage was 75%, but after migrating the Sonar runner to Java 11, my code coverage is 15%. I tested some other services: when I run the tests with a JDK 11 image and Sonar using JDK 11, I get proper coverage. But in this service, because of another issue, I don't want to run the tests using JDK 11. What might be missing?
Configuration giving proper coverage (Note this is a different service)
.test: &test
  stage: test
  image: maven:3.8.1-openjdk-11-slim
  services:
    - mongo
    - docker:dind
  script:
    - mvn test -Dspring.profiles.active=$PROFILE
  tags:
    - build-devops
  artifacts:
    paths:
      - target/jacoco.exec
      - target/spotbugsXml.xml
      - target/dependency-check-report.xml
      - target/dependency-check-report.html
      - target/checkstyle-result.xml

sonarqube:
  stage: sonar
  image: maven:3.8.1-openjdk-11-slim
  script:
    - mvn -q -U -B verify --settings settings.xml sonar:sonar -DskipTests=true -Dsonar.host.url=$SONAR_URL -Dsonar.login=$SONAR_LOGIN
Configuration of the service having the issue:
.test: &test
  stage: test
  image: maven:3.8.1-openjdk-8-slim
  services:
    - mongo
    - docker:dind
  script:
    - mvn test -Dspring.profiles.active=local
    - mvn test -Dspring.profiles.active=central
  tags:
    - build-devops
  artifacts:
    paths:
      - target/jacoco.exec
      - target/spotbugsXml.xml
      - target/dependency-check-report.xml
      - target/dependency-check-report.html
      - target/checkstyle-result.xml

sonarqube:
  stage: sonar
  image: maven:3.8.1-openjdk-11-slim
  script:
    - mvn -q -U -B verify sonar:sonar --settings settings.xml -DskipTests=true -Dsonar.host.url=$SONAR_URL -Dsonar.login=$SONAR_LOGIN
I'm deploying a Java 11 REST API to GKE using GitHub, Gradle, and Docker.
The following errors only happen on Google Cloud Build, not in the local environment. According to the error, it seems the app can't find the DB server (Google Cloud SQL) from Google Cloud Build. I tried both public and private IPs, but the results were the same:
...
Step #0 - "Build": 2021-03-11 04:12:04.644 INFO 115 --- [ Test worker] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
Step #0 - "Build": 2021-03-11 04:12:35.855 ERROR 115 --- [ Test worker] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
Step #0 - "Build":
Step #0 - "Build": com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
Step #0 - "Build":
Step #0 - "Build": The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
...
Step #0 - "Build": Caused by: java.net.SocketTimeoutException: connect timed out
...
This happened after I added integration tests. The app deployed successfully after I removed the tests. So, I can remove the integration tests to avoid this issue. The thing is, I want to keep the tests if possible because there are things that we can't test with unit tests.
This is the Dockerfile I'm using for deployments to GKE. RUN gradle build --no-daemon -i --stacktrace is where the error occurs during the test task:
ARG APP_NAME=test-api
ARG GRADLE_USER_HOME_PATH=/home/gradle/cache_home/
#cache dependencies to reduce downloads
FROM gradle:6.8-jdk11 AS cache
ARG APP_NAME
ARG GRADLE_USER_HOME_PATH
WORKDIR /${APP_NAME}/
RUN mkdir -p ${GRADLE_USER_HOME_PATH}
ENV GRADLE_USER_HOME ${GRADLE_USER_HOME_PATH}
COPY --chown=gradle:gradle build.gradle /${APP_NAME}/
RUN gradle clean build --no-daemon -i --stacktrace -x bootJar
#build
FROM gradle:6.8-jdk11 AS build
ARG APP_NAME
ARG GRADLE_USER_HOME_PATH
WORKDIR /${APP_NAME}/
#Copies cached dependencies
COPY --from=cache ${GRADLE_USER_HOME_PATH} /home/gradle/.gradle/
#Copies the Java source code inside the container
COPY --chown=gradle:gradle . /${APP_NAME}/
#Compiles the code and runs unit tests (with Gradle build)
RUN gradle build --no-daemon -i --stacktrace
#Discards the Gradle image with all the compiled classes/unit test results etc.
#Starts again from the JRE image and copies only the JAR file created before
FROM openjdk:11-jre-slim
ARG APP_NAME
COPY --from=build /${APP_NAME}/build/libs/${APP_NAME}.jar /${APP_NAME}/${APP_NAME}.jar
ENTRYPOINT ["java","-jar","/test-api/test-api.jar"]
How do I implement integration tests that use a DB for an app deployed to GKE? Or maybe I need to change my approach?
I managed to solve the problem referencing this Q&A: Run node.js database migrations on Google Cloud SQL during Google Cloud Build
I had to add two steps (Cloud SQL Proxy and Test) to cloudbuild.yaml in order to use the Cloud SQL Proxy. The other steps were auto-generated by GKE:
steps:
  - name: gradle:6.8.3-jdk11
    entrypoint: sh
    args:
      - '-c'
      - |-
        apt-get update && apt-get install -y wget \
        && wget "https://storage.googleapis.com/cloudsql-proxy/v1.21.0/cloud_sql_proxy.linux.amd64" -O cloud_sql_proxy \
        && chmod +x cloud_sql_proxy \
        || exit 1
    id: Cloud SQL Proxy
  - name: gradle:6.8.3-jdk11
    entrypoint: sh
    args:
      - '-c'
      - |-
        (./cloud_sql_proxy -instances=<CONNECTION_NAME>=tcp:<PORT> & sleep 2) \
        && gradle test --no-daemon -i --stacktrace \
        || exit 1
    id: Test
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '-t'
      - '$_IMAGE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - $_DOCKERFILE_NAME
    dir: $_DOCKERFILE_DIR
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_IMAGE_NAME:$COMMIT_SHA'
    id: Push
  - name: gcr.io/cloud-builders/gke-deploy
    args:
      - prepare
      - '--filename=$_K8S_YAML_PATH'
      - '--image=$_IMAGE_NAME:$COMMIT_SHA'
      - '--app=$_K8S_APP_NAME'
      - '--version=$COMMIT_SHA'
      - '--namespace=$_K8S_NAMESPACE'
      - '--label=$_K8S_LABELS'
      - '--annotation=$_K8S_ANNOTATIONS,gcb-build-id=$BUILD_ID'
      - '--create-application-cr'
      - >-
        --links="Build
        details=https://console.cloud.google.com/cloud-build/builds/$BUILD_ID?project=$PROJECT_ID"
      - '--output=output'
    id: Prepare deploy
  - name: gcr.io/cloud-builders/gsutil
    args:
      - '-c'
      - |-
        if [ "$_OUTPUT_BUCKET_PATH" != "" ]
        then
          gsutil cp -r output/suggested gs://$_OUTPUT_BUCKET_PATH/config/$_K8S_APP_NAME/$BUILD_ID/suggested
          gsutil cp -r output/expanded gs://$_OUTPUT_BUCKET_PATH/config/$_K8S_APP_NAME/$BUILD_ID/expanded
        fi
    id: Save configs
    entrypoint: sh
  - name: gcr.io/cloud-builders/gke-deploy
    args:
      - apply
      - '--filename=output/expanded'
      - '--cluster=$_GKE_CLUSTER'
      - '--location=$_GKE_LOCATION'
      - '--namespace=$_K8S_NAMESPACE'
    id: Apply deploy
...
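As an aside, here is a hedged sketch of how the integration tests run in the Test step might point their datasource at the locally running proxy; the port, database name, and username are hypothetical placeholders and would need to match the -instances=...=tcp:<PORT> setting above:

import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.TestPropertySource;

// Hypothetical integration test: the Cloud SQL Proxy started in the Test step
// listens on 127.0.0.1, so the datasource URL simply targets localhost.
@SpringBootTest
@TestPropertySource(properties = {
        "spring.datasource.url=jdbc:mysql://127.0.0.1:3306/testdb", // port/db are placeholders
        "spring.datasource.username=testuser"                       // placeholder user
})
public class ExampleIntegrationTest {
    // actual test methods omitted
}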
And Dockerfile:
ARG APP_NAME=test-api
ARG APP_HOME=/test-api
FROM openjdk:11-jdk-slim AS build
USER root
ARG APP_HOME
WORKDIR ${APP_HOME}/
COPY . .
# test is performed from Test step from cloudbuild.yaml
RUN ./gradlew build --no-daemon -i --stacktrace -x test
FROM openjdk:11-jdk-slim
ARG APP_NAME
ARG APP_HOME
WORKDIR ${APP_HOME}/
COPY --from=build ${APP_HOME}/build/libs/${APP_NAME}.jar ./${APP_NAME}.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/test-api/test-api.jar"]
While this solved the problem in question, the script has one small drawback: the Gradle dependencies are downloaded twice (once in the Test step and again in the Build step). I couldn't manage to use the Cloud SQL Proxy with gcr.io/cloud-builders/docker, so I worked around it by running the tests in a separate Test step instead of in the Build step. Maybe this can be solved using either docker run --network="host" or host.docker.internal, but I didn't try.
I want to configure the GitLab pipeline to run my integration tests against a Postgres DB using Maven. I tried following this documentation, but afterwards I noticed that it only works with the shared GitLab runners, whereas I am using my own GitLab runner, which runs in Kubernetes.
My gitlab-ci.yml:
cache:
  key: "$CI_COMMIT_REF_NAME"
  untracked: true
  paths:
    - .m2/repository/

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository/"
  POSTGRES_DB: postgres
  POSTGRES_USER: runner
  POSTGRES_PASSWORD: runner

stages:
  - build
  - verify

build:
  image: maven:3.6.0-jdk-8
  stage: build
  script:
    - "mvn $MAVEN_CLI_OPTS --quiet clean package -Dmaven.test.skip=true"
  artifacts:
    paths:
      - "target/*"

test:
  image: maven:3.6.0-jdk-8
  services:
    - postgres:latest
  stage: verify
  script:
    - "mvn $MAVEN_CLI_OPTS --quiet -Dspring.profiles.active=dev clean test"
Using a shared runner this configuration works fine, but I have to use the runner from Kubernetes. Is there any way to execute my tests against a postgres DB without using the shared runner?
You're hitting a difference in the way networking is handled by the Docker executor and by the Kubernetes executor.
The Docker executor works pretty much like docker-compose, bringing up all your containers in the same network. Each container gets an IP and a DNS entry is created: if your service is named postgres, the command nc postgres will resolve the postgres container's IP and contact it (172.17.0.15:5432, for example).
The Kubernetes executor creates a runner pod. All your containers start in the same pod and share a single IP address. Networking between containers in the same pod goes through 127.0.0.1, so to reach the postgres container you'll likely want to contact 127.0.0.1:5432. In other words, if you use 127.0.0.1 instead of postgres, it should work.
In order to get your pipeline working on both executors, you can either:
- Detect which kind of runner you're on using the runner tags ($CI_RUNNER_TAGS)
- Define a custom variable $POSTGRES_URL on all your executors
- Try to resolve postgres and fall back to 127.0.0.1 (see the sketch below)
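A minimal sketch of the third option, assuming the application builds its JDBC URL itself; the class name and the JDBC string are illustrative, not from the original question:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class PostgresHost {

    // Resolve the "postgres" service host name (Docker executor); if it does
    // not resolve (Kubernetes executor), fall back to 127.0.0.1.
    public static String resolve() {
        try {
            return InetAddress.getByName("postgres").getHostAddress();
        } catch (UnknownHostException e) {
            return "127.0.0.1";
        }
    }

    public static void main(String[] args) {
        System.out.println("jdbc:postgresql://" + resolve() + ":5432/postgres");
    }
}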
I have a simple Java project with a HelloWorld class. I want to add GitLab CI/CD for this project using a .gitlab-ci.yml file and a runner configuration. I am trying to find a reference, but am unable to find one for a simple Java project. Any help configuring the above project is appreciated.
Thanks
Rama
I used this as a simple setup for my .gitlab-ci.yml:
build:
  stage: build
  image: openjdk:8-jdk
  script: javac src/javaTest/JavaTest.java
  artifacts:
    untracked: true
https://gitlab.com/peterfromearth/java-test/blob/master/.gitlab-ci.yml
This will build and test your application using Gradle:
image: gradle:alpine

stages:
  - build
  - test

variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle

build:
  stage: build
  script:
    - gradle --build-cache assemble
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 week

test:
  stage: test
  script:
    - gradle check
  after_script:
    - echo "End CI"