SonarQube shows lower coverage when Java 8 is used

I am running Sonar on my Spring Boot-based service. Coverage was previously 75%, but after migrating the Sonar runner to Java 11 it dropped to 15%. In some other services, when I run the tests with a JDK 11 image and Sonar with JDK 11, I get proper coverage. In this service, however, I don't want to run the tests with JDK 11 because of an unrelated issue. What might be missing?
Configuration giving proper coverage (Note this is a different service)
.test: &test
  stage: test
  image: maven:3.8.1-openjdk-11-slim
  services:
    - mongo
    - docker:dind
  script:
    - mvn test -Dspring.profiles.active=$PROFILE
  tags:
    - build-devops
  artifacts:
    paths:
      - target/jacoco.exec
      - target/spotbugsXml.xml
      - target/dependency-check-report.xml
      - target/dependency-check-report.html
      - target/checkstyle-result.xml

sonarqube:
  stage: sonar
  image: maven:3.8.1-openjdk-11-slim
  script:
    - mvn -q -U -B verify --settings settings.xml sonar:sonar -DskipTests=true -Dsonar.host.url=$SONAR_URL -Dsonar.login=$SONAR_LOGIN
Configuration of the service having the issue:
.test: &test
  stage: test
  image: maven:3.8.1-openjdk-8-slim
  services:
    - mongo
    - docker:dind
  script:
    - mvn test -Dspring.profiles.active=local
    - mvn test -Dspring.profiles.active=central
  tags:
    - build-devops
  artifacts:
    paths:
      - target/jacoco.exec
      - target/spotbugsXml.xml
      - target/dependency-check-report.xml
      - target/dependency-check-report.html
      - target/checkstyle-result.xml

sonarqube:
  stage: sonar
  image: maven:3.8.1-openjdk-11-slim
  script:
    - mvn -q -U -B verify sonar:sonar --settings settings.xml -DskipTests=true -Dsonar.host.url=$SONAR_URL -Dsonar.login=$SONAR_LOGIN
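A hedged guess at the difference: the sonar job runs `mvn verify` under JDK 11, which recompiles the classes, while `target/jacoco.exec` was recorded against the JDK 8-compiled classes. JaCoCo matches execution data to classes by checksum, so recompiled classes can show up as uncovered. One workaround worth trying (an assumption on my part, not a verified fix) is to generate JaCoCo's XML report inside the test job itself and import that via `sonar.coverage.jacoco.xmlReportPaths` instead of relying on the binary `.exec` file:

```xml
<!-- Sketch of a jacoco-maven-plugin setup (the version number is illustrative):
     the XML report is produced during the JDK 8 test job, so it matches the
     classes the tests actually ran against. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.7</version>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>test</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The sonar invocation would then add `-Dsonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml`, and `target/site/jacoco/` would need to be listed under the test job's artifacts so it survives into the sonar stage.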

Related

DB connection during integration tests works on local, but not working on Google Cloud Build

I'm deploying a Java 11 REST API to GKE using GitHub, Gradle, and Docker.
The following errors happen only on Google Cloud Build, not in the local environment. According to the errors, it seems the app can't find the DB server (Google Cloud SQL) from Google Cloud Build. I tried both public and private IPs, but the results were the same:
...
Step #0 - "Build": 2021-03-11 04:12:04.644 INFO 115 --- [ Test worker] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
Step #0 - "Build": 2021-03-11 04:12:35.855 ERROR 115 --- [ Test worker] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
Step #0 - "Build":
Step #0 - "Build": com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
Step #0 - "Build":
Step #0 - "Build": The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
...
Step #0 - "Build": Caused by: java.net.SocketTimeoutException: connect timed out
...
This happened after I added integration tests. The app deployed successfully after I removed the tests. So, I can remove the integration tests to avoid this issue. The thing is, I want to keep the tests if possible because there are things that we can't test with unit tests.
This is the Dockerfile I'm using for deployments to GKE. RUN gradle build --no-daemon -i --stacktrace is where the error occurs during the test task:
ARG APP_NAME=test-api
ARG GRADLE_USER_HOME_PATH=/home/gradle/cache_home/
#cache dependencies to reduce downloads
FROM gradle:6.8-jdk11 AS cache
ARG APP_NAME
ARG GRADLE_USER_HOME_PATH
WORKDIR /${APP_NAME}/
RUN mkdir -p ${GRADLE_USER_HOME_PATH}
ENV GRADLE_USER_HOME ${GRADLE_USER_HOME_PATH}
COPY --chown=gradle:gradle build.gradle /${APP_NAME}/
RUN gradle clean build --no-daemon -i --stacktrace -x bootJar
#build
FROM gradle:6.8-jdk11 AS build
ARG APP_NAME
ARG GRADLE_USER_HOME_PATH
WORKDIR /${APP_NAME}/
#Copies cached dependencies
COPY --from=cache ${GRADLE_USER_HOME_PATH} /home/gradle/.gradle/
#Copies the Java source code inside the container
COPY --chown=gradle:gradle . /${APP_NAME}/
#Compiles the code and runs unit tests (with Gradle build)
RUN gradle build --no-daemon -i --stacktrace
#Discards the Gradle image with all the compiled classes/unit test results etc.
#Starts again from the JRE image and copies only the JAR file created before
FROM openjdk:11-jre-slim
ARG APP_NAME
COPY --from=build /${APP_NAME}/build/libs/${APP_NAME}.jar /${APP_NAME}/${APP_NAME}.jar
ENTRYPOINT ["java","-jar","/test-api/test-api.jar"]
How can I implement integration tests that use the DB on GKE? Or maybe I need to change my approach?
I managed to solve the problem by referencing this Q&A: Run node.js database migrations on Google Cloud SQL during Google Cloud Build
I had to add two steps (Cloud SQL Proxy and Test) to cloudbuild.yaml to use the Cloud SQL Proxy. The other steps were auto-generated by GKE:
steps:
  - name: gradle:6.8.3-jdk11
    entrypoint: sh
    args:
      - '-c'
      - |-
        apt-get update && apt-get install -y wget \
        && wget "https://storage.googleapis.com/cloudsql-proxy/v1.21.0/cloud_sql_proxy.linux.amd64" -O cloud_sql_proxy \
        && chmod +x cloud_sql_proxy \
        || exit 1
    id: Cloud SQL Proxy
  - name: gradle:6.8.3-jdk11
    entrypoint: sh
    args:
      - '-c'
      - |-
        (./cloud_sql_proxy -instances=<CONNECTION_NAME>=tcp:<PORT> & sleep 2) \
        && gradle test --no-daemon -i --stacktrace \
        || exit 1
    id: Test
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '-t'
      - '$_IMAGE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - $_DOCKERFILE_NAME
    dir: $_DOCKERFILE_DIR
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_IMAGE_NAME:$COMMIT_SHA'
    id: Push
  - name: gcr.io/cloud-builders/gke-deploy
    args:
      - prepare
      - '--filename=$_K8S_YAML_PATH'
      - '--image=$_IMAGE_NAME:$COMMIT_SHA'
      - '--app=$_K8S_APP_NAME'
      - '--version=$COMMIT_SHA'
      - '--namespace=$_K8S_NAMESPACE'
      - '--label=$_K8S_LABELS'
      - '--annotation=$_K8S_ANNOTATIONS,gcb-build-id=$BUILD_ID'
      - '--create-application-cr'
      - >-
        --links="Build
        details=https://console.cloud.google.com/cloud-build/builds/$BUILD_ID?project=$PROJECT_ID"
      - '--output=output'
    id: Prepare deploy
  - name: gcr.io/cloud-builders/gsutil
    args:
      - '-c'
      - |-
        if [ "$_OUTPUT_BUCKET_PATH" != "" ]
        then
          gsutil cp -r output/suggested gs://$_OUTPUT_BUCKET_PATH/config/$_K8S_APP_NAME/$BUILD_ID/suggested
          gsutil cp -r output/expanded gs://$_OUTPUT_BUCKET_PATH/config/$_K8S_APP_NAME/$BUILD_ID/expanded
        fi
    id: Save configs
    entrypoint: sh
  - name: gcr.io/cloud-builders/gke-deploy
    args:
      - apply
      - '--filename=output/expanded'
      - '--cluster=$_GKE_CLUSTER'
      - '--location=$_GKE_LOCATION'
      - '--namespace=$_K8S_NAMESPACE'
    id: Apply deploy
...
And Dockerfile:
ARG APP_NAME=test-api
ARG APP_HOME=/test-api
FROM openjdk:11-jdk-slim AS build
USER root
ARG APP_HOME
WORKDIR ${APP_HOME}/
COPY . .
# test is performed from Test step from cloudbuild.yaml
RUN ./gradlew build --no-daemon -i --stacktrace -x test
FROM openjdk:11-jdk-slim
ARG APP_NAME
ARG APP_HOME
WORKDIR ${APP_HOME}/
COPY --from=build ${APP_HOME}/build/libs/${APP_NAME}.jar ./${APP_NAME}.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/test-api/test-api.jar"]
While I solved the problem in question, this script has one small drawback: there are two separate Gradle dependency downloads (one in the Test step, one in the Build step). I couldn't manage to use the Cloud SQL Proxy from gcr.io/cloud-builders/docker, so I worked around it by running the tests in a separate Test step instead of in the Build step. Maybe this could be solved using either docker run --network="host" or host.docker.internal, but I didn't try.
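For what it's worth, the `--network="host"` idea could be sketched as two extra Cloud Build steps (untested on my side; the gce-proxy image tag and port are assumptions that would need checking against your setup):

```yaml
# Untested sketch: run the proxy detached on the host network, then build with
# --network=host so tests inside `docker build` can reach it on 127.0.0.1.
- name: gcr.io/cloud-builders/docker
  args:
    - run
    - '-d'
    - '--network=host'
    - gcr.io/cloudsql-docker/gce-proxy:1.21.0
    - /cloud_sql_proxy
    - '-instances=<CONNECTION_NAME>=tcp:<PORT>'
  id: Start Cloud SQL Proxy
- name: gcr.io/cloud-builders/docker
  args:
    - build
    - '--network=host'
    - '-t'
    - '$_IMAGE_NAME:$COMMIT_SHA'
    - .
  id: Build
```

If this worked, it would remove the separate Test step and its second dependency download.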

How to hit a localhost url from rest assured in a bitbucket pipeline?

I have an application where I'm integrating test cases into the Bitbucket pipeline.
My test code should hit the URL I wrote in Java using Rest Assured.
It works normally outside the pipeline.
Here is the pipeline I've created: step 1 runs the backend, and step 2 starts the tests.
pipelines:
  default:
    - step:
        name: setup
        image: elixir
        script:
          - apt-get update
          - apt-get install gcc make
          - cp apps/api/config/dev.secret.exs.txt apps/api/config/dev.secret.exs
          - mix local.hex --force
          - mix local.rebar --force
          - mix deps.get
          - mix ecto.create --quiet
          - cd apps/terminator
          - mix deps.get
          - mix ecto.create --quiet
          - mix ecto.migrate
          - cd ..
          - cd api
          - mix ecto.migrate
          - mix run priv/repo/seeds.exs
          - nohup mix phx.server &
          - sleep 15s
        services:
          - postgres
    - step:
        name: Sanity Test
        image: maven:3.3.9
        caches:
          - maven
        script:
          - cd Test/TestAPIFramework
          - mvn -B clean install
        services:
          - postgres

definitions:
  services:
    postgres:
      image: postgres
      variables:
        POSTGRES_DB: '***_**'
        POSTGRES_USER: 'postgres'
        POSTGRES_PASSWORD: 'root'
Every time, I get a connection timeout. I think it is not able to make the connection:
11:42:55.664 [main] INFO ApiFramework.User - Hitting URL: http://192.168.1.69:4000
java.net.ConnectException: Connection timed out (Connection timed out)
at java.net.PlainSocketImpl.socketConnect(Native Method)
Every step runs in a new container. If you want to access the server, you have to do so within the same step in which you started it.
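In other words, the two steps would need to be merged into one, with the tests pointed at http://localhost:4000 instead of 192.168.1.69. A rough sketch (assuming Maven can be installed into the elixir image via apt-get, which I haven't verified):

```yaml
pipelines:
  default:
    - step:
        name: Setup + Sanity Test
        image: elixir
        script:
          # ...same Elixir setup and migrations as above...
          - nohup mix phx.server &
          - sleep 15s
          # run the tests in the SAME step, against localhost
          - apt-get install -y maven
          - cd Test/TestAPIFramework
          - mvn -B clean install
        services:
          - postgres
```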

Reading a file from resources not working in gitlab CI

I am getting an error while running tests on GitLab CI using the command:
./gradlew clean test
I am using Testcontainers to run my tests: https://www.testcontainers.org/modules/docker_compose/
Here is the code I use to load the Docker Compose file, which is at src/test/resources:
@ClassRule
public static DockerComposeContainer container =
        new DockerComposeContainer(new File(BaseServiceTest.class.getClassLoader().getResource("docker-compose.yml").getFile()));
It runs fine when I run it locally, but when running CI on GitLab, I get the following error:
{"timestamp":"2019-06-17T18:47:44.903Z","level":"ERROR","thread":"Test worker","logger":"docker[docker/compose:1.8.0]","message":"Log output from the failed container:\n.IOError: [Errno 2] No such file or directory: '/builds/xxxx/xxxx/xxxx-xxxx/src/test/resources/docker-compose.yml'\n","context":"default"}
Following is my gitlab-ci.yml file:
include:
  - project: 'xxxx/xxxx/sdlc'
    file: '/sdlc.yml'

variables:
  CONTAINER_NAME: xxxx-xxxx

test:
  stage: test
  image: registry.xxxx.com/xxxx-alpine-jdk8:v1_8_181
  script:
    - ls -la src/test/resources
    - ./gradlew clean test -i
In my script, I have ls -la src/test/resources and I can see the docker-compose.yml file when that script is run. Not sure why it is not available when running the code.
Based on the Testcontainers docs here, a DinD service is required for Testcontainers on GitLab CI.
Here is an example:
# DinD service is required for Testcontainers
services:
  - docker:dind

variables:
  # Instruct Testcontainers to use the daemon of DinD.
  DOCKER_HOST: "tcp://docker:2375"
  # Improve performance with overlayfs.
  DOCKER_DRIVER: overlay2

test:
  image: gradle:5.0
  stage: test
  script: ./gradlew test

Configuring gitlab-ci.yml for simple java project

I have a simple Java project with a HelloWorld class. I want to add GitLab CI/CD to this project using a .gitlab-ci.yml file and a runnable configuration. I'm trying to find a reference for a simple Java project but can't find one; any help configuring the above project is appreciated.
Thanks
Rama
I used this as a simple setup for my .gitlab-ci.yml
build:
  stage: build
  image: openjdk:8-jdk
  script: javac src/javaTest/JavaTest.java
  artifacts:
    untracked: true
https://gitlab.com/peterfromearth/java-test/blob/master/.gitlab-ci.yml
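For completeness, a minimal class that the `javac src/javaTest/JavaTest.java` line above would compile might look like this (the class body is my assumption, not taken from the linked repo):

```java
// Hypothetical HelloWorld-style class placed at src/javaTest/JavaTest.java
public class JavaTest {

    // Returning the greeting from a method keeps it trivially testable.
    static String greet() {
        return "Hello World";
    }

    public static void main(String[] args) {
        System.out.println(greet());
    }
}
```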
This will build and test your application using Gradle:
image: gradle:alpine

stages:
  - build
  - test

variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle

build:
  stage: build
  script:
    - gradle --build-cache assemble
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 week

test:
  stage: test
  script:
    - gradle check

after_script:
  - echo "End CI"

CircleCI + Gradle + Heroku deployment

I'm trying to set up continuous deployment with Gradle and Heroku, but for some reason the deployment step is not running (see the CircleCI pipeline result screenshot).
I've already configured CircleCI with the Heroku key.
version: 2
jobs:
  build:
    docker:
      - image: circleci/openjdk:8-jdk
    working_directory: ~/repo
    environment:
      JVM_OPTS: -Xmx3200m
      TERM: dumb
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "build.gradle" }}
            - v1-dependencies-
      - run: gradle dependencies
      - save_cache:
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "build.gradle" }}
      # run tests!
      - run: gradle test

deployment:
  staging:
    branch: master
    heroku:
      appname: my-heroku-app
Could you guys help me, please? Is the deployment step in the right place?
You are using the deployment configuration from CircleCI 1.0, but you are on CircleCI 2.0.
From the documentation for CircleCI 2.0:
The built-in Heroku integration through the CircleCI UI is not implemented for CircleCI 2.0. However, it is possible to deploy to Heroku manually.
To deploy to Heroku with CircleCI 2.0, you need to:
add the environment variables HEROKU_LOGIN, HEROKU_API_KEY, and HEROKU_APP_NAME to your CircleCI project settings (https://circleci.com/gh/<account>/<project>/edit#env-vars)
create a private SSH key without a passphrase and add it to your CircleCI project settings (https://circleci.com/gh/<account>/<project>/edit#ssh) for hostname git.heroku.com
add steps to the .circleci/config.yml file with the fingerprint of your SSH key
- run:
    name: Setup Heroku
    command: |
      ssh-keyscan -H heroku.com >> ~/.ssh/known_hosts
      cat > ~/.netrc << EOF
      machine api.heroku.com
        login $HEROKU_LOGIN
        password $HEROKU_API_KEY
      EOF
      cat >> ~/.ssh/config << EOF
      VerifyHostKeyDNS yes
      StrictHostKeyChecking no
      EOF
- add_ssh_keys:
    fingerprints:
      - "<SSH KEY fingerprint>"
- deploy:
    name: "Deploy to Heroku"
    command: git push --force git@heroku.com:$HEROKU_APP_NAME.git HEAD:refs/heads/master