I'm running the HiveMQ MQTT broker Community Edition and wanted to add the Prometheus extension for monitoring.
Both come precompiled, from the HiveMQ Marketplace and the GitHub project page.
I downloaded both components as zip files, unzipped them, and copied them into a Java 11 Docker container using this Dockerfile:
FROM alpine:3.10 AS TOOLCHAIN
ADD https://github.com/hivemq/hivemq-community-edition/releases/download/2019.1/hivemq-ce-2019.1.zip /opt/
ADD https://www.hivemq.com/releases/extensions/hivemq-prometheus-extension-4.0.1.zip /opt/
WORKDIR /opt
RUN unzip hivemq-ce-* -d ./
RUN unzip hivemq-prometheus-extension* -d ./
RUN rm -rf hivemq-ce-*.zip
RUN rm -rf hivemq-prometheus-extension*.zip
RUN mv ./hivemq-ce-* ./hivemq
FROM openjdk:11-jdk-slim
COPY --from=TOOLCHAIN /opt/hivemq /opt/hivemq
COPY --from=TOOLCHAIN /opt/hivemq-prometheus-extension /opt/hivemq/extensions/hivemq-prometheus-extension
WORKDIR /opt/hivemq/
CMD ["chmod","755","./bin/run.sh"]
CMD ["./bin/run.sh"]
I think I got the steps from the how-tos right, but when I start the container with docker build -t hive-test .; docker run -p 1883:1883 -p 9399:9399 -t hive-test, I get an error:
2019-07-24 13:19:57,125 INFO - Starting HiveMQ Community Edition Server
2019-07-24 13:19:57,127 INFO - HiveMQ version: 2019.1
2019-07-24 13:19:57,127 INFO - HiveMQ home directory: /opt/hivemq
2019-07-24 13:19:57,162 INFO - Log Configuration was overridden by /opt/hivemq/conf/logback.xml
2019-07-24 13:19:57,356 INFO - This HiveMQ ID is mwDbQ
2019-07-24 13:20:14,353 INFO - Created user preferences directory.
2019-07-24 13:20:14,873 INFO - Starting HiveMQ extension system.
2019-07-24 13:20:14,925 INFO - Starting TCP listener on address 0.0.0.0 and port 1883
2019-07-24 13:20:14,998 INFO - Started TCP Listener on address 0.0.0.0 and on port 1883
2019-07-24 13:20:14,999 INFO - Started HiveMQ in 17877ms
2019-07-24 13:20:15,040 ERROR - Extension with id "hivemq-prometheus-extension" cannot be started because of an uncaught exception thrown by the extension. Extension will be disabled.
java.lang.NoClassDefFoundError: javax/servlet/ServletContextListener
at org.eclipse.jetty.server.handler.ContextHandler.<clinit>(ContextHandler.java:114)
at com.hivemq.extensions.prometheus.export.PrometheusServer.start(PrometheusServer.java:64)
at com.hivemq.extensions.prometheus.PrometheusMainClass.extensionStart(PrometheusMainClass.java:65)
at com.hivemq.extensions.HiveMQExtensionImpl.start(HiveMQExtensionImpl.java:133)
at com.hivemq.extensions.HiveMQPlugins.pluginStart(HiveMQPlugins.java:209)
at com.hivemq.extensions.loader.PluginLifecycleHandlerImpl.lambda$startPlugin$0(PluginLifecycleHandlerImpl.java:82)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ClassNotFoundException: javax.servlet.ServletContextListener
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at com.hivemq.extensions.classloader.IsolatedPluginClassloader.loadClass(IsolatedPluginClassloader.java:123)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 9 common frames omitted
I also downloaded the broker and extension source code and tried to compile them myself with Maven/Gradle and Java 11, but that had the exact same result.
The broker itself runs without any errors.
Does anyone know what went wrong here?
It turns out that no dependency of the HiveMQ Community Edition broker contains ServletContextListener. I downloaded the source code from GitHub and modified the build.gradle file.
Add the last line of the following snippet to the build.gradle file:
/* javax */
[group: 'javax.activation', name: 'activation', version: '1.1.1'],
[group: 'javax.validation', name: 'validation-api', version: '1.1.0.Final'],
[group: 'javax.annotation', name: 'javax.annotation-api', version: '1.3.2'],
[group: 'javax.servlet', name: 'javax.servlet-api', version: '4.0.1'],
After compiling the broker, unzip the result and add the precompiled extension to the extensions directory.
The error is gone and the extension seems to be working.
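For reference, the whole rebuild as a shell sketch (the Gradle task name and the zip output path are assumptions here; check the project's README for the exact packaging task):
git clone https://github.com/hivemq/hivemq-community-edition.git
cd hivemq-community-edition
# add the javax.servlet-api line to the dependency list in build.gradle as shown above
./gradlew clean packaging                    # assumed task that builds the distribution zip
unzip build/zip/hivemq-ce-*.zip -d /opt/     # assumed output location of the zip
cp -r /path/to/hivemq-prometheus-extension /opt/hivemq-ce-*/extensions/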
Related
I'm deploying a Java 11 REST API to GKE using GitHub, Gradle, and Docker.
The following errors happen only on Google Cloud Build, not in the local environment. According to the error, it seems the app can't reach the DB server (Google Cloud SQL) from Google Cloud Build. I tried both the public and the private IP, but the results were the same:
...
Step #0 - "Build": 2021-03-11 04:12:04.644 INFO 115 --- [ Test worker] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
Step #0 - "Build": 2021-03-11 04:12:35.855 ERROR 115 --- [ Test worker] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
Step #0 - "Build":
Step #0 - "Build": com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
Step #0 - "Build":
Step #0 - "Build": The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
...
Step #0 - "Build": Caused by: java.net.SocketTimeoutException: connect timed out
...
This happened after I added integration tests; the app deployed successfully once I removed them. So I could simply drop the integration tests to avoid this issue. The thing is, I want to keep the tests if possible, because there are things we can't cover with unit tests.
This is the Dockerfile I'm using for deployments to GKE. RUN gradle build --no-daemon -i --stacktrace is where the error occurs during the test task:
ARG APP_NAME=test-api
ARG GRADLE_USER_HOME_PATH=/home/gradle/cache_home/
#cache dependencies to reduce downloads
FROM gradle:6.8-jdk11 AS cache
ARG APP_NAME
ARG GRADLE_USER_HOME_PATH
WORKDIR /${APP_NAME}/
RUN mkdir -p ${GRADLE_USER_HOME_PATH}
ENV GRADLE_USER_HOME ${GRADLE_USER_HOME_PATH}
COPY --chown=gradle:gradle build.gradle /${APP_NAME}/
RUN gradle clean build --no-daemon -i --stacktrace -x bootJar
#build
FROM gradle:6.8-jdk11 AS build
ARG APP_NAME
ARG GRADLE_USER_HOME_PATH
WORKDIR /${APP_NAME}/
#Copies cached dependencies
COPY --from=cache ${GRADLE_USER_HOME_PATH} /home/gradle/.gradle/
#Copies the Java source code inside the container
COPY --chown=gradle:gradle . /${APP_NAME}/
#Compiles the code and runs unit tests (with Gradle build)
RUN gradle build --no-daemon -i --stacktrace
#Discards the Gradle image with all the compiled classes/unit test results etc.
#Starts again from the JRE image and copies only the JAR file created before
FROM openjdk:11-jre-slim
ARG APP_NAME
COPY --from=build /${APP_NAME}/build/libs/${APP_NAME}.jar /${APP_NAME}/${APP_NAME}.jar
ENTRYPOINT ["java","-jar","/test-api/test-api.jar"]
How can I implement integration tests that use the DB when deploying to GKE? Or do I need to change my approach?
I managed to solve the problem by referencing this Q&A: Run node.js database migrations on Google Cloud SQL during Google Cloud Build
I had to add two steps (Cloud SQL Proxy and Test) to cloudbuild.yaml to use the Cloud SQL Proxy. The other steps were auto-generated by GKE:
steps:
- name: gradle:6.8.3-jdk11
entrypoint: sh
args:
- '-c'
- |-
apt-get update && apt-get install -y wget \
&& wget "https://storage.googleapis.com/cloudsql-proxy/v1.21.0/cloud_sql_proxy.linux.amd64" -O cloud_sql_proxy \
&& chmod +x cloud_sql_proxy \
|| exit 1
id: Cloud SQL Proxy
- name: gradle:6.8.3-jdk11
entrypoint: sh
args:
- '-c'
- |-
(./cloud_sql_proxy -instances=<CONNECTION_NAME>=tcp:<PORT> & sleep 2) \
&& gradle test --no-daemon -i --stacktrace \
|| exit 1
id: Test
- name: gcr.io/cloud-builders/docker
args:
- build
- '-t'
- '$_IMAGE_NAME:$COMMIT_SHA'
- .
- '-f'
- $_DOCKERFILE_NAME
dir: $_DOCKERFILE_DIR
id: Build
- name: gcr.io/cloud-builders/docker
args:
- push
- '$_IMAGE_NAME:$COMMIT_SHA'
id: Push
- name: gcr.io/cloud-builders/gke-deploy
args:
- prepare
- '--filename=$_K8S_YAML_PATH'
- '--image=$_IMAGE_NAME:$COMMIT_SHA'
- '--app=$_K8S_APP_NAME'
- '--version=$COMMIT_SHA'
- '--namespace=$_K8S_NAMESPACE'
- '--label=$_K8S_LABELS'
- '--annotation=$_K8S_ANNOTATIONS,gcb-build-id=$BUILD_ID'
- '--create-application-cr'
- >-
--links="Build
details=https://console.cloud.google.com/cloud-build/builds/$BUILD_ID?project=$PROJECT_ID"
- '--output=output'
id: Prepare deploy
- name: gcr.io/cloud-builders/gsutil
args:
- '-c'
- |-
if [ "$_OUTPUT_BUCKET_PATH" != "" ]
then
gsutil cp -r output/suggested gs://$_OUTPUT_BUCKET_PATH/config/$_K8S_APP_NAME/$BUILD_ID/suggested
gsutil cp -r output/expanded gs://$_OUTPUT_BUCKET_PATH/config/$_K8S_APP_NAME/$BUILD_ID/expanded
fi
id: Save configs
entrypoint: sh
- name: gcr.io/cloud-builders/gke-deploy
args:
- apply
- '--filename=output/expanded'
- '--cluster=$_GKE_CLUSTER'
- '--location=$_GKE_LOCATION'
- '--namespace=$_K8S_NAMESPACE'
id: Apply deploy
...
And Dockerfile:
ARG APP_NAME=test-api
ARG APP_HOME=/test-api
FROM openjdk:11-jdk-slim AS build
USER root
ARG APP_HOME
WORKDIR ${APP_HOME}/
COPY . .
# tests are run by the Test step in cloudbuild.yaml
RUN ./gradlew build --no-daemon -i --stacktrace -x test
FROM openjdk:11-jdk-slim
ARG APP_NAME
ARG APP_HOME
WORKDIR ${APP_HOME}/
COPY --from=build ${APP_HOME}/build/libs/${APP_NAME}.jar ./${APP_NAME}.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/test-api/test-api.jar"]
While I solved the problem in the question, this script has a small flaw: the Gradle dependencies are downloaded twice (once in Test, once in Build). I couldn't manage to use the Cloud SQL Proxy on gcr.io/cloud-builders/docker, so I worked around it by running the tests in the Test step instead of the Build step. Maybe this can be solved using either docker run --network="host" or host.docker.internal, but I didn't try.
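One untested sketch of that idea: Cloud Build attaches each step's container to a Docker network named cloudbuild, so the Build step could pass that network to the image build and reach a proxy container started on the same network:
docker build --network=cloudbuild -t $_IMAGE_NAME:$COMMIT_SHA .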
I'm trying to set up SonarQube 7.8. Once I start it via the sonar.sh file it reports as started, but right after that SonarQube stops.
root@automation:/opt/sonarqube-7.8/bin/linux-x86-64# ./sonar.sh start
Starting SonarQube...
Started SonarQube.
root@automation:/opt/sonarqube-7.8/bin/linux-x86-64# ./sonar.sh status
SonarQube is not running.
I checked the logs and this is what I get:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2019.10.15 21:01:37 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube-7.8/temp
2019.10.15 21:01:37 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2019.10.15 21:01:37 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube-7.8/elasticsearch]: /opt/sonarqube-7.8/elasticsearch/bin/elasticsearch
2019.10.15 21:01:37 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2019.10.15 21:01:38 INFO app[][o.e.p.PluginsService] no modules loaded
2019.10.15 21:01:38 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
2019.10.15 21:01:41 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 1
2019.10.15 21:01:41 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2019.10.15 21:01:41 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
The es.log file shows:
2019.10.15 21:01:41 ERROR es[][o.e.b.Bootstrap] Exception
java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:103) ~[elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170) ~[elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) [elasticsearch-cli-6.8.0.jar:6.8.0]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) [elasticsearch-6.8.0.jar:6.8.0]
I'm not sure why SonarQube stops. Could you help me with that, please?
SOLVED:
First, don't run SonarQube as the root user.
1. Create a user. Command: useradd username (use sonaradmin as the username if you don't want to change any of the commands below)
2. Set a password. Command: passwd username
3. Go to the /opt/ directory
3.1) Rename sonarqube-X.X.X to sonarqube. Command: sudo mv sonarqube-X.X.X sonarqube (replace X.X.X with your version)
4. Change permissions. Command: chmod 775 -R sonarqube (the folder name may differ)
5. Make the created user the owner. Command: chown -R sonaradmin:sonaradmin sonarqube
6. Go to: cd /opt/sonarqube/bin/linux-x86-64/
7. Command: su sonaradmin
8. Enter the password
9. Command: ./sonar.sh start
10. Command: ./sonar.sh status
Now open http://localhost:9000/sonar in the browser.
Hoping this helps. I went through the same issue and solved it this way.
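The same steps condensed into one shell session (a sketch; adjust the user name and version to your setup):
sudo useradd sonaradmin
sudo passwd sonaradmin
cd /opt/
sudo mv sonarqube-X.X.X sonarqube      # replace X.X.X with your version
sudo chmod -R 775 sonarqube
sudo chown -R sonaradmin:sonaradmin sonarqube
cd /opt/sonarqube/bin/linux-x86-64/
su sonaradmin
./sonar.sh start
./sonar.sh status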
OK, I found the solution. All I had to do was change #RUN_AS_USER= in /opt/sonarqube-7.8/bin/linux-x86-64/sonar.sh (line 48) to RUN_AS_USER=sonar.
Your solution doesn't work for me; the console shows these messages (after editing the sonar.sh file):
groups: "sonar": no such user
chown: invalid user: "sonar:sonar"
su: user sonar does not exist
If baya prakash reddy's answer doesn't work, you can look at the documentation (https://docs.sonarqube.org/latest/requirements/requirements/), which says that you must ensure that:
vm.max_map_count is greater than or equal to 524288
You can set it like this:
sysctl -w vm.max_map_count=524288
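That setting does not survive a reboot; to persist it, append it to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reload:
echo "vm.max_map_count=524288" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p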
Ensure your SonarQube server has at least 4 GB of RAM. I had this issue after installing SonarQube with 1 GB of RAM, and running SonarQube as sonar did not resolve it. Once I installed it on RHEL with 4 GB of RAM, the issue was resolved.
I am implementing a Lambda function with the AWS continuous integration tools: CodeSource, CodeBuild, and CodePipeline.
After setting everything up, when I test the Lambda the result is:
{
"errorMessage": "Class not found: com.ad.client.App",
"errorType": "java.lang.ClassNotFoundException"
}
Class not found: com.ad.client.App: java.lang.ClassNotFoundException
java.lang.ClassNotFoundException: com.ad.client.App
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
All stages of the pipeline succeed (Source, Build, Deploy).
If I load the jar directly in the Lambda console, the result is correct.
I reviewed the build log and found this:
[Container] 2019/06/13 13:09:38 Running command echo THE PATH WORK IS !!!
THE PATH WORK IS !!!
[Container] 2019/06/13 13:09:38 Running command pwd
/codebuild/output/src748698927/src
[Container] 2019/06/13 13:09:38 Running command echo The list of file is !!
The list of file is !!
[Container] 2019/06/13 13:09:38 Running command ls
Readme.md
buildspec.yml
dependency-reduced-pom.xml
ftc-client.iml
outputtemplate.yaml
pom.xml
src
target
template.yaml
[Container] 2019/06/13 13:09:38 Running command echo CODE BUILD SRC DIRECTORY
CODE BUILD SRC DIRECTORY
[Container] 2019/06/13 13:09:38 Running command echo $CODEBUILD_SRC_DIR
/codebuild/output/src748698927/src
[INFO] skip non existing resourceDirectory /codebuild/output/src748698927/src/src/main/resources
Part of the log shows that the src path is duplicated; I don't know whether that is related to the problem.
My config files are:
template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Ftc-client
Resources:
FtcClientFunction:
Type: AWS::Serverless::Function
Properties:
Handler: com.ad.client.App::handleRequest
Runtime: java8
CodeUri: ./
Events:
MyFtcClientApi:
Type: Api
Properties:
Path: /client
Method: GET
buildspec.yml
version: 0.2
phases:
install:
runtime-versions:
java: openjdk8
build:
commands:
- echo Build started on `date`
- mvn test
- export BUCKET=my-bucket-for-test
- aws cloudformation package --template-file template.yaml --s3-bucket $BUCKET --output-template-file outputtemplate.yaml
finally:
- echo THE PATH WORK IS !!!
- pwd
- echo The list of file is !!
- ls
- echo CODE BUILD SRC DIRECTORY
- echo $CODEBUILD_SRC_DIR
post_build:
commands:
- echo Build completed on `date`
- mvn package
artifacts:
files:
- target/ftc-client-1.0-SNAPSHOT.jar
- template.yaml
- outputtemplate.yaml
discard-paths: yes
The source code structure is:
/fclient/src/main/java/com/ad/App.java
/tclient/buildspec.yml
/fclient/pom.xml
/fclient/template.yaml
I want to do the same as this, but with Java: https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
Thanks to anyone who can give me a clue.
This is the solution: it was necessary to unzip the jar into the root of my code, so that aws cloudformation package uploads the compiled classes rather than the raw sources:
version: 0.2
phases:
install:
runtime-versions:
java: openjdk8
pre_build:
commands:
- echo Test started on `date`
- mvn clean compile test
build:
commands:
- echo Build started on `date`
- export BUCKET=my-bucket-for-test
- mvn package shade:shade
- mv target/ftc-client-1.0-SNAPSHOT.jar .
- unzip ftc-client-1.0-SNAPSHOT.jar
- rm -rf target tst src buildspec.yml pom.xml ftc-client-1.0-SNAPSHOT.jar
- aws cloudformation package --template-file template.yaml --s3-bucket $BUCKET --output-template-file outputtemplate.yaml
post_build:
commands:
- echo Build completed on `date` !!!
artifacts:
files:
- target/ftc-client-1.0-SNAPSHOT.jar
- template.yaml
- outputtemplate.yaml
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html
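A quick way to sanity-check the artifact (a sketch; the actual S3 key is whatever aws cloudformation package wrote into the CodeUri of outputtemplate.yaml):
aws s3 cp s3://my-bucket-for-test/<key-from-outputtemplate> packaged.zip
unzip -l packaged.zip | grep com/ad/client/App.class
The Java runtime can only load the handler if its class sits at the root of the zip (or inside a jar under lib/).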
Running the latest version of SonarQube, 7.2.1, and getting the following error:
Command executed: sudo ./sonar.sh
--> Wrapper Started as Console
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.07.01 18:36:05 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /Users/aneeshgoel/Downloads/sonarqube-7.2.1/temp
2018.07.01 18:36:05 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.07.01 18:36:05 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/Users/aneeshgoel/Downloads/sonarqube-7.2.1/elasticsearch]: /Users/aneeshgoel/Downloads/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/Users/aneeshgoel/Downloads/sonarqube-7.2.1/temp/conf/es
2018.07.01 18:36:05 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.07.01 18:36:10 INFO app[][o.e.p.PluginsService] no modules loaded
2018.07.01 18:36:10 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.07.01 18:36:16 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
2018.07.01 18:36:16 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.07.01 18:36:16 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
Then I tried with a non-sudo user, with the command ./sonar.sh. The error I got is:
--> Wrapper Started as Console
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.07.01 18:18:16 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /Users/aneeshgoel/Downloads/sonarqube-7.2.1/temp
WrapperSimpleApp: Encountered an error running main: java.nio.file.AccessDeniedException: /Users/aneeshgoel/Downloads/sonarqube-7.2.1/temp/conf/es/elasticsearch.yml
java.nio.file.AccessDeniedException: /Users/aneeshgoel/Downloads/sonarqube-7.2.1/temp/conf/es/elasticsearch.yml
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.sonar.process.FileUtils2$DeleteRecursivelyFileVisitor.visitFile(FileUtils2.java:170)
I have also tried giving write access to the directory, but still no luck. Can someone please help me debug the issue?
The SonarQube installation guide, unfortunately, doesn't say a thing about configuring a user for the analysis server. People installing it can later forget about it, leaving SonarQube running with root rights for a while.
It is, however, pretty simple and straightforward. Prepare a sonar system user and change the installation directory's ownership: you have to run SonarQube in the context of the sonar user. To create a user called sonar, follow these steps:
groupadd sonar
useradd -c "Sonar System User" -d /opt/sonarqube -g sonar -s /bin/bash sonar
chown -R sonar:sonar /opt/sonarqube
Then edit the file present here:
/opt/sonarqube/bin/sonar.sh
Find the commented-out line #RUN_AS_USER=, change it to RUN_AS_USER=sonar, and try to run the app again.
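With RUN_AS_USER set, the wrapper script switches to the sonar user by itself, so the server can then be started normally (a sketch, using the path from the question):
/opt/sonarqube/bin/linux-x86-64/sonar.sh start
/opt/sonarqube/bin/linux-x86-64/sonar.sh status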
I'm trying to set up continuous deployment with Gradle and Heroku, but for some reason the deployment step is not running (see the CircleCI pipeline result screenshot).
I've already configured CircleCI with the Heroku key.
version: 2
jobs:
build:
docker:
- image: circleci/openjdk:8-jdk
working_directory: ~/repo
environment:
JVM_OPTS: -Xmx3200m
TERM: dumb
steps:
- checkout
- restore_cache:
keys:
- v1-dependencies-{{ checksum "build.gradle" }}
- v1-dependencies-
- run: gradle dependencies
- save_cache:
paths:
- ~/.m2
key: v1-dependencies-{{ checksum "build.gradle" }}
# run tests!
- run: gradle test
deployment:
staging:
branch: master
heroku:
appname: my-heroku-app
Could you guys help me, please? Is the deployment step in the right place?
You are using the deployment configuration for CircleCI 1.0, but you are running CircleCI 2.0.
From the documentation for CircleCI 2.0:
The built-in Heroku integration through the CircleCI UI is not implemented for CircleCI 2.0. However, it is possible to deploy to Heroku manually.
To deploy to Heroku with CircleCI 2.0, you need to:
add the environment variables HEROKU_LOGIN, HEROKU_API_KEY, and HEROKU_APP_NAME to your CircleCI project settings (https://circleci.com/gh/<account>/<project>/edit#env-vars)
create a private SSH key without a passphrase and add it to your CircleCI project settings (https://circleci.com/gh/<account>/<project>/edit#ssh) for the hostname git.heroku.com
add steps in the .circleci/config.yml file with the fingerprint of your SSH key
- run:
name: Setup Heroku
command: |
ssh-keyscan -H heroku.com >> ~/.ssh/known_hosts
cat > ~/.netrc << EOF
machine api.heroku.com
login $HEROKU_LOGIN
password $HEROKU_API_KEY
EOF
cat >> ~/.ssh/config << EOF
VerifyHostKeyDNS yes
StrictHostKeyChecking no
EOF
- add_ssh_keys:
fingerprints:
- "<SSH KEY fingerprint>"
- deploy:
name: "Deploy to Heroku"
command: git push --force git@heroku.com:$HEROKU_APP_NAME.git HEAD:refs/heads/master
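An untested alternative that avoids the SSH key entirely is to push over HTTPS with the API key from the environment:
git push --force https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP_NAME.git HEAD:refs/heads/master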