Integrate Apache Storm with ZooKeeper on Windows - Java

Can anyone help me integrate Storm with ZooKeeper on Windows?
I tried to find good installation steps for a production setup on Windows, but I couldn't.
Right now I have installed a standalone ZooKeeper and am trying to configure it in storm.yaml.
Here is the sample configuration I tried:
storm.zookeeper.servers:
  - "127.0.0.1"
  - "server2"
storm.zookeeper.port: 2180
nimbus.host: "localhost"
If anybody knows, please help me.

Please follow the steps below for a complete installation of Storm with ZooKeeper on Windows.
1. Create a zoo.cfg file under the /conf directory of ZooKeeper.
Here are the configuration entries:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=D:\\workspace\\zk-data
# the port at which the clients will connect
clientPort=2181
2. Run ZooKeeper by executing zkServer.cmd in the /bin directory.
3. Create the storm.yaml file in the /conf directory of apache-storm.
Here are the configuration entries:
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
  - "localhost"
#supervisor.slots.ports:
#  - 6700
#  - 6701
#  - 6702
#  - 6703
storm.local.dir: "D:\\workspace\\storm-data"
nimbus.host: "localhost"
#
#
4. Run the commands below from the command prompt:
a. bin\storm nimbus
b. bin\storm supervisor
c. bin\storm ui
5. Open localhost:8080 in the browser to see the Storm UI.
6. Make sure your JAVA_HOME path does not contain any spaces; otherwise the storm command will not run.
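To double-check that the standalone ZooKeeper is reachable before starting Nimbus, a minimal Java sketch like the one below can be used (it assumes the ZooKeeper client jar that ships in Storm's lib directory is on the classpath, and the clientPort 2181 configured in zoo.cfg above):

import org.apache.zookeeper.ZooKeeper;

public class ZkCheck {
    public static void main(String[] args) throws Exception {
        // connect to the standalone ZooKeeper configured in zoo.cfg
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 5000, event -> { });
        Thread.sleep(2000); // give the client a moment to establish the session
        System.out.println("ZooKeeper state: " + zk.getState()); // expect CONNECTED
        zk.close();
    }
}

If the state is not CONNECTED, fix the ZooKeeper side first; also make sure storm.zookeeper.port in storm.yaml matches the clientPort in zoo.cfg.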

Related

Dockerized Spring Application with environment variables

I am currently trying to run a Java Spring application in a Docker container. This has worked without any problems so far. But now I want the application.properties to contain no fixed values, only placeholders, which are passed to the container as environment variables. According to the Spring docs this should be possible, but when I try it I always get the error that no database connection can be established to the placeholder ${DB_CONFIG} (Caused by: java.lang.RuntimeException: Driver com.mysql.cj.jdbc.Driver claims to not accept jdbcUrl, ${DB_CONFIG}).
I have already tried passing the application.properties externally, passing the variables directly as Docker environment variables instead of via an env file, and using the Spring variable (SPRING_DATASOURCE_URL) for the JDBC URL instead of a custom variable (DB_CONFIG). Neither variant worked for me.
I hope I got my point across correctly and you guys can help me out.
For your information: I don't maintain the source code, I just get it from a CI/CD, copy the changed application.properties to its place and compile the project.
KR,
BlackRose01
docker-compose file:
version: '3'
services:
  arrowhead-serviceregistry:
    container_name: arrowhead-serviceregistry
    image: 'openjdk:11-jre-slim-buster'
    restart: always
    env_file: '.env'
    ports:
      - 8443:8443/tcp
    volumes:
      - ./arrowhead-serviceregistry-4.3.0.jar:/service.jar
    depends_on:
      - mysql
    command:
      -java -noverify -XX:TieredStopAtLevel=1 -jar /service.jar
application.properties file:
############################################
### APPLICATION PARAMETERS ###
############################################
# Database connection (mandatory)
# Change the server timezone if neccessary
spring.datasource.url=${DB_CONFIG}
spring.datasource.username=${SERVICEREGISTRY_DB_USERNAME}
spring.datasource.password=${SERVICEREGISTRY_DB_PASSWORD}
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.database-platform=org.hibernate.dialect.MySQL5InnoDBDialect
# use true only for debugging
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.format_sql=true
spring.jpa.hibernate.ddl-auto=none
# Service Registry web-server parameters
server.address=0.0.0.0
server.port=${SERVICEREGISTRY_PORT}
domain.name=${SERVICEREGISTRY_ADDRESS}
domain.port=${SERVICEREGISTRY_PORT}
############################################
### CUSTOM PARAMETERS ###
############################################
# Name of the core system
core_system_name=SERVICE_REGISTRY
# Show all request/response in debug log
log_all_request_and_response=${SERVICEREGISTRY_LOG_REQUESTS}
# Service Registry has an optional feature to ping service providers in a fixed time interval,
# and remove service offerings where the service provider was not available
# use this feature (true/false)
ping_scheduled=${SERVICEREGISTRY_PING_ENABLE}
# how much time the Service Registry should wait for the ping response (in milliseconds)
ping_timeout=${SERVICEREGISTRY_PING_TIMEOUT}
# how frequently should the ping happen, in minutes
ping_interval=${SERVICEREGISTRY_PING_INTERVAL}
# Service Registry has an optional feature to automatically remove service offerings, where the endOfValidity
# timestamp field is in the past, meaning the offering expired
# use this feature (true/false)
ttl_scheduled=${SERVICEREGISTRY_TTL_ENABLE}
# how frequently the database should be checked for expired services, in minutes
ttl_interval=${SERVICEREGISTRY_TTL_INTERVAL}
# Interface names has to follow this format <PROTOCOL>-<SECURITY>-<FORMAT>, where security can be SECURE or INSECURE and protocol and format must be a sequence of letters, numbers and underscore.
# A regexp checker will verify that. If this setting is set to true then the PROTOCOL and FORMAT must come from a predefined set.
use_strict_service_intf_name_verifier=${SERVICEREGISTRY_STRICT_INTERFACE_NAMES}
############################################
### SECURE MODE ###
############################################
# configure secure mode
# Set this to false to disable https mode
server.ssl.enabled=${SSL_ENABLED}
server.ssl.key-store-type=${SERVICEREGISTRY_SSL_KEYSTORE_TYPE}
server.ssl.key-store=${SERVICEREGISTRY_SSL_KEYSTORE}
server.ssl.key-store-password=${SERVICEREGISTRY_SSL_KEYSTORE_PASSWORD}
server.ssl.key-alias=${SERVICEREGISTRY_SSL_KEY_ALIAS}
server.ssl.key-password=${SERVICEREGISTRY_SSL_KEY_PASSWORD}
server.ssl.client-auth=${SERVICEREGISTRY_SSL_CLIENT_AUTH}
server.ssl.trust-store-type=${SERVICEREGISTRY_SSL_TRUSTSTORE_TYPE}
server.ssl.trust-store=${SERVICEREGISTRY_SSL_TRUSTSTORE_PATH}
server.ssl.trust-store-password=${SERVICEREGISTRY_SSL_TRUSTSTORE_PASSWORD}
#If true, http client does not check whether the hostname is match one of the server's SAN in its certificate
#Just for testing, DO NOT USE this feature in production environment
disable.hostname.verifier=${SERVICEREGISTRY_DISABLE_HOSTNAME_VERIFIER}
Environment file:
# Basic settings
DB_CONFIG=jdbc:mysql://127.0.0.1:3306/arrowhead?serverTimezone=Europe/Berlin
SSL_ENABLED=false
# Service Registry settings
SERVICEREGISTRY_ADDRESS=127.0.0.1
SERVICEREGISTRY_PORT=8443
SERVICEREGISTRY_DB_USERNAME=myUser
SERVICEREGISTRY_DB_PASSWORD=xxx
SERVICEREGISTRY_LOG_REQUESTS=false
SERVICEREGISTRY_PING_ENABLE=false
SERVICEREGISTRY_PING_TIMEOUT=5000
SERVICEREGISTRY_PING_INTERVAL=60
SERVICEREGISTRY_TTL_ENABLE=false
SERVICEREGISTRY_TTL_INTERVAL=10
SERVICEREGISTRY_STRICT_INTERFACE_NAMES=false
SERVICEREGISTRY_DISABLE_HOSTNAME_VERIFIER=false
SERVICEREGISTRY_SSL_KEYSTORE_TYPE=PKCS12
SERVICEREGISTRY_SSL_KEYSTORE=classpath:certificates/service_registry.p12
SERVICEREGISTRY_SSL_KEYSTORE_PASSWORD=123456
SERVICEREGISTRY_SSL_KEY_ALIAS=service_registry
SERVICEREGISTRY_SSL_KEY_PASSWORD=123456
SERVICEREGISTRY_SSL_CLIENT_AUTH=need
SERVICEREGISTRY_SSL_TRUSTSTORE_TYPE=PKCS12
SERVICEREGISTRY_SSL_TRUSTSTORE_PATH=classpath:certificates/truststore.p12
SERVICEREGISTRY_SSL_TRUSTSTORE_PASSWORD=123456
While searching for a solution to my problem, I came across the Spring Cloud Config Server (also just a self-hostable Spring application). It provides an interface between the configuration file storage location and the Spring application. The application is set up so that instead of a static application.(properties|yml) file, a bootstrap.yml is stored with the URL of the config server. When the application starts up, it connects to the Config Server and receives its configuration via a REST interface. The configuration files can be stored in Git, a database, or as files.
Spring Docs - Cloud Config Server
Spring - Instruction for Implementation
Baeldung - Tutorial
Baeldung - Tutorial Bootstrapping
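As an illustration only (a minimal sketch; the application name, the server address and the assumption that the Spring Cloud Config client dependency is on the classpath with bootstrap processing enabled are mine, not part of the original setup), the client-side bootstrap.yml could look roughly like:

spring:
  application:
    name: arrowhead-serviceregistry   # assumed; must match the config file name served by the Config Server
  cloud:
    config:
      uri: http://config-server:8888  # assumed address of the self-hosted Config Server

The Config Server would then serve arrowhead-serviceregistry.properties (or .yml) from its configured Git repository, database or file location.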

How to reference local Kafka and Zookeeper config on Spring Cloud Dataflow "Cloudfoundry" server start

Here is what I have successfully done so far on the SCDF Local Server.
I have successfully deployed the SCDF server on my local machine, and I have also used the Kafka and ZooKeeper config parameters with it, i.e.:
mymac$ java -jar spring-cloud-dataflow-server-local-1.3.0.RELEASE.jar
--spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=localhost:9092
--spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
I was able to create my stream
ingest = producer-app > :broker1
filter = :broker1 > filter-app > :broker2
Now I need help to do the exact same thing on PCF Dev.
I have my PCF Dev running.
I have to deploy the SCDF Cloud Foundry jar with my local Kafka and ZooKeeper parameters to PCF Dev, but when I do the following steps it gives me an error:
1.1) cf push -f manifest-scdf.yml --no-start -p /XXX/XXX/XXX/spring-cloud-dataflow-server-cloudfoundry-1.3.0.BUILD-SNAPSHOT.jar -k 1500M
This runs fine, no problem. But step 1.2
1.2) cf start dataflow-server --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=host.pcfdev.io:9092 --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=host.pcfdev.io:2181
gives me this error:
Incorrect Usage: unknown flag `spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers'
Below is my manifest-scdf.yml file:
---
instances: 1
memory: 2048M
applications:
  - name: dataflow-server
    host: dataflow-server
    services:
      - redis
      - rabbit
    env:
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.local.pcfdev.io
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: pcfdev-org
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: pcfdev-space
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: local.pcfdev.io
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: admin
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: admin
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SKIP_SSL_VALIDATION: true
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: rabbit
      MAVEN_REMOTE_REPOSITORIES_REPO1_URL: https://repo.spring.io/libs-snapshot
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DISK: 512
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK: java_buildpack
      spring.cloud.deployer.cloudfoundry.stream.memory: 400
      spring.cloud.dataflow.features.tasks-enabled: true
      spring.cloud.dataflow.features.streams-enabled: true
Please help me. Thank you.
There are a few options to supply Kafka credentials to the stream apps in PCF.
1. Kafka CUPs
This option allows you to create a CUPs entry for the external Kafka service. While deploying the stream, you can then supply the coordinates to each application individually, as described in the docs, or you can supply them as global properties for all the stream apps deployed by the SCDF server.
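As a rough sketch of that first option (the service name and host are placeholders, and the credential keys follow the convention used in the SCDF docs rather than anything from this question), the user-provided service could be created like this:

cf create-user-provided-service kafkacups -p '{"brokers":"<HOST>:9092","zkNodes":"<HOST>:2181"}'

The coordinates stored in this service can then be supplied to the apps as described above.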
2. Inline properties
Instead of extracting from CUPs, you can also directly supply the HOST/PORT while deploying the stream. Again, this can be applied globally, too.
stream deploy myTest --properties "app.*.spring.cloud.stream.kafka.binder.brokers=<HOST>:9092,app.*.spring.cloud.stream.kafka.binder.zkNodes=<HOST>:2181"
Note: The HOST must be reachable from the stream apps; otherwise they will continue to connect to localhost and potentially fail, since the apps are running inside a VM.
The error you're seeing is coming from the CF CLI; it's interpreting those (I'm assuming environment) variables you're providing as flags to the cf start command and failing.
You could either provide them in your manifest.yml or set their values manually using the CLI's cf set-env command by doing something like this:
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers host.pcfdev.io:9092
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes host.pcfdev.io:2181
After you've set them they should be picked up when you run cf start dataflow-server.
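If you go the manifest route instead, a hedged sketch of the extra entries (property names taken from the cf start attempt above, added under the env: block of the manifest-scdf.yml from the question) might be:

env:
  spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers: host.pcfdev.io:9092
  spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes: host.pcfdev.io:2181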
Relevant CLI docs:
http://cli.cloudfoundry.org/en-US/cf/set-env.html

Exception getting zoo instance in GeoMesa Accumulo data store

Thanks for your attention. I am setting up an Accumulo data store using GeoMesa and ZooKeeper, and I have completed the setup configuration changes and installed the required components such as Accumulo, Java and Maven.
When I create a new feature from the command-line interface using the command geomesa create-schema -u root -p ****** \
-c device_ping \
-f feature \
-s uuid:String:index=true,dtg:Date,geom:Point:srid=4326 \
--dtg dtg
it fails, giving
"Exception getting zoo instance", and terminates with the error "Unable to create data store, please check your connection parameters."
I am unable to find a solution to this problem and don't know which configuration parameters are wrong. The details are in the attached screenshot.
GeoMesa tries to figure out the Zookeepers from a local copy of the Accumulo cluster's configuration. That configuration is likely in $ACCUMULO_HOME.
You can manually set the zookeepers with -z host1,host2,host3. If the hosts are correct (or you set them manually), you might check that zookeeper is running and can be accessed from your laptop.
To double check Zookeeper, you can do something like...
echo ruok | nc hostName portNumber
If Zookeeper is running, you'll receive an 'imok' message back.
Lastly, if Zookeeper is up and running, but just slow for some reason, you can increase the Zookeeper timeout by setting the Java system property "instance.zookeeper.timeout" higher. The timeout is currently set to 5 seconds.
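If you are connecting programmatically rather than through the CLI, a minimal Java sketch of raising that timeout before opening the store might look like the following. The GeoTools DataStoreFinder call is standard, but the GeoMesa parameter key names and the timeout value format are assumptions here and vary between GeoMesa versions, so treat this as an illustration only:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;

public class GeoMesaZkTimeoutExample {
    public static void main(String[] args) throws Exception {
        // raise the ZooKeeper timeout above the 5 second default
        // (the "10s" value format is an assumption; check the docs for your version)
        System.setProperty("instance.zookeeper.timeout", "10s");

        Map<String, Serializable> params = new HashMap<>();
        params.put("instanceId", "myAccumuloInstance"); // assumed key names for the Accumulo data store
        params.put("zookeepers", "host1,host2,host3");  // same hosts you would pass with -z
        params.put("user", "root");
        params.put("password", "******");
        params.put("tableName", "device_ping");

        DataStore store = DataStoreFinder.getDataStore(params);
        System.out.println(store == null
                ? "Unable to create data store, please check your connection parameters."
                : "Data store created: " + store.getClass().getSimpleName());
    }
}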

SonarQube integration and StartSonar.bat failed with Error (0x2)

Yesterday I started a new Unreal Engine project on Visual Studio Team Services; I decided to teach myself the art of video game programming.
Anyway, the main thing I wanted for this project was to integrate SonarQube into Visual Studio and get reports from it (I already used it at university and it was really useful for me), but I ran into some curious problems:
First of all, I wanted to run the analysis on my local PC. The problem is that today I ran the bat and got the error below. I have already searched for this problem, but I believe it's not the %JAVA_HOME% variable.
Wrapper Started as Console
Launching a JVM...
Unable to execute Java command. The system cannot find the file specified. (0x2)
"java" -Djava.awt.headless=true -Xms3m -Xmx3m -Djava.library.path="./lib" -classpath "../../lib/jsw/wrapper-3.2.3.jar;../../lib/sonar-application-6.1.jar"
-Dwrapper.key="GMa9Ff98pvtqrHYZ" -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.pid=6352 -Dwrapper.version="3.2.3" -Dwrapper.native_library="wrapper" -Dwrapper.cpu.timeout="10" -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp org.sonar.application.App
Critical error: wait for JVM process failed
If I'm able to solve this problem, I then need to understand how to integrate the code analysis into my build definition on VSTS locally (it requires a correct endpoint and I don't know how to set it).
Last but not least, if I'm not able to run this build definition locally, is there any free online server that supports SonarQube, so that I can use it and finally get a SonarQube report?
EDIT:
Wrapper Config
# Path to JVM executable. By default it must be available in PATH.
# Can be an absolute path, for example:
set Mypath=C:/Program Files
wrapper.java.command=%Mypath%\Java\jdk1.8.0_77\bin\java
wrapper.java.command=java
#
# DO NOT EDIT THE FOLLOWING SECTIONS
#
#********************************************************************
# Wrapper Java
#********************************************************************
wrapper.java.additional.1=-Djava.awt.headless=true
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
wrapper.java.classpath.1=../../lib/jsw/*.jar
wrapper.java.classpath.2=../../lib/*.jar
wrapper.java.library.path.1=./lib
wrapper.app.parameter.1=org.sonar.application.App
wrapper.java.initmemory=3
wrapper.java.maxmemory=3
#********************************************************************
# Wrapper Logs
#********************************************************************
wrapper.console.format=PM
wrapper.console.loglevel=INFO
wrapper.logfile=../../logs/sonar.log
wrapper.logfile.format=M
wrapper.logfile.loglevel=INFO
# Maximum size that the log file will be allowed to grow to before
# the log is rolled. Size is specified in bytes. The default value
# of 0, disables log rolling. May abbreviate with the 'k' (kb) or
# 'm' (mb) suffix. For example: 10m = 10 megabytes.
#wrapper.logfile.maxsize=0
# Maximum number of rolled log files which will be allowed before old
# files are deleted. The default value of 0 implies no limit.
#wrapper.logfile.maxfiles=0
# Log Level for sys/event log output. (See docs for log levels)
wrapper.syslog.loglevel=NONE
#********************************************************************
# Wrapper Windows Properties
#********************************************************************
# Title to use when running as a console
wrapper.console.title=SonarQube
# Disallow start of multiple instances of an application at the same time on Windows
wrapper.single_invocation=true
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
wrapper.ntservice.name=SonarQube
# Display name of the service
wrapper.ntservice.displayname=SonarQube
# Description of the service
wrapper.ntservice.description=SonarQube
# Service dependencies. Add dependencies as needed starting from 1
wrapper.ntservice.dependency.1=
# Mode in which the service is installed. AUTO_START or DEMAND_START
wrapper.ntservice.starttype=AUTO_START
# Allow the service to interact with the desktop.
wrapper.ntservice.interactive=false
#********************************************************************
# Forking Properties
#********************************************************************
wrapper.disable_restarts=TRUE
wrapper.ping.timeout=0
wrapper.shutdown.timeout=3000
wrapper.jvm_exit.timeout=3000
Sonar Config
# Property values can:
# - reference an environment variable, for example sonar.jdbc.url= ${env:SONAR_JDBC_URL}
# - be encrypted. See http://redirect.sonarsource.com/doc/settings-encryption.html
#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT: the embedded H2 database is used by default. It is recommended for tests but not for
# production use. Supported databases are MySQL, Oracle, PostgreSQL and Microsoft SQLServer.
# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
#sonar.jdbc.username=
#sonar.jdbc.password=
#----- Embedded Database (default)
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092
#----- MySQL 5.6 or greater
# Only InnoDB storage engine is supported (not myISAM).
# Only the bundled driver is supported. It can not be changed.
#sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
#----- Oracle 11g/12c
# - Only thin client is supported
# - Only versions 11.2.x and 12.x of Oracle JDBC driver are supported
# - The JDBC driver must be copied into the directory extensions/jdbc-driver/oracle/
# - If you need to set the schema, please refer to http://jira.sonarsource.com/browse/SONAR-5000
#sonar.jdbc.url=jdbc:oracle:thin:@localhost:1521/XE
#----- PostgreSQL 8.x/9.x
# If you don't use the schema named "public", please refer to http://jira.sonarsource.com/browse/SONAR-5000
#sonar.jdbc.url=jdbc:postgresql://localhost/sonar
#----- Microsoft SQLServer 2012/2014 and SQL Azure
# A database named sonar must exist and its collation must be case-sensitive (CS) and accent-sensitive (AS)
# Use the following connection string if you want to use integrated security with Microsoft Sql Server
# Do not set sonar.jdbc.username or sonar.jdbc.password property if you are using Integrated Security
# For Integrated Security to work, you have to download the Microsoft SQL JDBC driver package from
# http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=11774
# and copy sqljdbc_auth.dll to your path. You have to copy the 32 bit or 64 bit version of the dll
# depending upon the architecture of your server machine.
# This version of SonarQube has been tested with Microsoft SQL JDBC version 4.1
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true
# Use the following connection string if you want to use SQL Auth while connecting to MS Sql Server.
# Set the sonar.jdbc.username and sonar.jdbc.password appropriately.
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar
#----- Connection pool settings
# The maximum number of active connections that can be allocated
# at the same time, or negative for no limit.
# The recommended value is 1.2 * max sizes of HTTP pools. For example if HTTP ports are
# enabled with default sizes (50, see property sonar.web.http.maxThreads)
# then sonar.jdbc.maxActive should be 1.2 * (50) = 120.
#sonar.jdbc.maxActive=60
# The maximum number of connections that can remain idle in the
# pool, without extra ones being released, or negative for no limit.
#sonar.jdbc.maxIdle=5
# The minimum number of connections that can remain idle in the pool,
# without extra ones being created, or zero to create none.
#sonar.jdbc.minIdle=2
# The maximum number of milliseconds that the pool will wait (when there
# are no available connections) for a connection to be returned before
# throwing an exception, or <= 0 to wait indefinitely.
#sonar.jdbc.maxWait=5000
#sonar.jdbc.minEvictableIdleTimeMillis=600000
#sonar.jdbc.timeBetweenEvictionRunsMillis=30000
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
# Startup can be long if entropy source is short of entropy. Adding
# -Djava.security.egd=file:/dev/./urandom is an option to resolve the problem.
# See https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source
#
#sonar.web.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Default value is 9000.
sonar.web.port=8888
# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50.
#sonar.web.http.maxThreads=50
# The minimum number of threads always kept running. The default value is 5.
#sonar.web.http.minThreads=5
# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25.
#sonar.web.http.acceptCount=25
# By default users are logged out and sessions closed when server is restarted.
# If you prefer keeping user sessions open, a secret should be defined. Value is
# HS256 key encoded with base64. It must be unique for each installation of SonarQube.
# Example of command-line:
# echo -n "type_what_you_want" | openssl dgst -sha256 -hmac "key" -binary | base64
#sonar.auth.jwtBase64Hs256Secret=
#--------------------------------------------------------------------------------------------------
# COMPUTE ENGINE
# The Compute Engine is responsible for processing background tasks.
# Compute Engine is executed in a dedicated Java process. Default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#sonar.ce.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.ce.javaAdditionalOpts=
# The number of workers in the Compute Engine. Value must be greater than zero.
# By default the Compute Engine uses a single worker and therefore processes tasks one at a time.
# Recommendations:
#
# Using N workers will require N times as much Heap memory (see property
# sonar.ce.javaOpts to tune heap) and produce N times as much IOs on disk, database and
# Elasticsearch. The number of workers must suit your environment.
#sonar.ce.workerCount=1
#--------------------------------------------------------------------------------------------------
# ELASTICSEARCH
# Elasticsearch is used to facilitate fast and accurate information retrieval.
# It is executed in a dedicated Java process. Default heap size is 1Gb.
# JVM options of Elasticsearch process
# Recommendations:
#
# Use HotSpot Server VM. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#sonar.search.javaOpts=-Xmx1G -Xms256m -Xss256k -Djna.nosys=true \
# -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 \
# -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.search.javaAdditionalOpts=
# Elasticsearch port. Default is 9001. Use 0 to get a free port.
# As a security precaution, should be blocked by a firewall and not exposed to the Internet.
#sonar.search.port=9001
# Elasticsearch host. The search server will bind this address and the search client will connect to it.
# Default is 127.0.0.1.
# As a security precaution, should NOT be set to a publicly available address.
#sonar.search.host=127.0.0.1
#--------------------------------------------------------------------------------------------------
# UPDATE CENTER
# Update Center requires an internet connection to request https://update.sonarsource.org
# It is enabled by default.
#sonar.updatecenter.activate=true
# HTTP proxy (default none)
#http.proxyHost=
#http.proxyPort=
# HTTPS proxy (defaults are values of http.proxyHost and http.proxyPort)
#https.proxyHost=
#https.proxyPort=
# NT domain name if NTLM proxy is used
#http.auth.ntlm.domain=
# SOCKS proxy (default none)
#socksProxyHost=
#socksProxyPort=
# Proxy authentication (used for HTTP, HTTPS and SOCKS proxies)
#http.proxyUser=
#http.proxyPassword=
#--------------------------------------------------------------------------------------------------
# LOGGING
# Level of logs. Supported values are INFO(default), DEBUG and TRACE (DEBUG + SQL + ES requests)
#sonar.log.level=INFO
# Path to log files. Can be absolute or relative to installation directory.
# Default is <installation home>/logs
#sonar.path.logs=logs
# Rolling policy of log files
# - based on time if value starts with "time:", for example by day ("time:yyyy-MM-dd")
# or by month ("time:yyyy-MM")
# - based on size if value starts with "size:", for example "size:10MB"
# - disabled if value is "none". That needs logs to be managed by an external system like logrotate.
#sonar.log.rollingPolicy=time:yyyy-MM-dd
# Maximum number of files to keep if a rolling policy is enabled.
# - maximum value is 20 on size rolling policy
# - unlimited on time rolling policy. Set to zero to disable old file purging.
#sonar.log.maxFiles=7
# Access log is the list of all the HTTP requests received by server. If enabled, it is stored
# in the file {sonar.path.logs}/access.log. This file follows the same rolling policy as for
# sonar.log (see sonar.log.rollingPolicy and sonar.log.maxFiles).
#sonar.web.accessLogs.enable=true
# Format of access log. It is ignored if sonar.web.accessLogs.enable=false. Possible values are:
# - "common" is the Common Log Format, shortcut to: %h %l %u %user %date "%r" %s %b
# - "combined" is another format widely recognized, shortcut to: %h %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}"
# - else a custom pattern. See http://logback.qos.ch/manual/layouts.html#AccessPatternLayout.
# The login of authenticated user is not implemented with "%u" but with "%reqAttribute{LOGIN}" (since version 6.1).
# The value displayed for anonymous users is "-".
# If SonarQube is behind a reverse proxy, then the following value allows to display the correct remote IP address:
#sonar.web.accessLogs.pattern=%i{X-Forwarded-For} %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}"
# Default value is:
#sonar.web.accessLogs.pattern=combined
#--------------------------------------------------------------------------------------------------
# OTHERS
# Delay in seconds between processing of notification queue. Default is 60 seconds.
#sonar.notifications.delay=60
# Paths to persistent data files (embedded database and search index) and temporary files.
# Can be absolute or relative to installation directory.
# Defaults are respectively <installation home>/data and <installation home>/temp
#sonar.path.data=data
#sonar.path.temp=temp
#--------------------------------------------------------------------------------------------------
# DEVELOPMENT - only for developers
# The following properties MUST NOT be used in production environments.
# Dev mode allows to reload web sources on changes and to restart server when new versions
# of plugins are deployed.
#sonar.web.dev=false
# Path to webapp sources for hot-reloading of Ruby on Rails, JS and CSS (only core,
# plugins not supported).
#sonar.web.dev.sources=/path/to/server/sonar-web/src/main/webapp
# Elasticsearch HTTP connector, for example for KOPF:
# http://lmenezes.com/elasticsearch-kopf/?location=http://localhost:9010
#sonar.search.httpPort=-1
It seems that your update of wrapper.conf is wrong; after a correct edit its first lines should look like:
# Path to JVM executable. By default it must be available in PATH.
# Can be an absolute path, for example:
wrapper.java.command=C:\Program Files\Java\jdk1.8.0_77\bin\java
#wrapper.java.command=java
The command java is not in the PATH. Another option if you don't want to edit the PATH variable is to define the absolute path to "java" in conf/wrapper.conf (see property wrapper.java.command).
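To confirm that diagnosis before editing anything, a quick check from the same console that launches StartSonar.bat could be (plain Windows commands, shown as a sketch):

where java
java -version
echo %JAVA_HOME%

If "where java" finds nothing, either add the JDK's bin directory to PATH or point wrapper.java.command at the absolute java path as shown in the other answers.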
I updated the JDK path in the wrapper.conf file and it worked. Now the wrapper.conf file looks like this:
# Path to JVM executable. By default it must be available in PATH.
# Can be an absolute path, for example:
wrapper.java.command=C:\Program Files\Java\jdk-11.0.7\bin\java
#wrapper.java.command=java
@Davide Donadio: I have experienced the same issue today, even after making the change suggested by @Godin. When I checked further, the error said:
jvm 1 | Error: missing `server' JVM at `C:\Program Files (x86)\Java\jre1.8.0_271\bin\server\jvm.dll'.
StartSonar.bat is looking for jvm.dll inside the 'server' folder. Then I provided the JDK path in wrapper.conf and it started to work!
wrapper.java.command=C:\Program Files\Java\jdk-11.0.2\bin\java
I hope this will help someone -Thanks!
In my case, I didn't have a JDK installed on my Windows 10 machine, so I fixed it by installing one through the MSI from the link below, and it worked:
https://www.oracle.com/java/technologies/downloads/#jdk17-windows

JMX doesn't work in Tomcat

I have the following configuration:
I deployed a sample SymmetricDS engine in Tomcat 8. It should have a JMX MBean I have to connect to. The configuration file symmetric-server.properties has the following values:
# Enable Java Management Extensions (JMX) web console.
#
jmx.http.enable=true
# Port number for Java Management Extensions (JMX) web console.
#
jmx.http.port=31417
# Enable Java Management Extensions (JMX) remote agent.
#
jmx.agent.enable=true
# Port number for the Java Management Extensions (JMX) remote agent.
#
jmx.agent.port=31418
And yet when I go to localhost:31417 I get a 404, and when I launch JConsole this application is nowhere to be found.
But when I start SymmetricDS with the command bin\sym and it launches using the embedded Jetty server, I can see the HTTP Adaptor on localhost:31417 and can connect via JConsole to the local application, yet I cannot connect remotely to localhost:31418.
I downloaded the sources of SymmetricDS, and in the file
symmetric-server\src\main\java\org\jumpmind\symmetric\SymmetricWebServer.java
there are only three configuration values taken from the file symmetric-server.properties -- from the default values it seems that they are jmx.http.port for the HTTP Adaptor, https.port for HTTPS and http.port for the SymmetricWebServer.
I also tried changing jmx.agent.enable to false and manually overriding java command line options in sym_service.conf by adding:
wrapper.java.additional.13=-Dcom.sun.management.jmxremote
wrapper.java.additional.14=-Dcom.sun.management.jmxremote.port=31417
wrapper.java.additional.15=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.16=-Dcom.sun.management.jmxremote.ssl=false
to no avail.
Could you please help me: what am I doing wrong?
Update
After grepping the sources I found SystemConstants.java, in which again there were ports for http, https and jmx.http, but none for the remote agent.
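One hedged observation that may explain the difference between the two setups (this is an assumption about the deployment, not something stated in the question): sym_service.conf only configures the JVM of the standalone SymmetricDS service wrapper, so when the engine runs as a WAR inside Tomcat 8, the JMX remote flags would have to be passed to Tomcat's own JVM instead, for example through a setenv.bat sketch like this (port and security settings are illustrative only):

rem %CATALINA_BASE%\bin\setenv.bat
set "CATALINA_OPTS=%CATALINA_OPTS% -Dcom.sun.management.jmxremote"
set "CATALINA_OPTS=%CATALINA_OPTS% -Dcom.sun.management.jmxremote.port=31418"
set "CATALINA_OPTS=%CATALINA_OPTS% -Dcom.sun.management.jmxremote.authenticate=false"
set "CATALINA_OPTS=%CATALINA_OPTS% -Dcom.sun.management.jmxremote.ssl=false"

With those flags on the Tomcat JVM, JConsole should be able to attach remotely on that port.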
