Thanks for your attention. I am setting up an Accumulo data store using GeoMesa and Zookeeper. I have completed the setup configuration changes and installed the required components such as Accumulo, Java, and Maven.
When I create a new feature using the command-line interface with the command
geomesa create-schema -u root -p ****** \
-c device_ping \
-f feature \
-s uuid:String:index=true,dtg:Date,geom:Point:srid=4326 \
--dtg dtg
it fails with "Exception getting zoo instance" and terminates with the error "Unable to create data store, please check your connection parameters."
I am unable to find a solution to this problem and don't know which configuration parameters are wrong. The details are in the attached screenshot.
GeoMesa tries to figure out the Zookeepers from a local copy of the Accumulo cluster's configuration. That configuration is likely in $ACCUMULO_HOME.
You can manually set the Zookeepers with -z host1,host2,host3. If the hosts are correct (or you set them manually), check that Zookeeper is running and can be reached from your laptop.
To double check Zookeeper, you can do something like...
echo ruok | nc hostName portNumber
If Zookeeper is running, you'll receive an 'imok' message back.
Lastly, if Zookeeper is up and running, but just slow for some reason, you can increase the Zookeeper timeout by setting the Java system property "instance.zookeeper.timeout" higher. The timeout is currently set to 5 seconds.
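For example, combining the two suggestions, the original command could be retried along the lines of the sketch below. The Zookeeper hosts are placeholders, and whether JAVA_OPTS reaches the underlying JVM depends on your geomesa launcher script, so treat that part as an assumption:
# Assumption: the geomesa launcher passes JAVA_OPTS through to the JVM.
# The timeout value format (e.g. "10s" vs. milliseconds) depends on the Accumulo client version.
export JAVA_OPTS="$JAVA_OPTS -Dinstance.zookeeper.timeout=10s"
# Explicitly pass the Zookeeper hosts instead of relying on the $ACCUMULO_HOME config.
geomesa create-schema -u root -p ****** \
  -z zk1.example.com,zk2.example.com,zk3.example.com \
  -c device_ping \
  -f feature \
  -s uuid:String:index=true,dtg:Date,geom:Point:srid=4326 \
  --dtg dtg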
We were using cf-uaa's Gradle tasks to create a Docker image, but those have been removed in the latest version. I've loaded the war into a recent version, but the service does not seem to be starting correctly.
I've been building the war from the v74 tag, adding it to tomcat:8.5.45-jdk12-openjdk-oracle or tomcat:9.0.24-jdk12-openjdk-oracle, and setting the various env vars that we were passing to the previous image. I'm not seeing any log entries after the initial Tomcat output stating that my war has been deployed and the server startup time.
The Dockerfile is basically just an adaptation of what the previous image was doing:
FROM tomcat:8.5.45-jdk12-openjdk-oracle
#FROM tomcat:9.0.24-jdk12-openjdk-oracle
ENV LOGIN_CONFIG_URL WEB-INF/classes/required_configuration.yml
ENV UAA_CONFIG_PATH /uaa
RUN bash -c "rm -r /usr/local/tomcat/webapps/ROOT"
RUN bash -c "rm -r /usr/local/tomcat/webapps/host-manager"
RUN bash -c "rm -r /usr/local/tomcat/webapps/manager"
RUN bash -c "rm -r /usr/local/tomcat/webapps/examples"
RUN bash -c "rm -r /usr/local/tomcat/webapps/docs"
ADD *.war /usr/local/tomcat/webapps/uaa.war
RUN bash -c "echo $LOGIN_CONFIG_URL"
EXPOSE 8080
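For reference, the image is built and started roughly like this (a sketch: the image tag and LOG_LEVEL value are placeholders, the remaining env vars from our previous setup are omitted, and the uaa.yml volume mount follows the original setup mentioned in the updates below):
# Sketch of the build/run steps; tag and LOG_LEVEL are placeholders.
docker build -t uaa-v75 .
docker run -d -p 8080:8080 \
  -e LOG_LEVEL=debug \
  -v "$(pwd)/uaa.yml:/uaa/uaa.yml" \
  uaa-v75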
I would expect to see the service responding to my requests, or some errors in the log indicating that the war failed to deploy. I am not currently getting any log output generated from the application code. When I send a request to the service, the response is a 500 with an error header from the service.
X-Cf-Uaa-Error:Server failed to start. Possible configuration error.
Update: I've located the UAA logs at .../tomcat/logs/uaa.log. I'm not seeing anything indicating that the service failed to deploy, but I am also not seeing anything to indicate that it is picking up the env vars I have set in the container. I recreated the service using the war from the original setup, which started successfully with the uaa.yml that I mounted as a volume. Comparing the logs, the original setup's first log entry is from YamlProcessor, which does not show up in the v75 logs at all. In fact, no debug entries show up at all, which suggests to me that my LOG_LEVEL env var is not propagating either.
Update 2: We reverted the image base to FROM tomcat:8.5-jre8 and started seeing Flyway errors in the uaa.log. Our previous datasource URL format was url: jdbc:postgresql://${POSTGRES_NAME}:5432/${DB}?currentSchema=uaa, which caused a Flyway exception. After removing the schema reference, it created the tables in the public schema. By creating the uaa schema manually before starting the service, it was able to run with the original format. The Flyway version has been updated, so perhaps there is something new that needs to be set.
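For reference, creating the schema manually can be done with something like this (a sketch; psql is assumed, and $POSTGRES_NAME/$DB are the same variables used in the datasource URL above):
# Hypothetical one-liner using the same connection variables as the datasource URL.
psql -h "$POSTGRES_NAME" -p 5432 -d "$DB" -c 'CREATE SCHEMA IF NOT EXISTS uaa;'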
The application seems to be running, but when I try to get a token at /uaa/oauth/token I get a 500 with this error in the logs: Caused by: java.lang.NoSuchMethodError: java.nio.CharBuffer.limit(I)Ljava/nio/CharBuffer;
Since Jan 2021, UAA server Docker images are available in the cloudfoundry/uaa Docker Hub repository.
docker pull cloudfoundry/uaa:75.0.0
See its Dockerfile for more details.
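A minimal sketch for running the published image (the /uaa config path and the UAA_CONFIG_PATH variable are assumptions carried over from the hand-built image above, so verify them against the linked Dockerfile):
# Assumption: the image reads uaa.yml from the directory given by UAA_CONFIG_PATH.
docker run -d -p 8080:8080 \
  -e UAA_CONFIG_PATH=/uaa \
  -v "$(pwd)/uaa.yml:/uaa/uaa.yml" \
  cloudfoundry/uaa:75.0.0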
Can you try the following?
https://github.com/hortonworks/docker-cloudbreak-uaa
This works very well.
Here is what I have successfully done so far on the SCDF Local Server.
I have successfully deployed the SCDF server locally, and I have used the Kafka and Zookeeper config parameters with it, i.e.
mymac$ java -jar spring-cloud-dataflow-server-local-1.3.0.RELEASE.jar \
  --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=localhost:9092 \
  --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
I was able to create my streams:
ingest = producer-app > :broker1
filter = :broker1 > filter-app > :broker2
Now I need help doing the exact same thing on PCFDev.
I have my PCFDev running.
I have to deploy the SCDF Cloud Foundry jar to PCFDev with my local Kafka and Zookeeper parameters, but when I follow the steps below I get an error.
1.1) cf push -f manifest-scdf.yml --no-start -p /XXX/XXX/XXX/spring-cloud-dataflow-server-cloudfoundry-1.3.0.BUILD-SNAPSHOT.jar -k 1500M
This runs fine, no problem. But 1.2)
1.2) cf start dataflow-server --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=host.pcfdev.io:9092 --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=host.pcfdev.io:2181
gives me this error:
Incorrect Usage: unknown flag `spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers'
Below is my manifest-scdf.yml file:
---
instances: 1
memory: 2048M
applications:
  - name: dataflow-server
    host: dataflow-server
    services:
      - redis
      - rabbit
    env:
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.local.pcfdev.io
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: pcfdev-org
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: pcfdev-space
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: local.pcfdev.io
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: admin
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: admin
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SKIP_SSL_VALIDATION: true
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: rabbit
      MAVEN_REMOTE_REPOSITORIES_REPO1_URL: https://repo.spring.io/libs-snapshot
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DISK: 512
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK: java_buildpack
      spring.cloud.deployer.cloudfoundry.stream.memory: 400
      spring.cloud.dataflow.features.tasks-enabled: true
      spring.cloud.dataflow.features.streams-enabled: true
Please help me. Thank you.
There are a few options for supplying Kafka credentials to the stream apps in PCF.
1. Kafka CUPs
This option allows you to create CUPs for an external Kafka service. While deploying the stream, you can then supply the coordinates to each application either individually, as described in the docs, or as global properties for all the stream apps deployed by the SCDF server.
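For example, the CUPs instance for an external Kafka service can be created along these lines (a sketch; the service name kafkacups and the <HOST> placeholders are hypothetical):
# Hypothetical user-provided service carrying the external Kafka coordinates.
cf create-user-provided-service kafkacups -p '{"brokers":"<HOST>:9092","zkNodes":"<HOST>:2181"}'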
2. Inline properties
Instead of extracting from CUPs, you can also directly supply the HOST/PORT while deploying the stream. Again, this can be applied globally, too.
stream deploy myTest --properties "app.*.spring.cloud.stream.kafka.binder.brokers=<HOST>:9092,app.*.spring.cloud.stream.kafka.binder.zkNodes=<HOST>:2181"
Note: The HOST must be reachable by the stream apps; otherwise, they will keep trying to connect to localhost and will likely fail, since the apps are running inside a VM.
The error you're seeing is coming from the CF CLI: it's interpreting those (I'm assuming environment) variables you're providing as flags to the cf start command and failing.
You could either provide them in your manifest.yml or set their values manually using the CLI's cf set-env command by doing something like this:
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers host.pcfdev.io:9092
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes host.pcfdev.io:2181
After you've set them they should be picked up when you run cf start dataflow-server.
Relevant CLI docs:
http://cli.cloudfoundry.org/en-US/cf/set-env.html
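If you go the manifest route instead, the same two properties can be added under env in manifest-scdf.yml (a sketch based on the manifest shown in the question):
    env:
      # ...existing entries...
      spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers: host.pcfdev.io:9092
      spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes: host.pcfdev.io:2181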
I have created an R package that makes use of sparklyr capabilities within a dummy hello function. My package does a very simple thing: it connects to a Spark cluster, prints the Spark version, and disconnects. The package cleans and builds successfully and executes successfully from R and RStudio.
# Connect to Spark cluster
spark_conn <- sparklyr::spark_connect(master = "spark://elenipc.home:7077", spark_home = '/home/eleni/spark-2.2.0-bin-hadoop2.7/')
# Print the version of Spark
sv<- sparklyr::spark_version(spark_conn)
print(sv)
# Disconnect from Spark
sparklyr::spark_disconnect(spark_conn)
It is very important for me to be able to execute the hello function from the OpenCPU REST API. (I have used the OpenCPU API to execute many other custom-created packages.)
When invoking the OpenCPU API like:
curl http://localhost/ocpu/user/rstudio/library/myFirstBigDataPackage/R/hello/print -X POST
I get the following response:
Failed while connecting to sparklyr to port (8880) for sessionid (89615): Gateway in port (8880) did not respond.
Path: /home/eleni/spark-2.2.0-bin-hadoop2.7/bin/spark-submit
Parameters: --class, sparklyr.Shell, '/home/rstudio/R/x86_64-pc-linux-gnu-library/3.4/sparklyr/java/sparklyr-2.2-2.11.jar', 8880, 89615
Log: /tmp/ocpu-temp/file26b165c92166_spark.log
---- Output Log ----
Error occurred during initialization of VM
Could not allocate metaspace: 1073741824 bytes
---- Error Log ----
In call:
force(code)
Of course, allocating more memory to both Java and the Spark executor does not resolve the issue. Permission issues are also ruled out, as I already configured the /etc/apparmor.d/opencpu.d/custom file to give OpenCPU rwx privileges on the Spark directory. It seems to be a connectivity issue that I do not know how to approach. During method invocation via the OpenCPU API, the Spark logs do not print anything at all.
For your info, my environment configuration is as follows:
java version "1.8.0_65"
R version 3.4.1
RStudio version 1.0.153
spark-2.2.0-bin-hadoop2.7
opencpu 1.5 (compatible with my Ubuntu 14.04.3 LTS)
Thank you very much for your support and time!
I have set up an H2 cluster but cannot connect via the console or using a datasource; all I get is this:
IO Exception: "java.io.IOException: The filename, directory name, or volume label syntax is incorrect"; "E:/baseDirDefinedInServerConnection/myDB,localhost:1112/myDB" [90031-176] 90031/90031 (Help)
I have configured 2 servers thus:
java -cp h2-1.3.167.jar org.h2.tools.Server -tcp -tcpPort 1111 -tcpAllowOthers -baseDir E:\myBaseDir
at tcp://myIp:1111 (others can connect)
java -cp h2-1.3.167.jar org.h2.tools.Server -tcp -tcpPort 1112 -tcpAllowOthers -baseDir E:\myBaseDir\server
at tcp://myIp:1112 (others can connect)
So you see I have one database in a directory (this has been created) and another database in another directory. Both are up and running.
I have run the cluster tool thus:
java -cp h2-1.3.167.jar org.h2.tools.CreateCluster -urlSource jdbc:h2:tcp://localhost:1111/myDB -urlTarget jdbc:h2:tcp://localhost:1112/myDB -user username -password pass -serverList localhost:1111,localhost:1112
And it all looks good. If I try to connect through the console without the cluster list, I get this message, which proves we are in clustered mode, which is good:
Clustering error - database currently runs in cluster mode, server list: 'localhost:1111,localhost:1112'" [
I have checked the permissions on the directories and all have read/write access.
Yes this is a windows machine.
Using H2 version:
Bundle-Vendor: H2 Group
Bundle-Version: 1.3.167
Any ideas what I might have done wrong?
Thanks for reading.
Guess you already found out that one should connect like this:
jdbc:h2:tcp://localhost:1111,localhost:1112/myDB
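For example, with the H2 Shell tool against the cluster (a sketch; the user and password are the placeholders from the CreateCluster command above):
# Connect through both cluster nodes; the server list in the URL must match the
# list given to CreateCluster.
java -cp h2-1.3.167.jar org.h2.tools.Shell \
  -url "jdbc:h2:tcp://localhost:1111,localhost:1112/myDB" \
  -user username -password pass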
I want to monitor c3p0 connection pool parameters with Icinga.
For this I found the Nagios plugin jmxquery.
There is a patch for wildcard queries.
I've patched the plugin as described here, but after that I get NullPointerExceptions on every query I run.
[root#hostname target]# ./check_jmx -U service:jmx:rmi:///jndi/rmi://<HOSTNAME>:9001/jmxrmi -O com.mchange.v2.c3p0:type=PooledDataSource[2rw2h791t5s2b210jnofo\|2ab68416] -A numConnectionsAllUsers -I numConnectionsAllUsers -vvvv -username monitorRole -password *******************
JMX CRITICAL - NullPointerException: null connecting to com.mchange.v2.c3p0:type=PooledDataSource[2rw2h791t5s2b210jnofo|2ab68416] by URL service:jmx:rmi:///jndi/rmi://<HOSTNAME>:9001/jmxrmijava.lang.NullPointerException
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1008)
at java.lang.Double.parseDouble(Double.java:540)
at jmxquery.JMXQuery.compare(JMXQuery.java:199)
at jmxquery.JMXQuery.report(JMXQuery.java:147)
at jmxquery.JMXQuery.main(JMXQuery.java:93)
Any ideas?
An alternative tool to access JMX beans:
Jmxterm is a command-line based interactive JMX client. It's designed to allow users to access a Java MBean server from the command line without a graphical environment. Check it out if you find it useful:
JMXTerm
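For example, the same attribute could be read interactively like this (a sketch; the jar name and version are placeholders, and the JMX URL, bean, and attribute names are taken from the check_jmx call above):
# Start Jmxterm against the same JMX endpoint (jar name is a placeholder).
java -jar jmxterm-1.0.2-uber.jar -l service:jmx:rmi:///jndi/rmi://<HOSTNAME>:9001/jmxrmi -u monitorRole -p *******************
# Inside the interactive session, read the attribute:
# $> get -b com.mchange.v2.c3p0:type=PooledDataSource[2rw2h791t5s2b210jnofo|2ab68416] numConnectionsAllUsers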