H2 cluster with file-based database - Java

I have set up an H2 cluster but cannot connect via the console or using a data source; all I get is this:
IO Exception: "java.io.IOException: The filename, directory name, or volume label syntax is incorrect"; "E:/baseDirDefinedInServerConnection/myDB,localhost:1112/myDB" [90031-176] 90031/90031 (Help)
I have configured 2 servers thus:
java -cp h2-1.3.167.jar org.h2.tools.Server -tcp -tcpPort 1111 -tcpAllowOthers -baseDir E:\myBaseDir
at tcp://myIp:1111 (others can connect)
java -cp h2-1.3.167.jar org.h2.tools.Server -tcp -tcpPort 1112 -tcpAllowOthers -baseDir E:\myBaseDir\server
at tcp://myIp:1112 (others can connect)
So you see I have one database in one directory (this has been created) and another database in a second directory. Both are up and running.
I have run the cluster tool thus:
java -cp h2-1.3.167.jar org.h2.tools.CreateCluster -urlSource jdbc:h2:tcp://localhost:1111/myDB -urlTarget jdbc:h2:tcp://localhost:1112/myDB -user username -password pass -serverList localhost:1111,localhost:1112
And it all looks good. If I try to connect through the console without the cluster list I get this message, which proves we are in clustered mode:
Clustering error - database currently runs in cluster mode, server list: 'localhost:1111,localhost:1112'
I have checked the permissions on the directories and all have read/write access.
Yes, this is a Windows machine.
Using H2 version:
Bundle-Vendor: H2 Group
Bundle-Version: 1.3.167
Any ideas what I might have done wrong?
Thanks for reading.

Guess you already found out that one should connect like this:
jdbc:h2:tcp://localhost:1111,localhost:1112/myDB
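For completeness, here is a minimal Java sketch of opening a clustered connection (URL as above; the username/password are the illustrative credentials from the CreateCluster call, and the H2 jar is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ClusterConnect {
    public static void main(String[] args) throws SQLException {
        // Both cluster nodes must appear in the URL; connecting with only
        // one node is rejected while the database runs in cluster mode.
        String url = "jdbc:h2:tcp://localhost:1111,localhost:1112/myDB";
        try (Connection conn = DriverManager.getConnection(url, "username", "pass")) {
            System.out.println("Connected: " + conn.getMetaData().getURL());
        }
    }
}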

Related

"Unavaliable io exception" when connecting to remote Bazel master on bazel-buildfarm

I want to set up a small POC with one master (192.168.60.99) and one worker (192.168.60.98) using bazel-buildfarm. Both are CentOS 7 machines provisioned with Vagrant. When connecting from an Ubuntu workstation (a third machine) on the network, the following error occurs:
$ bazel build --verbose_failures //projects/myproj:app
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=229
INFO: Reading rc options for 'build' from /home/user/tests/ecommerce/.bazelrc:
'build' options: --strategy=TypeScriptCompile=worker --strategy=AngularTemplateCompile=worker --symlink_prefix=dist/ --define=compile=legacy --incompatible_strict_action_env --experimental_allow_incremental_repository_updates --distdir=third_party/_distdir
INFO: Reading rc options for 'build' from /home/user/.bazelrc:
'build' options: --spawn_strategy=remote --genrule_strategy=remote --strategy=Javac=remote --strategy=Closure=remote --remote_executor=192.168.60.99:8980
INFO: Writing tracer profile to '/home/user/.cache/bazel/_bazel_user/24700f1ad3e201a00a1c26bd59dc6502/command.profile.gz'
INFO: Invocation ID: 569b59ca-edcb-4922-92a0-b6f0b5ca2819
ERROR: Failed to query remote execution capabilities: UNAVAILABLE: io exception
The network connection is working and I can even connect to Bazel using telnet:
telnet 192.168.60.99 8980
Trying 192.168.60.99...
Connected to 192.168.60.99.
Escape character is '^]'.
.bazelrc file of the third Ubuntu machine:
$ cat ~/.bazelrc
build --spawn_strategy=remote --genrule_strategy=remote --strategy=Javac=remote --strategy=Closure=remote --remote_executor=192.168.60.99:8980
Buildfarm setup
Both have a clone of the buildfarm git repo. The example config files were used; only on the server did I replace localhost with 192.168.60.99 (the master server IP).
I know that bazel run is not recommended, but in the absence of a better alternative that works, my idea is to get the documented way working first (Bazel itself doesn't mention any alternative). Since not even bazel run works, I think something is wrong with my installation.
All machines use version 1.1.0, which is the latest stable one at the time of writing. It's definitely an issue with bazel-buildfarm, since the local build works fine on the Ubuntu machine.
Master server
bazel run //src/main/java/build/buildfarm:buildfarm-server $(pwd)/examples/server.config.example
Worker
bazel run //src/main/java/build/buildfarm:buildfarm-operationqueue-worker $(pwd)/examples/worker.config.example --distdir ~/distdir/
The distdir is a workaround for our company proxy, which manipulates files (MITM). Since Bazel doesn't allow this, I downloaded the affected file for its JDK manually:
[vagrant@localhost bazel-buildfarm]$ l ~/distdir/
total 188M
-rw-rw-r--. 1 vagrant vagrant 188M Jan 17 2019 zulu11.2.3-jdk11.0.1-linux_x64.tar.gz
If Bazel >= 1.0 is used, we need to specify the grpc protocol in .bazelrc like this:
--remote_executor=grpc://192.168.60.99:8980
Without the protocol, the UNAVAILABLE: io exception occurs. There is currently no documentation about this issue.
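For reference, the .bazelrc line from above with the protocol added would look like this (same flags as before; only the grpc:// scheme is new):
build --spawn_strategy=remote --genrule_strategy=remote --strategy=Javac=remote --strategy=Closure=remote --remote_executor=grpc://192.168.60.99:8980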

How to reference local Kafka and Zookeeper config on Spring Cloud Dataflow "Cloudfoundry" server start

Here is what I have successfully done so far on the SCDF Local Server.
I have successfully deployed the SCDF server locally, and I have used Kafka and Zookeeper config parameters with it, i.e.
mymac$ java -jar spring-cloud-dataflow-server-local-1.3.0.RELEASE.jar
--spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=localhost:9092
--spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
I was able to create my streams:
ingest = producer-app > :broker1
filter = :broker1 > filter-app > :broker2
Now I need help to do the exact same thing on PCF Dev.
I have my PCF Dev running.
I have to deploy the SCDF Cloud Foundry jar with my local Kafka and Zookeeper parameters to PCF Dev, but when I do the following steps I get an error.
1.1) cf push -f manifest-scdf.yml --no-start -p /XXX/XXX/XXX/spring-cloud-dataflow-server-cloudfoundry-1.3.0.BUILD-SNAPSHOT.jar -k 1500M
This runs fine, no problem. But step 1.2:
1.2) cf start dataflow-server --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=host.pcfdev.io:9092 --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=host.pcfdev.io:2181
gives me this error:
Incorrect Usage: unknown flag `spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers'
below is my manifest-scdf.yml file
---
instances: 1
memory: 2048M
applications:
- name: dataflow-server
host: dataflow-server
services:
- redis
- rabbit
env:
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.local.pcfdev.io
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: pcfdev-org
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: pcfdev-space
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: local.pcfdev.io
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: admin
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: admin
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SKIP_SSL_VALIDATION: true
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: rabbit
MAVEN_REMOTE_REPOSITORIES_REPO1_URL: https://repo.spring.io/libs-snapshot
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DISK: 512
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK: java_buildpack
spring.cloud.deployer.cloudfoundry.stream.memory: 400
spring.cloud.dataflow.features.tasks-enabled: true
spring.cloud.dataflow.features.streams-enabled: true
Please help me. Thank you.
There are a few options to supply Kafka credentials to the stream apps in PCF.
1. Kafka CUPs
This option allows you to create a CUPs (user-provided service) instance for an external Kafka service. While deploying the stream, you can then supply the coordinates to each application individually as described in the docs, or you can supply them as global properties for all the stream apps deployed by the SCDF server.
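As a sketch, such a user-provided service could be created with the CF CLI like this (the service name kafkacups and the JSON keys are illustrative; check the SCDF docs for the exact property names your version expects):
cf create-user-provided-service kafkacups -p '{"brokers":"<HOST>:9092","zkNodes":"<HOST>:2181"}'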
2. Inline properties
Instead of extracting them from CUPs, you can also supply the HOST/PORT directly while deploying the stream. Again, this can be applied globally.
stream deploy myTest --properties "app.*.spring.cloud.stream.kafka.binder.brokers=<HOST>:9092,app.*.spring.cloud.stream.kafka.binder.zkNodes=<HOST>:2181"
Note: the HOST must be reachable by the stream apps; otherwise, they will continue to connect to localhost and potentially fail, since the apps are running inside a VM.
The error you're seeing is coming from the CF CLI: it's interpreting those (I'm assuming environment) variables you're providing as flags to the cf start command and failing.
You could either provide them in your manifest.yml or set their values manually using the CLI's cf set-env command by doing something like this:
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers host.pcfdev.io:9092
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes host.pcfdev.io:2181
After you've set them they should be picked up when you run cf start dataflow-server.
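For the manifest route, here is a sketch of the relevant addition to the env block of the manifest-scdf.yml shown above (property names and hosts taken from the question):
env:
  ...
  spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers: host.pcfdev.io:9092
  spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes: host.pcfdev.io:2181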
Relevant CLI docs:
http://cli.cloudfoundry.org/en-US/cf/set-env.html

Getting Java IO exception when using Jenkins CLI, JDK 8.144, Jenkins v2.93

This is the error:
root@myserver# java -jar /opt/tomcat/webapps/ROOT/WEB-INF/jenkins-cli.jar -s http://localhost:8181 -auth ****:**** help
java.io.IOException: Bogus chunk size
at sun.net.www.http.ChunkedInputStream.processRaw(ChunkedInputStream.java:319)
at sun.net.www.http.ChunkedInputStream.readAheadBlocking(ChunkedInputStream.java:572)
at sun.net.www.http.ChunkedInputStream.readAhead(ChunkedInputStream.java:609)
at sun.net.www.http.ChunkedInputStream.read(ChunkedInputStream.java:696)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3375)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3368)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3356)
at hudson.cli.CLI$1ClientSideImpl.<init>(CLI.java:658)
at hudson.cli.CLI.plainHttpConnection(CLI.java:684)
at hudson.cli.CLI._main(CLI.java:612)
at hudson.cli.CLI.main(CLI.java:426)
On further troubleshooting I found the issue to be linked to the reverse proxy:
“The HTTP(S) connection mode of the CLI in Jenkins 2.54 and newer does not work correctly behind an Apache HTTP reverse proxy server using mod_proxy. Workarounds include using a different reverse proxy such as Nginx or HAProxy, or using the SSH connection mode where possible.”
I used SSH instead.
https://issues.jenkins-ci.org/browse/JENKINS-47279
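For reference, a sketch of the SSH connection mode invocation (this assumes the SSH server is enabled in Jenkins and a public key is configured for the user; the user name is illustrative):
java -jar /opt/tomcat/webapps/ROOT/WEB-INF/jenkins-cli.jar -s http://localhost:8181 -ssh -user myuser help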

Connecting to Postgres from a Docker container

I'm a little lost as to why my Java application can't connect to my Postgres database. I'm aiming to connect to a Postgres database through JDBC. The application runs inside a Docker container.
this.connection = DriverManager.getConnection("jdbc:postgresql://<myip>:5432/databasename", "usr", "password");
I'm getting the exception:
Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
When I run the application from my desktop, it connects as expected. When I run it from within the docker container, it fails.
I've just installed Docker this afternoon and ran through the Getting Started for Windows, so my setup state is just after running that. Here are the contents of my Dockerfile:
FROM java:8
ADD VaultServer /
EXPOSE 3971
EXPOSE 3972
ENTRYPOINT ["java", "-jar", "VaultServer.jar"]
Inside the data folder there is a file called pg_hba.conf; you have to configure it to accept the connections. Your pg_hba.conf file should have a line like this:
host    all    all    YourDockerIp/24    md5
After that, configure the postgresql.conf file. You have to update listen_addresses to listen on all addresses and make sure to uncomment that line by removing the # mark, so it looks like this:
listen_addresses = '*'

Where to see the console log in OpenShift?

Recently, I deployed my JSP project to an OpenShift server. Now I want to see the console log.
Suppose I print System.out.println("Message"); in my JSP project; how do I see that message in the console log on the OpenShift server?
EDITED:
rajendra @ http://code-programmersplace.rhcloud.com/
(uuid: 54d19a5be0b8cd9bf9000082)
-------------------------------------------------
Domain: programmersplace
Created: Feb 04 9:34 AM
Gears: 1 (defaults to small)
Git URL: ssh://54d19a5be0b8cd9bf9000082@code-programmersplace.rhcloud.com/~/git/rajendra.git/
SSH: 54d19a5be0b8cd9bf9000082@code-programmersplace.rhcloud.com
Deployment: auto (on git push)
jbossews-2.0 (Tomcat 7 (JBoss EWS 2.0))
---------------------------------------
Gears: 1 small
You have access to 1 application.
C:\Users\rajendra>
The first thing you need is to connect via SSH to your application on OpenShift. If the name of your app is awesome, run the following command:
rhc ssh -a awesome
If you've forgotten the name of your application, execute rhc apps in order to see your current apps. Look for the lines with something similar to hereisthename @ http://... or .../~/git/hereisthename.git/.
Once you're connected via SSH, you can see the log using the tail command:
Tomcat 7 (JBoss EWS 2.0)
tail -f -n 100 app-root/logs/jbossews.log
JBoss Application Server 7
tail -f -n 100 app-root/logs/jbossas.log
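Alternatively, the client tools can stream the logs without an interactive SSH session (assuming the same app name as above):
rhc tail -a awesome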
The OpenShift Client Tools are required. See Installing the OpenShift Client Tools. See also Getting Started with OpenShift Online.
RELATED: rhc ssh [No system SSH available] error
