Error "Can not connect to Ryuk" in CircleCI - java

Here is my config for CircleCI.
When I run the build on the local machine, everything passes. On the CircleCI server, however, there are a lot of errors, one of them being:
java.lang.IllegalStateException: Can not connect to Ryuk
Later on there are also errors connecting to the containers that Testcontainers started earlier for the tests; I think this is due to the failed connection to Ryuk. What confuses me is that everything works on the local machine and everything fails on the server.

The reason for the problem is here: https://gist.github.com/OlegGorj/52ca84624503a5e85624c6eb38df4590
where it says:
Separation of Environments: The job and remote docker run in separate environments. Therefore, Docker containers cannot directly communicate with the containers running in remote docker.
Accessing Services: It's impossible to start a service in remote docker and ping it directly from a primary container (and vice versa).
There appear to be three options:
Do your entire build in another remote docker container.
Use a dedicated VM for the build (https://www.testcontainers.org/supported_docker_environment/continuous_integration/circle_ci/)
If you can get away with creating the test container at the start then do that and don't use testcontainers within circleci (https://circleci.com/docs/2.0/executor-types/#using-multiple-docker-images). Just remember that each test case will be interacting with the same instance of the service.
More details on option 3
Basically, don't use testcontainers (one word) when using circleci.
In your .circleci/config.yml do something like this:
jobs:
  build:
    docker:
      - image: circleci/openjdk:14.0.1-jdk-buster
      - image: rabbitmq:3.8-alpine
        environment:
So circleci runs the rabbit container on the same host as your image.
You can then communicate with it on localhost on whatever ports it opens, and circleci will close these secondary containers when your build (which is always in the first container) finishes.
There are a few downsides to this:
testcontainers lets you start and stop containers; this approach doesn't, so you fundamentally cannot test the restart of a container.
all of your tests will run against the same instance, so in the rabbit instance each test should use a unique exchange and queue.
if, like me, you need to build in circleci and on the desktop (and in Jenkins) then you need circleci conditional logic in your tests (just check for System.getenv("CIRCLECI")) to determine which approach to take, as sketched below.
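
For illustration, here is a minimal, hypothetical sketch of such a switch. The helper class name, the image tag and the port are assumptions for the example, not part of the original answer:

import org.testcontainers.containers.RabbitMQContainer;

// Hypothetical helper: use the CircleCI side container on CI, Testcontainers everywhere else.
public final class RabbitTestSupport {

    private static RabbitMQContainer container;

    public static String rabbitHost() {
        if (System.getenv("CIRCLECI") != null) {
            // On CircleCI the secondary rabbitmq:3.8-alpine image listens on localhost.
            return "localhost";
        }
        return container().getHost();
    }

    public static int rabbitPort() {
        if (System.getenv("CIRCLECI") != null) {
            return 5672; // default AMQP port of the side container
        }
        return container().getAmqpPort();
    }

    private static synchronized RabbitMQContainer container() {
        if (container == null) {
            container = new RabbitMQContainer("rabbitmq:3.8-alpine");
            container.start();
        }
        return container;
    }
}

Tests then resolve the broker through rabbitHost()/rabbitPort() and never touch Testcontainers when the CIRCLECI environment variable is present.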

I had the same error, fixed it by turning off Experimental Features in Docker.
You can find them in Preferences.

Related

Cannot connect to H2 database

I have been struggling to connect to an H2 database from a Spring Boot app using the following connection string, as mentioned in the Database URL Overview section:
spring.datasource.url=jdbc:h2:tcp://localhost:9092/~/test-db
I also tried many different combinations for the TCP (server mode) connection, but I still get errors such as "Connection is broken: "java.net.SocketTimeoutException: connect timed out: localhost:9092"" when running the Spring Boot app.
import java.sql.SQLException;

import org.h2.tools.Server;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Application {
    // code omitted

    @Bean(initMethod = "start", destroyMethod = "stop")
    public Server h2Server() throws SQLException {
        return Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9092");
    }
}
So, how can I fix this problem and connect to H2 database via server mode?
You seem to be a little confused.
H2 can run in two different 'modes'.
Local mode
Local mode means H2 'just works', and you access this mode with the file: thing in the JDBC connect URL. The JDBC driver itself does all the database work, as in, it opens files, writes data, it does it all. There is no 'database server' at all. Or, if you prefer, the JDBC driver is its own server though it opens no ports.
Server mode
In this case you need a (separate) JVM and separately fire up H2 in server mode and then you can use the same library (still h2.jar) to serve as a JDBC server. In this mode, the two things are completely separate - if you want, you can run h2.jar on one machine to be the server, and run the same h2.jar on a completely different machine just to connect to the other H2 machine. The database server machine does the bulk of the work, with the 'client' H2 just being the JDBC driver. H2 is no different than e.g. mysql or postgres in such a mode: You have one 'app' / JVM process that runs as a database engine, allowing multiple different processes, even coming from completely different machines halfway around the world if you want to, to connect to it.
You access this mode with the tcp: thing in the JDBC string.
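For example, the two modes differ only in the JDBC URL (a sketch; the file path and the default sa/empty-password credentials are assumptions, not taken from the question):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class H2Modes {
    public static void main(String[] args) throws SQLException {
        // Local (embedded) mode: the driver opens the database file itself, no server involved.
        try (Connection embedded = DriverManager.getConnection("jdbc:h2:file:~/test-db", "sa", "")) {
            System.out.println("embedded: " + embedded.getMetaData().getURL());
        }

        // Server mode: connects over TCP to a separately running org.h2.tools.Server.
        try (Connection client = DriverManager.getConnection("jdbc:h2:tcp://localhost:9092/~/test-db", "sa", "")) {
            System.out.println("server mode: " + client.getMetaData().getURL());
        }
    }
}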
If you really want, you can run this mode and still have it all on a single machine, even a single JVM, but why would you want to? Whatever made you think this will 'solve lock errors' wouldn't be fixed by running all this stuff on a single JVM. There are only two options:
You're mis-analysing the problem.
You really do have multiple separate JVM processes (either one machine with 2 java processes in the activity monitor / ps auxww output / task manager, or 2+ machines) all trying to connect to a single database in which case you certainly do need this, yes.
How to do server mode right
You most likely want a separate JVM that starts before and that hosts the h2 database; it needs to run before the 'client' JVMs (the ones that will connect to it) start running. Catalina is not the 'server' you are looking for, it is org.h2.tools.Server, and if it says 'not found' you need to fix your maven imports. This needs to be a separate JVM (you COULD write code that goes: oh, hey, there isn't a separate JVM running with the h2 server, so I'll start it in-process right here right now, but that means that process needs to stay in the air forever, which is just weird. Hence, you want a separate JVM process for this).
You haven't explained what you're doing. But, let's say what you're doing is this:
I have a CI script that fires up multiple separate JVMs, some in parallel even, which runs a bunch of integration and unit tests in parallel.
Even though they run in parallel (or perhaps intentionally so), you want them all to run off of a single DB. This is usually a really bad idea (you want tests to be isolated, so that running them on their own behaves identically. You don't want a test to fail in a way that can only be reproduced if you run the same batch of 18 separate tests using the same run code, where one unrelated test fails in a specific fashion, whilst it's Tuesday, a full moon, and Beethoven is playing in your music player, and it's warmer than 24º in the room affecting the CPU's throttling, of course. Which is exactly what tends to happen if you try to re-use resources in multiple tests!). Still, you somehow really want this.
... then, edit the CI script to first launch a JVM that hosts an H2 server (a sketch of such a launcher follows below), and once that's up and running, presumably run a process that fills this database with test data, and once that's done, run all tests in parallel, and once those are all done, shut down the JVM and delete the DB file.
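A minimal sketch of such a launcher, assuming the same port as in the question (the class name is made up):

import java.sql.SQLException;

import org.h2.tools.Server;

// Hypothetical standalone launcher for the database-server JVM.
public class H2ServerMain {
    public static void main(String[] args) throws SQLException {
        Server server = Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9092").start();
        System.out.println("H2 TCP server listening at " + server.getURL());
        // The TCP server keeps a non-daemon thread alive, so the JVM stays up until the
        // CI script stops it (e.g. via Server.shutdownTcpServer or by killing the process).
    }
}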
Exactly how to do the third part is a separate question - if you need help with that, ask a new question and name the relevant tool(s) you are using to run this stuff, paste the config files, etc.

How to configure Java client connecting to AWS EMR spark cluster

I'm trying to write a simple Spark application, and when I run it locally it works with the master set as
.master("local[2]")
But after configuring a Spark cluster on AWS (EMR), I can't connect to the master URL:
.master("spark://<master url>:7077")
Is this the way to do it? Am I missing something here?
The cluster is up and running, and when I added my application as a step JAR, so it would run directly on the cluster, it worked. But I want to be able to run it from a remote machine.
Would appreciate some help here,
Thanks
To run from a remote machine, you will need to open the appropriate ports in the Security Group assigned to your EMR master node. You will need to add at least 7077.
If by "remote" you mean one that isn't in your AWS environment, you will also need to setup a way to route traffic to it from the outside.

Kubernetes service in java does not resolve restarted service/replicationcontroller

I have a kubernetes cluster where one service (java application) connects to another service to write data (elasticsearch).
When elasticsearch (service & replicationcontroller) is restarted/redeployed, the java-application loses its connection, which can only be recovered by restarting the java-application (rc). This is not the desired behaviour and should be solved.
Using curl from the kubernetes pod of the application to query elasticsearch works fine after the restart, so it is probably something the Java side is doing.
It does work when only the replicationcontroller for elasticsearch is touched, leaving the service as it is. But why does curl work in that case? Either way, this should not be the required workaround.
Using the same configuration in a local Docker setup without Kubernetes does not lead to problems either.
Promising solutions that did not work:
Setting networkaddress.cache.ttl or networkaddress.cache.negative.ttl to zero (or other small positive values)
Hacking /etc/nsswitch.conf as described in https://stackoverflow.com/a/32550032/363281
I'm using kubernetes 1.1.3, OpenJDK 8u66, service Dockerfile is derived from java:8
Try java.security.Security.setProperty("networkaddress.cache.ttl", "60");
This means sixty seconds; adapt it to your needs.
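For instance (a minimal sketch; the class name and surrounding startup code are assumptions), the property must be set before the JVM performs and caches its first lookup:

import java.security.Security;

public class App {
    public static void main(String[] args) {
        // Must run before any DNS lookup is made and cached by the JVM.
        Security.setProperty("networkaddress.cache.ttl", "60");

        // ... only afterwards create the Elasticsearch client / start the rest of the application.
    }
}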
One solution is not to restart your Service: a Service resolves the Pods by IPs and watches the Pods by selectors, so you don't need to restart the Service when you restart your Pods.
Now likely what is happening is that your application resolves the Service at start-up and then caches the IP. When you restart the Service, it likely gets a new IP, which messes up your application's behavior. You need to check how you can reset this cache or initiate some sort of restart of that app when the pods/services are changed.
If you don't restart the Service, the IP won't change, but it will still proxy to the Pods that are restarted.

Running a Junit test remotely, as if it were running locally, using Eclipse

I've searched around and read relevant questions on this site and others, but have failed to find a solution. It strikes me as odd that one does not exist, so let me detail my question here:
I use Junit4 + Eclipse regularly to test my code. In some cases, certain tests can take a lot of CPU and/or memory, rendering my workstation unusable for the duration of the test. This is a pain I'm trying to solve.
I'm looking to get the exact same behavior but through a remote server. I want:
To still be able to set breakpoints and debug my app.
To see how the tests progress using the Junit view in Eclipse.
Click a button to have the tests started (a build process and copying of files are allowed, but only if efficient).
In my mind I envision something that rsyncs the files to the remote server, starts the Java process there with the exact same arguments as it would on my local machine, makes the debug port available (not just on localhost) and has Eclipse hook up to it so that both the debug and JUnit views work.
How can I get this done?
Several leading questions that may help us find a solution:
How does Eclipse communicate with the java process when run locally (for both debug purposes AND the Junit view)?
How can I involve myself in the process of spawning the java process for the JUnit testing so I can copy the required files over to a remote server?
How can I make the process spawn remotely instead of locally?
How can I have Eclipse hook up to the remote host instead of the localhost?
The easiest approach would be to invoke the command-line JUnit runner on the remote machine using the following command:
java -Xdebug -Xrunjdwp:transport=dt_socket,address=8998,server=y,suspend=y -cp ... org.junit.runner.JUnitCore <test class name>
It will then wait until you attach a remote debugger from Eclipse on port 8998.
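For completeness, <test class name> is just any ordinary JUnit 4 test class on the remote classpath, e.g. a hypothetical one like this:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical stand-in for <test class name>.
public class ExampleTest {

    @Test
    public void addition() {
        assertEquals(4, 2 + 2);
    }
}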
You can also use Eclipse's Target Management tools to transfer files to the remote system and launch remote commands. There are several tutorials on the project page.
You could set up a Jenkins CI server, sync your code via git (or just copy using FTP or something), and execute the test in a Jenkins job triggered by a git hook or through some script. Then remote-debug into the running test process as Eugene Kuleshov suggested. This process could be automated by an Ant script which you invoke from Eclipse. There should be a Mylyn connector (for example) through which you can monitor the running tests. I don't know if it is possible to see the running tests in Eclipse's standard JUnit view without some custom plugin (if any exists).
Try the VS Code Remote - SSH extension for this.

Java Hazelcast problems with multiple clusters

I am running a small system that relies on Hazelcast for clustering, distributed computing and messaging in a Multicast mode (Standard config as available in the download). I have a number of server modules that run as "Core" Hazelcast instances and a Java Swing application that is implemented as a Hazelcast "Native Client". This all works well and I would now like to commission the system in production and would hence need to run two separate clusters (dev + prod) and that is where I run into problems.
According to the documentation, all you need to do is use separate group names + passwords for the two clusters, and I get the impression that the two clusters should sort themselves out automatically!? This appears to work for the server modules, but when I try to connect a "Client" instance to the prod environment, I can see from the logs of one of the server modules in prod that the client appears to connect successfully:
INFO: [prod] received auth from Connection [/192.168.0.2:55863 -> null] live=true,
client=true, type=JAVA_CLIENT, this group name:prod, auth group name:prod,
successfully authenticated
But, the client never shows up as a member of prod. Instead, I find that the client has become a member of the dev environment, even though the authentication took place against prod!
Involuntary mixing of the two clusters is obviously a giant problem for me and a showstopper. Does anyone know if there is anything that I am doing wrong, or if there are any configuration changes that I can make to resolve the problem?
When a client connects to the cluster it never becomes a member of the cluster.
So I suspect that your client did connect to prod, but somewhere in your code you have something like Hazelcast.getMap(), which results in starting a member in that JVM; since the default configuration that this member uses is the same as dev, this new member joins your dev cluster.
So in fact you have one client that is connected to prod, and another member that is connected to the dev cluster.
Try to put something through the client and see in which cluster those entries appear (see the sketch below).
Am I making sense?
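
For example, such a check could look like this (a sketch only, assuming a Hazelcast 3.x-style API; the address, password and map name are made-up placeholders):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class ProdClusterCheck {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();
        // Must match the prod cluster's group name and password.
        config.getGroupConfig().setName("prod").setPassword("prod-pass");
        config.getNetworkConfig().addAddress("192.168.0.10:5701");

        // A pure client: do NOT call Hazelcast.getMap()/newHazelcastInstance() anywhere in
        // this JVM, or an embedded member with the default (dev) configuration is started.
        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
        IMap<String, String> map = client.getMap("cluster-check");
        map.put("hello", "prod");

        // Then inspect the map on the prod and dev members to see where the entry landed.
        client.shutdown();
    }
}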
