ways to connect to coherence cluster - java

I made a simple J2SE app that joins the cluster when I run coherence.cmd without running cache-server.cmd, and the same app also joins the cluster when I run both coherence.cmd and cache-server.cmd. So what is the difference?
I want to know the difference between running cache-server.cmd and running coherence.cmd.

I'll give you an overview without going into details. In the default configuration Oracle provides when you install Coherence, cache-server.cmd is the default script that starts a Coherence storage node. When we want to run Coherence we start several "cache servers" (Coherence storage nodes), which by default form a Coherence cluster.
The coherence.cmd default script also starts a Coherence node, but it connects to the cluster as a client. We can run some basic operations on Coherence with it, but it is not a production tool.
I think your problem comes from thinking of it as "an app that runs cache-server.cmd or coherence.cmd". That is not the way it works. To work with Coherence properly, you have to build an app that uses the Coherence API. In Java, the easiest way is to build a Maven app and add the coherence.jar dependency. Then you have to import the classes:
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
Then one line of code creates a cache named "test", or connects to it if it already exists:
NamedCache cache = CacheFactory.getCache("test");
Then you can work with the cache. When the app runs this line of code, it becomes a Coherence node. When you have Coherence installed on your machine with default settings, it will join the cluster (if you started a cache server).
This is a 1000-foot view.
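Putting those pieces together, a minimal client sketch might look like this (assuming coherence.jar is on the classpath and default cluster settings; the cache name "test" is just an example):

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CoherenceClient {
    public static void main(String[] args) {
        // Joining the cluster happens implicitly on first cache access,
        // using the default operational configuration.
        NamedCache cache = CacheFactory.getCache("test");
        cache.put("greeting", "hello");
        System.out.println(cache.get("greeting"));
        CacheFactory.shutdown(); // leave the cluster cleanly
    }
}
```

Running this while a cache server is up should show this process as an extra node in the cluster.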

Related

Cannot connect to H2 database

I have been struggling to connect to an H2 database from a Spring Boot app using the following connection string, as mentioned in the Database URL Overview section:
spring.datasource.url=jdbc:h2:tcp://localhost:9092/~/test-db
I also tried many different combinations for the tcp (server mode) connection, but I still get errors such as "Connection is broken: "java.net.SocketTimeoutException: connect timed out: localhost:9092"" when running the Spring Boot app.
@SpringBootApplication
public class Application {
    // code omitted

    @Bean(initMethod = "start", destroyMethod = "stop")
    public Server h2Server() throws SQLException {
        return Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9092");
    }
}
So, how can I fix this problem and connect to H2 database via server mode?
You seem to be a little confused.
H2 can run in two different 'modes'.
Local mode
Local mode means H2 'just works', and you access this mode with the file: prefix in the JDBC connection URL. The JDBC driver itself does all the database work, as in, it opens files, writes data, it does it all. There is no 'database server' at all. Or, if you prefer, the JDBC driver is its own server, though it opens no ports.
Server mode
In this case you need a (separate) JVM and separately fire up H2 in server mode and then you can use the same library (still h2.jar) to serve as a JDBC server. In this mode, the two things are completely separate - if you want, you can run h2.jar on one machine to be the server, and run the same h2.jar on a completely different machine just to connect to the other H2 machine. The database server machine does the bulk of the work, with the 'client' H2 just being the JDBC driver. H2 is no different than e.g. mysql or postgres in such a mode: You have one 'app' / JVM process that runs as a database engine, allowing multiple different processes, even coming from completely different machines halfway around the world if you want to, to connect to it.
You access this mode with the tcp: thing in the JDBC string.
If you really want, you can run this mode and still have it all on a single machine, even a single JVM, but why would you want to? Whatever made you think this will 'solve lock errors' wouldn't be fixed by running all this stuff on a single JVM. There are only two options:
You're mis-analysing the problem.
You really do have multiple separate JVM processes (either one machine with 2 java processes in the activity monitor / ps auxww output / task manager, or 2+ machines) all trying to connect to a single database in which case you certainly do need this, yes.
How to do server mode right
You most likely want a separate JVM that starts first and hosts the H2 database; it needs to be running before the 'client' JVMs (the ones that will connect to it) start. Catalina is not the 'server' you are looking for; it is org.h2.tools.Server, and if it says 'not found' you need to fix your Maven imports. This needs to be a separate JVM (you COULD write code that goes: "Oh, hey, there isn't a separate JVM running the H2 server, so I'll start it in-process right here, right now", but that means the process needs to stay in the air forever, which is just weird. Hence, you want a separate JVM process for this).
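A minimal sketch of such a standalone launcher (assuming h2.jar is on the classpath; the port matches the URL in the question):

```java
import java.sql.SQLException;

import org.h2.tools.Server;

public class H2ServerMain {
    public static void main(String[] args) throws SQLException {
        // Start a TCP server that other JVMs (and, via -tcpAllowOthers,
        // other machines) can connect to on port 9092.
        Server server = Server.createTcpServer(
                "-tcp", "-tcpAllowOthers", "-tcpPort", "9092").start();
        System.out.println("H2 server running at " + server.getURL());
        // Clients then connect with: jdbc:h2:tcp://localhost:9092/~/test-db
    }
}
```

Run this in its own JVM first, then start the Spring Boot app with the tcp: URL and no embedded Server bean.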
You haven't explained what you're doing. But, let's say what you're doing is this:
I have a CI script that fires up multiple separate JVMs, some in parallel even, which runs a bunch of integration and unit tests in parallel.
Even though they run in parallel (or perhaps intentionally so), you want all of them to run against a single DB. This is usually a really bad idea (you want tests to be isolated, so that running any one of them on its own behaves identically. You don't want a test to fail in a way that can only be reproduced if you run the same batch of 18 separate tests using the same run code, where one unrelated test fails in a specific fashion, whilst it's Tuesday, a full moon, Beethoven is playing in your music player, and it's warmer than 24° in the room, affecting the CPU's throttling, of course. Which is exactly what tends to happen if you try to re-use resources across multiple tests!) – still, let's say you somehow really want this.
... then edit the CI script to first launch a JVM that hosts an H2 server; once that's up and running, presumably run a process that fills this database with test data; once that's done, run all tests in parallel; and once those are all done, shut down the JVM and delete the DB file.
Exactly how to do the third part is a separate question - if you need help with that, ask a new question and name the relevant tool(s) you are using to run this stuff, paste the config files, etc.
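The sequence described above can be sketched as a CI shell script (the file names, seeding class, and test-runner invocation are all hypothetical placeholders; assumes h2.jar and the app's jar are available):

```shell
#!/bin/sh
# 1. Start the H2 server JVM in the background.
java -cp h2.jar org.h2.tools.Server -tcp -tcpPort 9092 &
H2_PID=$!

# 2. Seed the database with test data (hypothetical seeding class).
java -cp h2.jar:app.jar com.example.SeedTestData

# 3. Run the tests in parallel (hypothetical runner script).
./run-tests-parallel.sh

# 4. Shut down the server JVM and delete the DB file.
kill "$H2_PID"
rm -f ~/test-db.mv.db
```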

Weird way servers are getting added in baseline topology

I have two development machines, both running Ignite in server mode on the same network. I started the server on the first machine and then started the other machine. When the other machine starts, it automatically gets added to the first machine's topology.
Note:
When starting, I removed the work folder on both machines.
In the config, I never mentioned any IPs of other machines.
Can anyone tell me what's wrong here? My intention is that each machine should have a separate topology.
As described in the discovery documentation, Apache Ignite employs multicast to find all nodes on a local network, forming a cluster. This is the default mode of operation.
Please note that we don't really recommend using this mode for either development or production deployments; use static discovery instead (see the same documentation).
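For example, a node can be restricted to a fixed list of addresses with the static IP finder (a hedged sketch, assuming ignite-core is on the classpath; the address is a placeholder):

```java
import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticDiscoveryNode {
    public static void main(String[] args) {
        // Only the listed addresses are probed; no multicast, so other
        // machines on the same network will not be discovered.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(spi);

        Ignite ignite = Ignition.start(cfg);
        System.out.println("Cluster size: " + ignite.cluster().nodes().size());
    }
}
```

With each machine listing only its own addresses, the two machines form separate topologies.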

How to connect multiple Java applications to same Ignite cluster?

I have three Java applications that will connect to the same Ignite node (running on a particular VM) to access the same cache store.
Is there a step-by-step procedure on how to run a node outside Java application (from command prompt, may be) and connect my Java apps to it?
Your Java applications should act as client nodes in your cluster. More information about client/server mode can be found in the documentation. Server node(s) can be started from the command line, as described here. Information about running with a custom configuration can be found there as well. You need to set up discovery in order to make the whole thing work, and it should be done on every node (including client nodes). I'd recommend using the static IP finder in the configuration.
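A minimal client-node sketch (assuming ignite-core is on the classpath and a server node is already running and discoverable; the cache name is a placeholder):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientApp {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // join as a client, not a data-holding server

        try (Ignite ignite = Ignition.start(cfg)) {
            // All three applications can open the same named cache and
            // see each other's data, since it lives on the server node.
            IgniteCache<String, String> cache =
                    ignite.getOrCreateCache("sharedCache");
            cache.put("key", "value");
            System.out.println(cache.get("key"));
        }
    }
}
```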

Using Apache Ignite some expired data remains in memory with TTL enabled and after thread execution

Issue
Create an Ignite client (with client mode false) and put some data (10k entries/values) into it with a very small expiration time (~20s) and TTL enabled.
Each time the cleanup thread runs, it should remove all the entries that have expired, but after a few attempts the thread no longer removes all the expired entries: some of them stay in memory and are not removed by the thread's execution.
That means we have expired data left in memory, which is something we want to avoid.
Can you please confirm whether this is a real issue or just a misuse/misconfiguration of my setup?
Thanks for your feedback.
Test
I've tried three different setups: full local mode (embedded server) on macOS, a remote server using one node in Docker, and a remote cluster using 3 nodes in Kubernetes.
To reproduce
Git repo: https://github.com/panes/ignite-sample
Run MyIgniteLoadRunnerTest.run() to reproduce the issue described on top.
(Global setup: writing 10k entries of 64 octets each with a TTL of 10s)
It seems to be a known issue. Here's the link to track it: https://issues.apache.org/jira/browse/IGNITE-11438. It is to be included in the Ignite 2.8 release. As far as I know, it has already been released as part of GridGain Community Edition.

Websphere application server 5.1 DataSource no longer valid when DB is rebooted

First of all, we are running a Java web application on WAS 5.1. Behind that, we use an Oracle database. The problem we're facing is really simple, but after a couple of hours of Google searching, I decided to ask you.
We have an application running on WAS. When we start the server, WAS sets up its DataSource so that it points to the database. Everything works fine, except when the DBAs have to reboot the database server. When they do, the data source is no longer valid and we have to manually restart all the servers, which we are currently trying to correct, if possible. We need to find a way to do it because we have 3 pre-production environments for our application, and there are two servers associated with each: one for the application and the other for a report-generator web service. So, when the DBAs want to reboot the server (and they usually don't tell us!) we have to reboot six servers. I was wondering if, in Java, there was a way to reset the data source so that we don't need to restart the servers.
For your information, WebSphere is v5.1 and Oracle is 9g with Java 1.4.2.17.
We also use RAD:
Version: 6.0.1
Build id: 20050725_1800
You should configure your application server to always test the connection before leasing it out to a client. I'm not that familiar with WebSphere, but in WebLogic you can set a JDBC SQL statement such as select 1 from dual, and the container removes stale connections from the connection pool.
Here is a link on how to do it in Websphere
http://www-01.ibm.com/support/docview.wss?uid=swg21439688
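The idea behind such a test query can be sketched in plain JDBC (a hypothetical helper, written without try-with-resources since the question targets Java 1.4; in practice the container runs this check for you once configured):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public final class ConnectionCheck {
    // Returns true if the connection still responds to a cheap query.
    // "SELECT 1 FROM DUAL" is Oracle's conventional dummy query.
    static boolean isAlive(Connection conn) {
        Statement st = null;
        try {
            st = conn.createStatement();
            st.execute("SELECT 1 FROM DUAL");
            return true;
        } catch (SQLException e) {
            return false; // stale: the pool should discard this connection
        } finally {
            if (st != null) {
                try { st.close(); } catch (SQLException ignore) { }
            }
        }
    }
}
```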
Based on what I read in your note, you should be receiving a StaleConnectionException, as WAS has stale handles in its pool after the DB has been restarted.
The data source can be configured to purge the entire pool once a stale connection is detected. The default policy is to purge only the individual connection.
Adopting this would prevent you from having to restart your WAS servers.
There are a number of resources in this space
http://www-01.ibm.com/support/docview.wss?uid=swg21063645
HTH
Manglu
