How to connect multiple Java applications to the same Ignite cluster?

I have three Java applications that will connect to the same Ignite node (running on a particular VM) to access the same cache store.
Is there a step-by-step procedure for running a node outside a Java application (from the command prompt, maybe) and connecting my Java apps to it?

Your Java applications should join the cluster as client nodes. More information about client/server mode can be found in the documentation. Server node(s) can be started from the command line, which is also described there, along with information about running with a custom configuration. You also need to set up discovery to make the whole thing work, and it has to be configured on every node (including the client nodes). I'd recommend using the static IP finder in the configuration.
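For illustration, here is a minimal sketch of what such a client node could look like in Java; the server address 192.168.0.10 and the cache name "myCache" are placeholders, not anything from your setup:

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IgniteClientApp {
    public static void main(String[] args) {
        // Static IP finder pointing at the VM where the server node runs (address is a placeholder)
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("192.168.0.10:47500..47509"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);          // this application joins as a client node
        cfg.setDiscoverySpi(discoverySpi);

        try (Ignite ignite = Ignition.start(cfg)) {
            // All applications that use the same cache name see the same data
            ignite.getOrCreateCache("myCache").put("key", "value");
        }
    }
}

The server node started from the command line just needs the same discovery settings (static IP finder with the same address list) in its configuration file.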

Related

Weird way servers are getting added in baseline topology

I have two development machines, both running Ignite in server mode on the same network. I started the server on the first machine and then started the second machine; when the second machine starts, it automatically gets added to the first one's topology.
Note:
before starting, I removed the work folder on both machines, and
in the config I never mentioned any IPs of the other machine.
Can anyone tell me what's wrong here? My intention is that each machine should have a separate topology.
As described in the discovery documentation, Apache Ignite uses multicast to find all nodes in the local network and form a cluster. This is the default mode of operation.
Please note that this mode is not really recommended for either development or production deployments; use static discovery instead (see the same documentation).
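To make that concrete, here is a hedged sketch of how a node can be pinned to its own machine with the static IP finder, so it never discovers servers on other hosts (47500..47509 is the default discovery port range):

import java.util.Collections;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IsolatedServerNode {
    public static void main(String[] args) {
        // List only this machine, so multicast is not used and no other host is discovered
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        Ignition.start(cfg); // forms a single-machine topology
    }
}

The same effect can be achieved in the XML configuration passed to ignite.sh/ignite.bat by configuring TcpDiscoveryVmIpFinder there.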

Ways to connect to a Coherence cluster

I made a simple J2SE app join the cluster while coherence.cmd was running but cache-server.cmd was not, and I also ran the same app with both coherence.cmd and cache-server.cmd running, and it joined the cluster in both cases. So what is the difference?
I want to know the difference between running cache-server.cmd and running coherence.cmd.
I'll give you an overview without going into details. In the default configuration Oracle ships when you install Coherence, cache-server.cmd is the script that starts a Coherence storage node. When we want to run Coherence, we start several of these "cache servers" (Coherence storage nodes), which by default form a Coherence cluster.
The coherence.cmd script also starts a Coherence node, but one that joins the cluster as a client. We can run some basic operations against Coherence with it, but it is not a production tool.
I think your problem comes from the idea of "an app that runs cache-server.cmd or coherence.cmd". That is not how it works. To work with Coherence properly, you have to build an app that uses the Coherence API. For example, in Java the easiest way is to build a Maven app and add the coherence.jar dependency. Then you import the classes:
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
then with one line of code you create the cache "test", or connect to it if it already exists:
NamedCache cache = CacheFactory.getCache("test");
Then you can work with the cache. When the app runs this line of code, it becomes a Coherence node. If you have Coherence installed on your machine with default settings, it will join the cluster (provided you started a cache server).
This is a 1,000-foot view.
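To make that concrete, here is a minimal, self-contained sketch of such an app; the cache name "test" and the put/get calls are just illustrative:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CoherenceClientExample {
    public static void main(String[] args) {
        // Joins the cluster (as configured by your Coherence operational/cache config)
        // and creates the "test" cache if it does not exist yet
        NamedCache cache = CacheFactory.getCache("test");

        cache.put("greeting", "hello");
        System.out.println(cache.get("greeting"));

        // Leave the cluster cleanly when done
        CacheFactory.shutdown();
    }
}

Run one or more cache-server.cmd instances first so there are storage nodes to hold the data; with default settings this app will then join the same cluster.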

How to configure Java client connecting to AWS EMR spark cluster

I'm trying to write a simple Spark application. When I run it locally, it works with the master set to
.master("local[2]")
But after configuring a Spark cluster on AWS (EMR), I can't connect to the master URL:
.master("spark://<master url>:7077")
Is this the way to do it? Am I missing something here?
The cluster is up and running, and when I tried adding my application as a step JAR, so that it runs directly on the cluster, it worked. But I want to be able to run it from a remote machine.
Would appreciate some help here,
thanks.
To run from a remote machine, you will need to open the appropriate ports in the security group assigned to your EMR master node. You will need to add at least 7077.
If by "remote" you mean a machine that isn't in your AWS environment, you will also need to set up a way to route traffic to it from the outside.
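For reference, a minimal sketch of how the master URL is set in code; <master-url> is a placeholder for the EMR master node's address, and this assumes a standalone Spark master is actually listening on 7077 (on EMR, Spark normally runs on YARN, so check how your cluster is set up):

import org.apache.spark.sql.SparkSession;

public class RemoteSparkApp {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("remote-example")
                .master("spark://<master-url>:7077") // placeholder, must be reachable from this machine
                .getOrCreate();

        System.out.println(spark.range(10).count());
        spark.stop();
    }
}

Even with the port open, the driver running on your machine must also be reachable from the cluster (executors connect back to it), so firewalls on both sides matter.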

Get Hadoop Cluster Details

I have created a pseudo cluster on a VM, and I am trying to get information about my cluster, such as the number of nodes, live nodes, dead nodes, etc., using Hadoop's API from a Java program.
Unfortunately, I am not able to use any of the methods of the FSNamesystem class. Essentially, I need to do cluster discovery from a Java client and get
the same information we get over HTTP port 50070.
If the following statement worked, i.e. if I could create the object and it did not return null, then I could get all the details about my cluster:
FSNamesystem f = FSNamesystem.getFSNamesystem();
Also, how can I inject the dependency for NameNode?
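FSNamesystem is an internal NameNode class and isn't meant to be instantiated from a client. One possible alternative is to go through DistributedFileSystem; a hedged sketch, assuming the NameNode of the pseudo cluster listens on hdfs://localhost:9000 and a Hadoop 2.x client (class locations differ between versions):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

public class ClusterInfo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode URI of the pseudo cluster (placeholder, adjust to your fs.defaultFS)
        FileSystem fs = FileSystem.get(new URI("hdfs://localhost:9000"), conf);
        DistributedFileSystem dfs = (DistributedFileSystem) fs;

        DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
        DatanodeInfo[] dead = dfs.getDataNodeStats(DatanodeReportType.DEAD);

        System.out.println("Live nodes: " + live.length);
        System.out.println("Dead nodes: " + dead.length);
        System.out.println("Capacity:   " + dfs.getStatus().getCapacity());
    }
}

This reports roughly the same numbers that the NameNode web UI on port 50070 shows.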

Can a local Java program start jobs remotely via SSH using DRMAA?

How does DRMAA work? Can a local Java program using DRMAA start jobs on a remote cluster over SSH (so that nothing needs to be installed on the server side)?
Background:
I'm developing a general (or as general as possible) HPC client in Java/Eclipse RCP, and
want to use DRMAA in order to support any resource manager as a backend.
I already have SSH connection functionality through the Remote System Explorer (RSE) Eclipse plugin.
Most DRMAA implementations use native API calls (the same ones used inside the qsub/bsub/sbatch... commands). You can think of DRMAA as the "ODBC of batch systems".
Most DRMAA implementations require you to run them from the submit host of your local cluster; there is no SSH involved. What you can try is to build a portable, DRMAA-based "drmaa-run" command (example: http://apps.man.poznan.pl/trac/drmaa-misc/browser/drmaa_utils/trunk/drmaa_utils/drmaa_run.c) and run it via SSH.
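For context, a minimal sketch of what DRMAA submission looks like in the Java binding (org.ggf.drmaa) when executed on the submit host itself; the command and its argument are placeholders:

import java.util.Collections;
import org.ggf.drmaa.JobTemplate;
import org.ggf.drmaa.Session;
import org.ggf.drmaa.SessionFactory;

public class DrmaaSubmitExample {
    public static void main(String[] args) throws Exception {
        // The DRMAA library talks to the local resource manager through its
        // native API, which is why this has to run on the submit host
        Session session = SessionFactory.getFactory().getSession();
        session.init("");

        JobTemplate jt = session.createJobTemplate();
        jt.setRemoteCommand("/bin/sleep");                // placeholder command
        jt.setArgs(Collections.singletonList("60"));      // placeholder argument

        String jobId = session.runJob(jt);
        System.out.println("Submitted job " + jobId);

        session.deleteJobTemplate(jt);
        session.exit();
    }
}

Wrapping something like this in a small command-line tool and invoking it over SSH, as suggested above, is how a remote client can drive the submission.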
