Cassandra in Java - Same cluster name every time - java

I was trying to write a basic Java program in Eclipse that uses the Cassandra Java driver to connect to a Cassandra node.
I found this repository https://github.com/datastax/java-driver.
I tried to run it using the following code:
package com.example.cassandra;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Host;

public class SampleConnection {
    private Cluster cluster;
    private Session session;

    public void connect(String node) {
        cluster = Cluster.builder().addContactPoint(node).build();
        session = cluster.connect("mykeyspace");
        System.out.println(cluster.getClusterName());
    }

    public void close() {
        cluster.shutdown();
    }

    public static void main(String args[]) {
        SampleConnection client = new SampleConnection();
        client.connect("127.0.0.1");
        client.close();
    }
}
1) I encountered this output in Eclipse:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/127.0.0.1])
Why is it refusing even to connect, let alone create a table? (Port 9042, configured in cassandra.yaml, is open and the Cassandra service is running.)
2) Why, in my code, does cluster.getClusterName() give "cluster1" as the cluster name every time, regardless of the cluster name in my cassandra.yaml file?
However, when I tried to use the below code, it worked:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DefaultRetryPolicy;

public class CassConnector {
    private static Cluster cluster;
    private static Session session;

    public static Cluster connect(String node) {
        return Cluster.builder().addContactPoint(node)
                .withRetryPolicy(DefaultRetryPolicy.INSTANCE).build();
    }

    public static void main(String[] arg) {
        cluster = connect("localhost");
        session = cluster.connect("mykeyspace");
        session.execute("CREATE KEYSPACE myks WITH REPLICATION = "
                + "{ 'class' : 'SimpleStrategy', 'replication_factor' : 1};");
        session.execute("USE mykeyspace");
        String query = "CREATE TABLE emp(emp_id int PRIMARY KEY, "
                + "emp_name text, "
                + "emp_city text );";
        session.execute(query);
        System.out.println("Table created!");
        session.close();
        cluster.close();
    }
}
What's the logical difference between these two approaches?

I assume you're referring to Cluster.getClusterName(). From the javadoc:
Note that this is not the Cassandra cluster name, but rather a name assigned to this Cluster object. Currently, that name is only used for one purpose: to distinguish exposed JMX metrics when multiple Cluster instances live in the same JVM (which should be rare in the first place). That name can be set at Cluster building time (through Cluster.Builder.withClusterName(java.lang.String) for instance) but will default to a name like cluster1 where each Cluster instance in the same JVM will have a different number.
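To make the distinction concrete, here is a minimal sketch of my own (assuming driver 2.x or later, where withClusterName() exists and close() replaces shutdown(); getMetadata().getClusterName() is the call that returns the server-side name from cassandra.yaml):

import com.datastax.driver.core.Cluster;

public class ClusterNameDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withClusterName("my-client-name") // names this Cluster *object* (used for JMX only)
                .build();
        try {
            // Prints "my-client-name"; without withClusterName() it would be "cluster1", "cluster2", ...
            System.out.println(cluster.getClusterName());
            // Prints the actual cluster_name configured in cassandra.yaml
            System.out.println(cluster.getMetadata().getClusterName());
        } finally {
            cluster.close();
        }
    }
}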

Related

How to read a file only once when an app is deployed on two nodes

I'm going to read files from an SFTP location line by line:
@Override
public void configure() {
    from(sftpLocationUrl)
        .routeId("route-name")
        .split(body().tokenize("\n"))
        .streaming()
        .bean(service, "build")
        .to(String.format("activemq:%s", queueName));
}
But this application will be deployed on two nodes, and I think that in this case I can get unstable and unpredictable behavior, because the same lines of the file can be read twice.
Is there a way to avoid such duplicates in this case?
Camel has some (experimental) clustering capabilities - see here.
In your particular case, you could model a route which takes the leadership when it starts polling the directory, thereby preventing the other nodes from picking up the (same or other) files.
The solution is active/passive mode: "In active/passive mode, you have a single master instance polling for files, while all the other instances (slaves) are passive. For this strategy to work, some kind of locking mechanism must be in use to ensure that only the node holding the lock is the master and all other nodes are on standby."
This can be implemented with Hazelcast, Consul, or ZooKeeper, as the Consul-based example below shows.
public class FileConsumerRoute extends RouteBuilder {
    private int delay;
    private String name;

    public FileConsumerRoute(String name, int delay) {
        this.name = name;
        this.delay = delay;
    }

    @Override
    public void configure() throws Exception {
        // read files from the shared directory
        from("file:target/inbox" +
             "?delete=true")
            // setup route policy to be used
            .routePolicyRef("myPolicy")
            .log(name + " - Received file: ${file:name}")
            .delay(delay)
            .log(name + " - Done file: ${file:name}")
            .to("file:target/outbox");
    }
}
Server Bar
public class ServerBar {
    private Main main;

    public static void main(String[] args) throws Exception {
        ServerBar bar = new ServerBar();
        bar.boot();
    }

    public void boot() throws Exception {
        // set up the Consul route policy
        ConsulRoutePolicy routePolicy = new ConsulRoutePolicy();
        // the service name must be the same in the Foo and Bar servers
        routePolicy.setServiceName("myLock");
        routePolicy.setTtl(5);
        main = new Main();
        // bind the route policy to the name myPolicy, which the route refers to
        main.bind("myPolicy", routePolicy);
        // add the route, name it Bar, and use a little delay when processing the files
        main.addRouteBuilder(new FileConsumerRoute("Bar", 100));
        main.run();
    }
}
Server Foo
public class ServerFoo {
    private Main main;

    public static void main(String[] args) throws Exception {
        ServerFoo foo = new ServerFoo();
        foo.boot();
    }

    public void boot() throws Exception {
        // set up the Consul route policy
        ConsulRoutePolicy routePolicy = new ConsulRoutePolicy();
        // the service name must be the same in the Foo and Bar servers
        routePolicy.setServiceName("myLock");
        routePolicy.setTtl(5);
        main = new Main();
        // bind the route policy to the name myPolicy, which the route refers to
        main.bind("myPolicy", routePolicy);
        // add the route, name it Foo, and use a little delay when processing the files
        main.addRouteBuilder(new FileConsumerRoute("Foo", 100));
        main.run();
    }
}
Source: Camel in Action, 2nd Edition

Can't connect to a testcontainer Postgres instance

I've created a Postgres instance using Testcontainers. The container starts, but I cannot access it.
I have tried connecting to the containerized DB using DBeaver.
In the Eclipse console everything seems fine:
01:29:34.662 [main] DEBUG com.github.dockerjava.core.command.AbstrDockerCmd - Cmd: com.github.dockerjava.core.command.CreateContainerCmdImpl#73386d72[name=,hostName=,domainName=,user=,attachStdin=,attachStdout=,attachStderr=,portSpecs=,tty=,stdinOpen=,stdInOnce=,env={POSTGRES_USER=test,POSTGRES_PASSWORD=test,POSTGRES_DB=ASIGDB_TEST}
Here is my code:
public class CustomPostgresContainer extends PostgreSQLContainer<CustomPostgresContainer> {
    private static final String IMAGE_VERSION = "postgres:9.6";
    private static CustomPostgresContainer customPostgresContainer;
    private static final int EXPOSED_PORT = 5555;
    private static final String DB_NAME = "ASIGDB_TEST";
    private static final String DB_USER = "test";
    private static final String DB_PASSWORD = "test";

    public CustomPostgresContainer() {
        super(IMAGE_VERSION);
    }

    public static CustomPostgresContainer getCustomPostgresContainerInstance() {
        if (customPostgresContainer == null) {
            return extracted().withExposedPorts(EXPOSED_PORT)
                    .withDatabaseName(DB_NAME)
                    .withUsername(DB_USER)
                    .withPassword(DB_PASSWORD);
        }
        return customPostgresContainer;
    }

    private static CustomPostgresContainer extracted() {
        return new CustomPostgresContainer();
    }

    @Override
    public void start() {
        super.start();
    }

    @Override
    public void stop() {
        // do nothing, JVM handles shut down
    }
}
I get:
Connection to localhost:5555 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Does anyone know what is going on?
According to this link, the port number passed to withExposedPorts() is from the perspective of the container.
From the host's perspective, Testcontainers actually exposes it on a random free port. This is by design, to avoid port collisions that may arise with locally running software or between parallel test runs.
Because there is this layer of indirection, it is necessary to ask Testcontainers for the actual mapped port at runtime. This can be done using the getMappedPort method, which takes the original (container) port as an argument:
Integer firstMappedPort = container.getMappedPort(yourExposedPort);
Then point DBeaver at that mapped port.
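As a minimal sketch of how this usually looks (my own example, not the asker's code; getMappedPort(), getContainerIpAddress(), getJdbcUrl(), and the POSTGRESQL_PORT constant are standard Testcontainers API):

import org.testcontainers.containers.PostgreSQLContainer;

public class MappedPortDemo {
    public static void main(String[] args) {
        PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:9.6")
                .withDatabaseName("ASIGDB_TEST")
                .withUsername("test")
                .withPassword("test");
        postgres.start();
        try {
            // Port 5432 inside the container is mapped to a random free port on the host:
            Integer hostPort = postgres.getMappedPort(PostgreSQLContainer.POSTGRESQL_PORT);
            System.out.println("Connect DBeaver to " + postgres.getContainerIpAddress() + ":" + hostPort);
            // Testcontainers can also build the complete JDBC URL for you:
            System.out.println(postgres.getJdbcUrl());
        } finally {
            postgres.stop();
        }
    }
}

Note that the container only lives as long as the JVM that started it, so DBeaver can only connect while the test (or a breakpoint) keeps it running.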

How can I view the visual graph representation of the nodes created from a Neo4J Java Application?

I'm working with Neo4j and Java to create a prototype for an application that integrates a graph database holding patient information (with fake, made-up data).
I've created a simple two-class program and created nodes (I haven't assigned relationships yet). However, I want to be able to view the nodes I've created, to make sure that my application is working properly and to see the results in the Neo4j Browser / Community Server.
How can I get the nodes to appear visually? I know I could test that they're being created by querying them, but I also want to know how to display them visually.
What I've tried to do is go into the neo4j.conf file and change the active database from the default "graph.db" to "Users/andrew/eclipse-workspace/patients-database/target/patient-db", since in the Java class I've created, I use this line of code to set my database:
private static final File Patient_DB = new File("target/patient-db");
However, whenever I open the Neo4j Browser at localhost:7474 after running my code, there are no nodes visible.
Below I'll paste the code for my PatientGraph class (the other class is just a Patient class that creates the patients and their attributes).
package com.andrewhe.neo4j.Patients_Database;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Path;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.traversal.Evaluators;
import org.neo4j.graphdb.traversal.TraversalDescription;
import org.neo4j.graphdb.traversal.Traverser;
import org.neo4j.io.fs.FileUtils;

public class PatientGraph {
    private static final File Patient_DB = new File("target/patient-db");
    private ArrayList<Patient> patients = new ArrayList<Patient>();
    private long patientZero;
    private GraphDatabaseService graphDb;

    public ArrayList<Patient> getPatients() { return patients; }

    public void manualPatientSetUp() throws IOException {
        Patient homeNode = new Patient("");
        Patient jan = new Patient("Jan");
        patients.add(jan);
        Patient kim = new Patient("Kim");
        patients.add(kim);
        Patient ahmad = new Patient("Ahmad");
        patients.add(ahmad);
        Patient andrew = new Patient("Andrew");
        patients.add(andrew);
    }

    public void createPatientNodes() throws IOException {
        FileUtils.deleteRecursively(Patient_DB);
        graphDb = new GraphDatabaseFactory().newEmbeddedDatabase(Patient_DB);
        registerShutdownHook();
        try (Transaction tx = graphDb.beginTx()) {
            for (Patient patient : patients) {
                Node patientNode = graphDb.createNode();
                System.out.println("Node created");
                setProperties(patientNode, patient);
            }
            tx.success();
        }
    }

    // Method to create and set properties for a node instead of using five setProperty calls each time.
    public void setProperties(Node node, Patient patient) {
        node.setProperty("name", patient.getName());
        node.setProperty("weight", patient.getWeight());
        node.setProperty("pat_id", new String(patient.getPatientID()));
        node.setProperty("age", patient.getAge());
        // Don't worry about diagnoses yet;
        // to get it to work, just turn the diagnoses ArrayList into a String separated by commas.
    }

    public void setUp() throws IOException {
        // reads in patients using a file
    }

    public void shutdown() {
        graphDb.shutdown();
    }

    private void registerShutdownHook() {
        // Registers a shutdown hook for the Neo4j instance so that it
        // shuts down nicely when the VM exits (even if you "Ctrl-C" the
        // running example before it's completed)
        Runtime.getRuntime().addShutdownHook(new Thread(() -> graphDb.shutdown()));
    }

    public static void main(String[] args) throws IOException {
        PatientGraph pg = new PatientGraph();
        pg.manualPatientSetUp();
        pg.createPatientNodes();
        for (int i = 0; i < pg.getPatients().size(); i++) {
            pg.getPatients().get(i).printAllData();
        }
        pg.shutdown();
    }
}
You did not provide sufficient information about how you are querying the nodes, and you did not briefly explain what your classes do before pasting the entire class contents. What is the relationship between these classes? Expecting someone to decode this for you from the code is asking too much.
You could use the Neo4j Browser (comes built in) or Neo4j Bloom (a commercial tool) to visualize the graph nodes and their interconnections (relationships).
To query a Neo4j database you can use Cypher, a graph query language that represents patterns as ASCII art.
A detailed hands-on procedure for querying and visualizing graph nodes is given in the article below.
https://medium.com/neo4j/hands-on-graph-data-visualization-bd1f055a492d
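If you first want to confirm from Java that the nodes were actually written, here is a minimal sketch (my own, assuming the same embedded Neo4j 3.x API used in the question; GraphDatabaseService.execute() runs Cypher directly):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Result;
import org.neo4j.graphdb.Transaction;

public class NodeCountCheck {
    // graphDb is the embedded GraphDatabaseService from the question's PatientGraph class
    public static void printNodeCount(GraphDatabaseService graphDb) {
        try (Transaction tx = graphDb.beginTx()) {
            Result result = graphDb.execute("MATCH (n) RETURN count(n) AS nodes");
            System.out.println(result.resultAsString());
            tx.success();
        }
    }
}

In the Neo4j Browser itself, MATCH (n) RETURN n LIMIT 25 will render the nodes visually. Keep in mind that an embedded store and the server cannot have the same database directory open at the same time, and the path configured in neo4j.conf must point at the exact store directory your application writes to.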

BinaryObjectException: Cannot find schema for object with compact footer

This is the scenario: I deploy my web application to two Tomcat servers, and I use Apache Ignite to cluster the web sessions. The load balancer is configured in round-robin fashion.
The software I use are:
JDK 1.8.0_66
Apache Tomcat 7.0.68
Apache Ignite 1.6.0
Crossroads load balancer version 2.65
Below is the data I put into the session:
import java.io.Serializable;

public class SessionData implements Serializable {
    private static final long serialVersionUID = 1L;
    private int counter;

    public int getCounter() {
        return counter;
    }

    public void setCounter(int counter) {
        this.counter = counter;
    }

    public SessionData() {
    }
}
And I can verify that the two applications do share the same data, and everything works perfectly.
Then I update the session data class to:
public class SessionData implements Serializable {
    private static final long serialVersionUID = 1L;
    private int counter;
    private String ip;

    public int getCounter() {
        return counter;
    }

    public void setCounter(int counter) {
        this.counter = counter;
    }

    public String getIp() {
        return ip;
    }

    public void setIp(String ip) {
        this.ip = ip;
    }

    public SessionData() {
    }
}
And I deploy the new web application to one of the servers. Now when I refresh the web page, which in turn reads and updates the counter in the session data, I keep getting the following error from both servers, and the page never loads.
ERROR - root - Failed to update web session: null
class org.apache.ignite.binary.BinaryObjectException: Cannot find schema for object with compact footer [typeId=-2056860774, schemaId=1954049593]
at org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:1721)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:278)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:177)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:156)
at org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:298)
at org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal(BinaryMarshaller.java:109)
at org.apache.ignite.cache.websession.WebSessionV2.unmarshal(WebSessionV2.java:336)
at org.apache.ignite.cache.websession.WebSessionV2.getAttribute(WebSessionV2.java:200)
I believe this is a common scenario. Imagine there are dozens of nodes in the cluster, and we need to redeploy an updated version of the web application to all of the nodes one after another. During the redeployment, this issue will surface, and users will suffer from it.
I wonder if this is a real problem with Apache Ignite, or due to my misconfiguration/misunderstanding. And if it is a problem, is there any workaround, or do I have to shut down all the servers in the worst case? And if we use a persistent store, do we need to purge all the data in the persistent store?
I'm not sure about the reasons, but this looks like incorrect behavior. Created a ticket: https://issues.apache.org/jira/browse/IGNITE-3194
As a workaround you can try to disable compact footers. To do this, add the following to your Ignite configuration:
<property name="binaryConfiguration">
    <bean class="org.apache.ignite.configuration.BinaryConfiguration">
        <property name="compactFooter" value="false"/>
    </bean>
</property>
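If you configure Ignite programmatically instead of through Spring XML, the equivalent would be roughly this (a sketch against the Ignite 1.x API; BinaryConfiguration.setCompactFooter() is the programmatic counterpart of the XML property above):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NoCompactFooterNode {
    public static void main(String[] args) {
        BinaryConfiguration binaryCfg = new BinaryConfiguration();
        binaryCfg.setCompactFooter(false); // same effect as the XML property above

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setBinaryConfiguration(binaryCfg);

        // The compactFooter setting must be the same on every node in the cluster.
        Ignite ignite = Ignition.start(cfg);
    }
}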

Do I need all classes on the client, server and registry for RMI to work?

I'm taking my first steps with RMI, and I have a simple question.
I have a .jar file which has the implementation of several methods from a library.
I want to call these methods in the .jar file using RMI.
What I'm trying to do is create a kind of wrapper for this.
So, I'm working on something like this:
Interface class: This interface has the methods to be implemented by the remote object.
Implementation class: This class has the implementation of the interface methods; each implementation calls the corresponding method in the .jar file. E.g., the .jar file has a method called getDetails() that returns a "ResponseDetail" object. ResponseDetail is a response class I have in the .jar.
Server class: it binds the methods to the rmiregistry
Client class: it consumes the methods implemented in the implementation class.
So far so good? :)
Now, I have a lib folder where resides the .jar file.
On the server machine I have deployed the Interface, Implementation, and Server classes. I've generated the stub, and I ran the rmiregistry successfully, but with these details:
To start the rmiregistry, I had to set the classpath on the command line to reference the .jar files, otherwise I get a java.lang.NoClassDefFoundError. I did it with this .sh file:
THE_CLASSPATH=
for i in `ls ./lib/*.jar`
do
    THE_CLASSPATH=${THE_CLASSPATH}:${i}
done
rmiregistry -J-classpath -J".:${THE_CLASSPATH}"
To start the server, I had to set the classpath as well to reference the .jar files, otherwise I get a java.lang.NoClassDefFoundError. I used something like this:
THE_CLASSPATH=
for i in `ls ./lib/*.jar`
do
    THE_CLASSPATH=${THE_CLASSPATH}:${i}
done
java -classpath ".:${THE_CLASSPATH}" Server
Client machine:
To run the Client.class file from the client machine, I had to copy the .jar files to it and reference them in the classpath, because otherwise it does not run and I get a java.lang.NoClassDefFoundError. I had to use this on the client machine:
THE_CLASSPATH=
for i in `ls ./lib/*.jar`
do
    THE_CLASSPATH=${THE_CLASSPATH}:${i}
done
java -classpath ".:${THE_CLASSPATH}" HelloClient
Is this ok? I mean, do I have to copy the .jar files to the client machine to execute methods through RMI?
Prior to JDK 5 one had to generate the RMI stubs using rmic (the RMI compiler). This is done automatically from JDK 5 on. Moreover, you can start the RMI registry from within the Java code as well. To start with a simple RMI application, you may want to follow these steps:
Create the interface:
import java.rmi.*;

public interface SomeInterface extends Remote {
    public String someMethod1() throws RemoteException;
    public int someMethod2(float someParameter) throws RemoteException;
    public SomeStruct someStructTest(SomeStruct someStruct) throws RemoteException;
}
Implement the interface:
import java.rmi.*;
import java.rmi.server.*;

public class SomeImpl extends UnicastRemoteObject implements SomeInterface {
    public SomeImpl() throws RemoteException {
        super();
    }

    public String someMethod1() throws RemoteException {
        return "Hello World!";
    }

    public int someMethod2(float f) throws RemoteException {
        return (int) f + 1;
    }

    public SomeStruct someStructTest(SomeStruct someStruct) throws RemoteException {
        int i = someStruct.getInt();
        float f = someStruct.getFloat();
        someStruct.setInt(i + 1);
        someStruct.setFloat(f + 1.0F);
        return someStruct;
    }
}
Implement a non-primitive serializable object that is to be passed between a client and the server:
import java.io.*;

public class SomeStruct implements Serializable {
    private int i = 0;
    private float f = 0.0F;

    public SomeStruct(int i, float f) {
        this.i = i;
        this.f = f;
    }

    public int getInt() {
        return i;
    }

    public float getFloat() {
        return f;
    }

    public void setInt(int i) {
        this.i = i;
    }

    public void setFloat(float f) {
        this.f = f;
    }
}
Implement the server:
import java.rmi.*;
import java.rmi.server.*;
import java.rmi.registry.Registry;
import java.rmi.registry.LocateRegistry;
import java.net.*;
import java.io.*;

public class SomeServer {
    public static void main(String args[]) {
        String portNum = "1234", registryURL;
        try {
            SomeImpl exportedObj = new SomeImpl();
            startRegistry(Integer.parseInt(portNum));
            // register the object under the name "some"
            registryURL = "rmi://localhost:" + portNum + "/some";
            Naming.rebind(registryURL, exportedObj);
            System.out.println("Some Server ready.");
        } catch (Exception re) {
            System.out.println("Exception in SomeServer.main: " + re);
        }
    }

    // This method starts an RMI registry on the local host, if it
    // does not already exist at the specified port number.
    private static void startRegistry(int rmiPortNum) throws RemoteException {
        try {
            Registry registry = LocateRegistry.getRegistry(rmiPortNum);
            registry.list();
            // The above call will throw an exception
            // if the registry does not already exist
        } catch (RemoteException ex) {
            // No valid registry at that port.
            System.out.println("RMI registry is not located at port " + rmiPortNum);
            Registry registry = LocateRegistry.createRegistry(rmiPortNum);
            System.out.println("RMI registry created at port " + rmiPortNum);
        }
    }
}
Implement the client:
import java.io.*;
import java.rmi.*;
import java.rmi.registry.Registry;
import java.rmi.registry.LocateRegistry;

public class SomeClient {
    public static void main(String args[]) {
        try {
            String portNum = "1234";
            String registryURL = "rmi://localhost:" + portNum + "/some";
            SomeInterface h = (SomeInterface) Naming.lookup(registryURL);
            // invoke the remote method(s)
            String message = h.someMethod1();
            System.out.println(message);
            int i = h.someMethod2(12344);
            System.out.println(i);
            SomeStruct someStructOut = new SomeStruct(10, 100.0F);
            SomeStruct someStructIn = new SomeStruct(0, 0.0F);
            someStructIn = h.someStructTest(someStructOut);
            System.out.println(someStructIn.getInt());
            System.out.println(someStructIn.getFloat());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
A larger client-server application should be divided into three modules: client, server, and common (for classes shared between the server and client code, i.e. the remote interface and the non-primitive object in this example). The client application is then built from the client + common modules on the classpath, and the server from the server + common modules on the classpath.
I used this example many years ago to learn the basics of RMI, and it still works. However, it is far from perfect (default Java package used, incorrect exception handling, hostname and port parameters hard-coded rather than configurable, etc.).
Nevertheless, it is good for starters. All the files can be placed in one directory and compiled with a simple javac *.java command. The server application can then be started with java SomeServer and the client by launching java SomeClient.
I hope this helps you understand Java RMI, which is, in fact, far more complicated than just this.
You shouldn't be generating stubs (if you are following a tutorial, it is very old). You can run the client without having the jars locally (using remote classloading), but it's much easier to do it with the jars available locally (I've personally done a fair bit of RMI and never actually deployed a system with remote classloading). Typically, you want two jars: a "client" jar with just the remote interfaces (and any Serializable classes used by those interfaces), and a "server" jar which includes the implementation classes. You would then run the server with the server jar, and the rmiregistry/client with the client jar.
This is a pretty good (up to date and simple) getting started guide.
To summarize what the other answers elaborated:
The client needs only the common interfaces (and the client classes), not the server implementation.
The server needs interfaces and implementation (and your server main class).
The rmiregistry needs only the interfaces.
(Actually, you can start your own registry inside the server process - then you don't need the rmiregistry at all. Have a look at the createRegistry methods in the java.rmi.registry.LocateRegistry class.)
"Interfaces" here means both the remote interfaces and any (serializable) classes used by these as parameter or argument types.
How you distribute these classes to jar files is independent of this.
