How to get the implicit port from an embedded Neo4j database? - java

Given one starts an embedded database:
BoltConnector boltConnector = new BoltConnector("bolt");
new GraphDatabaseFactory()
.newEmbeddedDatabaseBuilder(Files.createTempDirectory("part-relasjon").toFile())
.setConfig(boltConnector.enabled, "true")
.setConfig(boltConnector.type, "BOLT")
.setConfig(boltConnector.listen_address, "localhost:0")
.newGraphDatabase();
how do I get hold of the port to which the database is bound? I want to create a Driver to connect to the database:
GraphDatabase.driver("bolt://localhost:?")
I am doing this to run integration tests in a specific application profile. Right now I find a free port manually and bind both the server and the driver to it, but extracting the randomly selected port seems like the cleaner solution. I had a look at Neo4jRule from the test package to see how it is done there, but it bootstraps a full server, which is much more complex than the code above and which I therefore want to avoid.

This is an example from the documentation here: https://neo4j.com/docs/java-reference/3.3/#tutorials-java-embedded-bolt
GraphDatabaseSettings.BoltConnector bolt = GraphDatabaseSettings.boltConnector( "0" );
GraphDatabaseService graphDb = new GraphDatabaseFactory()
.newEmbeddedDatabaseBuilder( DB_PATH )
.setConfig( bolt.type, "BOLT" )
.setConfig( bolt.enabled, "true" )
.setConfig( bolt.address, "localhost:7687" )
.newGraphDatabase();
As you can see, here you define the port yourself (7687, the default one).
You can check whether a port is free yourself in Java with something like this:
InetSocketAddress address = new InetSocketAddress( "localhost", 7687 );
new ServerSocket( address.getPort(), 100, address.getAddress() ).close();
If you get an exception, the port is not free.
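As a variation on that idea (not part of the original answer), you can also let the OS pick any free port for you by binding a ServerSocket to port 0 and reading back the port it chose:
ServerSocket socket = new ServerSocket( 0 ); // port 0 asks the OS for any free port
int freePort = socket.getLocalPort();        // the port that was actually assigned
socket.close();                              // release it so the server can bind to it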
It seems that since version 3.3 you can set the port of any connector to 0 and Neo4j will pick a free port for you.
To find out which port was chosen, you can try this Java code (it comes from here: https://github.com/neo4j/neo4j/blob/3.4/community/neo4j-harness/src/main/java/org/neo4j/harness/internal/InProcessServerControls.java):
server.getDependencyResolver().resolveDependency( ConnectorPortRegister.class ).getLocalAddress( "bolt" ).getPort();
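Putting it together with the setup from the question, a minimal sketch (assuming Neo4j 3.3/3.4 embedded, where the returned GraphDatabaseService is a GraphDatabaseAPI and can be cast to it) could look like this:
GraphDatabaseService graphDb = new GraphDatabaseFactory()
        .newEmbeddedDatabaseBuilder( Files.createTempDirectory( "part-relasjon" ).toFile() )
        .setConfig( boltConnector.enabled, "true" )
        .setConfig( boltConnector.type, "BOLT" )
        .setConfig( boltConnector.listen_address, "localhost:0" )
        .newGraphDatabase();

// "bolt" is the connector key passed to new BoltConnector("bolt") in the question
int boltPort = ((GraphDatabaseAPI) graphDb)
        .getDependencyResolver()
        .resolveDependency( ConnectorPortRegister.class )
        .getLocalAddress( "bolt" )
        .getPort();

Driver driver = GraphDatabase.driver( "bolt://localhost:" + boltPort );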
Cheers


Lotus Notes Java replication of remote database

I have a lot of Lotus Notes / Domino (version 7) databases to migrate to a new software.
On my workstation (with Lotus Notes installed), I'm using a standalone Java application to connect to a local replica and extract data.
However, replicating the remote databases is still a manual process, and I'd like to automate it.
My Java code basically looks like this:
Session localSession = NotesFactory.createSession(); // With Notes thread initialized
Session remoteSession = NotesFactory.createSession(SERVER, USER, PASSWORD);
Database localDb = localSession.getDbDirectory(null).openDatabase("local_name", true);
Database remoteDb = remoteSession.getDbDirectory(null).openDatabaseByReplicaID(REPLICA);
// ***EDITED CALLING INSTANCE BELOW***
remoteDb.createReplica(null, "local_name"); // Error thrown here
However, the last line throws an exception (from memory, but something like):
CN=****/***** does not have the right to create database on a server
How is it possible that I don't have the right to create a database on my local computer?
Is there any other way to programmatically create a local replica of a remote database?
Edit: changed the calling instance of createReplica to match the code causing the issue
My guess is that it's just giving you the wrong error message. One thing that's definitely wrong is that the first argument for createReplica should be an empty string, not a null pointer. I.e., try this:
localDb.createReplica("", "local_name");
Ok it looks like I found the answer.
AFAIU I had to open the database on the target server using my local session, and run createReplica() from there. This way createReplica() is executed by my local Lotus Notes installation, and the replica is created locally.
Session localSession = NotesFactory.createSession((String)null, (String)null, PASSWORD);
DbDirectory remoteDbDirectory = localSession.getDbDirectory(remoteSession.getServerName());
Database localSessionRemoteDatabase = remoteDbDirectory.openDatabaseByReplicaID(REMOTE_REPLICA_ID);
localSessionRemoteDatabase.createReplica("", LOCAL_FILE_NAME);
@Richard Schwartz Can you confirm this is OK?
The only weird thing is that it opens a prompt (as if it were expecting a password), but the replica is created.
The process is executed within Eclipse.

How do you use consistent hashing with the java elasticache libs?

I'm trying to use ElastiCache as a memcached service with AWS's ElastiCache client library for Java.
The following code works for connecting to the cluster:
_client = new MemcachedClient(_serverList);
But any attempt I've made to use consistent hashing results in the memcached client failing to initialize:
_client = new MemcachedClient(new KetamaConnectionFactory(), _serverList);
or
ConnectionFactoryBuilder connectionFactoryBuilder = new ConnectionFactoryBuilder();
connectionFactoryBuilder.setLocatorType(Locator.CONSISTENT);
connectionFactoryBuilder.setHashAlg(DefaultHashAlgorithm.KETAMA_HASH);
connectionFactoryBuilder.setClientMode(ClientMode.Dynamic);
ConnectionFactory connectionFactory = connectionFactoryBuilder.build();
_client = new MemcachedClient(connectionFactory, _serverList);
Any attempt I've made to use anything but a vanilla MemcachedClient results in errors such as:
2015-04-07 07:00:32.914 WARN net.spy.memcached.ConfigurationPoller: The configuration is null in the server localhost
2015-04-07 07:00:32.914 WARN net.spy.memcached.ConfigurationPoller: Number of consecutive poller errors is 7. Number of minutes since the last successful polling is 0
Also, I've verified with telnet, the spymemcached libs, and the vanilla MemcachedClient constructor that the security groups are permissive.
When using the AWS client library, KetamaConnectionFactory defaults to the "dynamic" client mode, which tries to poll the list of available memcached nodes from the configuration endpoint. For this to work, your _serverList should contain only the configuration endpoint.
Your error message indicates the host was a "plain" memcached node which doesn't understand the ElastiCache extensions. If this is what you intend to do (specify the nodes yourself rather than use the auto-discovery feature), then you need to use the multiple-arg KetamaConnectionFactory constructor and pass in ClientMode.Static as the first argument.
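A rough sketch of that static variant follows; the exact argument list of the multiple-arg constructor in the AWS fork is an assumption here (as are the node addresses), so check the KetamaConnectionFactory version you are actually using:
// Assumed signature: ClientMode first, then the usual spymemcached tuning parameters.
ConnectionFactory staticFactory = new KetamaConnectionFactory(
        ClientMode.Static,
        DefaultConnectionFactory.DEFAULT_OP_QUEUE_LEN,
        DefaultConnectionFactory.DEFAULT_READ_BUFFER_SIZE,
        DefaultConnectionFactory.DEFAULT_OP_QUEUE_MAX_BLOCK_TIME );
_client = new MemcachedClient( staticFactory, AddrUtil.getAddresses( "node1:11211,node2:11211" ) );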
You will need to use the AddrUtil.getAddresses() method.
_client = new MemcachedClient(new KetamaConnectionFactory(), AddrUtil.getAddresses("configEndpoint:port"));
or
ConnectionFactoryBuilder connectionFactoryBuilder = new ConnectionFactoryBuilder(new KetamaConnectionFactory());
// set any other properties you want on the builder
ConnectionFactory connectionFactory = connectionFactoryBuilder.build();
_client = new MemcachedClient(connectionFactory, AddrUtil.getAddresses("configEndpoint:port"));

MongoDB Java Client Automatic Failover Failing

I have a 3 node replica set setup on localhost:
mongod --port 27017 --dbpath C:/data/repset/rsdb0 --replSet rs0
mongod --port 27018 --dbpath C:/data/repset/rsdb1 --replSet rs0
mongod --port 27019 --dbpath C:/data/repset/rsdb2 --replSet rs0
In my Java client I'm connecting to the replica set:
List<ServerAddress> addrs = new ArrayList<>();
addrs.add( new ServerAddress( "HOST" , 27017 ) );
addrs.add( new ServerAddress( "HOST" , 27018 ) );
addrs.add( new ServerAddress( "HOST" , 27019 ) );
MongoClient mongo = new MongoClient( addrs );
System.out.println(mongo.getReplicaSetStatus());
All works fine unless I take down (stop) the third mongod instance, a secondary (the one on port 27019), to simulate a server failure.
Then when I run the above Java code I get:
Feb 17, 2014 10:51:18 PM com.mongodb.ConnectionStatus$UpdatableNode update
WARNING: Server seen down: HOST/192.168.0.5:27019
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
Which seems to mean that the replica set is not failing over. I would expect the client to carry on reading and writing until I bring the 'failed' server back up, and then I would expect it to re-sync.
I guess I must be doing something wrong, because automatic failover is fundamental to MongoDB, but can anyone tell me what it is?
Thanks!
Which seems to mean that the replica set is not failing over.
It's not the replica set that should fail over but the driver.
And what tells you it doesn't? The warning just informs you that one of the nodes is in the DOWN state. You haven't performed any operation yet, so there is nothing to fail over from.
Did you try to read or write beyond connecting to the set? I'm pretty much convinced you can.
I guess I must be doing something wrong
No, you aren't, except for misreading the meaning of the warning.
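To illustrate, a quick write and read against the set should still succeed with one secondary down, once the driver has refreshed its view of the replica set (the database and collection names here are made up, and this uses the 2.x driver API from the question):
DB db = mongo.getDB( "test" );
DBCollection coll = db.getCollection( "coll" );
coll.insert( new BasicDBObject( "ping", 1 ) );   // routed to the current primary
System.out.println( coll.findOne() );            // reads keep working with two members up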
You'll get that error if you use a stale connection. When the driver detects that failed connection, it will repopulate the pool with new connections. If you retry in a few seconds (once any primary election has been sorted out by the server), you'll see that the java driver reconnects to the new primary and is ready for writing.

Restrict java-websocket access to localhost

I'm writing a java-websocket server as a cryptocurrency client.
For security reasons, I'd like to restrict access to the local machine.
Is there a way to restrict access to a java-websocket server by IP or hostname?
If so, how?
You should set your listening IP to 127.0.0.1; that way it won't be possible to connect from the outside.
Edit
Looking at the example ChatServer.java the binding happens with
ChatServer s = new ChatServer( port );
The class implements two constructors:
public ChatServer( int port ) throws UnknownHostException {
super( new InetSocketAddress( port ) );
}
public ChatServer( InetSocketAddress address ) {
super( address );
}
So you could also call the server with an InetSocketAddress. Create one that binds to localhost:
new InetSocketAddress( InetAddress.getByName( null ), 9090 );
and then call the server with that instead of just the port.
So replace
ChatServer s = new ChatServer( port );
with
InetSocketAddress myLocalSocket = new InetSocketAddress( InetAddress.getByName( null ), 9090 );
ChatServer s = new ChatServer( myLocalSocket );
in your example and it should work.
The accepted answer by Max will prevent connections to your socket from outside, but there is another attack vector that you should consider.
A connection to your localhost WebSocket can be made by JavaScript hosted on any outside website. If your local user is tricked into visiting a remote site, the HTML/JavaScript hosted by that site will be able to communicate with your local WebSocket.
You may be able to mitigate this by restricting connections based on the Origin header value, which indicates the origin of the script that initiated the WebSocket connection request. Keep in mind that the Origin header is optional and that you are relying on the browser to set it correctly to reflect where the script came from.
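A minimal sketch of such a check, assuming the Java-WebSocket (org.java_websocket) library used by the ChatServer example and an allowed origin picked purely for illustration:
@Override
public void onOpen( WebSocket conn, ClientHandshake handshake ) {
    String origin = handshake.getFieldValue( "Origin" );
    // reject connections whose Origin header is missing or does not point at this machine
    if ( origin == null || !origin.startsWith( "http://localhost" ) ) {
        conn.close( CloseFrame.POLICY_VALIDATION, "Origin not allowed" );
        return;
    }
    // ... normal onOpen handling ...
}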

failed to get liferay server url in a scheduler as PortalUtil.getPortalPort(secure) always returns -1

When I use PortalUtil.getPortalPort(secure) in a scheduler it always returns -1 instead of the right port number 8080.
boolean secure = "https".equalsIgnoreCase( PropsUtil.get( PropsKeys.WEB_SERVER_PROTOCOL ) );
Company company = CompanyLocalServiceUtil.getCompanies().get(0);
String portalURL = PortalUtil.getPortalURL(company.getVirtualHostname(), PortalUtil.getPortalPort(secure), secure);
If I run the code from a managed bean, it works fine.
Is there any other way to get the server port from a scheduler?
Since you are executing the code from a scheduler, you don't have a reference to the request object, so you can't get the server port using PortalUtil.
You can try the hack mentioned here: Java EE getting servlet container port
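For reference, a rough sketch of that hack on a Tomcat-based Liferay bundle (the MBean name and attributes are Tomcat-specific and may differ on other containers; checked exceptions are left out for brevity):
MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
for ( ObjectName connector : mBeanServer.queryNames( new ObjectName( "*:type=Connector,*" ), null ) ) {
    Object port = mBeanServer.getAttribute( connector, "port" );     // e.g. 8080
    Object scheme = mBeanServer.getAttribute( connector, "scheme" ); // http or https
    System.out.println( scheme + " connector listening on port " + port );
}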
