I have a 3 node replica set setup on localhost:
mongod --port 27017 --dbpath C:/data/repset/rsdb0 --replSet rs0
mongod --port 27018 --dbpath C:/data/repset/rsdb1 --replSet rs0
mongod --port 27019 --dbpath C:/data/repset/rsdb2 --replSet rs0
In my Java client I'm connecting to the replica set:
List<ServerAddress> addrs = new ArrayList<>();
addrs.add( new ServerAddress( "HOST" , 27017 ) );
addrs.add( new ServerAddress( "HOST" , 27018 ) );
addrs.add( new ServerAddress( "HOST" , 27019 ) );
MongoClient mongo = new MongoClient( addrs );
System.out.println(mongo.getReplicaSetStatus());
All works fine until I take down (stop) the third mongod instance (the secondary on port 27019) to simulate a server failure.
Then, when I run the above Java code, I get:
Feb 17, 2014 10:51:18 PM com.mongodb.ConnectionStatus$UpdatableNode update
WARNING: Server seen down: HOST/192.168.0.5:27019
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
Which seems to mean that the replica set is not failing over. I would expect the client to carry on reading and writing until I bring the 'failed' server back up, and then I would expect a re-sync.
I guess I must be doing something wrong, because automatic failover is fundamental to MongoDB, but can anyone tell me what it is?
Thanks!
Which seems to mean that the replica set is not failing over.
It's not the replica set that should fail over but the driver.
And what tells you it doesn't? The warning just informs you that one of the nodes is in the DOWN state. You haven't performed any operation yet, so there is nothing to fail over from.
Did you try to read or write beyond connecting to the set? I'm pretty much convinced you can.
I guess I must be doing something wrong
No, you aren't doing anything wrong, apart from misunderstanding the meaning of the warning.
You'll get that error if you use a stale connection. When the driver detects that failed connection, it will repopulate the pool with new connections. If you retry in a few seconds (once any primary election has been sorted out by the server), you'll see that the java driver reconnects to the new primary and is ready for writing.
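For illustration, here is a minimal sketch of that retry behaviour, assuming the legacy 2.x driver API from the question; the "test" database and "demo" collection names are placeholders:

import com.mongodb.*;
import java.util.Arrays;

public class FailoverDemo {
    public static void main(String[] args) throws InterruptedException {
        MongoClient mongo = new MongoClient(Arrays.asList(
                new ServerAddress("HOST", 27017),
                new ServerAddress("HOST", 27018),
                new ServerAddress("HOST", 27019)));
        DBCollection coll = mongo.getDB("test").getCollection("demo");
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                // This may fail once against a stale connection, then succeed
                // after the driver reconnects to the newly elected primary.
                coll.insert(new BasicDBObject("ts", System.currentTimeMillis()));
                System.out.println("Write succeeded");
                break;
            } catch (MongoException e) {
                System.out.println("Write failed, retrying: " + e.getMessage());
                Thread.sleep(3000); // give the election time to finish
            }
        }
        mongo.close();
    }
}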
This is the scenario:
I started the server node.
I started the client Ignite node via a Java application, say "X".
In Visor, running the "node" command showed two nodes, one client and one server.
I killed the Java app "X" by doing "kill -9 pid".
Now when I go to the Visor terminal and enter "node", it still shows both the "client" and "server" nodes in the list; asking for the client node's details throws an error, as expected.
Now, when I restart the Java app "X", the Java code again attempts to connect to the Ignite server. But instead of connecting, it prints this log message many times:
"org.apache.ignite.logger.java.JavaLogger" "info" "INFO" "" "284" "Accepted incoming communication connection [locAddr=/0:0:0:0:0:0:0:1:47101, rmtAddr=/0:0:0:0:0:0:0:1:62856]" "" "" "" "" "" "" "1587013526124" "" "" "" "" "" "" "ROOT" "{""service"":"""",""logger_name"":""org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi""}"
It does not connect, and the Java code does not continue executing, so the application never resumes. I also found this in the Ignite server log:
[10:37:57] Possible failure suppressed accordingly to a configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SYSTEM_CRITICAL_OPERATION_TIMEOUT, err=class o.a.i.IgniteException: Checkpoint read lock acquisition has been timed out.]]
[10:37:57,739][SEVERE][exchange-worker-#46][GridCacheDatabaseSharedManager] Checkpoint read lock acquisition has been timed out.
class org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$CheckpointReadLockTimeoutException: Checkpoint read lock acquisition has been timed out.
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.failCheckpointReadLock(GridCacheDatabaseSharedManager.java:1708)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.checkpointReadLock(GridCacheDatabaseSharedManager.java:1640)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initTopologies(GridDhtPartitionsExchangeFuture.java:1078)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:944)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3258)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3104)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:119)
at java.lang.Thread.run(Thread.java:748)
[10:39:21,547][SEVERE][tcp-disco-msg-worker-[693d29cd 0:0:0:0:0:0:0:1%lo0:47501 crd]-#2][G] Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=db-checkpoint-thread, threadName=db-checkpoint-thread-#59, blockedFor=209s]
I am assuming that since I force-killed the Java application that starts the Ignite client node, some topology imbalance may have occurred.
Can someone please suggest: if I do force-kill the client application, is there a correct way to restart it so that it re-establishes the connection with the Ignite server and continues working?
This scenario is possible when you have very long timeouts.
You should not expect the node to be dropped, and a new one to join, before all the timeouts run out: the network timeout, socket write timeout, and failure detection timeout. That is, unless you do a graceful shutdown.
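As a minimal sketch (the timeout values below are illustrative assumptions, not recommendations), you can shorten the failure detection timeouts so a force-killed client is dropped from the topology sooner, and prefer a graceful shutdown when possible:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
                .setClientMode(true)
                .setFailureDetectionTimeout(10_000)         // detection of server nodes
                .setClientFailureDetectionTimeout(10_000);  // detection of client nodes

        Ignite ignite = Ignition.start(cfg);

        // Prefer a graceful shutdown over kill -9, so the server drops the
        // node immediately instead of waiting for the timeouts to expire.
        Runtime.getRuntime().addShutdownHook(new Thread(ignite::close));
    }
}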
Given one starts an embedded database:
BoltConnector boltConnector = new BoltConnector("bolt");
new GraphDatabaseFactory()
.newEmbeddedDatabaseBuilder(Files.createTempDirectory("part-relasjon").toFile())
.setConfig(boltConnector.enabled, "true")
.setConfig(boltConnector.type, "BOLT")
.setConfig(boltConnector.listen_address, "localhost:0")
.newGraphDatabase();
how do I get hold of the port to which the database is bound? I want to create a Driver to connect to the database:
GraphDatabase.driver("bolt://localhost:?")
I am doing this to run integration tests in a specific application profile. Right now, I find a free port manually and bind both the server and the driver to it, but extracting the randomly selected port seems like the cleaner solution. I had a look at the Neo4jRule from the test package to see how it is done there, but it bootstraps a full server, which is much more complex than the code above, which is why I want to avoid it.
This is an example from the documentation here: https://neo4j.com/docs/java-reference/3.3/#tutorials-java-embedded-bolt
GraphDatabaseSettings.BoltConnector bolt = GraphDatabaseSettings.boltConnector( "0" );
GraphDatabaseService graphDb = new GraphDatabaseFactory()
.newEmbeddedDatabaseBuilder( DB_PATH )
.setConfig( bolt.type, "BOLT" )
.setConfig( bolt.enabled, "true" )
.setConfig( bolt.address, "localhost:7687" )
.newGraphDatabase();
As you can see, here you define the port yourself (7687, the default one).
You can find a free port yourself in Java with something like this:
InetSocketAddress address = new InetSocketAddress( "localhost", 7687 );
// Throws an IOException if the port is already in use:
new ServerSocket( address.getPort(), 100, address.getAddress() ).close();
If you get an exception, the port is not free.
It seems that since version 3.3 you can set the port of any connector to 0, and Neo4j will find a free port for it.
To find out which port was chosen, you can try this Java code (it comes from here: https://github.com/neo4j/neo4j/blob/3.4/community/neo4j-harness/src/main/java/org/neo4j/harness/internal/InProcessServerControls.java):
server.getDependencyResolver().resolveDependency( ConnectorPortRegister.class ).getLocalAddress( "bolt" ).getPort();
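Putting the two together, a minimal sketch (this assumes Neo4j 3.4; the "target/test-db" store directory is a placeholder): start the embedded database on a random port, then ask the ConnectorPortRegister for the address it actually bound to:

import java.io.File;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.kernel.configuration.BoltConnector;
import org.neo4j.kernel.configuration.ConnectorPortRegister;
import org.neo4j.kernel.internal.GraphDatabaseAPI;

public class EmbeddedBoltPort {
    public static void main(String[] args) {
        BoltConnector bolt = new BoltConnector("bolt");
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabaseBuilder(new File("target/test-db"))
                .setConfig(bolt.enabled, "true")
                .setConfig(bolt.listen_address, "localhost:0") // port 0 = pick a free port
                .newGraphDatabase();

        // The embedded database implements GraphDatabaseAPI, which exposes
        // the dependency resolver used to look up the port register.
        int port = ((GraphDatabaseAPI) db).getDependencyResolver()
                .resolveDependency(ConnectorPortRegister.class)
                .getLocalAddress("bolt").getPort();

        System.out.println("bolt://localhost:" + port);
        db.shutdown();
    }
}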
Cheers
I have a lot of Lotus Notes / Domino (version 7) databases to migrate to a new software system.
On my workstation (with Lotus Notes installed), I'm using a standalone Java application to connect to a local replica and extract data.
However, replicating the remote database is still a manual process, and I'd like to automate it.
My Java code basically looks like this:
Session localSession = NotesFactory.createSession(); // With Notes thread initialized
Session remoteSession = NotesFactory.createSession(SERVER, USER, PASSWORD);
Database localDb = localSession.getDbDirectory(null).openDatabase("local_name", true);
Database remoteDb = remoteSession.getDbDirectory(null).openDatabaseByReplicaID(REPLICA);
// ***EDITED CALLING INSTANCE BELOW***
remoteDb.createReplica(null, "local_name"); // Error thrown here
However, the last line throws an exception (from memory, but something like):
CN=****/***** does not have the right to create database on a server
How is it possible that I don't have the right to create a database on my local computer?
Is there any other way to programmatically create a local replica of a remote database?
Edit: changed the calling instance of createReplica to match the code causing my issue.
My guess is that it's just giving you the wrong error message. One thing that's definitely wrong is that the first argument to createReplica should be an empty string, not a null pointer. I.e., try this:
localDb.createReplica("", "local_name");
OK, it looks like I found the answer.
As far as I understand, I had to open the database on the target server using my local session, and run createReplica() from there. This way, createReplica() is executed by my local Lotus Notes session, and the replica is created locally.
// Local session, authenticated with the Notes ID password
Session localSession = NotesFactory.createSession((String)null, (String)null, PASSWORD);
// The remote server's database directory, accessed through the *local* session
DbDirectory remoteDbDirectory = localSession.getDbDirectory(remoteSession.getServerName());
Database localSessionRemoteDatabase = remoteDbDirectory.openDatabaseByReplicaID(REMOTE_REPLICA_ID);
// createReplica() now runs in the local session, so the replica is created locally
localSessionRemoteDatabase.createReplica("", LOCAL_FILE_NAME);
@Richard Schwartz Can you confirm this is OK?
The only weird thing is that it opens a prompt (as if it's expecting a password), but the replica is created.
The process is executed within Eclipse.
I am using the JDBC driver to connect to MySQL from my Java code (read: the client).
Driver = com.mysql.jdbc.Driver
JdbcUrl = jdbc:mysql://<<IpOftheDb>>/<<DbSchema Name>>?autoReconnect=true&connectTimeout=5000&socketTimeout=10000
If the database is down (the machine hosting the DB is up, but the mysqld process is not running), it takes some time to get the exception. The exception is:
"com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException:Could not create connection to database server. Attempted reconnect 3 times. Giving up."
In the URL above, socketTimeout is 10 seconds. With it set to 10 seconds, I get the response correctly once I bring the DB back up.
But if I reduce it to one second and execute the query, I get the same exception:
"com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up."
And changing connectTimeout doesn't change anything. Can someone explain to me what socketTimeout and connectTimeout mean?
Also, if we are setting up replication and specifying the second database as failover, my connection string changes to:
jdbc:mysql://<<PrimaryDbIP>>,<<SecondaryDbIp>>/<<DbSchema>>?useTimezone=true
&serverTimezone=UTC&useLegacyDatetimeCode=false
&failOverReadOnly=false&autoReconnect=true&maxReconnects=3
&initialTimeout=5000&connectTimeout=6000&socketTimeout=6000
&queriesBeforeRetryMaster=50&secondsBeforeRetryMaster=30
I see that if the primary is down, I get the response from the secondary (failover DB).
Now, when the client executes a query, does it go to the primary database, wait for socketTimeout (or some other timeout), and then go to the secondary, or does it go to the secondary before the timeout occurs?
Moreover, the second time the same connection object is used, does it go directly to the secondary, or is the above process repeated?
I tried to find documentation that explains this but couldn't.
Hopefully, someone can help here by explaining the various timeout parameters and their usefulness.
I have a requirement to publish (to Graphite) the 'number of active connections available' status of a MongoDB instance when a REST service is called. I know we can use db.serverStatus() to see connection details on the server side.
I am looking to get the 'number of active connections available' on the client side using the Java API. The MongoDB Java Driver API documentation doesn't help much with this.
Assuming you are using the 3.0.x driver, and connecting to localhost on the default port:
import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import java.util.Map;

MongoClient mongoClient = new MongoClient();
MongoDatabase database = mongoClient.getDatabase("admin");
Document serverStatus = database.runCommand(new Document("serverStatus", 1));
Map connections = (Map) serverStatus.get("connections");
Integer current = (Integer) connections.get("current");     // current incoming connections
Integer available = (Integer) connections.get("available"); // unused connections available
db.serverStatus() provides information about the number of connections created and the number of connections available, something like this:
"connections" : {
"current" : 3,
"available" : 2045,
"totalCreated" : NumberLong(3)
}
You could also use db.currentOp(true) to fetch details of in-progress operations.
http://docs.mongodb.org/manual/reference/method/db.currentOp/
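A minimal sketch of the equivalent call from Java (this assumes MongoDB 3.6+, where currentOp is available as an admin command; db.currentOp(true) corresponds to passing $all: true):

import com.mongodb.MongoClient;
import org.bson.Document;

public class CurrentOpDemo {
    public static void main(String[] args) {
        try (MongoClient mongoClient = new MongoClient()) {
            // Equivalent of db.currentOp(true): include idle connections too.
            Document currentOp = mongoClient.getDatabase("admin")
                    .runCommand(new Document("currentOp", 1).append("$all", true));
            System.out.println(currentOp.toJson());
        }
    }
}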