My program requires a large number of open connections (Mongo). I get the error:
Too many connections open, can't open anymore
after 819 connections. I already know we can increase this limit, but that's not what I have in mind. I'm thinking of closing the MongoClient object and creating a new one once 800 connections have been opened.
My thinking is that closing the MongoClient object will close all its connections, and when I create it again, connections will be opened again up to 800, thus avoiding the error. (Let me know if this approach is totally wrong or won't give the required results.)
For this I need to know the number of connections open at the moment. Is there any way to get this information using Java?
You can get connection information by using the db.serverStatus() command. It has a connections subdocument which contains the total/available connections information.
For more information:
Documentation of server status
Details of connections block
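Since the question asks for Java, here is a minimal sketch that runs the serverStatus command and reads its connections subdocument. It assumes the modern com.mongodb.client API and a local mongod; the class name and connection string are placeholders to adjust:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class MongoConnectionCount {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // serverStatus must be run against the admin database
            Document status = client.getDatabase("admin")
                    .runCommand(new Document("serverStatus", 1));
            // The "connections" subdocument holds the counters
            Document connections = status.get("connections", Document.class);
            System.out.println("current:      " + connections.getInteger("current"));
            System.out.println("available:    " + connections.getInteger("available"));
            System.out.println("totalCreated: " + connections.get("totalCreated"));
        }
    }
}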
Alternatively, to check the number of MongoDB connections using the MongoDB Scala driver:
Create a MongoDB client:
import org.mongodb.scala._
import scala.collection.JavaConverters._
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.{Failure, Success, Try}

// To directly connect to the default server localhost on port 27017
val mongodbClient: MongoClient = MongoClient()

// or use a connection string
val mongodbClient: MongoClient = MongoClient("mongodb://localhost")

// or provide custom MongoClientSettings
val settings: MongoClientSettings = MongoClientSettings.builder()
  .applyToClusterSettings(b => b.hosts(List(new ServerAddress("localhost")).asJava))
  .build()
val mongodbClient: MongoClient = MongoClient(settings)
Call getNoOfMongodbConnection, passing it the mongodbClient:
val result = getNoOfMongodbConnection(mongodbClient)
Method to get the number of connections (current, available, and total):
def getNoOfMongodbConnection(mongodbClient: MongoClient) = {
  val adminDatabase = mongodbClient.getDatabase("admin")
  val serverStatus = adminDatabase.runCommand(Document("serverStatus" -> 1)).toFuture()
  Try {
    Await.result(serverStatus, 10.seconds)
  } match {
    case Success(status) =>
      // The "connections" subdocument holds the current/available/total counts
      val connection = status.get("connections")
      logger.info("Number of MongoDB connections: " + connection)
      connection
    case Failure(ex) =>
      logger.error("Error while getting the number of MongoDB connections", ex)
      None
  }
}
Related
I'm running a select query on Cassandra in Java. It returns a read timeout exception. The odd part is that it does not produce an error in cqlsh, so I guess the problem is in the code. I'm using Flink to connect to Cassandra. I also think the issue may be that in cqlsh I need to hit Enter to get more results; maybe that is what stops the driver from reading through all the results. When I use LIMIT 100 it runs well. I have changed the read configuration in cassandra.yaml, but nothing changed.
ClusterBuilder cb = new ClusterBuilder() {
    @Override
    public Cluster buildCluster(Cluster.Builder builder) {
        return builder.withPort(9042).addContactPoint("127.0.0.1")
                .withSocketOptions(new SocketOptions().setReadTimeoutMillis(60000).setKeepAlive(true).setReuseAddress(true))
                .withLoadBalancingPolicy(new RoundRobinPolicy())
                .withReconnectionPolicy(new ConstantReconnectionPolicy(500L))
                .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.ONE))
                .build();
    }
};

CassandraInputFormat<Tuple1<ProfileAlternative>> cassandraInputFormat = new CassandraInputFormat<>(
        "SELECT toJson(profilealternative) FROM profiles.profile where skills contains 'Financial Sector' ", cb);
cassandraInputFormat.configure(null);
cassandraInputFormat.open(null);

Tuple1<ProfileAlternative> testOutputTuple = new Tuple1<>();
while (!cassandraInputFormat.reachedEnd()) {
    cassandraInputFormat.nextRecord(testOutputTuple);
    System.out.println(testOutputTuple.f0);
}
I'm wondering if anyone has experienced the same problem.
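A note on the cqlsh observation: having to hit Enter in cqlsh to see more results is result paging, and the Java driver pages large result sets as well. A hedged sketch (an idea to try, not a confirmed fix) is to tune the driver-side page size through the same QueryOptions used in the ClusterBuilder above, so each page stays well under the read timeout; the value 1000 is an arbitrary example:
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.QueryOptions;

// Sketch: make the driver fetch rows in smaller pages instead of one
// large read that can exceed the read timeout.
QueryOptions queryOptions = new QueryOptions()
        .setConsistencyLevel(ConsistencyLevel.ONE)
        .setFetchSize(1000); // rows per page; 1000 is an assumed example value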
We have a Vert.x application whose purpose, in the end, is to insert 600 million rows into a Cassandra cluster. We are testing the speed of Vert.x in combination with Cassandra by doing tests with smaller amounts.
If we run the fat JAR (built with the Shade plugin) without the -cluster option, we are able to insert 10 million records in about a minute. When we add the -cluster option (eventually we will run the Vert.x application in a cluster), it takes about 5 minutes for 10 million records to be inserted.
Does anyone know why?
We know that the Hazelcast config will create some overhead, but we never thought it would be 5 times slower. This implies we would need 5 EC2 instances in a cluster to get the same result as 1 EC2 instance without the cluster option.
As mentioned, everything runs on EC2 instances:
2 Cassandra servers on t2.small
1 Vert.x server on t2.2xlarge
You are actually running into corner cases of the Vert.x Hazelcast cluster manager.
First of all, you are using a worker verticle to send your messages (30000001). Under the hood Hazelcast is blocking, so when you send a message from a worker, version 3.3.3 does not take that into account. We recently added a fix, https://github.com/vert-x3/issues/issues/75 (not present in 3.4.0.Beta1 but present in 3.4.0-SNAPSHOT), that improves this case.
Second, when you send all your messages at the same time, you run into another corner case that prevents the Hazelcast cluster manager from using its cache of the cluster topology. This topology cache is usually updated after the first message has been sent, and sending all the messages in one shot prevents the cache from being used (short explanation: HazelcastAsyncMultiMap#getInProgressCount will be > 0, which prevents the cache from being used), so every send pays the penalty of an expensive lookup (hence the cache).
If I use Bertjan's reproducer with 3.4.0-SNAPSHOT + Hazelcast and the following change: send one message to the destination, wait for the reply, and only upon the reply send all the remaining messages, then I get a lot of improvement:
Without clustering: 5852 ms
With clustering with HZ 3.3.3: 16745 ms
With clustering with HZ 3.4.0-SNAPSHOT + initial message: 8609 ms
I also believe you should not use a worker verticle to send that many messages; instead, send them from an event loop verticle in batches, along the lines of the sketch below. Perhaps you should explain your use case and we can think about the best way to solve it.
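For illustration, a minimal sketch of batched sending from a standard (event loop) verticle; the batch size, the runOnContext-based yielding, and the simplified string payload are assumptions, not something prescribed by Vert.x:
import io.vertx.core.AbstractVerticle;

// Sends messages in fixed-size batches from the event loop, yielding back to
// the loop between batches instead of flooding the bus from a worker thread.
public class BatchingProviderVerticle extends AbstractVerticle {
    private static final int TOTAL = 30000000;
    private static final int BATCH_SIZE = 10000; // assumed batch size

    private int next = 1;

    @Override
    public void start() {
        sendNextBatch();
    }

    private void sendNextBatch() {
        int end = Math.min(next + BATCH_SIZE, TOTAL + 1);
        for (; next < end; next++) {
            vertx.eventBus().send("clustertest1", String.valueOf(next));
        }
        if (next <= TOTAL) {
            // Yield to the event loop before queueing the next batch.
            vertx.runOnContext(v -> sendNextBatch());
        }
    }
}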
When you enable clustering (of any kind) in an application, you make it more resilient to failures, but you also add a performance penalty.
For example your current flow (without clustering) is something like:
client ->
vert.x app ->
in memory same process eventbus (negligible) ->
handler -> cassandra
<- vert.x app
<- client
Once you enable clustering:
client ->
vert.x app ->
serialize request ->
network request cluster member ->
deserialize request ->
handler -> cassandra
<- serialize response
<- network reply
<- deserialize response
<- vert.x app
<- client
As you can see, there are many encode/decode operations required, plus several network calls, and this all gets added to your total request time.
In order to achieve the best performance you need to take advantage of locality: the closer you are to your data store, the faster it usually is.
Just to add the code of the project; I guess that will help.
Sender verticle:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.Json;
import java.time.LocalDateTime;
import java.util.stream.IntStream;

public class ProviderVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        IntStream.range(1, 30000001).parallel().forEach(i -> {
            vertx.eventBus().send("clustertest1", Json.encode(new TestCluster1(i, "abc", LocalDateTime.now())));
        });
    }

    @Override
    public void stop() throws Exception {
        super.stop();
    }
}
And the inserter verticle
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import io.vertx.core.AbstractVerticle;
import io.vertx.core.eventbus.EventBus;
import io.vertx.core.eventbus.MessageConsumer;
import io.vertx.core.json.Json;
import java.sql.Timestamp;
import java.util.Date;

public class ReceiverVerticle extends AbstractVerticle {

    private int messagesReceived = 1;
    private Session cassandraSession;

    @Override
    public void start() throws Exception {
        PoolingOptions poolingOptions = new PoolingOptions()
                .setCoreConnectionsPerHost(HostDistance.LOCAL, 2)
                .setMaxConnectionsPerHost(HostDistance.LOCAL, 3)
                .setCoreConnectionsPerHost(HostDistance.REMOTE, 1)
                .setMaxConnectionsPerHost(HostDistance.REMOTE, 3)
                .setMaxRequestsPerConnection(HostDistance.LOCAL, 20)
                .setMaxQueueSize(32768)
                .setMaxRequestsPerConnection(HostDistance.REMOTE, 20);

        Cluster cluster = Cluster.builder()
                .withPoolingOptions(poolingOptions)
                .addContactPoints(ClusterSetup.SEEDS)
                .build();

        System.out.println("Connecting session");
        cassandraSession = cluster.connect("kiespees");
        System.out.println("Session connected:\n\tcluster [" + cassandraSession.getCluster().getClusterName() + "]");
        System.out.println("Connected hosts: ");
        cassandraSession.getState().getConnectedHosts().forEach(host -> System.out.println(host.getAddress()));

        PreparedStatement prepared = cassandraSession.prepare(
                "insert into clustertest1 (id, value, created) " +
                "values (:id, :value, :created)");
        PreparedStatement preparedTimer = cassandraSession.prepare(
                "insert into timer (name, created_on, amount) " +
                "values (:name, :createdOn, :amount)");

        BoundStatement timerStart = preparedTimer.bind()
                .setString("name", "clusterteststart")
                .setInt("amount", 0)
                .setTimestamp("createdOn", new Timestamp(new Date().getTime()));
        cassandraSession.executeAsync(timerStart);

        EventBus bus = vertx.eventBus();
        System.out.println("Bus info: " + bus.toString());
        MessageConsumer<String> cons = bus.consumer("clustertest1");
        System.out.println("Consumer info: " + cons.address());
        System.out.println("Waiting for messages");

        cons.handler(message -> {
            TestCluster1 tc = Json.decodeValue(message.body(), TestCluster1.class);
            if (messagesReceived % 100000 == 0)
                System.out.println("Message received: " + messagesReceived);

            BoundStatement boundRecord = prepared.bind()
                    .setInt("id", tc.getId())
                    .setString("value", tc.getValue())
                    .setTimestamp("created", new Timestamp(new Date().getTime()));
            cassandraSession.executeAsync(boundRecord);

            if (messagesReceived % 100000 == 0) {
                BoundStatement timerStop = preparedTimer.bind()
                        .setString("name", "clusterteststop")
                        .setInt("amount", messagesReceived)
                        .setTimestamp("createdOn", new Timestamp(new Date().getTime()));
                cassandraSession.executeAsync(timerStop);
            }

            messagesReceived++;
            //message.reply("OK");
        });
    }

    @Override
    public void stop() throws Exception {
        super.stop();
        cassandraSession.close();
    }
}
I am adding the Neo4j Bolt driver to my application, just following http://neo4j.com/developer/java/:
import org.neo4j.driver.v1.*;
Driver driver = GraphDatabase.driver( "bolt://localhost", AuthTokens.basic( "neo4j", "neo4j" ) );
Session session = driver.session();
session.run( "CREATE (a:Person {name:'Arthur', title:'King'})" );
StatementResult result = session.run( "MATCH (a:Person) WHERE a.name = 'Arthur' RETURN a.name AS name, a.title AS title" );
while ( result.hasNext() )
{
Record record = result.next();
System.out.println( record.get( "title" ).asString() + " " + record.get("name").asString() );
}
session.close();
driver.close();
However, again according to the official documentation, unit testing is done using:
GraphDatabaseService db = new TestGraphDatabaseFactory()
.newImpermanentDatabaseBuilder()
So if I want to test the code above in some way, I have to replace GraphDatabase.driver("bolt://localhost", ...) with the GraphDatabaseService from the test. How can I do that? I cannot extract any sort of in-memory driver from there, as far as I can see.
The Neo4j JDBC driver has a class called Neo4jBoltRule for unit testing. It is a JUnit rule starting/stopping an impermanent database, together with some configuration to start Bolt.
The rule class uses dynamic port assignment to prevent test failures due to running multiple tests in parallel (think of your CI infrastructure).
An example of a unit test using that rule class is available at https://github.com/neo4j-contrib/neo4j-jdbc/blob/master/neo4j-jdbc-bolt/src/test/java/org/neo4j/jdbc/bolt/SampleIT.java
An easy way now is to pull in neo4j-harness and use its built-in Neo4jRule, as follows:
import static org.neo4j.graphdb.factory.GraphDatabaseSettings.boltConnector;
// [...]
@Rule public Neo4jRule graphDb = new Neo4jRule()
    .withConfig(boltConnector("0").address, "localhost:" + findFreePort());
where the findFreePort implementation can be as simple as:
private static int findFreePort() {
    try (ServerSocket socket = new ServerSocket(0)) {
        return socket.getLocalPort();
    } catch (IOException e) {
        throw new RuntimeException(e.getMessage(), e);
    }
}
As the Javadoc of ServerSocket explains:
A port number of 0 means that the port number is automatically allocated, typically from an ephemeral port range. This port number can then be retrieved by calling getLocalPort.
Moreover, the socket is closed before the port value is returned, so there is a good chance the returned port will still be available upon return (the window of opportunity for the port to be allocated again in between is small; the computation of the window size is left as an exercise to the reader).
Et voilà!
I am using the P4Java library in my build.gradle file to sync a large zip file (>200MB) residing in a remote Perforce repository, but I am encountering a "java.net.SocketTimeoutException: Read timed out" error, either during the sync process or (mostly) while deleting the temporary client created for the sync operation. I am referring to http://razgulyaev.blogspot.in/2011/08/p4-java-api-how-to-work-with-temporary.html for working with temporary clients using the P4Java API.
I tried increasing the socket read timeout from the default 30 sec as suggested in http://answers.perforce.com/articles/KB/8044 and also introducing sleeps, but neither approach solved the problem. Probing the server to verify the connection using getServerInfo() right before performing the sync or delete operations results in a successful connection check. Can someone please point me to where I should look for answers?
Thank you.
Providing the code snippet:
void perforceSync(String srcPath, String destPath, String server) {
// Generating the file(s) to sync-up
String[] pathUnderDepot = [
srcPath + "*"
]
// Increasing timeout from default 30 sec to 60 sec
Properties defaultProps = new Properties()
defaultProps.put(PropertyDefs.PROG_NAME_KEY, "CustomBuildApp")
defaultProps.put(PropertyDefs.PROG_VERSION_KEY, "tv_1.0")
defaultProps.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000")
// Instantiating the server
IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
p4Server.connect()
// Authorizing
p4Server.setUserName("perforceUserName")
p4Server.login("perforcePassword")
// Just check if connected successfully
IServerInfo serverInfo = p4Server.getServerInfo()
println 'Server info: ' + serverInfo.getServerLicense()
// Creating new client
IClient tempClient = new Client()
// Setting up the name and the root folder
tempClient.setName("tempClient" + UUID.randomUUID().toString().replace("-", ""))
tempClient.setRoot(destPath)
tempClient.setServer(p4Server)
// Setting the client as the current one for the server
p4Server.setCurrentClient(tempClient)
// Creating Client View entry
ClientViewMapping tempMappingEntry = new ClientViewMapping()
// Setting up the mapping properties
tempMappingEntry.setLeft(srcPath + "...")
tempMappingEntry.setRight("//" + tempClient.getName() + "/...")
tempMappingEntry.setType(EntryType.INCLUDE)
// Creating Client view
ClientView tempClientView = new ClientView()
// Attaching client view entry to client view
tempClientView.addEntry(tempMappingEntry)
tempClient.setClientView(tempClientView)
// Registering the new client on the server
println p4Server.createClient(tempClient)
// Forming the FileSpec collection to be synced up; declared before the
// try block so the retry in the catch block can still see it
List<IFileSpec> fileSpecsSet = FileSpecBuilder.makeFileSpecList(pathUnderDepot)
// Surrounding the following block with try as we want some action
// (namely client removal) to be performed in any case
try {
// Syncing up the client
println "Syncing..."
tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
}
catch (Exception e) {
println "Sync failed. Trying again..."
sleep(60 * 1000)
tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
}
finally {
println "Done syncing."
try {
p4Server.connect()
IServerInfo serverInfo2 = p4Server.getServerInfo()
println '\nServer info: ' + serverInfo2.getServerLicense()
// Removing the temporary client from the server
println p4Server.deleteClient(tempClient.getName(), false)
}
catch(Exception e) {
println 'Ignoring exception caught while deleting tempClient!'
/*sleep(60 * 1000)
p4Server.connect()
IServerInfo serverInfo3 = p4Server.getServerInfo()
println '\nServer info: ' + serverInfo3.getServerLicense()
sleep(60 * 1000)
println p4Server.deleteClient(tempClient.getName(), false)*/
}
}
}
One unusual thing I observed while deleting tempClient was that it actually deleted the client but still threw "java.net.SocketTimeoutException: Read timed out", which is why I ended up commenting out the second delete attempt in the inner catch block.
Which version of P4Java are you using? Have you tried this with the newest P4Java? There are notable fixes dealing with RPC sockets from the 2013.2 version onward, as can be seen in the release notes:
http://www.perforce.com/perforce/doc.current/user/p4javanotes.txt
Here are some variations you can try in the part of your code that increases the timeout and instantiates the server:
a] Have you tried passing props as its own argument? For example:
Properties prop = new Properties();
prop.setProperty(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "300000");
UsageOptions uop = new UsageOptions(prop);
server = ServerFactory.getOptionsServer(ServerFactory.DEFAULT_PROTOCOL_NAME + "://" + serverPort, prop, uop);
Or something like the following:
IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
You can also set the timeout to "0" to give it no timeout.
b]
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
props.put(RpcPropertyDefs.RPC_SOCKET_POOL_SIZE_NICK, "5");
c]
Properties props = System.getProperties();
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
IOptionsServer server =
ServerFactory.getOptionsServer("p4java://perforce:1666", props, null);
d] In case you have Eclipse users using our P4Eclipse plugin, the property can be set in the plugin preferences (Team->Perforce->Advanced) under the Custom P4Java Properties.
"sockSoTimeout" : "3000000"
REFERENCES
Class RpcPropertyDefs
http://perforce.com/perforce/doc.current/manuals/p4java-javadoc/com/perforce/p4java/impl/mapbased/rpc/RpcPropertyDefs.html
P4Eclipse or P4Java: SocketTimeoutException: Read timed out
http://answers.perforce.com/articles/KB/8044
I'm getting this error when reading from a table in a 5-node cluster using the DataStax drivers.
2015-02-19 03:24:09,908 ERROR [akka.actor.default-dispatcher-9] OneForOneStrategy akka://user/HealthServiceChecker-49e686b9-e189-48e3-9aeb-a574c875a8ab Can't use this Cluster instance because it was previously closed
java.lang.IllegalStateException: Can't use this Cluster instance because it was previously closed
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1128) ~[cassandra-driver-core-2.0.4.jar:na]
at com.datastax.driver.core.Cluster.init(Cluster.java:149) ~[cassandra-driver-core-2.0.4.jar:na]
at com.datastax.driver.core.Cluster.connect(Cluster.java:225) ~[cassandra-driver-core-2.0.4.jar:na]
at com.datastax.driver.core.Cluster.connect(Cluster.java:258) ~[cassandra-driver-core-2.0.4.jar:na]
I am able to connect using cqlsh and perform read operations.
Any clue what could be the problem here?
settings:
Consistency Level: ONE
keyspace replication strategy:
'class': 'NetworkTopologyStrategy',
'DC2': '1',
'DC1': '1'
cassandra version: 2.0.6
The code managing Cassandra sessions is centralized, and it is:
trait ConfigCassandraCluster extends CassandraCluster {
  def cassandraConf: CassandraConfig
  lazy val port = cassandraConf.port
  lazy val host = cassandraConf.host
  lazy val cluster: Cluster =
    Cluster.builder()
      .addContactPoints(host)
      .withReconnectionPolicy(new ExponentialReconnectionPolicy(100, 30000))
      .withPort(port)
      .withSocketOptions(new SocketOptions().setKeepAlive(true))
      .build()
  lazy val keyspace = cassandraConf.keyspace
  private lazy val casSession = cluster.connect(keyspace)
  val session = new SessionProvider(casSession)
}

class SessionProvider(casSession: => Session) extends Logging {
  var lastSuccessful: Long = 0
  var firstSuccessful: Long = -1

  def apply[T](fn: Session => T): T = {
    val result = retry(fn, 15)
    if (firstSuccessful < 0)
      firstSuccessful = System.currentTimeMillis()
    lastSuccessful = System.currentTimeMillis()
    result
  }

  private def retry[T](fn: Session => T, remainingAttempts: Int): T = {
    //retry logic
  }
}
The problem is that cluster.connect(keyspace) will close the Cluster instance itself if it experiences a NoHostAvailableException. Because of that, your retry logic runs into the IllegalStateException.
Have a look at the Cluster init() method and you will understand more.
The solution to your problem would be, in the retry logic, to do Cluster.builder.addContactPoint(node).build.connect(keyspace). This gives you a new Cluster object on each retry, as sketched below.
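A minimal sketch of that idea in Java; the class and method names, the attempt count, and the parameters are illustrative, not part of the driver API:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public final class RetryConnect {

    // Build a fresh Cluster on every attempt: a Cluster whose connect()
    // failed with NoHostAvailableException is closed and cannot be reused.
    static Session connectWithRetry(String node, String keyspace, int attempts) {
        try {
            return Cluster.builder()
                    .addContactPoint(node)
                    .build()
                    .connect(keyspace);
        } catch (NoHostAvailableException e) {
            if (attempts <= 1) {
                throw e;
            }
            return connectWithRetry(node, keyspace, attempts - 1);
        }
    }
}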
Search your code for session.close(). You are closing your connection somewhere, as stated in the comments. Once a session is closed, it can't be used again. Instead of closing connections, pool them to allow for reuse; a sketch of that follows.
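For illustration, a minimal sketch of keeping one Cluster/Session for the lifetime of the application; the class name and contact point are assumptions:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Holds a single long-lived Cluster/Session that the whole application
// reuses; the driver pools connections internally, so the session should
// be closed only on shutdown.
public final class CassandraSessionHolder {

    private static final Cluster CLUSTER = Cluster.builder()
            .addContactPoint("127.0.0.1") // assumed contact point
            .build();
    private static final Session SESSION = CLUSTER.connect();

    private CassandraSessionHolder() {}

    public static Session session() {
        return SESSION;
    }

    public static void shutdown() {
        SESSION.close();
        CLUSTER.close();
    }
}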