I'm creating an application-scoped mongo client to reuse it in my app. The first time the mongo client is accessed, I see the following log entries in my server log, indicating that three clients are created. The third client is the one created in application scope, as you can see from the preceding info message.
I can additionally see in the mongodb logs that two extra connections are opened and are neither closed nor reused by subsequent calls.
[08.09.14 20:09:57:060 CEST] 000000c9 Mongo I Creating Mongo instance (driver version 2.11.4) with authority MongoAuthority{type=Direct, serverAddresses=[/127.0.0.1:27017], credentials={credentials={}}} and options MongoOptions{description='null', connectionsPerHost=100, threadsAllowedToBlockForConnectionMultiplier=5, maxWaitTime=120000, connectTimeout=10000, socketTimeout=0, socketKeepAlive=false, autoConnectRetry=false, maxAutoConnectRetryTime=0, slaveOk=false, readPreference=primary, dbDecoderFactory=DefaultDBDecoder.DefaultFactory, dbEncoderFactory=DefaultDBEncoder.DefaultFactory, safe=false, w=0, wtimeout=0, fsync=false, j=false, socketFactory=javax.net.DefaultSocketFactory#f6fb9709, cursorFinalizerEnabled=true, writeConcern=WriteConcern { "getlasterror" : 1} / (Continue Inserting on Errors? false), alwaysUseMBeans=false}
[08.09.14 20:09:57:060 CEST] 000001ba Mongo I Creating Mongo instance (driver version 2.11.4) with authority MongoAuthority{type=Direct, serverAddresses=[/127.0.0.1:27017], credentials={credentials={}}} and options MongoOptions{description='null', connectionsPerHost=100, threadsAllowedToBlockForConnectionMultiplier=5, maxWaitTime=120000, connectTimeout=10000, socketTimeout=0, socketKeepAlive=false, autoConnectRetry=false, maxAutoConnectRetryTime=0, slaveOk=false, readPreference=primary, dbDecoderFactory=DefaultDBDecoder.DefaultFactory, dbEncoderFactory=DefaultDBEncoder.DefaultFactory, safe=false, w=0, wtimeout=0, fsync=false, j=false, socketFactory=javax.net.DefaultSocketFactory#f6fb9709, cursorFinalizerEnabled=true, writeConcern=WriteConcern { "getlasterror" : 1} / (Continue Inserting on Errors? false), alwaysUseMBeans=false}
[08.09.14 20:09:57:070 CEST] 000000c9 mongodb I multiple Mongo instances for same host, jmx numbers might be off
[08.09.14 20:09:57:070 CEST] 000001ba mongodb I multiple Mongo instances for same host, jmx numbers might be off
[08.09.14 20:09:57:111 CEST] 000001ba SystemOut O INFO MongoDBConnection initializeClient - Creating mongo client for localhost:27017
[08.09.14 20:09:57:111 CEST] 000001ba Mongo I Creating Mongo instance (driver version 2.11.4) with authority MongoAuthority{type=Direct, serverAddresses=[localhost/127.0.0.1:27017], credentials={credentials={}}} and options MongoOptions{description='null', connectionsPerHost=100, threadsAllowedToBlockForConnectionMultiplier=5, maxWaitTime=120000, connectTimeout=10000, socketTimeout=0, socketKeepAlive=false, autoConnectRetry=false, maxAutoConnectRetryTime=0, slaveOk=false, readPreference=primary, dbDecoderFactory=DefaultDBDecoder.DefaultFactory, dbEncoderFactory=DefaultDBEncoder.DefaultFactory, safe=false, w=0, wtimeout=0, fsync=false, j=false, socketFactory=javax.net.DefaultSocketFactory#f6fb9709, cursorFinalizerEnabled=true, writeConcern=WriteConcern { "getlasterror" : 1} / (Continue Inserting on Errors? false), alwaysUseMBeans=false}
Please also note the different server addresses in the logs: serverAddresses=[localhost/127.0.0.1:27017] and serverAddresses=[/127.0.0.1:27017]. I'm setting localhost as the host name in my code.
Below is the producer method for the mongo client.
@Produces
@ApplicationScoped
public MongoClient initializeClient() {
    log.info(String.format("Creating mongo client for %s:%s", host, port));
    MongoClient client = null;
    try {
        client = new MongoClient(host, port);
    }
    catch (UnknownHostException e) {
        log.error(e);
    }
    return client;
}
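For completeness, a CDI producer like this is usually paired with a disposer method that closes the client when the application scope is destroyed; a minimal sketch of what that could look like (this method is not in my class yet, it only illustrates the idea):

public void closeClient(@Disposes MongoClient client) {
    // Called by the CDI container when the application-scoped client is destroyed.
    client.close();
}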
Does anyone know what causes these two other instances to be created, and how can I prevent that?
Full code:
public class SolrToMongodb {
    private static final Logger LOGGER = LoggerFactory.getLogger(SolrToMongodb.class);

    public static void main(String[] args) throws IOException, SolrServerException {
        SolrToMongodb main = new SolrToMongodb();
        main.run();
    }

    public void run() throws IOException, SolrServerException {
        SparkConfig config = new SparkConfig();
        JavaSparkContext jsc = new JavaSparkContext(config.sparkConf("admiralty-stream"));
        SolrClient client = new HttpSolrClient(Constant.SOLR_STREAMING);
        SolrQuery q = new SolrQuery();
        q.set("q", "*:*");
        q.set("indent", "on");
        q.set("wt", "json");
        client.query(q);
        try {
            CloudSolrClient cloudSolrClient = new CloudSolrClient(Constant.ZOOKEEPER_SOLR);
            SolrJavaRDD solrRDD = SolrJavaRDD.get(cloudSolrClient.getZkHost(), "admiraltyStream", jsc.sc());
            JavaRDD<SolrDocument> resultsRDD = solrRDD.queryShards(q);
            JavaRDD<Object> objectJavaRDD = resultsRDD.map(new Function<SolrDocument, Object>() {
                @Override
                public Object call(SolrDocument v1) throws Exception {
                    System.out.println(v1.getFieldValueMap());
                    return v1.getFieldValueMap();
                }
            });
        }
        catch (Exception e) {
            System.out.println("Exception here : " + e.getMessage());
        }
    }
}
ERROR LOG:
2017-08-02 10:02:58,709 [main] ERROR CloudSolrClient - Request to collection admiraltyStream failed due to (0) java.net.ConnectException: Connection refused (Connection refused), retry? 0
2017-08-02 10:02:59,688 [main] ERROR CloudSolrClient - Request to collection admiraltyStream failed due to (0) java.net.ConnectException: Connection refused (Connection refused), retry? 1
2017-08-02 10:03:01,630 [main] ERROR CloudSolrClient - Request to collection admiraltyStream failed due to (0) java.net.ConnectException: Connection refused (Connection refused), retry? 2
2017-08-02 10:03:02,579 [main] ERROR CloudSolrClient - Request to collection admiraltyStream failed due to (0) java.net.ConnectException: Connection refused (Connection refused), retry? 3
2017-08-02 10:03:03,540 [main] ERROR CloudSolrClient - Request to collection admiraltyStream failed due to (0) java.net.ConnectException: Connection refused (Connection refused), retry? 4
2017-08-02 10:03:04,484 [main] ERROR CloudSolrClient - Request to collection admiraltyStream failed due to (0) java.net.ConnectException: Connection refused (Connection refused), retry? 5
Exception:
Exception here : No live SolrServers available to handle this request:[http://xxx.xxx.ph:8983/solr/admiraltyStream, http://xxx.xxx.ph:8983/solr/admiraltyStream, http://xxx.xxx.ph:8983/solr/admiraltyStream]
Using CloudSolrClient instead of HttpSolrClient allows SolrJ to do round-robin load balancing between the available Solr servers, and it is of course the recommended client in a SolrCloud context. The "No live SolrServers" message indicates a problem with the collection admiraltyStream.
Specifically, behind the scenes SolrJ uses LBHttpSolrClient (which wraps a set of HttpSolrClient instances) to round-robin requests between shards. I think your problem is actually this: some shard is not available (i.e. neither its leader nor its replicas are up).
I would review the currently online replicas (http://solr.server:8983/#/~cloud): there you should see whether all the replicas for your collection are online.
There must be at least one replica per shard, and I think that in your case:
- the node you're connecting to directly with HttpSolrClient is up and running
- when using CloudSolrClient (i.e. Zookeeper -> Solr), there's something wrong in your cluster state: Solr believes there are no replicas for at least one shard, so the list of available HttpSolrClient instances within a given LBHttpSolrClient instance is empty
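You can also check this from code rather than the admin UI by inspecting the cluster state that CloudSolrClient reads from Zookeeper. A rough sketch, assuming a SolrJ version where CloudSolrClient still accepts the Zookeeper host string in its constructor (as in your code); the Zookeeper address below is a placeholder:

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;

public class ClusterStateCheck {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient("zkhost:2181")) {
            client.connect();
            ClusterState state = client.getZkStateReader().getClusterState();
            DocCollection collection = state.getCollection("admiraltyStream");
            // A shard with no active replica is what produces "No live SolrServers available".
            for (Slice shard : collection.getSlices()) {
                for (Replica replica : shard.getReplicas()) {
                    System.out.printf("shard=%s replica=%s node=%s state=%s%n",
                            shard.getName(), replica.getName(),
                            replica.getNodeName(), replica.getState());
                }
            }
        }
    }
}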
I'm trying to set up an Ignite cluster with SSL encryption in my Spring application.
My goal is to set up a replicated cache across several nodes.
We deploy our application into Tomcat 8 and set environment variables for our key- and truststore at Tomcat startup.
I want to start Ignite embedded in my Spring application, so I create a bean which returns a CacheManager.
@Bean
public SpringCacheManager replicatedCache() {
    int[] eventTypes = new int[] {EventType.EVT_CACHE_ENTRY_EVICTED, EventType.EVT_CACHE_OBJECT_REMOVED,
            EventType.EVT_CACHE_ENTRY_DESTROYED, EventType.EVT_CACHE_OBJECT_EXPIRED};
    SpringCacheManager cacheManager = new SpringCacheManager();
    IgniteConfiguration configuration = new IgniteConfiguration();
    configuration.setIncludeEventTypes(eventTypes);
    configuration.setGridName("igniteCluster");
    Slf4jLogger logger = new Slf4jLogger(LoggerFactory.getLogger(IGNITE_CACHE_LOGGER_NAME));
    configuration.setGridLogger(logger);
    CacheConfiguration cacheConfiguration1 = new CacheConfiguration();
    cacheConfiguration1.setName("replicatedCache");
    cacheConfiguration1.setCacheMode(CacheMode.REPLICATED);
    cacheConfiguration1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    configuration.setCacheConfiguration(cacheConfiguration1);
    configuration.setSslContextFactory(() -> {
        try {
            return SSLContext.getDefault();
        } catch (NoSuchAlgorithmException e) {
            throw new WA3InternalErrorException("Could not create SSLContext", e);
        }
    });
    configuration.setLocalHost(env.getProperty("caching.localBind", "0.0.0.0"));
    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
    List<String> nodes = Arrays.stream(env.getRequiredProperty("caching.nodes").split(",")).collect(Collectors.toList());
    ipFinder.setAddresses(nodes);
    TcpDiscoverySpi spi = new TcpDiscoverySpi();
    spi.setIpFinder(ipFinder);
    configuration.setDiscoverySpi(spi);
    TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
    communicationSpi.setLocalPort(env.getRequiredProperty("caching.localPort", Integer.class));
    communicationSpi.setConnectTimeout(100000); // Line added in first edit
    configuration.setCommunicationSpi(communicationSpi);
    IgnitePredicate<? extends CacheEvent> localEvent = event -> {
        System.out.println(event);
        return true;
    };
    Map<IgnitePredicate<? extends Event>, int[]> ignitePredicateIntegerMap = Collections.singletonMap(localEvent, eventTypes);
    configuration.setLocalEventListeners(ignitePredicateIntegerMap);
    cacheManager.setConfiguration(configuration);
    return cacheManager;
}
As you can see, I also configure Ignite here:
I bind to the IP address of the server and set a port (47100, which is the same as the default port) on the CommunicationSpi.
I am using SSLContext.getDefault() here, so it uses the default key- and truststores.
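For reference, Ignite also ships its own org.apache.ignite.ssl.SslContextFactory, which can point at explicit stores instead of the JVM defaults; a minimal sketch, where the file paths and passwords are placeholders rather than values from our setup:

import org.apache.ignite.ssl.SslContextFactory;

SslContextFactory sslFactory = new SslContextFactory();
sslFactory.setKeyStoreFilePath("/path/to/keystore.jks");              // placeholder path
sslFactory.setKeyStorePassword("keystorePassword".toCharArray());     // placeholder password
sslFactory.setTrustStoreFilePath("/path/to/truststore.jks");          // placeholder path
sslFactory.setTrustStorePassword("truststorePassword".toCharArray()); // placeholder password
configuration.setSslContextFactory(sslFactory);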
Everything works when SSL is disabled (i.e. when no SslContextFactory is set).
But as soon as I set the factory, the nodes can still find each other, but they can't communicate.
The metrics log looks fine, 2 nodes as expected:
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=41687971, name=igniteCluster, uptime=00:54:00:302]
^-- H/N/C [hosts=2, nodes=2, CPUs=4]
^-- CPU [cur=33.5%, avg=36.96%, GC=0%]
^-- Heap [used=193MB, free=85.51%, comm=627MB]
^-- Non heap [used=125MB, free=-1%, comm=127MB]
^-- Public thread pool [active=0, idle=2, qSize=0]
^-- System thread pool [active=0, idle=7, qSize=0]
^-- Outbound messages queue [size=0]
What I can see so far is that Ignite tries to connect on a port, fails, increments that port, and tries again.
2017-05-02T08:15:35,154 [] [] [grid-nio-worker-tcp-comm-1-#18%igniteCluster%] WARN org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi [warning():104] [] - Communication SPI session write timed out (consider increasing 'socketWriteTimeout' configuration property) [remoteAddr=/10.30.0.106:53603, writeTimeout=2000]
2017-05-02T08:15:39,192 [] [] [grid-nio-worker-tcp-comm-2-#19%igniteCluster%] WARN org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi [warning():104] [] - Communication SPI session write timed out (consider increasing 'socketWriteTimeout' configuration property) [remoteAddr=/10.30.0.106:53604, writeTimeout=2000]
I don't know what port that is.
I have restarted all nodes several times and it looks like it is starting at a random port between 30000 and 50000.
My final questions are:
What am I missing here?
Why does my SSL connection not work?
Regards
I have increased the timeout, as Valentin suggested, but I still have problems with my cluster.
2017-05-03T12:19:29,429 [] [] [localhost-startStop-1] WARN org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager [warning():104] [] - Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
I get these log messages on the node which tries to connect to the cluster.
Try to increase socketWriteTimeout, as the error message suggests. SSL connections are slower, and there is a chance that the default values are not enough for them in your network.
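For example, on the communication SPI from your configuration (the 10-second value is only an illustration; tune it for your network):

communicationSpi.setSocketWriteTimeout(10000); // default is 2000 ms, which is what the log shows timing out
configuration.setCommunicationSpi(communicationSpi);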
I have a Java application that schedules a cron job every minute. It runs on GlassFish 4. We are using Hibernate with a container-managed JTA entity manager for executing queries on a SQL Server database.
JDBC connection pool settings are:
Initial and Minimum Pool Size: 16
Maximum Pool Size: 64
Pool Resize Quantity: 4
Idle Timeout: 300 seconds
Max Wait Time: 60000 ms
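For reference, these admin-console settings map onto the following pool attributes in asadmin/domain.xml (my-pool is a placeholder for the actual pool name; this is just a sketch of the mapping, not commands I have run):

asadmin set resources.jdbc-connection-pool.my-pool.steady-pool-size=16
asadmin set resources.jdbc-connection-pool.my-pool.max-pool-size=64
asadmin set resources.jdbc-connection-pool.my-pool.pool-resize-quantity=4
asadmin set resources.jdbc-connection-pool.my-pool.idle-timeout-in-seconds=300
asadmin set resources.jdbc-connection-pool.my-pool.max-wait-time-in-millis=60000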
JDBC connection pool statistics after a 22-hour run:
NumConnUsed: 0
NumConnAcquired: 14404
NumConnReleased: 14404
NumConnCreated: 16
NumConnFree: 16
The number of acquired connections keeps incrementing, and GlassFish 4 crashes after around 10 days with the exception below.
RAR5117 : Failed to obtain/create connection from connection pool [ com.beonic.tiv5 ]. Reason : com.sun.appserv.connectors.internal.api.PoolingException: java.lang.RuntimeException: Got exception during XAResource.start:
Please suggest how to avoid the GlassFish crash.
finally
{
    em = null;
    ic = null;
}
I think the problem is here: you are never committing or closing the transaction.
Compare with this example from the JTA documentation (see section 5.2.2):
// BMT idiom
@Resource public UserTransaction utx;
@Resource public EntityManagerFactory factory;

public void doBusiness() {
    EntityManager em = factory.createEntityManager();
    try {
        // do some work
        ...
        utx.commit();
    }
    catch (RuntimeException e) {
        if (utx != null) utx.rollback();
        throw e; // or display error message
    }
    finally {
        em.close();
    }
}
This is the correct way of handling a transaction. You, however, are only setting the references to null and nothing more, which is why your pooled connections are not being closed.
Here is more documentation about Transactions
It's hard to tell what the real cause of the problem is, but it might be that all your connections have become stale because they were not used for a long time.
It is good practice to set up connection validation, which ensures that connections are reopened when they have been closed by the database server.
There is a thorough article about connection pools in GlassFish/Payara; check out especially the section about connection validation (it uses Derby DB in the example):
To turn on connection validation:
asadmin set resources.jdbc-connection-pool.test-pool.connection-validation-method=custom-validation
asadmin set resources.jdbc-connection-pool.test-pool.validation-classname=org.glassfish.api.jdbc.validation.DerbyConnectionValidation
asadmin set resources.jdbc-connection-pool.test-pool.is-connection-validation-required=true
I want to test whether my MongoClient has authenticated correctly using the default authentication mechanism, and if it hasn't, I want to try connecting with a different one.
I have the following code for opening a mongo connection, which uses MONGODB-CR:
MongoCredential credential = MongoCredential.createMongoCRCredential(this.user, this.db, this.pass.toCharArray());
ServerAddress serverAddress = new ServerAddress(this.host, this.port.intValue());
MongoClient mongoClient = new MongoClient(serverAddress, Arrays.asList(new MongoCredential[]{credential}));
I need to find a way to test whether authentication succeeded. If it did not, I want to try a different mechanism, SCRAM-SHA-1. The problem is that I don't know how to test the connection. Calling most MongoClient methods leads to a MongoCommandException:
com.mongodb.MongoSecurityException: Exception authenticating
at com.mongodb.connection.NativeAuthenticator.authenticate(NativeAuthenticator.java:48)
at com.mongodb.connection.InternalStreamConnectionInitializer.authenticateAll(InternalStreamConnectionInitializer.java:99)
at com.mongodb.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:44)
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115)
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:128)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.mongodb.MongoCommandException: Command failed with error 18: 'auth failed' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "auth failed", "code" : 18 }
at com.mongodb.connection.CommandHelper.createCommandFailureException(CommandHelper.java:170)
at com.mongodb.connection.CommandHelper.receiveCommandResult(CommandHelper.java:123)
at com.mongodb.connection.CommandHelper.executeCommand(CommandHelper.java:32)
at com.mongodb.connection.NativeAuthenticator.authenticate(NativeAuthenticator.java:46)
... 5 common frames omitted
But this is actually not an exception I can catch; it's just information printed in the logs.
Any ideas on how to test whether the MongoClient has authenticated correctly?
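One approach I can think of is to force an authenticated round-trip right after constructing the client, for example with a ping command, and catch the failure there. A rough sketch using the same fields as above, assuming the 3.x driver from the stack trace (the exception that actually propagates may be a MongoTimeoutException wrapping the authentication error, so the client's timeout settings matter):

import java.util.Arrays;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
import com.mongodb.MongoException;
import com.mongodb.ServerAddress;

MongoCredential credential = MongoCredential.createMongoCRCredential(user, db, pass.toCharArray());
ServerAddress serverAddress = new ServerAddress(host, port);
MongoClient mongoClient = new MongoClient(serverAddress, Arrays.asList(credential));
try {
    // Forces server selection and authentication instead of waiting for the first real query.
    mongoClient.getDatabase(db).runCommand(new Document("ping", 1));
} catch (MongoException e) {
    // MONGODB-CR failed; close this client and retry with SCRAM-SHA-1.
    mongoClient.close();
    credential = MongoCredential.createScramSha1Credential(user, db, pass.toCharArray());
    mongoClient = new MongoClient(serverAddress, Arrays.asList(credential));
}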
I have the following problem:
I am trying to connect to an ActiveMQ broker (which is currently down) using the following piece of code:
connectionFactory = new ActiveMQConnectionFactory(this.url + "?timeout=2000");
connection = connectionFactory.createConnection();
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
LOGGER.info("Connected to " + this.url);
The problem is that the timeout does not have any effect: connection.start() blocks forever.
I inspected ActiveMQ log and found the following info:
2013-12-20 01:49:03,149 DEBUG [ActiveMQ Task-1] (FailoverTransport.java:786) - urlList connectionList:[tcp://localhost:61616?timeout=2000], from: [tcp://localhost:61616?timeout=2000]
2013-12-20 01:49:03,149 DEBUG [ActiveMQ Task-1] (FailoverTransport.java:1040) - Connect fail to: tcp://localhost:61616?timeout=2000, reason: java.lang.IllegalArgumentException: Invalid connect parameters: {timeout=2000}
The timeout parameter is specified here http://activemq.apache.org/cms/configuring.html
Does anybody have any idea how to pass the timeout argument to ActiveMQConnectionFactory?
Or how to set a timeout for connection.start()?
Thank you!
Update: I found this on Stack Overflow: ActiveMQ - CreateSession failover timeout after a connection is resumed. I tried it, but the following exception is thrown:
javax.jms.JMSException: Could not create Transport. Reason: java.lang.IllegalArgumentException: Invalid connect parameters: {transport.timeout=5000}
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:35)
I use ActiveMQ 5.8.0 from the Maven repository.
It appears that your URL is still invalid in both cases when attempting to set the timeout property.
If you're trying to use a failover URL, which it looks like you are since the failover code is being reached, then you're probably looking for initialReconnectDelay (and possibly maxReconnectAttempts, which throws an exception if the server is still down after that number of attempts is reached).
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("failover://(tcp://localhost:61616)?initialReconnectDelay=2000&maxReconnectAttempts=2");