I'm trying to set up an Ignite cluster with SSL encryption in my Spring application.
My goal is to set up a replicated cache across several nodes.
We deploy our application into Tomcat 8 and set the environment variables for our key- and truststore when Tomcat starts.
I want to start Ignite embedded in my Spring application, so I create a bean which returns a CacheManager.
@Bean
public SpringCacheManager replicatedCache() {
int[] eventTypes = new int[] {EventType.EVT_CACHE_ENTRY_EVICTED, EventType.EVT_CACHE_OBJECT_REMOVED, EventType.EVT_CACHE_ENTRY_DESTROYED, EventType.EVT_CACHE_OBJECT_EXPIRED};
SpringCacheManager cacheManager = new SpringCacheManager();
IgniteConfiguration configuration = new IgniteConfiguration();
configuration.setIncludeEventTypes(eventTypes);
configuration.setGridName("igniteCluster");
Slf4jLogger logger = new Slf4jLogger(LoggerFactory.getLogger(IGNITE_CACHE_LOGGER_NAME));
configuration.setGridLogger(logger);
CacheConfiguration cacheConfiguration1 = new CacheConfiguration();
cacheConfiguration1.setName("replicatedCache");
cacheConfiguration1.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
configuration.setCacheConfiguration(cacheConfiguration1);
configuration.setSslContextFactory(() -> {
try {
return SSLContext.getDefault();
} catch (NoSuchAlgorithmException e) {
throw new WA3InternalErrorException("Could not create SSLContext", e);
}
});
configuration.setLocalHost(env.getProperty("caching.localBind", "0.0.0.0"));
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
List<String> nodes = Arrays.stream(env.getRequiredProperty("caching.nodes").split(",")).collect(Collectors.toList());
ipFinder.setAddresses(nodes);
TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setIpFinder(ipFinder);
configuration.setDiscoverySpi(spi);
TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
communicationSpi.setLocalPort(env.getRequiredProperty("caching.localPort", Integer.class));
communicationSpi.setConnectTimeout(100000); // Line added in first edit
configuration.setCommunicationSpi(communicationSpi);
IgnitePredicate<? extends CacheEvent> localEvent = event -> {
System.out.println(event);
return true;
};
Map<IgnitePredicate<? extends Event>, int[]> ignitePredicateIntegerMap = Collections.singletonMap(localEvent, eventTypes);
configuration.setLocalEventListeners(ignitePredicateIntegerMap);
cacheManager.setConfiguration(configuration);
return cacheManager;
}
As you can see, I also configure Ignite here.
I bind to the IP address of the server and set a port (47100, the same as the default) on the CommunicationSpi.
I am using SSLContext.getDefault() here, so it uses the default key- and truststores.
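(For reference, SSLContext.getDefault() builds the default context from the standard javax.net.ssl.keyStore / javax.net.ssl.trustStore system properties. A rough sketch of the explicit alternative using Ignite's own org.apache.ignite.ssl.SslContextFactory, with placeholder paths and passwords, would be:)
// Hypothetical explicit SSL setup; the paths and passwords are placeholders, not our real values.
SslContextFactory igniteSslFactory = new SslContextFactory();
igniteSslFactory.setKeyStoreFilePath("/opt/certs/node.jks");
igniteSslFactory.setKeyStorePassword("changeit".toCharArray());
igniteSslFactory.setTrustStoreFilePath("/opt/certs/truststore.jks");
igniteSslFactory.setTrustStorePassword("changeit".toCharArray());
configuration.setSslContextFactory(igniteSslFactory);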
Everything works when SSL is disabled (i.e. when no SSLContextFactory is set).
But as soon as I set the factory, the nodes can still discover each other, but they can't communicate.
The metrics log looks fine, 2 nodes as expected:
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=41687971, name=igniteCluster, uptime=00:54:00:302]
^-- H/N/C [hosts=2, nodes=2, CPUs=4]
^-- CPU [cur=33.5%, avg=36.96%, GC=0%]
^-- Heap [used=193MB, free=85.51%, comm=627MB]
^-- Non heap [used=125MB, free=-1%, comm=127MB]
^-- Public thread pool [active=0, idle=2, qSize=0]
^-- System thread pool [active=0, idle=7, qSize=0]
^-- Outbound messages queue [size=0]
What I can see so far is that Ignite tries to connect on a port, fails, increments the port, and tries again.
2017-05-02T08:15:35,154 [] [] [grid-nio-worker-tcp-comm-1-#18%igniteCluster%] WARN org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi [warning():104] [] - Communication SPI session write timed out (consider increasing 'socketWriteTimeout' configuration property) [remoteAddr=/10.30.0.106:53603, writeTimeout=2000]
2017-05-02T08:15:39,192 [] [] [grid-nio-worker-tcp-comm-2-#19%igniteCluster%] WARN org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi [warning():104] [] - Communication SPI session write timed out (consider increasing 'socketWriteTimeout' configuration property) [remoteAddr=/10.30.0.106:53604, writeTimeout=2000]
I don't know what port that is.
I have restarted all nodes several times, and it looks like it starts at a random port between 30000 and 50000.
My final questions are:
What am I missing here?
Why does my SSL connection not work?
Regards
I have increased the timeout, as Valentin suggested, but I still have problems with my cluster.
2017-05-03T12:19:29,429 [] [] [localhost-startStop-1] WARN org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager [warning():104] [] - Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
I get these log messages on the node which tries to connect to the cluster.
Try increasing socketWriteTimeout, as the error message suggests. SSL connections are slower, and there is a chance that the default values are not sufficient for your network.
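For example, a minimal sketch against the configuration above (the 10-second values are only an illustration, tune them for your network):
TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
// Give SSL handshakes and writes more headroom than the 2000 ms default.
communicationSpi.setSocketWriteTimeout(10_000);
communicationSpi.setConnectTimeout(10_000);
configuration.setCommunicationSpi(communicationSpi);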
Related
I'm trying to use Apache Ignite, and I'm generating my node configuration using the Ignite Web Console.
I needed to configure 2 caches backed by a database and enable persistent storage, since the two tables have a lot of data.
Here is what I have done (generated by the console):
/**
* Configure grid.
*
* @return Ignite configuration.
* @throws Exception If failed to construct Ignite configuration instance.
*/
public static IgniteConfiguration createConfiguration() throws Exception {
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setIgniteInstanceName("attiryak");
TcpDiscoverySpi discovery = new TcpDiscoverySpi();
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
discovery.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discovery);
AtomicConfiguration atomicCfg = new AtomicConfiguration();
atomicCfg.setCacheMode(CacheMode.LOCAL);
cfg.setAtomicConfiguration(atomicCfg);
DataStorageConfiguration dataStorageCfg = new DataStorageConfiguration();
dataStorageCfg.setPageSize(16384);
dataStorageCfg.setConcurrencyLevel(2);
dataStorageCfg.setSystemRegionInitialSize(52428800L);
dataStorageCfg.setSystemRegionMaxSize(209715200L);
DataRegionConfiguration dataRegionCfg = new DataRegionConfiguration();
dataRegionCfg.setInitialSize(536870912L);
dataRegionCfg.setMaxSize(1073741824L);
dataRegionCfg.setMetricsEnabled(true);
dataRegionCfg.setPersistenceEnabled(true);
dataStorageCfg.setDefaultDataRegionConfiguration(dataRegionCfg);
cfg.setDataStorageConfiguration(dataStorageCfg);
cfg.setCacheConfiguration(
cacheMInoutlineCache(),
cacheMInoutlineconfirmCache()
);
return cfg;
}
public static CacheConfiguration cacheMInoutlineCache() throws Exception {
CacheConfiguration ccfg = new CacheConfiguration();
ccfg.setName("MInoutlineCache");
ccfg.setCacheMode(CacheMode.LOCAL);
ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
ccfg.setCopyOnRead(true);
CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();
cacheStoreFactory.setDataSourceFactory(new Factory<DataSource>() {
/** {@inheritDoc} */
@Override public DataSource create() {
return DataSources.INSTANCE_dsOracle_Compiere;
}
});
cacheStoreFactory.setDialect(new OracleDialect());
cacheStoreFactory.setTypes(jdbcTypeMInoutline(ccfg.getName()));
ccfg.setCacheStoreFactory(cacheStoreFactory);
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
ArrayList<QueryEntity> qryEntities = new ArrayList<>();
QueryEntity qryEntity = new QueryEntity();
qryEntity.setKeyType("java.lang.Long");
qryEntity.setValueType("com.gmail.talcorpdz.model.MInoutline");
qryEntity.setTableName("M_INOUTLINE");
qryEntity.setKeyFieldName("mInoutlineId");
HashSet<String> keyFields = new HashSet<>();
keyFields.add("mInoutlineId");
qryEntity.setKeyFields(keyFields);
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("adClientId", "java.lang.Long");
qryEntity.setFields(fields);
HashMap<String, String> aliases = new HashMap<>();
aliases.put("mInoutlineId", "M_INOUTLINE_ID");
qryEntity.setAliases(aliases);
ArrayList<QueryIndex> indexes = new ArrayList<>();
QueryIndex index = new QueryIndex();
index.setName("IDX$$_00010002");
index.setIndexType(QueryIndexType.SORTED);
LinkedHashMap<String, Boolean> indFlds = new LinkedHashMap<>();
indFlds.put("mAttributesetinstanceId", false);
indFlds.put("mInoutId", false);
index.setFields(indFlds);
indexes.add(index);
qryEntity.setIndexes(indexes);
qryEntities.add(qryEntity);
ccfg.setQueryEntities(qryEntities);
/**
* @author taleb
*
* spec 1.0 : no schema needed solution
* https://stackoverflow.com/a/58930331/4388228
*/
ccfg.setSqlSchema("PUBLIC");
return ccfg;
}
I believe I am misconfiguring my storage, since configuring it is mandatory for the in-memory data to spill over to my disk space.
Here is the stack trace of the exception:
[11:49:58] __________ ________________
[11:49:58] / _/ ___/ |/ / _/_ __/ __/
[11:49:58] _/ // (7 7 // / / / / _/
[11:49:58] /___/\___/_/|_/___/ /_/ /___/
[11:49:58]
[11:49:58] ver. 2.7.6#20190911-sha1:21f7ca41
[11:49:58] 2019 Copyright(C) Apache Software Foundation
[11:49:58]
[11:49:58] Ignite documentation: http://ignite.apache.org
[11:49:58]
[11:49:58] Quiet mode.
[11:49:58] ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[11:49:58] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[11:49:58]
[11:49:58] OS: Linux 4.19.0-kali5-amd64 amd64
[11:49:58] VM information: Java(TM) SE Runtime Environment 1.8.0_201-b09 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.201-b09
[11:49:58] Please set system property '-Djava.net.preferIPv4Stack=true' to avoid possible problems in mixed environments.
[11:49:58] Initial heap size is 124MB (should be no less than 512MB, use -Xms512m -Xmx512m).
[11:49:58] Configured plugins:
[11:49:58] ^-- None
[11:49:58]
[11:49:58] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
[11:49:59] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[11:49:59] Security status [authentication=off, tls/ssl=off]
[11:49:59] REST protocols do not start on client node. To start the protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system property.
[11:50:00] Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=4979MB, available=7867MB]
[11:50:00] Performance suggestions for grid 'attiryak' (fix if possible)
[11:50:00] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[11:50:00] ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[11:50:00] ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]' to JVM options)
[11:50:00] ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[11:50:00] ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[11:50:00] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[11:50:00]
[11:50:00] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[11:50:00] Data Regions Configured:
[11:50:00] ^-- default [initSize=512.0 MiB, maxSize=1.0 GiB, persistence=true]
[11:50:00]
[11:50:00] Ignite node started OK (id=7ad24962, instance name=attiryak)
[11:50:00] >>> Ignite cluster is not active (limited functionality available). Use control.(sh|bat) script or IgniteCluster interface to activate.
[11:50:00] Topology snapshot [ver=2, locNode=7ad24962, servers=1, clients=1, state=INACTIVE, CPUs=8, offheap=2.0GB, heap=3.4GB]
>> Loading caches...
Nov 19, 2019 11:50:01 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to activate node components [nodeId=7ad24962-e5c8-4f0b-8b99-1c42a3c91c01, client=true, topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1]]
java.lang.ClassCastException: org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl cannot be cast to org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.getOrAllocateCacheMetas(GridCacheOffheapManager.java:728)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.initDataStructures(GridCacheOffheapManager.java:123)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.start(IgniteCacheOffheapManagerImpl.java:196)
at org.apache.ignite.internal.processors.cache.CacheGroupContext.start(CacheGroupContext.java:937)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:2251)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:2146)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processCacheStartRequests(CacheAffinitySharedManager.java:898)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:798)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onClusterStateChangeRequest(GridDhtPartitionsExchangeFuture.java:1114)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:736)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2681)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2553)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
>> Loading cache: MInoutlineCache
Nov 19, 2019 11:50:02 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to process custom exchange task: ClientCacheChangeDummyDiscoveryMessage [reqId=587e6edb-95ee-4208-a525-a35ca441bf7c, cachesToClose=null, startCaches=[MInoutlineCache]]
java.lang.ClassCastException: org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl cannot be cast to org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.getOrAllocateCacheMetas(GridCacheOffheapManager.java:728)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.initDataStructures(GridCacheOffheapManager.java:123)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.start(IgniteCacheOffheapManagerImpl.java:196)
at org.apache.ignite.internal.processors.cache.CacheGroupContext.start(CacheGroupContext.java:937)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:2251)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:2146)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCacheStartRequests(CacheAffinitySharedManager.java:438)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesChanges(CacheAffinitySharedManager.java:637)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:391)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2489)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2634)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2553)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
What should I do to fix this?
This is a known issue, and it stems from the fact that you previously started the same cluster without persistence.
Please remove your Ignite work dir (%TMP%\ignite\work or /tmp/ignite/work or ./ignite/work) and restart your node.
UPD: There is also this issue about LOCAL cache on client node with persistence: IGNITE-11677. My recommendation is to avoid using LOCAL caches at all.
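If you go that route, a rough sketch of the change against the configuration above (swapping LOCAL for a distributed mode; PARTITIONED is just one option) would be:
// Use a distributed cache mode instead of LOCAL when persistence is enabled.
ccfg.setCacheMode(CacheMode.PARTITIONED);
atomicCfg.setCacheMode(CacheMode.PARTITIONED);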
I am experiencing an embedded Infinispan cache issue where nodes time out when re-joining the cluster.
Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 7 from vvshost
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onTimeout(SingleTargetRequest.java:64)
at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:86)
at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:21)
The only way I can get the node to re-join is to switch off the cache and delete all local cache persistence files.
Here is the configuration which I am using:
Transport:
TransportConfigurationBuilder - defaultClusteredBuild
JMX Statistics - Enabled
Duplicate domains - Allowed
Cache Manager:
Manager Class - EmbeddedCacheManager
Memory - Memory Size: 0
Persistence: Single File Store
async: disabled
Clustering Cache Mode - CacheMode.DIST_SYNC
The configuration seems right to me, but the value of remote-timeout is 15000 milliseconds by default. Increase the timeout until you stop getting the error.
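A minimal programmatic sketch of raising it (the 30-second value is only an example):
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Raise the replication timeout for the distributed cache (example value).
Configuration cfg = new ConfigurationBuilder()
    .clustering()
        .cacheMode(CacheMode.DIST_SYNC)
        .remoteTimeout(30_000) // milliseconds; the default is 15_000
    .build();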
Hope it helps
I am trying to connect to Mongo query routers in a test environment (I set up just one query router for the test, pointing to one config server (instead of three), which in turn points to a two-node shard with no replicas). I can insert/fetch documents using the mongo shell (and have verified that the documents are going to the sharded nodes). However, when I try to test the connection to the Mongo database, I get the output copied below (the code being used is also copied underneath). I am using MongoDB v3.2.0 and Java driver v3.2.2 (I am trying to use the async API).
[info] 14:34:44.562 227 [main] MongoAuthentication INFO - testing 1
[info] 14:34:44.595 260 [main] cluster INFO - Cluster created with settings {hosts=[192.168.0.1:27018], mode=MULTIPLE, requiredClusterType=SHARDED, serverSelectionTimeout='30000 ms', maxWaitQueueSize=30}
[info] 14:34:44.595 260 [main] cluster INFO - Adding discovered server 192.168.0.1:27018 to client view of cluster
[info] 14:34:44.652 317 [main] cluster DEBUG - Updating cluster description to {type=SHARDED, servers=[{address=192.168.0.1:27018, type=UNKNOWN, state=CONNECTING}]
[info] Outputting database names:
[info] 14:34:44.660 325 [main] cluster INFO - No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=SHARDED, connectionMode=MULTIPLE, all=[ServerDescription{address=192.168.0.1:27018, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
[info] Counting the number of documents
[info] 14:34:44.667 332 [main] cluster INFO - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=SHARDED, connectionMode=MULTIPLE, all=[ServerDescription{address=192.168.0.1:27018, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
[info] - Count result: 0
[info] 14:34:45.669 1334 [cluster-ClusterId{value='577414c420055e5bc086c255', description='null'}-192.168.0.1:27018] connection DEBUG - Closing connection connectionId{localValue:1}
Part of the code being used:
final MongoClient mongoClient = MongoClientAccessor.INSTANCE.getMongoClientInstance();
final CountDownLatch listDbsLatch = new CountDownLatch(1);
System.out.println("Outputting database names:");
mongoClient.listDatabaseNames().forEach(new Block<String>() {
@Override
public void apply(final String name) {
System.out.println(" - " + name);
}
}, new SingleResultCallback<Void>() {
@Override
public void onResult(final Void result, final Throwable t) {
listDbsLatch.countDown();
}
});
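(The listing stops before the latch is awaited; a minimal sketch of that step, with an arbitrary 60-second timeout and a java.util.concurrent.TimeUnit import assumed, would be:)
// Block the test until the async listDatabaseNames callback has completed.
try {
    if (!listDbsLatch.await(60, TimeUnit.SECONDS)) {
        System.out.println("Timed out waiting for listDatabaseNames to complete");
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}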
The enum being used is responsible for reading config options and passing a MongoClient reference to its caller. The enum itself calls other classes which I can copy as well if needed. I have the following option configured for ReadPreference:
mongo.client.readPreference=PRIMARYPREFERRED
Any thoughts on what I might be doing wrong or might have misinterpreted? The goal is to connect to the shard via the mongos (query router) so that I can insert/fetch documents in the Mongo shard.
The Mongo shard setup (query router, config server and shards with replica sets) was not correctly configured. Ensure that the config server replica set is launched first, that mongos (the query router) is brought up and points to these config servers, that the Mongo shards are brought up, and that the shards are then registered via the query router (mongos) and the collection is enabled for sharding. Obviously, make sure that the driver is connecting to the mongos (query router) process.
This is the basic scenario:
I have a Jetty server running on an AWS T2.medium instance.
This server contains 3 REST services: A, B and C.
My client, which is a Java class running in my local Eclipse:
Creates around 400 objects for service A.
After that it creates 5 B objects for each A (2000).
Finally it creates another 5 C objects for each B (10000).
In the end, we have around 15,000 objects created.
The execution takes 45-50 minutes to create all of the 15,000 objects.
During the first 100 requests everything is good and beautiful; each request takes around 25 to 30 ms to complete.
After 800 requests things are not so good; each request takes around 161 ms to 182 ms.
This time keeps increasing until it reaches 1300 ms to 1321 ms per request. In the end, after around 10,000 requests, each one takes around 3000 ms to complete.
I'm not running multiple threads to do the requests.
I'm not running other services in this VM.
I'm using MySql on a RDS instance.
Oh, and one more thing: if I restart Jetty and try to create the objects again, the behaviour repeats; it starts at 30 ms and climbs to 3000 ms. This makes me think it has something to do with the thread pool in Jetty.
The code I'm using to start Jetty is:
QueuedThreadPool threadPool = new QueuedThreadPool(100);
threadPool.setMinThreads(100);
threadPool.setMaxThreads(700);
threadPool.setMaxIdleTimeMs(3000);
httpServer = new Server(port);
httpServer.setThreadPool(threadPool);
ContextHandlerCollection contexts = new ContextHandlerCollection();
httpServer.setHandler(contexts);
Context root = new Context(contexts, "/", Context.SESSIONS);
GenericWebApplicationContext springContext = new GenericWebApplicationContext();
springContext.setParent(new ClassPathXmlApplicationContext("/education/applicationContext.xml"));
root.setAttribute(WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE,springContext);
Context html = new Context(contexts, "/html", Context.SESSIONS);
ResourceHandler htmlHandler = new ResourceHandler();
htmlHandler.setResourceBase("src/main/webapp");
html.setHandler(htmlHandler);
ServletHolder holder = new ServletHolder(CXFServlet.class);
root.addServlet(holder, "/rest/*");
try {
httpServer.start();
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
and after a couple of minutes I start to see the following exception:
20:30:34.556 [509681393#qtp-1095433972-667] DEBUG org.mortbay.log - EXCEPTION
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196) ~[na:1.7.0_65]
at java.net.SocketInputStream.read(SocketInputStream.java:122) ~[na:1.7.0_65]
at org.mortbay.io.ByteArrayBuffer.readFrom(ByteArrayBuffer.java:382) ~[jetty-6.1.26.jar:6.1.26]
at org.mortbay.io.bio.StreamEndPoint.fill(StreamEndPoint.java:114) ~[jetty-6.1.26.jar:6.1.26]
at org.mortbay.jetty.bio.SocketConnector$Connection.fill(SocketConnector.java:198) [jett6.1.26.jar:6.1.26]
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:290) [jetty-6.1.26.jar:6.1.26]
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) [jetty-6.1.26.jar:6.1.26]
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) [jetty-6.1.26.jar:6.1.26]
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228) [jetty- 6.1.26.jar:6.1.26]
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) [jetty- util-6.1.26.jar:6.1.26]
20:30:34.555 [867433358#qtp-1095433972-664] DEBUG org.mortbay.log - EXCEPTION
20:30:34.563 [509681393#qtp-1095433972-667] DEBUG org.mortbay.log - EOF
I'm creating an application-scoped Mongo client to reuse it in my app. The first time the Mongo client is accessed, I see the following log entries in my server log, indicating that three clients are created. The third client is the one created in application scope, as you can see by the preceding info message.
I can additionally see in the MongoDB logs that 2 additional connections are opened and are not closed or reused by subsequent calls.
[08.09.14 20:09:57:060 CEST] 000000c9 Mongo I Creating Mongo instance (driver version 2.11.4) with authority MongoAuthority{type=Direct, serverAddresses=[/127.0.0.1:27017], credentials={credentials={}}} and options MongoOptions{description='null', connectionsPerHost=100, threadsAllowedToBlockForConnectionMultiplier=5, maxWaitTime=120000, connectTimeout=10000, socketTimeout=0, socketKeepAlive=false, autoConnectRetry=false, maxAutoConnectRetryTime=0, slaveOk=false, readPreference=primary, dbDecoderFactory=DefaultDBDecoder.DefaultFactory, dbEncoderFactory=DefaultDBEncoder.DefaultFactory, safe=false, w=0, wtimeout=0, fsync=false, j=false, socketFactory=javax.net.DefaultSocketFactory#f6fb9709, cursorFinalizerEnabled=true, writeConcern=WriteConcern { "getlasterror" : 1} / (Continue Inserting on Errors? false), alwaysUseMBeans=false}
[08.09.14 20:09:57:060 CEST] 000001ba Mongo I Creating Mongo instance (driver version 2.11.4) with authority MongoAuthority{type=Direct, serverAddresses=[/127.0.0.1:27017], credentials={credentials={}}} and options MongoOptions{description='null', connectionsPerHost=100, threadsAllowedToBlockForConnectionMultiplier=5, maxWaitTime=120000, connectTimeout=10000, socketTimeout=0, socketKeepAlive=false, autoConnectRetry=false, maxAutoConnectRetryTime=0, slaveOk=false, readPreference=primary, dbDecoderFactory=DefaultDBDecoder.DefaultFactory, dbEncoderFactory=DefaultDBEncoder.DefaultFactory, safe=false, w=0, wtimeout=0, fsync=false, j=false, socketFactory=javax.net.DefaultSocketFactory#f6fb9709, cursorFinalizerEnabled=true, writeConcern=WriteConcern { "getlasterror" : 1} / (Continue Inserting on Errors? false), alwaysUseMBeans=false}
[08.09.14 20:09:57:070 CEST] 000000c9 mongodb I multiple Mongo instances for same host, jmx numbers might be off
[08.09.14 20:09:57:070 CEST] 000001ba mongodb I multiple Mongo instances for same host, jmx numbers might be off
[08.09.14 20:09:57:111 CEST] 000001ba SystemOut O INFO MongoDBConnection initializeClient - Creating mongo client for localhost:27017
[08.09.14 20:09:57:111 CEST] 000001ba Mongo I Creating Mongo instance (driver version 2.11.4) with authority MongoAuthority{type=Direct, serverAddresses=[localhost/127.0.0.1:27017], credentials={credentials={}}} and options MongoOptions{description='null', connectionsPerHost=100, threadsAllowedToBlockForConnectionMultiplier=5, maxWaitTime=120000, connectTimeout=10000, socketTimeout=0, socketKeepAlive=false, autoConnectRetry=false, maxAutoConnectRetryTime=0, slaveOk=false, readPreference=primary, dbDecoderFactory=DefaultDBDecoder.DefaultFactory, dbEncoderFactory=DefaultDBEncoder.DefaultFactory, safe=false, w=0, wtimeout=0, fsync=false, j=false, socketFactory=javax.net.DefaultSocketFactory#f6fb9709, cursorFinalizerEnabled=true, writeConcern=WriteConcern { "getlasterror" : 1} / (Continue Inserting on Errors? false), alwaysUseMBeans=false}
Please also note the different server addresses in the logs: serverAddresses=[localhost/127.0.0.1:27017] and serverAddresses=[/127.0.0.1:27017]. I'm setting localhost as the host name in my code.
Below you find the producer method for the mongo client.
@Produces
@ApplicationScoped
public MongoClient initializeClient() {
log.info(String.format("Creating mongo client for %s:%s", host, port));
MongoClient client = null;
try {
client = new MongoClient(host, port);
}
catch (UnknownHostException e) {
log.error(e);
}
return client;
}
Does someone know what causes these two other instances to be created and how can I prevent that?