apache ignite loading cache ClassCastException - java

I'm trying to use Apache Ignite; I'm generating my node configuration with the Ignite Web Console.
I needed to configure two caches loaded from a database, and I enabled persistent storage since the two tables hold a lot of data.
Here is what I have (generated by the console):
/**
 * Configure grid.
 *
 * @return Ignite configuration.
 * @throws Exception If failed to construct Ignite configuration instance.
 **/
public static IgniteConfiguration createConfiguration() throws Exception {
    IgniteConfiguration cfg = new IgniteConfiguration();

    cfg.setClientMode(true);
    cfg.setIgniteInstanceName("attiryak");

    TcpDiscoverySpi discovery = new TcpDiscoverySpi();
    TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
    ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
    discovery.setIpFinder(ipFinder);
    cfg.setDiscoverySpi(discovery);

    AtomicConfiguration atomicCfg = new AtomicConfiguration();
    atomicCfg.setCacheMode(CacheMode.LOCAL);
    cfg.setAtomicConfiguration(atomicCfg);

    DataStorageConfiguration dataStorageCfg = new DataStorageConfiguration();
    dataStorageCfg.setPageSize(16384);
    dataStorageCfg.setConcurrencyLevel(2);
    dataStorageCfg.setSystemRegionInitialSize(52428800L);
    dataStorageCfg.setSystemRegionMaxSize(209715200L);

    DataRegionConfiguration dataRegionCfg = new DataRegionConfiguration();
    dataRegionCfg.setInitialSize(536870912L);
    dataRegionCfg.setMaxSize(1073741824L);
    dataRegionCfg.setMetricsEnabled(true);
    dataRegionCfg.setPersistenceEnabled(true);
    dataStorageCfg.setDefaultDataRegionConfiguration(dataRegionCfg);
    cfg.setDataStorageConfiguration(dataStorageCfg);

    cfg.setCacheConfiguration(
        cacheMInoutlineCache(),
        cacheMInoutlineconfirmCache()
    );

    return cfg;
}
public static CacheConfiguration cacheMInoutlineCache() throws Exception {
    CacheConfiguration ccfg = new CacheConfiguration();

    ccfg.setName("MInoutlineCache");
    ccfg.setCacheMode(CacheMode.LOCAL);
    ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
    ccfg.setCopyOnRead(true);

    CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();
    cacheStoreFactory.setDataSourceFactory(new Factory<DataSource>() {
        /** {@inheritDoc} **/
        @Override public DataSource create() {
            return DataSources.INSTANCE_dsOracle_Compiere;
        }
    });
    cacheStoreFactory.setDialect(new OracleDialect());
    cacheStoreFactory.setTypes(jdbcTypeMInoutline(ccfg.getName()));
    ccfg.setCacheStoreFactory(cacheStoreFactory);

    ccfg.setReadThrough(true);
    ccfg.setWriteThrough(true);

    ArrayList<QueryEntity> qryEntities = new ArrayList<>();
    QueryEntity qryEntity = new QueryEntity();
    qryEntity.setKeyType("java.lang.Long");
    qryEntity.setValueType("com.gmail.talcorpdz.model.MInoutline");
    qryEntity.setTableName("M_INOUTLINE");
    qryEntity.setKeyFieldName("mInoutlineId");

    HashSet<String> keyFields = new HashSet<>();
    keyFields.add("mInoutlineId");
    qryEntity.setKeyFields(keyFields);

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("adClientId", "java.lang.Long");
    qryEntity.setFields(fields);

    HashMap<String, String> aliases = new HashMap<>();
    aliases.put("mInoutlineId", "M_INOUTLINE_ID");
    qryEntity.setAliases(aliases);

    ArrayList<QueryIndex> indexes = new ArrayList<>();
    QueryIndex index = new QueryIndex();
    index.setName("IDX$$_00010002");
    index.setIndexType(QueryIndexType.SORTED);
    LinkedHashMap<String, Boolean> indFlds = new LinkedHashMap<>();
    indFlds.put("mAttributesetinstanceId", false);
    indFlds.put("mInoutId", false);
    index.setFields(indFlds); // attach the indexed fields
    indexes.add(index);       // register the index before setting it on the entity
    qryEntity.setIndexes(indexes);

    qryEntities.add(qryEntity);
    ccfg.setQueryEntities(qryEntities);

    /**
     * @author taleb
     *
     * spec 1.0 : no schema needed solution
     * https://stackoverflow.com/a/58930331/4388228
     **/
    ccfg.setSqlSchema("PUBLIC");

    return ccfg;
}
I believe I am misconfiguring my storage, since persistence is mandatory here so that memory can spill over to disk.
Here is the stack trace of the exception:
[11:49:58] __________ ________________
[11:49:58] / _/ ___/ |/ / _/_ __/ __/
[11:49:58] _/ // (7 7 // / / / / _/
[11:49:58] /___/\___/_/|_/___/ /_/ /___/
[11:49:58]
[11:49:58] ver. 2.7.6#20190911-sha1:21f7ca41
[11:49:58] 2019 Copyright(C) Apache Software Foundation
[11:49:58]
[11:49:58] Ignite documentation: http://ignite.apache.org
[11:49:58]
[11:49:58] Quiet mode.
[11:49:58] ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[11:49:58] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[11:49:58]
[11:49:58] OS: Linux 4.19.0-kali5-amd64 amd64
[11:49:58] VM information: Java(TM) SE Runtime Environment 1.8.0_201-b09 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.201-b09
[11:49:58] Please set system property '-Djava.net.preferIPv4Stack=true' to avoid possible problems in mixed environments.
[11:49:58] Initial heap size is 124MB (should be no less than 512MB, use -Xms512m -Xmx512m).
[11:49:58] Configured plugins:
[11:49:58] ^-- None
[11:49:58]
[11:49:58] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
[11:49:59] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[11:49:59] Security status [authentication=off, tls/ssl=off]
[11:49:59] REST protocols do not start on client node. To start the protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system property.
[11:50:00] Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=4979MB, available=7867MB]
[11:50:00] Performance suggestions for grid 'attiryak' (fix if possible)
[11:50:00] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[11:50:00] ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[11:50:00] ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]' to JVM options)
[11:50:00] ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[11:50:00] ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[11:50:00] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[11:50:00]
[11:50:00] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[11:50:00] Data Regions Configured:
[11:50:00] ^-- default [initSize=512.0 MiB, maxSize=1.0 GiB, persistence=true]
[11:50:00]
[11:50:00] Ignite node started OK (id=7ad24962, instance name=attiryak)
[11:50:00] >>> Ignite cluster is not active (limited functionality available). Use control.(sh|bat) script or IgniteCluster interface to activate.
[11:50:00] Topology snapshot [ver=2, locNode=7ad24962, servers=1, clients=1, state=INACTIVE, CPUs=8, offheap=2.0GB, heap=3.4GB]
>> Loading caches...
Nov 19, 2019 11:50:01 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to activate node components [nodeId=7ad24962-e5c8-4f0b-8b99-1c42a3c91c01, client=true, topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1]]
java.lang.ClassCastException: org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl cannot be cast to org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.getOrAllocateCacheMetas(GridCacheOffheapManager.java:728)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.initDataStructures(GridCacheOffheapManager.java:123)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.start(IgniteCacheOffheapManagerImpl.java:196)
at org.apache.ignite.internal.processors.cache.CacheGroupContext.start(CacheGroupContext.java:937)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:2251)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:2146)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processCacheStartRequests(CacheAffinitySharedManager.java:898)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:798)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onClusterStateChangeRequest(GridDhtPartitionsExchangeFuture.java:1114)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:736)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2681)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2553)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
>> Loading cache: MInoutlineCache
Nov 19, 2019 11:50:02 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to process custom exchange task: ClientCacheChangeDummyDiscoveryMessage [reqId=587e6edb-95ee-4208-a525-a35ca441bf7c, cachesToClose=null, startCaches=[MInoutlineCache]]
java.lang.ClassCastException: org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl cannot be cast to org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.getOrAllocateCacheMetas(GridCacheOffheapManager.java:728)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.initDataStructures(GridCacheOffheapManager.java:123)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.start(IgniteCacheOffheapManagerImpl.java:196)
at org.apache.ignite.internal.processors.cache.CacheGroupContext.start(CacheGroupContext.java:937)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:2251)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:2146)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCacheStartRequests(CacheAffinitySharedManager.java:438)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesChanges(CacheAffinitySharedManager.java:637)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:391)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2489)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2634)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2553)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
What should I do to fix this?

This is a known issue, and it stems from the fact that you previously started the same cluster without persistence.
Remove your Ignite work directory (%TMP%\ignite\work, /tmp/ignite/work or ./ignite/work) and restart your node.
UPD: There is also a known issue with LOCAL caches on a client node with persistence: IGNITE-11677. My recommendation is to avoid using LOCAL caches at all.
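As an illustration only (not part of the original answer), here is roughly what that looks like applied to the cache from the question; the PARTITIONED mode and the explicit activation call are my assumptions:
// Sketch: the same cache declared without LOCAL mode.
public static CacheConfiguration cacheMInoutlineCache() throws Exception {
    CacheConfiguration ccfg = new CacheConfiguration("MInoutlineCache");
    ccfg.setCacheMode(CacheMode.PARTITIONED); // instead of CacheMode.LOCAL (IGNITE-11677)
    // ... store factory, query entities, etc. as in the question ...
    return ccfg;
}

public static void main(String[] args) throws Exception {
    // A persistence-enabled cluster starts INACTIVE (see the log above), so activate it
    // before loading caches, either via the control.sh script or from code:
    Ignite ignite = Ignition.start(createConfiguration());
    ignite.cluster().active(true);
    IgniteCache<Long, ?> cache = ignite.cache("MInoutlineCache");
}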

Related

Kafka Stream memory management (Ktable, RocksDb)

Hi, I do not seem to be able to correctly scale my pod for a Kafka Streams application (running on a Java 11 JRE) and keep on having OOMKilled containers.
kafka stream topology
The job consists of an aggregation of quite a lot of concurrent values.
I use a KTable:
KTable<String, MinuteValue> MinuteValuesKtable = builder.table(
    "minuteTopicCompact",
    Materialized.<String, MinuteValue, KeyValueStore<Bytes, byte[]>>with(Serdes.String(), minuteValueSerdes)
        .withLoggingEnabled(new HashMap<>()));
And compute an aggregation:
KStream<String, MinuteAggreg> minuteAggByDay = MinuteValuesKtable
    // rekey each MinuteValue and group them
    .groupBy(
        (key, minuteValue) -> new KeyValue<>(getAggKey(minuteValue), billLine),
        Serialized.with(Serdes.String(), billLineSerdes))
    // aggregate to MinuteAggreg
    .aggregate(
        MinuteAggreg::new,
        (String key, MinuteValue value, MinuteAggreg aggregate) -> aggregate.addLine(value),
        (String key, MinuteValue value, MinuteAggreg aggregate) -> aggregate.removeLine(value),
        Materialized.with(Serdes.String(), minuteAggregSerdes))
    .toStream()
    // [...] send to another topic
kafka stream memory settings
I tried to tweak these values:
// memory sizing and caches
properties.put(StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG, 5 * 60 * 1000L);
// Enable record cache of size 8 MB.
properties.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 8 * 1024 * 1024L);
// Set commit interval to 1 second.
properties.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
My Java 11 application is started with these arguments:
-XX:+UseContainerSupport
-XX:MaxRAMFraction=2
pod memory settings
And the pod has some memory limits:
Limits:
  cpu: 4
  memory: 2Gi
Requests:
  cpu: 2
  memory: 1Gi
But I still get pod failures; Kubernetes deletes the pod with an "OOMKilled".
Could a Kafka Streams expert help me tweak these values?
read resources
I have read :
https://docs.confluent.io/current/streams/sizing.html#troubleshooting
and
https://kafka.apache.org/10/documentation/streams/developer-guide/memory-mgmt.html
but could not find a comprehensive and simple enough answer for tweaking:
RocksDB limits,
Kafka Streams limits,
JVM limits,
and the containers' limits.
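For reference, the memory-mgmt guide linked above bounds RocksDB memory with a custom RocksDBConfigSetter. A minimal sketch (the class name and the sizes are illustrative assumptions, not values from the question):
import java.util.Map;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class BoundedRocksDBConfig implements RocksDBConfigSetter {
    // One shared block cache so every state store draws from the same off-heap budget.
    private static final org.rocksdb.Cache CACHE = new org.rocksdb.LRUCache(64 * 1024 * 1024L);

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCache(CACHE);
        options.setTableFormatConfig(tableConfig);
        options.setMaxWriteBufferNumber(2); // cap the number of memtables per store
    }
}

// registered alongside the other properties:
// properties.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedRocksDBConfig.class);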

Set up Ignite cluster with SSL in spring using SSLContext.getDefault()

I'm trying to set up an Ignite cluster with SSL encryption in my Spring application.
My target is to set up a replicated cache over several nodes.
We deploy our application into a Tomcat 8 and set environment variables for our key- and truststore at Tomcat startup.
I want to start Ignite embedded in my Spring application, so I create a bean which returns a CacheManager.
@Bean
public SpringCacheManager replicatedCache() {
    int[] eventTypes = new int[] {EventType.EVT_CACHE_ENTRY_EVICTED, EventType.EVT_CACHE_OBJECT_REMOVED, EventType.EVT_CACHE_ENTRY_DESTROYED, EventType.EVT_CACHE_OBJECT_EXPIRED};

    SpringCacheManager cacheManager = new SpringCacheManager();

    IgniteConfiguration configuration = new IgniteConfiguration();
    configuration.setIncludeEventTypes(eventTypes);
    configuration.setGridName("igniteCluster");

    Slf4jLogger logger = new Slf4jLogger(LoggerFactory.getLogger(IGNITE_CACHE_LOGGER_NAME));
    configuration.setGridLogger(logger);

    CacheConfiguration cacheConfiguration1 = new CacheConfiguration();
    cacheConfiguration1.setName("replicatedCache");
    cacheConfiguration1.setCacheMode(CacheMode.REPLICATED);
    cacheConfiguration1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    configuration.setCacheConfiguration(cacheConfiguration1);

    configuration.setSslContextFactory(() -> {
        try {
            return SSLContext.getDefault();
        } catch (NoSuchAlgorithmException e) {
            throw new WA3InternalErrorException("Could not create SSLContext", e);
        }
    });

    configuration.setLocalHost(env.getProperty("caching.localBind", "0.0.0.0"));

    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
    List<String> nodes = Arrays.stream(env.getRequiredProperty("caching.nodes").split(",")).collect(Collectors.toList());
    ipFinder.setAddresses(nodes);
    TcpDiscoverySpi spi = new TcpDiscoverySpi();
    spi.setIpFinder(ipFinder);
    configuration.setDiscoverySpi(spi);

    TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
    communicationSpi.setLocalPort(env.getRequiredProperty("caching.localPort", Integer.class));
    communicationSpi.setConnectTimeout(100000); // Line added in first edit
    configuration.setCommunicationSpi(communicationSpi);

    IgnitePredicate<? extends CacheEvent> localEvent = event -> {
        System.out.println(event);
        return true;
    };
    Map<IgnitePredicate<? extends Event>, int[]> ignitePredicateIntegerMap = Collections.singletonMap(localEvent, eventTypes);
    configuration.setLocalEventListeners(ignitePredicateIntegerMap);

    cacheManager.setConfiguration(configuration);
    return cacheManager;
}
As you can see, I also configure Ignite here: binding to the server's IP address and setting a port (47100, same as the default) on the CommunicationSpi.
I am using SSLContext.getDefault() here, so it uses the default key- and truststores.
Everything works when SSL is disabled (SSLContextFactory not set).
But as soon as I set the factory, the nodes can still discover each other, but they can't communicate.
The metrics log looks fine, 2 nodes as expected:
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=41687971, name=igniteCluster, uptime=00:54:00:302]
^-- H/N/C [hosts=2, nodes=2, CPUs=4]
^-- CPU [cur=33.5%, avg=36.96%, GC=0%]
^-- Heap [used=193MB, free=85.51%, comm=627MB]
^-- Non heap [used=125MB, free=-1%, comm=127MB]
^-- Public thread pool [active=0, idle=2, qSize=0]
^-- System thread pool [active=0, idle=7, qSize=0]
^-- Outbound messages queue [size=0]
What I can see so far is that Ignite tries to connect on a port, fails, increments the port, and tries again.
2017-05-02T08:15:35,154 [] [] [grid-nio-worker-tcp-comm-1-#18%igniteCluster%] WARN org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi [warning():104] [] - Communication SPI session write timed out (consider increasing 'socketWriteTimeout' configuration property) [remoteAddr=/10.30.0.106:53603, writeTimeout=2000]
2017-05-02T08:15:39,192 [] [] [grid-nio-worker-tcp-comm-2-#19%igniteCluster%] WARN org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi [warning():104] [] - Communication SPI session write timed out (consider increasing 'socketWriteTimeout' configuration property) [remoteAddr=/10.30.0.106:53604, writeTimeout=2000]
I don't know what port that is.
I have restarted all nodes several times and it looks like it is starting at a random port between 30000 and 50000.
My final questions are:
What am I missing here?
Why does my SSL connection not work?
Regards
I have increased the timeout, as Valentin suggested, but I still have problems with my cluster.
2017-05-03T12:19:29,429 [] [] [localhost-startStop-1] WARN org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager [warning():104] [] - Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
I get these log messages on the node which tries to connect to the cluster.
Try increasing socketWriteTimeout, as the error message suggests. SSL connections are slower, and there is a chance that the default values are not enough for them in your network.
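Applied to the communication SPI block from the question, that might look like the following (the 10-second value here is just an illustration, not a recommendation from the answer):
TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
communicationSpi.setLocalPort(env.getRequiredProperty("caching.localPort", Integer.class));
communicationSpi.setConnectTimeout(100000);
communicationSpi.setSocketWriteTimeout(10000); // default is 2000 ms, which the warnings above are hitting
configuration.setCommunicationSpi(communicationSpi);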

connecting to mongo shard via java driver 3.2.2

I am trying to connect to Mongo query routers in a test environment (I set up just one query router for the test, pointing to one config server (instead of three), which in turn points to a two-node shard with no replicas). I can insert/fetch documents using the mongo shell (and have verified that the documents go to the sharded nodes). However, when I try to test the connection to the Mongo database, I get the output copied below (the code being used is also copied underneath). I am using MongoDB v3.2.0 and Java driver v3.2.2 (I am trying to use the async API).
[info] 14:34:44.562 227 [main] MongoAuthentication INFO - testing 1
[info] 14:34:44.595 260 [main] cluster INFO - Cluster created with settings {hosts=[192.168.0.1:27018], mode=MULTIPLE, requiredClusterType=SHARDED, serverSelectionTimeout='30000 ms', maxWaitQueueSize=30}
[info] 14:34:44.595 260 [main] cluster INFO - Adding discovered server 192.168.0.1:27018 to client view of cluster
[info] 14:34:44.652 317 [main] cluster DEBUG - Updating cluster description to {type=SHARDED, servers=[{address=192.168.0.1:27018, type=UNKNOWN, state=CONNECTING}]
[info] Outputting database names:
[info] 14:34:44.660 325 [main] cluster INFO - No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=SHARDED, connectionMode=MULTIPLE, all=[ServerDescription{address=192.168.0.1:27018, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
[info] Counting the number of documents
[info] 14:34:44.667 332 [main] cluster INFO - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=SHARDED, connectionMode=MULTIPLE, all=[ServerDescription{address=192.168.0.1:27018, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
[info] - Count result: 0
[info] 14:34:45.669 1334 [cluster-ClusterId{value='577414c420055e5bc086c255', description='null'}-192.168.0.1:27018] connection DEBUG - Closing connection connectionId{localValue:1}
part of the code being used
final MongoClient mongoClient = MongoClientAccessor.INSTANCE.getMongoClientInstance();
final CountDownLatch listDbsLatch = new CountDownLatch(1);

System.out.println("Outputting database names:");
mongoClient.listDatabaseNames().forEach(new Block<String>() {
    @Override
    public void apply(final String name) {
        System.out.println(" - " + name);
    }
}, new SingleResultCallback<Void>() {
    @Override
    public void onResult(final Void result, final Throwable t) {
        listDbsLatch.countDown();
    }
});
The enum being used is responsible for reading config options and passing a MongoClient reference to its caller. The enum itself calls other classes which I can copy as well if needed. I have the following option configured for ReadPreference:
mongo.client.readPreference=PRIMARYPREFERRED
Any thoughts on what I might be doing wrong or might have misinterpreted? The goal is to connect to the shard via the mongos (query router) so that I can insert/fetch documents in the Mongo shard.
The Mongo shard setup (query router, config server and shard with replica sets) was not correctly configured. Ensure that the config server replica set is launched first, that mongos (the query router) is brought up pointing to these config servers, that the Mongo shards are brought up, and that the shards are then registered via the query router (mongos) and the collection is enabled for sharding. Obviously, make sure that the driver connects to the mongos (query router) process.

How to initialize JRockit MBean tree

I have the following code that just lists all MBean names found in platform MBean server:
public static void main(final String[] args) throws Exception {
    initJMX();
}

@SuppressWarnings("unchecked")
private static void initJMX() throws IOException, MalformedURLException, AttributeNotFoundException,
        InstanceNotFoundException, MalformedObjectNameException, MBeanException, ReflectionException,
        NullPointerException {
    JMXConnector jmxc = null;
    final Map<String, String> map = new HashMap<String, String>();
    jmxc = JMXConnectorFactory.newJMXConnector(createConnectionURL("localhost", 7788), map);
    jmxc.connect();
    final MBeanServerConnection connection = jmxc.getMBeanServerConnection();
    final String[] domains = connection.getDomains();
    for (final String domain : domains) {
        final Set<ObjectName> mBeans = connection.queryNames(new ObjectName(domain + ":*"), null);
        for (final ObjectName name : mBeans) {
            System.out.println(name);
        }
    }
    jmxc.close();
}
When I try to run this code with JRockit 1.5.0_4.0.1 with the following parameters:
-Xmanagement:ssl=false,authenticate=false,autodiscovery=false,port=7788
And it prints the following list:
[INFO ][mgmnt ] Remote JMX connector started at address localhost:7788
[INFO ][mgmnt ] Local JMX connector started
com.oracle.jrockit:type=FlightRecorder
java.util.logging:type=Logging
JMImplementation:type=MBeanServerDelegate
java.lang:type=Compilation
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Young Collector
java.lang:type=MemoryManager,name=Class Manager
java.lang:type=MemoryPool,name=ClassBlock Memory
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Old Collector
java.lang:type=Runtime
java.lang:type=MemoryPool,name=Nursery
java.lang:type=ClassLoading
java.lang:type=Threading
java.lang:type=MemoryPool,name=Class Memory
java.lang:type=OperatingSystem
java.lang:type=Memory
java.lang:type=MemoryPool,name=Old Space
But if I put a breakpoint before the call to initJMX and connect to that JVM with JRMC at that point, JRMC displays many more MBeans; after I continue execution, the program also prints a different list which contains more JRockit-related MBeans:
[INFO ][mgmnt ] Remote JMX connector started at address T500W7AAD:7788
[INFO ][mgmnt ] Local JMX connector started
com.oracle.jrockit:type=FlightRecorder
oracle.jrockit.management:type=PerfCounters
oracle.jrockit.management:type=Compilation
oracle.jrockit.management:type=Log
oracle.jrockit.management:type=Profiler
oracle.jrockit.management:type=MemLeak
oracle.jrockit.management:type=JRockitConsole
oracle.jrockit.management:type=GarbageCollector
oracle.jrockit.management:type=Runtime
oracle.jrockit.management:type=Threading
oracle.jrockit.management:type=DiagnosticCommand
oracle.jrockit.management:type=Memory
java.util.logging:type=Logging
JMImplementation:type=MBeanServerDelegate
java.lang:type=Compilation
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Young Collector
java.lang:type=MemoryManager,name=Class Manager
java.lang:type=MemoryPool,name=ClassBlock Memory
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Old Collector
java.lang:type=Runtime
java.lang:type=MemoryPool,name=Nursery
java.lang:type=ClassLoading
java.lang:type=Threading
java.lang:type=MemoryPool,name=Class Memory
java.lang:type=OperatingSystem
java.lang:type=Memory
java.lang:type=MemoryPool,name=Old Space
Is there a way to tell JRockit to initialize those MBeans automatically on JVM startup, without needing an explicit JRMC connection? The problem is that I'm trying to write code that reuses some of those MBeans, but they are not available until I connect with JRMC.
UPDATE: This seems to be a problem with JRockit jdk1.5.0_4.0.1, as the same code works as expected on JRockit jdk6.0_4.1.0.
This appears to be a problem with the Windows version of JRockit that I use:
java version "1.5.0_24"
Java(TM) Platform, Standard Edition for Business (build 1.5.0_24-b02)
Oracle JRockit(R) (build R28.0.1-21-133393-1.5.0_24-20100512-2131-windows-x86_64, compiled mode)
Same code works as expected on latest JRockit for JDK 1.6.0 on Windows:
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Oracle JRockit(R) (build R28.2.2-7-148152-1.6.0_29-20111221-2104-windows-x86_64, compiled mode)
and on the same JRockit version, but for Linux:
java version "1.5.0_24"
Java(TM) Platform, Standard Edition for Business (build 1.5.0_24-b02)
Oracle JRockit(R) (build R28.1.0-123-138454-1.5.0_24-20101014-1350-linux-x86_64, compiled mode)
Try your query with an object name of *:*:
final Set<ObjectName> mBeans = connection.queryNames(new ObjectName("*:*"), null);
Maybe there is more than one MBeanServer in JRockit, and JRMC finds all of the MBeanServers.
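One way to check that hypothesis locally (an illustrative sketch, not from the original answer) is to list every MBeanServer registered in the JVM:
import java.util.List;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;

public class ListMBeanServers {
    public static void main(final String[] args) {
        // findMBeanServer(null) returns all MBeanServers created in this JVM.
        final List<MBeanServer> servers = MBeanServerFactory.findMBeanServer(null);
        for (final MBeanServer server : servers) {
            System.out.println(server.getDefaultDomain() + ": " + server.getMBeanCount() + " MBeans");
        }
    }
}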

DPWS explorer is not starting in Snow Leopard

I am using Mac OS X 10.6.6. I downloaded ws4d-explorer-v3.1-cocoa-macosx-x86_64.jar.
When I run the DPWS Explorer from the command prompt, it hangs with the following error.
[INFO ] Problems occured when loading ./persistence/ExplorerProperties.xml (java.io.FileNotFoundException: /Users/hba/Downloads/./persistence/ExplorerProperties.xml (No such file or directory)).
[INFO ] Supported DPWS Version(s): DPWS1.1
[INFO ] DPWS Framework ready.
[INFO ] Explorer DPWS Version settings: DPWS2006 (disabled) & DPWS1.1 (enabeld)
2011-02-03 16:57:15.765 java[2557:c07] * __NSAutoreleaseNoPool(): Object 0x102413f70 of class NSCFString autoreleased with no pool in place - just leaking
2011-02-03 16:57:15.768 java[2557:c07] * __NSAutoreleaseNoPool(): Object 0x10010d4f0 of class NSCFNumber autoreleased with no pool in place - just leaking
2011-02-03 16:57:15.769 java[2557:c07] * __NSAutoreleaseNoPool(): Object 0x102306b80 of class NSCFString autoreleased with no pool in place - just leaking
2011-02-03 16:57:15.779 java[2557:c07] * __NSAutoreleaseNoPool(): Object 0x1024189e0 of class NSPathStore2 autoreleased with no pool in place - just leaking
2011-02-03 16:57:15.780 java[2557:c07] * __NSAutoreleaseNoPool(): Object 0x102418ba0 of class NSPathStore2 autoreleased with no pool in place - just leaking
2011-02-03 16:57:15.781 java[2557:c07] * __NSAutoreleaseNoPool(): Object 0x7fff706aafb0 of class NSCFString autoreleased with no pool in place - just leaking
....
....
....
[INFO ] The DPWS Client of the DPWSExplorer is starting... Please wait!
Please let me know if someone has faced this issue, and how to solve it.
I finally figured out the problem. Use the following command when running the DPWS Explorer on Mac Snow Leopard:
java -jar -d64 -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv6Addresses=false -XstartOnFirstThread ws4d-explorer-v3.1-cocoa-macosx-x86_64.jar
It will work fine. I believe there is an issue running the DPWS stack with IPv6, so I forced it to work on IPv4.
