I am building a messaging application using Netty 4.1 Beta3 to design my server, and the server speaks the MQTT protocol.
This is my MqttServer.java class that sets up the Netty server and binds it to a specific port.
EventLoopGroup bossPool = new NioEventLoopGroup();
EventLoopGroup workerPool = new NioEventLoopGroup();
try {
    ServerBootstrap boot = new ServerBootstrap();
    boot.group(bossPool, workerPool);
    boot.channel(NioServerSocketChannel.class);
    boot.childHandler(new MqttProxyChannel());
    boot.bind(port).sync().channel().closeFuture().sync();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    workerPool.shutdownGracefully();
    bossPool.shutdownGracefully();
}
}
I then load-tested the application on my Mac, which has the following configuration:
Netty's performance was exceptional. I looked at jstack output while the code was running and found that Netty's NIO spawns about 19 threads, and none of them appeared to be stuck waiting on channels or anything else.
Then I ran my code on a Linux machine.
This is a 2-core, 15 GB machine. The problem is that a packet sent by my MQTT client seems to take a long time to pass through the Netty pipeline, and when I took a jstack dump I found that there were 5 Netty threads, all stuck like this:
."nioEventLoopGroup-3-4" #112 prio=10 os_prio=0 tid=0x00007fb774008800 nid=0x2a0e runnable [0x00007fb768fec000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000006d0fdc898> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000006d100ae90> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000006d0fdc7f0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:621)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:309)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:834)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Is this a performance issue related to epoll on the Linux machine? If so, what changes should be made to the Netty configuration to handle it or to improve performance?
Edit
Java version on the local system:
java version "1.8.0_40"
Java(TM) SE Runtime Environment (build 1.8.0_40-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)
Java version on AWS:
openjdk version "1.8.0_40-internal"
OpenJDK Runtime Environment (build 1.8.0_40-internal-b09)
OpenJDK 64-Bit Server VM (build 25.40-b13, mixed mode)
Here are my findings from implementing a very simple HTTP → Kafka forklift:
Consider switching to EpollEventLoopGroup.
A simple find-and-replace of NioEventLoopGroup → EpollEventLoopGroup gave me a 30% performance boost (a sketch of the swap follows these findings).
Removing LoggingHandler from the pipeline (if you have one) can give you a CPU usage drop (in my case the drop was almost unbelievable: 80%).
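As a minimal sketch (not a drop-in version of the asker's exact code), the swap on the ServerBootstrap from the question would look roughly like this; it assumes the netty-transport-native-epoll dependency (linux-x86_64 classifier) is on the classpath and uses io.netty.channel.epoll.EpollEventLoopGroup and EpollServerSocketChannel:
EventLoopGroup bossPool = new EpollEventLoopGroup(1);   // native epoll transport instead of JDK NIO
EventLoopGroup workerPool = new EpollEventLoopGroup();
try {
    ServerBootstrap boot = new ServerBootstrap();
    boot.group(bossPool, workerPool);
    boot.channel(EpollServerSocketChannel.class);       // epoll counterpart of NioServerSocketChannel
    boot.childHandler(new MqttProxyChannel());           // handler from the question
    boot.bind(port).sync().channel().closeFuture().sync();
} finally {
    workerPool.shutdownGracefully();
    bossPool.shutdownGracefully();
}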
Play around with the worker thread count to see if this improves performance. The no-argument constructor of NioEventLoopGroup() creates the default number of event loop threads:
DEFAULT_EVENT_LOOP_THREADS = Math.max(1, SystemPropertyUtil.getInt(
"io.netty.eventLoopThreads", Runtime.getRuntime().availableProcessors() * 2));
As you can see, you can pass io.netty.eventLoopThreads as a launch argument, but I usually don't do that.
You can also pass the number of threads to the NioEventLoopGroup() constructor.
In our environment we have Netty servers that accept connections from hundreds of clients. Usually one boss thread to handle the connections is enough; the worker thread count needs to be scaled, though. We use this:
private static final int BOSS_THREADS = 1;
private static final int MAX_WORKER_THREADS = 12;

EventLoopGroup bossGroup = new NioEventLoopGroup(BOSS_THREADS);
EventLoopGroup workerGroup = new NioEventLoopGroup(calculateThreadCount());

private int calculateThreadCount() {
    int threadCount;
    if ((threadCount = SystemPropertyUtil.getInt("io.netty.eventLoopThreads", 0)) > 0) {
        return threadCount;
    } else {
        threadCount = Runtime.getRuntime().availableProcessors() * 2;
        return threadCount > MAX_WORKER_THREADS ? MAX_WORKER_THREADS : threadCount;
    }
}
So in our case we use just one boss thread. The number of worker threads depends on whether a launch argument has been given; if not, we use cores * 2 but never more than 12.
You will have to test for yourself, though, which numbers work best for your environment.
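For completeness, wiring those groups into the bootstrap from the original question looks roughly like this (MqttProxyChannel and port are the asker's own, so this is only a sketch):
ServerBootstrap boot = new ServerBootstrap();
boot.group(bossGroup, workerGroup)
        .channel(NioServerSocketChannel.class)
        .childHandler(new MqttProxyChannel());
boot.bind(port).sync().channel().closeFuture().sync();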
Related
I have a gRPC (1.13.x) server in Java that doesn't perform any computation- or I/O-intensive task. The intention is to check how many requests per second this server can support on an 80-core machine.
Server:
ExecutorService executor = new ThreadPoolExecutor(160, Integer.MAX_VALUE,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>(),
        new ThreadFactoryBuilder()
                .setDaemon(true)
                .setNameFormat("Glowroot-IT-Harness-GRPC-Executor-%d")
                .build());

Server server = NettyServerBuilder.forPort(50051)
        .addService(new MyService())
        .executor(executor)
        .build()
        .start();
Service:
@Override
public void verify(Request request, StreamObserver<Result> responseObserver) {
    Result result = Result.newBuilder()
            .setMessage("hello")
            .build();
    responseObserver.onNext(result);
    responseObserver.onCompleted();
}
I am using the ghz client to perform a load test. The server is able to handle 40k requests per second, but the RPS count does not exceed 40k even when I increase the number of concurrent clients with an incoming request rate of 100k. The gRPC server handles just 40K requests per second and queues all the other requests. The CPU is underutilized (7%). About 90% of the gRPC threads (with the prefix grpc-default-executor) were in the waiting state, despite there being no I/O operations. More than 25k threads are in the waiting state.
Stack trace of the waiting threads:
grpc-default-executor-4605
PRIORITY :5
THREAD ID :0X00007F15A4440D80
NATIVE ID :
stackTrace:
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@15.0.1/Native Method)
- parking to wait for <0x00007f1df161ae20> (a java.util.concurrent.SynchronousQueue$TransferStack)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@15.0.1/LockSupport.java:252)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.base@15.0.1/SynchronousQueue.java:462)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.base@15.0.1/SynchronousQueue.java:361)
at java.util.concurrent.SynchronousQueue.poll(java.base@15.0.1/SynchronousQueue.java:937)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@15.0.1/ThreadPoolExecutor.java:1055)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@15.0.1/ThreadPoolExecutor.java:1116)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@15.0.1/ThreadPoolExecutor.java:630)
at java.lang.Thread.run(java.base@15.0.1/Thread.java:832)
Locked ownable synchronizers:
- None
How can I configure the server to support 100K+ requests?
Nothing in the gRPC stack seems to cause this limit. What's the average response time on the server side? It looks like you are limited by ephemeral ports or the TCP connection limit, and you may want to tweak your kernel as described here https://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-1 or here https://blog.box.com/ephemeral-port-exhaustion-and-web-services-at-scale
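As for the average response time, a minimal way to get a rough number, assuming you can modify the service, is to time the handler itself (a proper gRPC ServerInterceptor would be cleaner; this is just a sketch based on the verify method shown above):
@Override
public void verify(Request request, StreamObserver<Result> responseObserver) {
    long start = System.nanoTime();
    Result result = Result.newBuilder().setMessage("hello").build();
    responseObserver.onNext(result);
    responseObserver.onCompleted();
    // Crude per-call latency in microseconds; aggregate it properly in real code.
    System.out.println("verify took " + (System.nanoTime() - start) / 1_000 + " us");
}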
I'm having a race condition between two threads simultaneously trying to close a JMS Session that was created for the IBM MQ broker. It appears there's an option to prevent different threads from using the same connection handle simultaneously. The option is called 'MQCNO_HANDLE_SHARE_NO_BLOCK' (see https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.5.0/com.ibm.mq.ref.dev.doc/q095480_.htm).
I'm looking for some way to configure this property using the MQConnectionFactory, in the IBM MQ Java client (v.9.1.1.0).
I've already tried using connectionFactory.setMQConnectionOptions and OR-ing the 'com.ibm.mq.constants.CMQC#MQCNO_HANDLE_SHARE_NO_BLOCK' constant into the current value, but the client fails to start, telling me that the connection options are not valid.
connectionFactory.setMQConnectionOptions(connectionFactory.getMQConnectionOptions());
connectionFactory.setMQConnectionOptions(connectionFactory.getMQConnectionOptions() | MQCNO_HANDLE_SHARE_NO_BLOCK);
I found, in a Go adaptation of the IBM MQ client, that the flag is set here:
if gocno == nil {
    // Because Go programs are always threaded, and we cannot
    // tell on which thread we might get dispatched, allow handles always to
    // be shareable.
    gocno = NewMQCNO()
    gocno.Options = MQCNO_HANDLE_SHARE_NO_BLOCK
} else {
    if (gocno.Options & (MQCNO_HANDLE_SHARE_NO_BLOCK |
        MQCNO_HANDLE_SHARE_BLOCK)) == 0 {
        gocno.Options |= MQCNO_HANDLE_SHARE_NO_BLOCK
    }
}
copyCNOtoC(&mqcno, gocno)
C.MQCONNX((*C.MQCHAR)(mqQMgrName), &mqcno, &qMgr.hConn, &mqcc, &mqrc)
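For reference, a hypothetical Java translation of that guard, assuming com.ibm.mq.constants.CMQC defines both share constants; whether MQConnectionFactory accepts these options at all is exactly what I'm unsure about:
int options = connectionFactory.getMQConnectionOptions();
// Only OR in the non-blocking share option if neither share option is already set.
if ((options & (CMQC.MQCNO_HANDLE_SHARE_NO_BLOCK | CMQC.MQCNO_HANDLE_SHARE_BLOCK)) == 0) {
    options |= CMQC.MQCNO_HANDLE_SHARE_NO_BLOCK;
}
connectionFactory.setMQConnectionOptions(options);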
Has anyone dealt with this problem, or used this flag?
EDIT - Added thread dumps from two locked threads.
I have the following threads locked:
1) JMSCCThreadPoolWorker, i.e. the IBM worker handling the exception raised by an IBM TCP receiver thread:
"JMSCCThreadPoolWorker-493": inconsistent?, holding [0x00000006d605a0b8, 0x00000006d5f6b9e8, 0x00000005c631e140]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at com.ibm.mq.jmqi.remote.util.ReentrantMutex.acquire(ReentrantMutex.java:167)
at com.ibm.mq.jmqi.remote.util.ReentrantMutex.acquire(ReentrantMutex.java:73)
at com.ibm.mq.jmqi.remote.api.RemoteHconn.requestDispatchLock(RemoteHconn.java:1219)
at com.ibm.mq.jmqi.remote.api.RemoteFAP.MQCTL(RemoteFAP.java:2576)
at com.ibm.mq.jmqi.monitoring.JmqiInterceptAdapter.MQCTL(JmqiInterceptAdapter.java:333)
at com.ibm.msg.client.wmq.internal.WMQConsumerOwnerShadow.controlAsyncService(WMQConsumerOwnerShadow.java:169)
at com.ibm.msg.client.wmq.internal.WMQConsumerOwnerShadow.stop(WMQConsumerOwnerShadow.java:471)
at com.ibm.msg.client.wmq.internal.WMQSession.stop(WMQSession.java:1894)
at com.ibm.msg.client.jms.internal.JmsSessionImpl.stop(JmsSessionImpl.java:2515)
at com.ibm.msg.client.jms.internal.JmsSessionImpl.stop(JmsSessionImpl.java:2498)
at com.ibm.msg.client.jms.internal.JmsConnectionImpl.stop(JmsConnectionImpl.java:1263)
at com.ibm.mq.jms.MQConnection.stop(MQConnection.java:473)
at org.springframework.jms.connection.SingleConnectionFactory.closeConnection(SingleConnectionFactory.java:491)
at org.springframework.jms.connection.SingleConnectionFactory.resetConnection(SingleConnectionFactory.java:383)
at org.springframework.jms.connection.CachingConnectionFactory.resetConnection(CachingConnectionFactory.java:199)
at org.springframework.jms.connection.SingleConnectionFactory.onException(SingleConnectionFactory.java:361)
at org.springframework.jms.connection.SingleConnectionFactory$AggregatedExceptionListener.onException(SingleConnectionFactory.java:715)
at com.ibm.msg.client.jms.internal.JmsProviderExceptionListener.run(JmsProviderExceptionListener.java:413)
at com.ibm.msg.client.commonservices.workqueue.WorkQueueItem.runTask(WorkQueueItem.java:319)
at com.ibm.msg.client.commonservices.workqueue.SimpleWorkQueueItem.runItem(SimpleWorkQueueItem.java:99)
at com.ibm.msg.client.commonservices.workqueue.WorkQueueItem.run(WorkQueueItem.java:343)
at com.ibm.msg.client.commonservices.workqueue.WorkQueueManager.runWorkQueueItem(WorkQueueManager.java:312)
at com.ibm.msg.client.commonservices.j2se.workqueue.WorkQueueManagerImplementation$ThreadPoolWorker.run(WorkQueueManagerImplementation.java:1227)
2) A message-handling thread, which happens to be processing a message and, in a catch clause, attempts to close the session/producer/consumer (there's a JMS REPLY_TO handling situation).
"[MuleRuntime].cpuLight.07: CPU_LITE #76c382e9": awaiting notification on [0x00000006d5f6b9c8], holding [0x0000000718d73900]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at com.ibm.msg.client.jms.internal.JmsSessionImpl.close(JmsSessionImpl.java:383)
at com.ibm.msg.client.jms.internal.JmsSessionImpl.close(JmsSessionImpl.java:349)
at com.ibm.mq.jms.MQSession.close(MQSession.java:275)
at org.springframework.jms.connection.CachingConnectionFactory$CachedSessionInvocationHandler.physicalClose(CachingConnectionFactory.java:481)
at org.springframework.jms.connection.CachingConnectionFactory$CachedSessionInvocationHandler.invoke(CachingConnectionFactory.java:311)
at com.sun.proxy.$Proxy197.close(Unknown Source)
at org.mule.jms.commons.internal.connection.session.DefaultJmsSession.close(DefaultJmsSession.java:65)
at org.mule.jms.commons.internal.common.JmsCommons.closeQuietly(JmsCommons.java:165)
at org.mule.jms.commons.internal.source.JmsListener.doReply(JmsListener.java:326)
at MORE STUFF BELOW
I've seen other references to this issue, such as here and here, although they reference different versions of Netty. I tried this using the latest in the 4.0 branch (4.0.29) and the 5.0 alpha branch (5.0-Alpha3). Local (non-Linux) JDK 1.8.0_40 is fine. Remote (Linux) with JDK 1.8.0_25-b17 I get 100% CPU.
Linux kernel version 2.6.32.
Tried using EpollEventLoopGroup.
Tried calling:
workerGroup = new NioEventLoopGroup();
workerGroup.rebuildSelectors();
Can anyone offer any suggestions? I've seen references to this bug w/different versions of Netty. Jdk bug? Netty bug? Process goes to 100% immediately on startup and stays there.
Update: Upgraded to Java 1.8.0_45, same result.
jstack output of all runnable threads (there's some RabbitMQ stuff in there, included only for completeness; that's common to other applications and is not the cause of the problem).
As we identified in the comments, the thread consuming CPU is busy in the following stack:
"pool-9-thread-1" #49 prio=5 os_prio=0 tid=0x00007ffd508e8000 nid=0x3a0c runnable [0x00007ffd188b6000]
java.lang.Thread.State: RUNNABLE
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.poll(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I managed to reproduce similar behavior by creating a ScheduledThreadPoolExecutor, configuring it to allow core threads to time out, and scheduling a lot of repeating tasks with a short delay. It consumes a lot of CPU on my machine, and the jstack output is similar (sometimes deeper inside the poll method). This code reproduces it:
ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
executor.setKeepAliveTime(1, TimeUnit.MINUTES);
executor.allowCoreThreadTimeOut(true);

for (long i = 0; i < 1000; i++) {
    executor.scheduleAtFixedRate(new Runnable() {
        @Override
        public void run() {
        }
    }, 0, 1, TimeUnit.NANOSECONDS);
}
Now we just have to identify which code sets up a broken ScheduledThreadPoolExecutor. I searched through the RabbitMQ and Netty source code without finding anything obvious. Could it be something you do in your own code?
Edit: As mentioned in the comments, the root cause was a ScheduledThreadPoolExecutor initialized with 0 core threads, which apparently can cause a CPU spin on some platforms. This was done in the OP's code.
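For illustration, a minimal sketch of that misconfiguration (hypothetical; the OP's actual code is not shown here):
// A zero-sized core pool combined with scheduled tasks can busy-spin on some platforms.
ScheduledThreadPoolExecutor broken = new ScheduledThreadPoolExecutor(0);
broken.scheduleAtFixedRate(() -> { /* no-op task */ }, 0, 1, TimeUnit.SECONDS);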
I have a lot of clients, each sending about 1000 requests per second to the server. The server's CPU soon rises to 600% (8 cores) and stays there. When I print the process's threads with jstack, I find SelectorImpl in the BLOCKED state. The records are as follows:
nioEventLoopGroup-4-1 prio=10 tid=0x00007fef28001800 nid=0x1dbf waiting for monitor entry [0x00007fef9eec7000]
java.lang.Thread.State: BLOCKED (on object monitor)
at sun.nio.ch.EPollSelectorImpl.doSelect(Unknown Source)
- waiting to lock <0x00000000c01f1af8> (a java.lang.Object)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
- locked <0x00000000c01d9420> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000c01f1948> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000c01d92c0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(Unknown Source)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:635)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:319)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
at java.lang.Thread.run(Unknown Source)
Does the high CPU have something to do with this? Another problem is that when I connect a lot of clients, I find some clients fail to connect; the error is as follows:
"nioEventLoopGroup-4-1" prio=10 tid=0x00007fef28001800 nid=0x1dbf waiting for monitor entry [0x00007fef9eec7000]
java.lang.Thread.State: BLOCKED (on object monitor)
at sun.nio.ch.EPollSelectorImpl.doSelect(Unknown Source)
- waiting to lock <0x00000000c01f1af8> (a java.lang.Object)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
- locked <0x00000000c01d9420> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000c01f1948> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000c01d92c0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(Unknown Source)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:635)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:319)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
at java.lang.Thread.run(Unknown Source)
The clients are generated using a thread pool, and a connection timeout has been set, but why do connection timeouts happen so frequently? Is the server the cause?
public void run() {
    System.out.println(tnum + " connecting...");
    try {
        Bootstrap bootstrap = new Bootstrap();
        bootstrap.group(group)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 30000)
                .handler(loadClientInitializer);
        // Start the connection attempt.
        ChannelFuture future = bootstrap.connect(host, port);
        future.channel().attr(AttrNum).set(tnum);
        future.sync();
        if (future.isSuccess()) {
            System.out.println(tnum + " login success.");
            goSend(tnum, future.channel());
        } else {
            System.out.println(tnum + " login failed.");
        }
    } catch (Exception e) {
        XLog.error(e);
    } finally {
        // group.shutdownGracefully();
    }
}
Does the high CPU have something to do with this?
It might. I'd diagnose this problem the following way (on a Linux box):
Find threads which are eating CPU
Using pidstat, I'd find which threads are eating CPU and in what mode (user/kernel) the time is spent.
$ pidstat -p [java-process-pid] -tu 1 | awk '$9 > 50'
This command shows threads eating at least 50% of CPU time. You can inspect what those threads are doing using jstack, VisualVM or Java Flight Recorder.
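One small note when cross-referencing with jstack: pidstat reports thread IDs in decimal, while jstack prints them in hexadecimal in the nid field, so a quick conversion helps (the TID below is just an example value):
long pidstatTid = 7695;  // example thread ID reported by pidstat
// jstack shows this same thread as nid=0x1e0f, so search for the hex form.
System.out.println("nid=0x" + Long.toHexString(pidstatTid));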
If the CPU-hungry threads and the BLOCKED threads are the same, the CPU usage probably has something to do with contention.
Find the reason for the connection timeout
Basically, you get a connection timeout if the two operating systems can't finish the TCP handshake within the given time. There are several possible reasons:
Network link saturation. Can be diagnosed using sar -n DEV 1 and comparing the rxkB/s and txkB/s columns to your link's maximum throughput.
The server (Netty) doesn't respond with an accept() call within the given timeout. That thread can be BLOCKED or starved of CPU time. You can find which threads are calling accept() (and therefore finishing the TCP handshake) using strace -f -e trace=accept -p [java-pid], and after that check for possible reasons using pidstat/jstack.
You can also count connection requests that have been received but not yet confirmed with netstat -an | grep -c SYN_RECV.
If you can elaborate on what your Netty code is doing, it could be helpful. Regardless, please make sure you are closing the channels. Note this from the Channel javadoc:
It is important to call close() or close(ChannelPromise) to release all resources once you are done with the Channel. This ensures all resources are released in a proper way, i.e. filehandles
If you are closing the channels, then the problem may be in the logic itself, such as running into infinite loops or similar, which could explain the high CPU.
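For example, a minimal sketch of releasing the channel once a client is done with it, based on the run() method in the question (goSend and the point at which the client is "done" are the asker's, so this is only an assumption about where close() belongs):
ChannelFuture future = bootstrap.connect(host, port).sync();
Channel channel = future.channel();
try {
    goSend(tnum, channel);                   // the asker's send loop
} finally {
    channel.close().syncUninterruptibly();   // release the file handle and buffers when done
}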
I have the following code that just lists all MBean names found in the platform MBean server:
public static void main(final String[] args) throws Exception {
    initJMX();
}

@SuppressWarnings("unchecked")
private static void initJMX() throws IOException, MalformedURLException, AttributeNotFoundException,
        InstanceNotFoundException, MalformedObjectNameException, MBeanException, ReflectionException,
        NullPointerException {
    JMXConnector jmxc = null;
    final Map<String, String> map = new HashMap<String, String>();
    jmxc = JMXConnectorFactory.newJMXConnector(createConnectionURL("localhost", 7788), map);
    jmxc.connect();
    final MBeanServerConnection connection = jmxc.getMBeanServerConnection();
    final String[] domains = connection.getDomains();
    for (final String domain : domains) {
        final Set<ObjectName> mBeans = connection.queryNames(new ObjectName(domain + ":*"), null);
        for (final ObjectName name : mBeans) {
            System.out.println(name);
        }
    }
    jmxc.close();
}
When I run this code on JRockit 1.5.0_4.0.1 with the following parameters:
-Xmanagement:ssl=false,authenticate=false,autodiscovery=false,port=7788
it prints the following list:
[INFO ][mgmnt ] Remote JMX connector started at address localhost:7788
[INFO ][mgmnt ] Local JMX connector started
com.oracle.jrockit:type=FlightRecorder
java.util.logging:type=Logging
JMImplementation:type=MBeanServerDelegate
java.lang:type=Compilation
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Young Collector
java.lang:type=MemoryManager,name=Class Manager
java.lang:type=MemoryPool,name=ClassBlock Memory
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Old Collector
java.lang:type=Runtime
java.lang:type=MemoryPool,name=Nursery
java.lang:type=ClassLoading
java.lang:type=Threading
java.lang:type=MemoryPool,name=Class Memory
java.lang:type=OperatingSystem
java.lang:type=Memory
java.lang:type=MemoryPool,name=Old Space
But if I put a breakpoint before the call to initJMX, and at that point connect to the JVM with JRMC, then JRMC displays many more MBeans; and after I continue program execution, the code also prints a different list which contains more JRockit-related MBeans:
[INFO ][mgmnt ] Remote JMX connector started at address T500W7AAD:7788
[INFO ][mgmnt ] Local JMX connector started
com.oracle.jrockit:type=FlightRecorder
oracle.jrockit.management:type=PerfCounters
oracle.jrockit.management:type=Compilation
oracle.jrockit.management:type=Log
oracle.jrockit.management:type=Profiler
oracle.jrockit.management:type=MemLeak
oracle.jrockit.management:type=JRockitConsole
oracle.jrockit.management:type=GarbageCollector
oracle.jrockit.management:type=Runtime
oracle.jrockit.management:type=Threading
oracle.jrockit.management:type=DiagnosticCommand
oracle.jrockit.management:type=Memory
java.util.logging:type=Logging
JMImplementation:type=MBeanServerDelegate
java.lang:type=Compilation
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Young Collector
java.lang:type=MemoryManager,name=Class Manager
java.lang:type=MemoryPool,name=ClassBlock Memory
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Old Collector
java.lang:type=Runtime
java.lang:type=MemoryPool,name=Nursery
java.lang:type=ClassLoading
java.lang:type=Threading
java.lang:type=MemoryPool,name=Class Memory
java.lang:type=OperatingSystem
java.lang:type=Memory
java.lang:type=MemoryPool,name=Old Space
Is there a way to tell JRockit to initialize those MBeans automatically on JVM startup, without needing an explicit JRMC connection? The problem is that I'm trying to write code that reuses some of those MBeans, but they are not available until I connect with JRMC.
UPDATE: This seems to be a problem with JRockit jdk1.5.0_4.0.1, as the same code works as expected on JRockit jdk6.0_4.1.0.
This appears to be a problem with the Windows version of JRockit that I use:
java version "1.5.0_24"
Java(TM) Platform, Standard Edition for Business (build 1.5.0_24-b02)
Oracle JRockit(R) (build R28.0.1-21-133393-1.5.0_24-20100512-2131-windows-x86_64, compiled mode)
The same code works as expected on the latest JRockit for JDK 1.6.0 on Windows:
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Oracle JRockit(R) (build R28.2.2-7-148152-1.6.0_29-20111221-2104-windows-x86_64, compiled mode)
and on the same JRockit version, but for Linux:
java version "1.5.0_24"
Java(TM) Platform, Standard Edition for Business (build 1.5.0_24-b02)
Oracle JRockit(R) (build R28.1.0-123-138454-1.5.0_24-20101014-1350-linux-x86_64, compiled mode)
Try your query with an object name of *:*:
final Set<ObjectName> mBeans = connection.queryNames(new ObjectName("*:*"), null);
Maybe there is more than one MBeanServer in JRockit, and JRMC finds all of the MBeanServers.
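If that is the case, a quick (local, in-process) way to check for additional MBeanServers is a sketch like this; it only sees servers inside the current JVM, so it is a diagnostic rather than a remote fix:
for (MBeanServer server : MBeanServerFactory.findMBeanServer(null)) {
    // findMBeanServer(null) returns every MBeanServer registered in this JVM.
    System.out.println(server.getDefaultDomain() + " has " + server.getMBeanCount() + " MBeans");
}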