I have a binary protocol implemented with Netty that is being performance tested, and the JVM is crashing with the report below. I do not know how to reproduce the crash on demand, but it does happen regularly and only under heavy load. I have the following dependencies:
java 7.0_51-b13
netty 4.0.18_Final
fedora 20
It appears that the array copy is occurring in the nioEventLoopGroup thread. The performance test I am running sends a large number of small messages over ~50 TCP connections, where "a large number" is about 1 million 200-byte messages per connection. Each message has 2 response messages sent back.
This is what I am doing to create Netty:
Bootstrap:
m_serverBootstrap.group(m_eventLoopGroup)
        .channel(NioServerSocketChannel.class)
        .localAddress(m_config.getSmppPort())
        .childAttr(InternalAttributeKeys.METRICS, m_metricRegistry)
        .childHandler(new CustomServerChannelInitializer());
m_serverBindChannelFuture = m_serverBootstrap.bind().sync();
CustomServerChannelInitializer
protected void initChannel(SocketChannel ch) throws Exception {
    log.info("initChannel(SocketChannel ch) {} {} ", ch, this);
    ch.pipeline()
            .addLast(new IpFilterHandler())
            .addLast(new ProtocolEncoder())
            .addLast(new LengthFieldBasedFrameDecoder(4 * 1024, 0, 4, -4, 0))
            .addLast(new ProtocolDecoder())
            .addLast(new WindowingHandler())
            .addLast(new SequenceNumberAssignmentHandler())
            .addLast("idleState", new IdleStateHandler(idleTime, idleTime, idleTime))
            .addLast("idleDisconnect", m_idleDisconnectHandler)
            .addLast("auth", m_authHandler)
            .addLast("catchall", new CatchallHandler(false));
    ch.config().setAllocator(PooledByteBufAllocator.DEFAULT);
    ch.config().setAutoRead(true);
    log.info("finished initChannel(SocketChannel ch) {} {} ", ch, this);
}
After initial connection the pipeline is altered again in the authHandler
@Override
protected void channelRead0(ChannelHandlerContext ctx, CustomMessage msg) throws Exception {
    ResponseMessage response = auth(msg, ctx);
    ctx.pipeline().replace("auth", "msghandler", new MessageHandler());
    ctx.pipeline().replace("idleState", "inactivityPeriod", new IdleStateHandler());
    ctx.pipeline().addAfter("msghandler", "responsehandler", new ResponseHandler());
    ctx.pipeline().addAfter("responsehandler", "heartbeat", new HeartbeatHandler());
    ctx.pipeline().addAfter("heartbeat", "disconnect", new DisconnectHandler());
    ctx.channel().closeFuture().addListener(new CleanupChannelFutureListener(ctx));
    ctx.writeAndFlush(response);
}
JVM report. I have a detailed report if it helps: http://pastebin.com/RV0KqPMf
If the JMX threads in the detailed report are bothering you, I can and have reproduced the issue without them.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007ffa9eb18eaa, pid=1731, tid=140710808540928
#
# JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# v ~StubRoutines::jbyte_disjoint_arraycopy
#
# Core dump written. Default location: /home/user/dir/core or core.1731
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread (0x00007ff9fc06f800): JavaThread "nioEventLoopGroup-2-12" [_thread_in_Java, id=1912, stack(0x00007ff9c9b25000,0x00007ff9c9c26000)]
siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x00007ff987df7715
What is the best way to find out what is causing this SIGSEGV in the JVM?
This is definitely a Netty bug.
Netty 4.x heavily uses the Unsafe API, an Oracle JDK internal API that allows raw memory access.
See PlatformDependent0.java in the Netty sources.
The crash log shows that the problem happens inside an Unsafe.copyMemory call where the target is a byte[] array in the Java heap young generation, and the source points to an unmapped memory region. Most likely this is caused by an attempt to get bytes from a native buffer that has previously been released. There are no sanity checks inside the Unsafe API, so any misuse typically ends up with a JVM crash.
Upgrading from Netty 4.0.18.Final to 4.0.20.Final fixed this issue.
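If a similar crash comes back after the upgrade, it can help to take Unsafe and the pooled allocator out of the picture so that a buffer-lifecycle bug surfaces as a Java exception instead of a SIGSEGV. This is a diagnostic sketch, not a fix; the system properties are standard Netty 4.0 switches, but whether they expose this particular bug is an assumption:
// Run the load test with these JVM flags while debugging:
//
//   -Dio.netty.noUnsafe=true                 stop Netty from using sun.misc.Unsafe, so
//                                            buffer misuse throws instead of segfaulting
//   -Dio.netty.leakDetectionLevel=paranoid   report ByteBufs that are never released
//
// Or, programmatically, put the child channels on the unpooled allocator instead of
// PooledByteBufAllocator.DEFAULT to rule out pool-recycling effects:
ch.config().setAllocator(io.netty.buffer.UnpooledByteBufAllocator.DEFAULT);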
Related
I developed a desktop application using JavaFX and Maven as the dependency manager. I used Java 8 and the jSSC package to communicate with a serial port over USB. At that time it was working as I expected, but now when I try to run the project, it shows me the following exception and shuts down the app.
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000000007110b5db, pid=9452, tid=0x0000000000002384
#
# JRE version: OpenJDK Runtime Environment (8.0_332-b08) (build 1.8.0_332-b08)
# Java VM: OpenJDK 64-Bit Server VM (25.332-b08 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C [jSSC-2.8_x86_64.dll+0xb5db]
#
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# C:\Users\Sincos\Desktop\HomeOffice\Java\FDH-Relay\hs_err_pid9452.log
#
# If you would like to submit a bug report, please visit:
# https://github.com/corretto/corretto-8/issues/
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Process finished with exit code 1
When I start the process, the following function is triggered.
public void open() throws SerialPortException {
    port = new SerialPort(comPort);
    port.openPort(); // open the serial port
    port.setParams(Integer.parseInt(baudRate), Integer.parseInt(dataSize),
            Integer.parseInt(stopBit), Integer.parseInt(parity));
    port.addEventListener(new SerialPortEventListener() {
        public void serialEvent(SerialPortEvent serialPortEvent) {
            try {
                int length = 0;
                buffer = port.readString(); // read whatever is currently available
                if (buffer != null) {
                    length = buffer.length();
                }
                for (int i = 0; i < length; i++) {
                    queue.add((int) buffer.charAt(i));
                }
            } catch (SerialPortException e) {
                e.printStackTrace();
            }
        }
    });
}
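For what it's worth, a common defensive variant of this listener subscribes only to RXCHAR events and reads exactly the number of bytes the event reports, instead of calling readString() for every event type. Below is a minimal sketch reusing the port and queue fields from above; whether it avoids the native crash is an assumption:
// Inside open(), after setParams(): listen only for received-data events.
port.setEventsMask(SerialPort.MASK_RXCHAR);
port.addEventListener(new SerialPortEventListener() {
    public void serialEvent(SerialPortEvent event) {
        if (!event.isRXCHAR() || event.getEventValue() <= 0) {
            return; // ignore CTS/DSR/etc. and empty events
        }
        try {
            // Read exactly the byte count reported by the event.
            byte[] data = port.readBytes(event.getEventValue());
            if (data != null) {
                for (byte b : data) {
                    queue.add(b & 0xFF); // keep raw byte values, as ints
                }
            }
        } catch (SerialPortException e) {
            e.printStackTrace();
        }
    }
});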
I set up the port, baudRate, dataSize, stopBit, and parity.
Here is the Maven dependency that I used in the project.
<!-- https://mvnrepository.com/artifact/org.scream3r/jssc -->
<dependency>
<groupId>org.scream3r</groupId>
<artifactId>jssc</artifactId>
<version>2.8.0</version>
</dependency>
Here are the other variables and the constructor where I initialize the data.
String comPort;
public static Queue<Integer> queue = new LinkedList<>();
SerialPort port;
String buffer;
String baudRate, dataSize, stopBit, parity;

public ExternalSerialConnection(String comport, String baudRate, String dataSize, String stopBit, String parity) {
    this.comPort = comport;
    this.baudRate = baudRate;
    this.stopBit = stopBit;
    this.dataSize = dataSize;
    this.parity = parity;
}
Is there anyone who can help me solve this issue?
We are using Spark 2.0.2, managed by a DC/OS system, that fetches data from a Kafka 1.0.0 messaging service and writes Parquet files to an HDFS system.
Everything was working fine, but when we increased the number of topics in Kafka, our Spark executors began to crash constantly with OOM errors:
java.lang.OutOfMemoryError: Java heap space
at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
18/03/20 18:41:13 ERROR [Executor task launch worker-0] SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.OutOfMemoryError: Java heap space
at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
We tried increasing the executors' available memory and reviewing the code, but we couldn't find anything wrong.
Another piece of info: we are using RDDs in Spark.
Has anyone encountered a similar problem that has already been solved?
What is the heap configuration for the executor? By default, Java will autotune its heap according to the machine memory. You need to change it to fit in your container with the -Xmx setting.
See this article about running Java in a container:
https://github.com/fabianenardon/docker-java-issues-demo/tree/master/memory-sample
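For the Spark executors specifically, the heap is controlled by spark.executor.memory, which becomes the executor JVM's -Xmx. Here is a minimal sketch of pinning it explicitly; the application name and the sizes are placeholders to tune against your DC/OS container limits, not recommendations:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Illustrative values only: keep heap plus off-heap overhead inside the container.
SparkConf conf = new SparkConf()
        .setAppName("kafka-to-parquet-ingestion")   // placeholder name
        .set("spark.executor.memory", "4g")         // becomes the executor's -Xmx
        .set("spark.executor.cores", "2");          // placeholder value
JavaSparkContext sc = new JavaSparkContext(conf);

// Equivalent on the command line:
//   spark-submit --conf spark.executor.memory=4g --conf spark.executor.cores=2 ...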
I am building a messaging application and am using Netty 4.1 Beta3 to design my server; the server understands the MQTT protocol.
This is my MqttServer.java class that sets up the Netty server and binds it to a specific port.
EventLoopGroup bossPool = new NioEventLoopGroup();
EventLoopGroup workerPool = new NioEventLoopGroup();
try {
    ServerBootstrap boot = new ServerBootstrap();
    boot.group(bossPool, workerPool);
    boot.channel(NioServerSocketChannel.class);
    boot.childHandler(new MqttProxyChannel());
    boot.bind(port).sync().channel().closeFuture().sync();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    workerPool.shutdownGracefully();
    bossPool.shutdownGracefully();
}
}
Now I did load testing of my application on my Mac with the following configuration.
The Netty performance was exceptional. I had a look at jstack while executing my code and found that Netty NIO spawns about 19 threads, and none of them seem to be stuck waiting for channels or anything else.
Then I executed my code on a Linux machine.
This is a 2-core, 15 GB machine. The problem is that the packet sent by my MQTT client seems to take a long time to pass through the Netty pipeline, and on taking a jstack I found that there were 5 Netty threads, all stuck like this:
."nioEventLoopGroup-3-4" #112 prio=10 os_prio=0 tid=0x00007fb774008800 nid=0x2a0e runnable [0x00007fb768fec000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000006d0fdc898> (a
io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000006d100ae90> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000006d0fdc7f0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:621)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:309)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:834)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Is this some performance issue related to epoll on the Linux machine? If yes, what changes should be made to the Netty configuration to handle this or to improve performance?
Edit
The Java version on the local system is:
java version "1.8.0_40"
Java(TM) SE Runtime Environment (build 1.8.0_40-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)
The Java version on AWS is:
openjdk version "1.8.0_40-internal"
OpenJDK Runtime Environment (build 1.8.0_40-internal-b09)
OpenJDK 64-Bit Server VM (build 25.40-b13, mixed mode)
Here are my findings from implementing a very simple HTTP → Kafka forklift:
Consider switching to EpollEventLoopGroup.
A simple auto-replace of NioEventLoopGroup → EpollEventLoopGroup gave me a 30% performance boost.
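A minimal sketch of picking the native transport at startup, assuming the netty-transport-native-epoll artifact is on the classpath (the io.netty.channel.epoll.Epoll availability check ships with Netty 4.1 and recent 4.0.x releases):
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.Epoll;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

// Use the Linux-native epoll transport when available, otherwise fall back to NIO.
boolean useEpoll = Epoll.isAvailable();
EventLoopGroup bossPool = useEpoll ? new EpollEventLoopGroup(1) : new NioEventLoopGroup(1);
EventLoopGroup workerPool = useEpoll ? new EpollEventLoopGroup() : new NioEventLoopGroup();

ServerBootstrap boot = new ServerBootstrap().group(bossPool, workerPool);
if (useEpoll) {
    boot.channel(EpollServerSocketChannel.class);
} else {
    boot.channel(NioServerSocketChannel.class);
}
boot.childHandler(new MqttProxyChannel()); // the initializer from the question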
Removing LoggingHandler from the pipeline (if you have any) can give you a CPU usage drop (in my case the CPU drop was almost unbelievable: 80%).
Play around with the worker threads to see if this improves performance. The standard constructor of NioEventLoopGroup() creates the default amount of event loop threads:
DEFAULT_EVENT_LOOP_THREADS = Math.max(1, SystemPropertyUtil.getInt(
"io.netty.eventLoopThreads", Runtime.getRuntime().availableProcessors() * 2));
As you can see, you can pass io.netty.eventLoopThreads as a launch argument, but I usually don't do that.
You can also pass the amount of threads in the constructor of NioEventLoopGroup().
In our environment we have netty servers that accept communication from hundreds of clients. Usually one boss thread to handle the connections is enough. The worker thread amount needs to be scaled though. We use this:
private final static int BOSS_THREADS = 1;
private final static int MAX_WORKER_THREADS = 12;
EventLoopGroup bossGroup = new NioEventLoopGroup(BOSS_THREADS);
EventLoopGroup workerGroup = new NioEventLoopGroup(calculateThreadCount());
private int calculateThreadCount() {
    int threadCount;
    if ((threadCount = SystemPropertyUtil.getInt("io.netty.eventLoopThreads", 0)) > 0) {
        return threadCount;
    } else {
        threadCount = Runtime.getRuntime().availableProcessors() * 2;
        return threadCount > MAX_WORKER_THREADS ? MAX_WORKER_THREADS : threadCount;
    }
}
So in our case we use just one boss thread. The worker thread count depends on whether a launch argument has been given; if not, we use cores * 2, but never more than 12.
You will have to test yourself though what numbers work best for your environment.
We're debugging an error that causing a crash in a Tomcat web application.
The application uses two 3rd-party components over JNI; one of them uses SmartHeap (a memory-management library for C/C++ applications), the other doesn't (it is webMethods Broker version 5).
The strange thing is that I see in the crash log that webMethods calls its native methods to initiate a connection to the broker server, but if I print the call trace of the thread where the crash happened using WinDbg (loading the minidump file created when the JVM crashed), it contains calls to SmartHeap functions. Now I feel a bit lost, because I've checked and found no references to this DLL from the webMethods binaries.
(Actually, a memory allocation is what gets called.)
My question is: how is this possible?
Could anybody describe how this part works? I thought that the interpreted/compiled and native frames are called in a fixed order (which seems logical).
Maybe the call stack is invalid? (We now have many dump files with almost the same call trace.)
Or the call trace (the calling order of the native functions) is valid, and only some of the functions have been reordered before the call (like a lazy object that has to be generated before sending it to the webMethods broker, but I don't see any sign of this).
I'm querying the call trace on the dump file by calling ".ecxr" and "kv"; the output is:
0:060> .ecxr
eax=4d330554 ebx=4d350010 ecx=4d330010 edx=00000000 esi=4d350010 edi=00000000
eip=4c912f15 esp=4bf1dad0 ebp=3574884d iopl=0 nv up ei pl nz na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010206
shsmp!shi_allocSmall2+0x195:
4c912f15 8b4d00 mov ecx,dword ptr [ebp] ss:0023:3574884d=????????
0:060> k
*** Stack trace for last set context - .thread/.cxr resets it
ChildEBP RetAddr
4bf1daec 4c912bbd shsmp!shi_allocSmall2+0x195
4bf1dafc 4c91b973 shsmp!MemAllocPtr+0x5d
*** WARNING: Unable to verify checksum for awssl50jn.dll
*** ERROR: Symbol file could not be found. Defaulted to export symbols for awssl50jn.dll -
4bf1db14 49abc38d shsmp!shi_malloc_dbg+0x23
WARNING: Stack unwind information not available. Following frames may be wrong.
4bf1db3c 49abeca2 awssl50jn!Java_COM_activesw_api_client_ssl_AwSSLNative_getSecurityInfo+0xa1cd
4bf1db48 49ab5e66 awssl50jn!Java_COM_activesw_api_client_ssl_AwSSLNative_getSecurityInfo+0xcae2
4bf1db4c 49ab5e55 awssl50jn!Java_COM_activesw_api_client_ssl_AwSSLNative_getSecurityInfo+0x3ca6
4bf1db60 49ab667d awssl50jn!Java_COM_activesw_api_client_ssl_AwSSLNative_getSecurityInfo+0x3c95
4bf1db80 49abdbbc awssl50jn!Java_COM_activesw_api_client_ssl_AwSSLNative_getSecurityInfo+0x44bd
4bf1dc20 4c912f4f awssl50jn!Java_COM_activesw_api_client_ssl_AwSSLNative_getSecurityInfo+0xb9fc
4bf1dc78 49abd607 shsmp!shi_allocSmall2+0x1cf
00000000 00000000 awssl50jn!Java_COM_activesw_api_client_ssl_AwSSLNative_getSecurityInfo+0xb447
Any help would be appreciated!
I have the following code that just lists all MBean names found in platform MBean server:
public static void main(final String[] args) throws Exception {
    initJMX();
}

@SuppressWarnings("unchecked")
private static void initJMX() throws IOException, MalformedURLException, AttributeNotFoundException,
        InstanceNotFoundException, MalformedObjectNameException, MBeanException, ReflectionException,
        NullPointerException {
    JMXConnector jmxc = null;
    final Map<String, String> map = new HashMap<String, String>();
    jmxc = JMXConnectorFactory.newJMXConnector(createConnectionURL("localhost", 7788), map);
    jmxc.connect();
    final MBeanServerConnection connection = jmxc.getMBeanServerConnection();
    final String[] domains = connection.getDomains();
    for (final String domain : domains) {
        final Set<ObjectName> mBeans = connection.queryNames(new ObjectName(domain + ":*"), null);
        for (final ObjectName name : mBeans) {
            System.out.println(name);
        }
    }
    jmxc.close();
}
When I run this code on JRockit 1.5.0_4.0.1 with the following parameters:
-Xmanagement:ssl=false,authenticate=false,autodiscovery=false,port=7788
it prints the following list:
[INFO ][mgmnt ] Remote JMX connector started at address localhost:7788
[INFO ][mgmnt ] Local JMX connector started
com.oracle.jrockit:type=FlightRecorder
java.util.logging:type=Logging
JMImplementation:type=MBeanServerDelegate
java.lang:type=Compilation
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Young Collector
java.lang:type=MemoryManager,name=Class Manager
java.lang:type=MemoryPool,name=ClassBlock Memory
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Old Collector
java.lang:type=Runtime
java.lang:type=MemoryPool,name=Nursery
java.lang:type=ClassLoading
java.lang:type=Threading
java.lang:type=MemoryPool,name=Class Memory
java.lang:type=OperatingSystem
java.lang:type=Memory
java.lang:type=MemoryPool,name=Old Space
But if I put a breakpoint before the call to the initJMX method, and at that point connect to that JVM with JRMC, then JRMC displays many more MBeans. After I continue program execution, the program also prints a different list, which contains more JRockit-related MBeans:
[INFO ][mgmnt ] Remote JMX connector started at address T500W7AAD:7788
[INFO ][mgmnt ] Local JMX connector started
com.oracle.jrockit:type=FlightRecorder
oracle.jrockit.management:type=PerfCounters
oracle.jrockit.management:type=Compilation
oracle.jrockit.management:type=Log
oracle.jrockit.management:type=Profiler
oracle.jrockit.management:type=MemLeak
oracle.jrockit.management:type=JRockitConsole
oracle.jrockit.management:type=GarbageCollector
oracle.jrockit.management:type=Runtime
oracle.jrockit.management:type=Threading
oracle.jrockit.management:type=DiagnosticCommand
oracle.jrockit.management:type=Memory
java.util.logging:type=Logging
JMImplementation:type=MBeanServerDelegate
java.lang:type=Compilation
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Young Collector
java.lang:type=MemoryManager,name=Class Manager
java.lang:type=MemoryPool,name=ClassBlock Memory
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Old Collector
java.lang:type=Runtime
java.lang:type=MemoryPool,name=Nursery
java.lang:type=ClassLoading
java.lang:type=Threading
java.lang:type=MemoryPool,name=Class Memory
java.lang:type=OperatingSystem
java.lang:type=Memory
java.lang:type=MemoryPool,name=Old Space
Is there a way to tell JRockit to initialize those beans automatically on JVM startup, without needing an explicit JRMC connection? The problem is that I'm trying to write some code that reuses some of those MBeans, but they are not available until I connect with JRMC.
UPDATE: This seems to be a JRockit jdk1.5.0_4.0.1 problem, as the same code works as expected on JRockit jdk6.0_4.1.0.
This appears to be a problem with the Windows version of JRockit that I use:
java version "1.5.0_24"
Java(TM) Platform, Standard Edition for Business (build 1.5.0_24-b02)
Oracle JRockit(R) (build R28.0.1-21-133393-1.5.0_24-20100512-2131-windows-x86_64, compiled mode)
The same code works as expected on the latest JRockit for JDK 1.6.0 on Windows:
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Oracle JRockit(R) (build R28.2.2-7-148152-1.6.0_29-20111221-2104-windows-x86_64, compiled mode)
and on the same JRockit version, but for Linux:
java version "1.5.0_24"
Java(TM) Platform, Standard Edition for Business (build 1.5.0_24-b02)
Oracle JRockit(R) (build R28.1.0-123-138454-1.5.0_24-20101014-1350-linux-x86_64, compiled mode)
Try your query with an object name of *:*:
final Set<ObjectName> mBeans = connection.queryNames(new ObjectName("*:*"), null);
Maybe there is more than one MBeanServer in JRockit, and JRMC finds all of the MBeanServers.
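If more than one MBeanServer is indeed involved, here is a hedged sketch of enumerating every server registered in the target JVM; unlike the remote connector above, this has to run inside that JVM:
import java.util.List;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

public final class ListAllMBeanServers {
    public static void main(final String[] args) throws Exception {
        // findMBeanServer(null) returns every MBeanServer created in this JVM,
        // not just ManagementFactory.getPlatformMBeanServer().
        List<MBeanServer> servers = MBeanServerFactory.findMBeanServer(null);
        for (MBeanServer server : servers) {
            System.out.println("MBeanServer, default domain: " + server.getDefaultDomain()
                    + ", MBean count: " + server.getMBeanCount());
            for (ObjectName name : server.queryNames(new ObjectName("*:*"), null)) {
                System.out.println("  " + name);
            }
        }
    }
}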