How to efficiently monitor statistics generated at the Java code level?

I am dealing with one of the critical software services in our Java web application.
We want to monitor Java code-level statistics (e.g. mean rate, average rate, number of orders received by the system) for our web application, so that we can understand what is going on inside the code and take corrective steps early.
There are plenty of open-source application monitoring APIs available for this purpose; we have looked into Yammer Metrics and the JAMon API (SourceForge).
But we have some common concerns regarding all of these open-source projects:
If we incorporate these APIs into our main application code base, they will generate statistics data that takes up memory on the system. This can lead to issues such as performance degradation or even a system breakdown.
We need to capture all of the statistics data generated since startup, so we need to continuously stream the generated data to some other server rather than keep it in the main system's memory.
We want this performance-monitoring system to be loosely coupled from our main application code. In short, we cannot claim significant memory inside the main application server, because that could bring down the whole business.
I have searched the internet a lot but cannot find an efficient solution.
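For what it's worth, here is a minimal sketch of the kind of setup described above, using the Dropwizard (formerly Yammer) Metrics 3.x API with a Graphite reporter so the data is shipped off-box instead of accumulating in the application's heap. The Graphite host, port, prefix and class name are illustrative assumptions, not a definitive design:

    import java.net.InetSocketAddress;
    import java.util.concurrent.TimeUnit;

    import com.codahale.metrics.Meter;
    import com.codahale.metrics.MetricRegistry;
    import com.codahale.metrics.graphite.Graphite;
    import com.codahale.metrics.graphite.GraphiteReporter;

    // Counts incoming orders and pushes rates to a remote Graphite/Carbon server
    // every minute, so only a handful of counters live in the application JVM.
    public final class OrderMetrics {
        private final MetricRegistry registry = new MetricRegistry();
        private final Meter orders = registry.meter("orders.received");

        public OrderMetrics(String graphiteHost) {
            Graphite graphite = new Graphite(new InetSocketAddress(graphiteHost, 2003));
            GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                    .prefixedWith("webapp")
                    .convertRatesTo(TimeUnit.SECONDS)
                    .convertDurationsTo(TimeUnit.MILLISECONDS)
                    .build(graphite);
            reporter.start(1, TimeUnit.MINUTES);   // ship the data off-box once a minute
        }

        // call this from the order-processing code path
        public void orderReceived() {
            orders.mark();   // in-memory footprint is just a few counters and EWMAs
        }
    }

The meter itself keeps only constant-size state (counts and exponentially weighted moving averages), so the memory claimed inside the application server stays bounded no matter how long the data has been collected.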

Related

Stop Java execution if a condition is met

The idea is a kind of virtual classroom (a website) where students upload uncompiled .java files. Our server compiles and executes them through C# or PHP (the language doesn't matter), creating a .bat file and capturing the console output to check whether the program compiled correctly and whether the execution was correct based on some pre-made tests. So far our tests work, but we have no control over what's inside the .java file, so we want to stop the execution if certain things happen, e.g. user input, an infinite loop, socket instances, etc. I've been digging around the internet for a way to configure the Java environment to prevent this, but so far I can't find anything, and we don't want our backend language to parse the file to check for these things because that would be a complete mess.
Thanks for the help
You could configure a security manager, but it doesn't have a very good track record of stopping a determined attacker, and it doesn't do resource limiting anyway.
You could load the untrusted code with a dedicated class loader that only sees white-listed classes.
Or you could use something like Docker to isolate the process at the operating-system level. This could also limit its CPU and memory consumption.
I'd probably combine these approaches, but some risk will remain in any case.
(Yes, I realize that is complex, but safely sandboxing arbitrary Java code is a hard problem.)
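To make the class-loader option more concrete, here is a minimal sketch of a whitelist class loader; the whitelist contents and the way the student byte code gets compiled are left out, and the class and method names are just illustrative:

    import java.util.Set;

    // Defines untrusted classes itself and only resolves dependencies
    // that appear on an explicit whitelist (e.g. java.lang.Object, java.lang.Math).
    public class WhitelistClassLoader extends ClassLoader {
        private final Set<String> allowed;

        public WhitelistClassLoader(Set<String> allowed) {
            super(null);   // no parent: nothing is visible by default
            this.allowed = allowed;
        }

        // Call this for every compiled student class before running it,
        // so that later resolution finds it via findLoadedClass.
        public Class<?> defineUntrusted(String name, byte[] bytecode) {
            return defineClass(name, bytecode, 0, bytecode.length);
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            Class<?> c = findLoadedClass(name);   // an already-defined untrusted class?
            if (c == null) {
                if (!allowed.contains(name)) {
                    throw new ClassNotFoundException("Class not whitelisted: " + name);
                }
                c = ClassLoader.getSystemClassLoader().loadClass(name);
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }

Note that this only controls which classes the untrusted code can see; it does nothing about infinite loops or memory consumption, which is why the OS-level isolation is still worth combining with it.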

Tracking memory usage of an Actor flow

I am developing an app which has Akka actors in the backend. It is a very simple app in which developers can code certain computations using a GUI, and the result is shown in the UI. It's kind of similar to MIT Scratch. Each component is an actor in the backend.
Now my question is: how do I track the memory usage of a single code snippet? I don't want a single flow to consume too much memory and crash the entire system. In some way I want to meter the memory usage of an Akka actor individually and then, if it crosses a threshold, have the system kill it. Ideally I would want the CPU usage to be measured as well, but I figured I would start with memory since it is simpler. Is there a way to do this using our own code or an open-source plugin? I am aware that Lightbend Telemetry has commercial add-ons, but I was wondering if it can be done with open-source tools.

Releasing JavaFX resources

I have a JavaFX application; when the user closes the window, I want to destroy all of the JavaFX-related resources and keep only a tray icon, from which the user can then reopen the application.
I have many background threads running, which should stay running when the GUI is closed. I have tried using Platform.exit(); however, it has no impact on the RAM usage of the program.
What is the best way to accomplish this? My goal is to reduce my program's impact on the system as much as possible when the window is closed, while still running all of the background threads.
One option is to run the application as a separate process, launching the process when you want to create the application and exiting the process when the application is no longer needed (so completing a full application lifecycle). That way you will be absolutely sure that the application is not consuming any resources when it is not being used, because it won't be running.
How you would accomplish the launching and any communication between your tray service and the application would be up to you. You can research various mechanisms and, if you decide to go this route, ask some new follow up questions on accomplishing certain aspects of the task.
Some example routes you could look at are ProcessBuilder, which is admittedly a pretty finicky and awkward API, or the new Process API updates that arrive with Java 9. If you wish to ensure that at most a single instance of the application process is ever running, there are solutions for that. If you need to send a signal to the running application process, you could use something like RMI, or run a basic HTTP REST server in your application and send messages to that.
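To make that concrete, here is a minimal ProcessBuilder sketch for launching the GUI as its own JVM; the main-class name is a hypothetical placeholder and error handling is omitted:

    import java.io.File;
    import java.io.IOException;

    // Launches the JavaFX GUI as a separate JVM; when that process exits,
    // all of its memory is returned to the OS, leaving only the tray process.
    public final class GuiLauncher {

        public static Process launchGui() throws IOException {
            String javaBin = new File(new File(System.getProperty("java.home"), "bin"), "java").getPath();
            ProcessBuilder pb = new ProcessBuilder(
                    javaBin,
                    "-cp", System.getProperty("java.class.path"),
                    "com.example.MyJavaFxApp");   // hypothetical JavaFX main class
            pb.inheritIO();                       // forward stdout/stderr to the tray process
            return pb.start();
        }

        public static void main(String[] args) throws Exception {
            Process gui = launchGui();
            int exitCode = gui.waitFor();         // block until the user closes the window
            System.out.println("GUI exited with code " + exitCode);
        }
    }

The tray process itself stays tiny, and reopening the application is just another call to launchGui().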
As an aside, years ago there was some ongoing work on building multi-process JVMs, but the idea never saw wide uptake for Java. Most modern browsers, such as Chrome and Firefox, are multi-process architectures, though, and the linked articles give some insight into this architecture, some of its potential implications, and why it is used for those applications.
Before going such a route, I would advise you to ensure that such an approach is truly necessary for your application (as pointed out by user npace in comments).

How to determine why a Java app is slow

We have a Java ERP-type application. Communication between the server and client is via RMI. In peak hours there can be up to 250 users logged in, and about 20 of them are working at the same time. This means that about 20 threads are live at any given time in peak hours.
The server can run for hours without any problems, but all of a sudden response times get higher and higher. Response times can be in minutes.
We are running on Windows 2008 R2 with Sun's JDK 1.6.0_16. We have been using perfmon and Process Explorer to see what is going on. The only thing we find odd is that when the server starts to slow down, the number of handles the java.exe process has open is around 3500. I'm not saying that this is the actual problem.
I'm just curious if there are some guidelines I should follow to be able to pinpoint the problem. What tools should I use? ....
Can you access the log configuration of this application?
If you can, you should change the log level to "DEBUG". Tracing the DEBUG logs of a request could give you useful information about the contention point.
If you can't, profiling tools can help you:
VisualVM (free, and a good product)
Eclipse TPTP (free, but more complicated than VisualVM)
JProbe (not free but very powerful; it is my favorite Java profiler, but it is expensive)
If the application has been developed with JMX control points, you can plug in a JMX viewer to get information...
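For example, here is a minimal sketch of exposing such a control point as a standard MBean (two small source files; the object name and attribute are hypothetical):

    // File RequestStatsMBean.java -- the management interface JMX exposes
    public interface RequestStatsMBean {
        long getSlowRequests();
    }

    // File RequestStats.java -- the implementation, registered with the platform MBean server
    import java.lang.management.ManagementFactory;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.management.ObjectName;

    public class RequestStats implements RequestStatsMBean {
        private final AtomicLong slowRequests = new AtomicLong();

        // call this wherever the application notices a slow request
        public void slowRequestSeen() { slowRequests.incrementAndGet(); }

        @Override
        public long getSlowRequests() { return slowRequests.get(); }

        public static RequestStats register() throws Exception {
            RequestStats stats = new RequestStats();
            ManagementFactory.getPlatformMBeanServer()
                    .registerMBean(stats, new ObjectName("example:type=RequestStats"));
            return stats;   // the attribute is now visible in any JMX viewer
        }
    }

Once registered, JConsole or VisualVM (with its MBeans plugin) can read the attribute remotely without attaching a profiler.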
If you want to stress the application to trigger the problem (to verify whether it is a load problem), you can use load-testing tools like JMeter.
Sounds like the garbage collector cannot keep up and starts "stop-the-world" collections for some reason.
Attach with jvisualvm (shipped with the JDK) when the server starts and have a look at the collected data when the performance drops.
The problem you're describing is quite typical, but general as well. Causes can range from memory leaks and resource contention to bad GC policies and heap/PermGen-space sizing. To pinpoint the exact problems in your application, you need to profile it (I am aware of tools like YourKit and JProfiler). If you profile your application wisely, a few application cycles will reveal the problems; otherwise profiling isn't easy in itself.
In a similar situation, I coded some simple profiling myself. Basically I used a ThreadLocal that holds a "StopWatch" (based on a LinkedHashMap), and I insert code like this at various points of the application: watch.time("OperationX");
Then, after the thread finishes a task, I call watch.logTime(); and the class writes a log line that looks like this: [DEBUG] StopWatch time: Stuff=0, AnotherEvent=102, OperationX=150
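A minimal sketch of such a stop-watch, assuming wall-clock millisecond resolution is good enough; the names are illustrative and you would normally write to your logging framework rather than stdout:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Per-thread stop-watch: call time("Event") at interesting points,
    // then logTime() once the thread finishes its task.
    public final class StopWatch {
        private static final ThreadLocal<StopWatch> CURRENT = new ThreadLocal<StopWatch>() {
            @Override protected StopWatch initialValue() { return new StopWatch(); }
        };

        private final Map<String, Long> events = new LinkedHashMap<String, Long>();
        private long last = System.currentTimeMillis();

        public static StopWatch get() { return CURRENT.get(); }

        // records the milliseconds elapsed since the previous event under the given label
        public void time(String label) {
            long now = System.currentTimeMillis();
            events.put(label, now - last);
            last = now;
        }

        // emits one line per task, e.g. "StopWatch time: Stuff=0, OperationX=150"
        public void logTime() {
            StringBuilder sb = new StringBuilder("StopWatch time:");
            for (Map.Entry<String, Long> e : events.entrySet()) {
                sb.append(' ').append(e.getKey()).append('=').append(e.getValue()).append(',');
            }
            if (sb.charAt(sb.length() - 1) == ',') {
                sb.setLength(sb.length() - 1);   // drop the trailing comma
            }
            System.out.println(sb);
            events.clear();
            last = System.currentTimeMillis();
        }
    }

Typical usage at a measurement point would be StopWatch.get().time("OperationX"); with StopWatch.get().logTime(); at the end of the request.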
After this I wrote a simple parser that generates CSV from this log (per code path). The best thing you can do is create a histogram (easily done in Excel). Averages, the median and even the mode can fool you, so I highly recommend creating a histogram.
Together with this histogram, you can create line graphs using the average/median/mode (whichever represents the data best; you can determine this from the histogram).
This way, you can be 100% sure exactly which operation is taking the time. If you can't determine the culprit, binary search is your friend (make the events finer grained).
It might sound really primitive, but it works. Also, if you make a library out of it, you can use it in any project. It's also handy because you can easily turn it on in production as well.
Aside from the GC that others have mentioned, try taking thread dumps every 5-10 seconds for about 30 seconds during your slowdown. It could be the case that DB calls, a web service, or some other dependency has become slow. If you look at the thread dumps you will be able to see threads that don't appear to move, and you can narrow down the culprit that way.
From the GC standpoint, do you monitor your CPU usage during these times? If the GC is running frequently, you will see a jump in your overall CPU usage.
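If you would rather capture those dumps from inside the JVM than shell out to jstack, here is a sketch using the standard ThreadMXBean; run it on a timer during the slowdown and diff the output:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Prints a full thread dump; threads that show the same stack across
    // several dumps are the ones worth investigating.
    public final class ThreadDumper {

        public static void dump() {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                System.out.println('"' + info.getThreadName() + "\" " + info.getThreadState());
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
                System.out.println();
            }
        }

        public static void main(String[] args) {
            dump();
        }
    }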
If only this were a Solaris box, prstat would be your friend.
For acute issues like this, a quick jstack <pid> should point out the problem area straight away. Probably no need to get fancy about it.
If I had to guess, I'd say HotSpot jumped in and tightly optimised some badly written code. NetBeans grinds to a halt where it uses a WeakHashMap with newly created objects to cache file data; once optimised, the entries can be removed from the map straight after being added. Obviously, if the cache is being relied upon, a lot of file activity follows. You probably won't see the drive light up, because it'll all be cached by the OS.

What is the best way to monitor (java) process deaths on a Windows box?

We have a curious problem with our Java processes dying.
The application doesn't produce a stack trace or write anything to the logs; the process just randomly dies. It's a heavily used application, but the problem only appears about once a month.
We're currently looking into using Process Monitor but any other suggestions would be welcome.
Edit:
It's a distributed Java application, running on WebLogic with an in-house web framework (yes, this is a terrible idea, but it's been running for eight years), connecting to Oracle.
-
Out of memory? Our logs would catch java.lang.OutOfMemoryError, as Brian Agnew suggested.
Write crashes to a log? I don't think Java ever gets the chance; the death is happening at the process level, rather than Java exiting.
Can you wrap it in a shell script that captures the log files (stdout/stderr) and the exit code (which should give some indication as to how it died)? On JVM exit you can also capture machine-level stats using WMI.
If the VM itself is crashing, it'll leave behind an hs_err_pid... file that contains stack traces and machine-level debug info. You can then use that to diagnose the VM issue. See this blog entry for further information.
If the problem is related to the app's behaviour, it may be worth looking at JConsole, although from your description of the issue, this sounds much more like a low level VM issue.
(I assume you're on the latest VM for your Java version number etc.)
You can use a Linux Nagios server to monitor the health of your Windows machines and services. Have a look at: nagios-monitoring-windows.
If you have such problems with your Java app, you should test and debug it; applications shouldn't die without a trace. Look for log files. Which vendor is the app from, or is it written in-house? Try enforcing a different Log4j/logger debug level, monitor your system with Cacti etc. to narrow down the possible causes of such a crash, and talk to the software vendor.
Is enough memory available, or could the app be running out of memory? Is it a standalone Java process or a Java process inside a Tomcat/JBoss server?
Have you written down the crash times in a log? Do they occur at varying intervals, or at nearly regular intervals?
VisualVM is a new tool which makes monitoring Java applications easier:
https://visualvm.dev.java.net/description.html
"VisualVM is a tool that provides detailed information about Java applications while they are running. It provides an intuitive graphical user interface that allows you to easily see information about multiple Java applications."
