Eclipse doesn't run embedded Tomcat shutdown sequence when pressing stop button - java

Usually, when I hit the stop button from within Eclipse on the embedded Tomcat instance, the current console is swapped with a new, empty, console window. The former console window is still available though, and if I switch back to it I can see the Tomcat shutdown sequence (with all the messages, warning and such).
Sometimes, however, when I hit the stop button the server is in fact stopped, but the shutdown process isn't visible: nothing gets written to the console, which stays on the last application message.
The Tomcat instance is actually stopped almost instantaneously.
Why is this happening? Am I doing something wrong that can happen in production too, or is this just a quirk in the Eclipse Tomcat plugin?
I never saw such behaviour until a couple of days ago, when I started hunting down never-ending threads. I'm putting together a simple cleaning strategy: I basically trace, in a singleton, the creation of each component that makes use of threads, and I call any interrupt(), shutdown() or cleanup() method those components expose from a ServletContextListener.
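For illustration, here is a minimal sketch of that kind of listener; ThreadOwnerRegistry, getExecutors() and getThreads() are hypothetical names standing in for whatever actually tracks the thread-owning components:

import java.util.concurrent.ExecutorService;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Hypothetical sketch of the cleanup strategy described above.
// ThreadOwnerRegistry is an assumed singleton tracking components that own threads.
@WebListener
public class ThreadCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        ThreadOwnerRegistry registry = ThreadOwnerRegistry.getInstance();
        for (ExecutorService pool : registry.getExecutors()) {
            pool.shutdownNow();          // interrupt pooled worker threads
        }
        for (Thread thread : registry.getThreads()) {
            thread.interrupt();          // ask long-running threads to stop
        }
    }
}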
I just discovered that when the shutdown process is correctly called, on the debug pane it says:
<terminated, exit value: 0>\path\to\java_home\bin\javaw.exe
when, instead, the server is suddenly halted, I see:
<terminated, exit value: 1>\path\to\java_home\bin\javaw.exe

Related

Why are scheduled methods randomly stopping in spring boot?

I have around 20 scheduled methods for automation. They all run fine, but they all stop after a few hours.
My logs don't show any errors or a server crash; it's just as if Spring Boot decided not to run any @Scheduled methods anymore.
My first intuition was that there was probably an infinite loop in the body of a method. However, all my methods have loggers at the start and at the end, i.e. if there were an infinite loop, my final log wouldn't say [foo finished successfully].
I even created a tester that just prints every 5 minutes, and that method also stopped, along with all the other ones, after a few hours.
My second intuition was to check the file size, since maybe the log file was too big and the logger just stopped logging into it, and somehow this made the automation stop (scraping the bottom of the barrel at this point). But since the automation only ran for a few hours, the file size is only ~1200 kB, so this was not the issue.
Basically, I don't think there's an infinite loop somewhere because of the way my loggers are set up, I'm not getting any error messages in my logs, and I don't know how to debug this.
I tried to include as much useful information as I could; if something is not clear or missing, please let me know.
Other than that, any ideas on how to debug or what could be causing this?
The problem I was encountering was quite unique; I'm posting the answer in case it helps someone in the future.
The loggers stopped writing to the file even though they weren't meant to stop (another issue I'll fix eventually), which meant that even if the last log said [foo finished successfully], it wasn't necessarily the last real log of the application.
Since scheduling in Spring is single-threaded by default, there was a poorly optimized method call that would take ~12h to complete, and this made it seem as if the automation had stopped, since the logs weren't updating and automation elsewhere wasn't happening.
I would never let this ~12h method call finish; had I waited the 12h, I would've realized that the automation hadn't stopped, it simply had a bottleneck method. I would always restart the automation before this method could finish, and this made it look as if the automation had indeed stopped for an unknown reason.
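For what it's worth, one way to keep a single slow job from starving every other @Scheduled method is to give the scheduler more than its default single thread. This is not part of the original fix, just a sketch; the pool size of 5 and the SchedulingConfig name are arbitrary:

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.SchedulingConfigurer;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.scheduling.config.ScheduledTaskRegistrar;

@Configuration
public class SchedulingConfig implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar registrar) {
        // By default all @Scheduled methods share one thread; a pool prevents
        // one long-running method from blocking all the others.
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(5);
        scheduler.setThreadNamePrefix("sched-");
        scheduler.initialize();
        registrar.setTaskScheduler(scheduler);
    }
}

In recent Spring Boot versions the same effect can be had with the spring.task.scheduling.pool.size property instead of a configuration class.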
How I found out: in my case, my application runs in a container, and once it is running I just press CTRL + Z.
I had a feeling that the loggers weren't working properly, so once the log file stopped updating I decided to check the live logging of the application instead by typing fg, and I realized that even though the log file wasn't updating, the server was still running fine.

java Process fails if process is ptrace ATTACH

I'm having the strangest problem: maybe someone can help out.
I have a daemon written in Java (I've tried the latest Java 8, both OpenJDK and Oracle) on Linux. It will spawn some processes, and there is a thread which watches to see if they're complete; meanwhile it will run other commands, some of which may kill the processes, etc.
Everything works absolutely fine almost all the time. However, one of the operations that the daemon can perform on its subprocesses is it can grab a core from them (without killing them: a core of the active process). It does this by loading a .so and calling it via JNI, which uses ptrace(PTRACE_ATTACH...) to pause the subprocess and grab core information, then it uses ptrace(PTRACE_DETACH...) to let it run again.
This also works fine and I can see the process operates correctly after the core has completed and all is fine.
Except that Java's Process object is now all wonky. Whenever I invoke exitValue() on this process once this has happened, I always get back an exit code of 4991 (which, if you decode it, means WIFSTOPPED with a signal value of 19, or SIGSTOP). That's not entirely surprising, since the Linux man page states that a ptraced child will look to its parent as WIFSTOPPED. But the problem is that once the Process object reaches this state, it will never do anything else, even though the process actually goes back to running after the detach.
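For reference, a small sketch of that decoding, written out in Java using the usual glibc wait-status layout (WIFSTOPPED is true when the low byte equals 0x7f, WSTOPSIG is the next byte):

public class WaitStatusDecode {
    public static void main(String[] args) {
        int status = 4991;                            // 0x137f, as returned by exitValue()
        boolean stopped = (status & 0xff) == 0x7f;    // WIFSTOPPED
        int stopSignal = (status >> 8) & 0xff;        // WSTOPSIG
        System.out.println("WIFSTOPPED=" + stopped + ", signal=" + stopSignal);  // signal 19 == SIGSTOP
    }
}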
Even more critically, even after the process really exits, exitValue() still returns 4991, and the process is not reaped (it lives on in Z (zombie) state, as defunct) and will never go away, until my daemon is killed whereupon init will inherit, and reap, the zombie processes.
I've looked at the value of isAlive() (it says no) and I've tried to run waitFor(0, TimeUnit.SECONDS) (it says not running), but none of them gets Process out of this broken state. This doesn't surprise me too much, since my reading suggests these methods are actually implemented in terms of exitValue() anyway.
This appears to me like a bug in Java Process but I'm not sure... I've tried reproducing it by attaching GDB to the process then detaching it, but that doesn't show the same problem. Similarly just using SIGSTOP / SIGCONT doesn't do it.
Anyone have any hints or thoughts?

IDE hangs in debug mode on break point in java fx application

I have a problem while debugging in IntelliJ IDEA: it hangs in debug mode on a breakpoint in listeners in a JavaFX application. I tried increasing the heap space, but it doesn't help. Maybe someone has had this problem too; please suggest what to do.
Set this as a VM parameter:
-Dsun.awt.disablegrab=true
It will mess up drag-and-drop and leave an artifact on your screen while the debugger is paused, but it will let you debug. The hang happens whenever you block the JavaFX thread.
This can happen for a simple reason: The application has a lock on the desktop, for example a modal dialog or a popup or an open menu. Then it stops in a breakpoint. This notifies the IDE. Now the IDE tries to do something on your desktop but can't since the application still has a lock on the whole desktop -> deadlock.
You can use a tool like Chronon which records the whole program and lets you move back and forth on the timeline.
The last option is logging or poor man's debugger (System.out) instead.
[EDIT]
it's hard to check with System.out which of the 20 parameters is not equal.
It's actually pretty easy:
System.out.println("check");
if(!a1.equals(b1)) System.out.println(a1+"!="+b1);
Then duplicate the last line for each parameter pair. That way, you will only get output when something is actually interesting (and not for the 19 equal parameters). Add some labels to the output if you can't distinguish aX from aY (i.e. both are true):
if(!a1.equals(b1)) System.out.println("a1:"+a1+"!="+b1);

Find, from a ShutdownHook, why a program exits

If I've got a Java program that can exit for various reasons, like:
because the main window, which is set to "exit on close", was closed
because there are some System.exit( 0 ) in the code
because there are no more windows at all (and none was set to exit on close) but there are still several threads running; then at one point only daemon threads remain, and hence the program exits.
And I've got a shutdown hook installed (which is running fine).
Is there any way to know, from my shutdown hook, what caused the Java program to exit?
(note that I'm not asking if it's a good idea or not to have System.exit(...) spread over the codebase: this is not what this question is about)
Basically I'd like to know whether I'm forced to intercept every single possible JVM exit point myself and add info there, or whether there's already a mechanism allowing me to do that.
You can add a SecurityManager whose checkExit() will be called on System.exit() (to determine whether it's allowed). You can save where this was called from for later, or deal with it directly in the SecurityManager.
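A minimal sketch of that approach, assuming a JDK where installing a SecurityManager is still permitted (it is deprecated in recent releases); the class and field names are made up for illustration:

import java.security.Permission;

public class ExitTracingSecurityManager extends SecurityManager {

    // read this from the shutdown hook; null means the JVM is exiting
    // for some other reason (e.g. only daemon threads left)
    public static volatile Throwable exitOrigin;

    @Override
    public void checkExit(int status) {
        exitOrigin = new Throwable("System.exit(" + status + ") called here");
        // no SecurityException is thrown, so the exit proceeds normally
    }

    @Override
    public void checkPermission(Permission perm) {
        // allow everything else
    }
}

Install it with System.setSecurityManager(new ExitTracingSecurityManager()). In the shutdown hook, a non-null exitOrigin gives you the stack trace of the System.exit() call; if it is null, the exit came from somewhere else (for example the last non-daemon thread ending).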
Your shutdown hook will just run your Runnable logic in a separate thread when the JVM is shutting down. You can't do anything more with this.

Java Swing application won't quit after receiving TERM signal

I have a Java Swing application that is being used as a cluster application. The problem is that every time the cluster tries to terminate the Java application, it just hangs and Windows displays the "End Now" dialog. The application in question is a server-type one, so it spawns a thread for every connection attempt made to it.
I learned that the cluster sends the TERM signal using the program presented in this article. BUT when the console application is used as a cluster application, the cluster can just terminate the process after a few TERM signals.
I also tried the vanilla sample desktop application that's available when making a new project using NetBeans 6.8. It also won't terminate even after receiving the signal.
From the demonstrations done above, I think that it has something to do with Swing or with the threads. Can anyone help me with this? Thank you.
EDIT: It could be killed by using the task manager though I think it sends another signal.
When your Java application receives the TERM signal it will run any registered shut-down hooks before terminating. One possibility is that one of these shut-down hooks is blocking indefinitely or else taking a long time (>30 seconds) to run, causing the Windows "End Now" dialog to be displayed.
One thing you could try is to register a shut-down hook that simply prints to the console and verify that it is indeed being called. However, unfortunately there'll be no way to determine whether other shut-down hooks have run at this point as hooks are run in an arbitrary order.
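A minimal sketch of that check (the lambda syntax assumes Java 8+; on older JDKs the same thing works with an anonymous Thread subclass):

public class HookCheck {
    public static void main(String[] args) throws InterruptedException {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("shutdown hook reached");   // should appear when the TERM signal arrives
            System.out.flush();
        }));
        Thread.sleep(Long.MAX_VALUE);   // keep the process alive until it is signalled
    }
}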
