Java - why doesn't jstack work in a tight loop

I've noticed that jstack doesn't work when code is in a tight loop.
For example in this code:
private static void testAllocationNotOnHeap() {
    long time = System.currentTimeMillis();
    long result = 0;
    for (int j = 0; j < 20_000; j++) {
        for (int i = 0; i < 100_000_000; i++) {
            result += new TestObject(5, 5).getResult();
        }
    }
    System.out.println("Allocation not on heap took "
            + (System.currentTimeMillis() - time)
            + " with result " + result);
}
If you run jstack you get this message:
Unable to open socket file: target process not responding or HotSpot VM not loaded
The -F option can be used when the target process is not responding
As soon as you allow the code to 'breathe', e.g. by adding a System.out every few thousand iterations, it all works fine.
Note that the same thing happens if you try to connect jconsole or Flight Recorder from JMC.
I'd like to understand why this happens and whether there's anything I can do about it, as I'd like to profile a tight loop.
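For reference, a minimal sketch of the workaround described above (the method name and the print interval are my own choices; TestObject is the class from the question):
private static void testAllocationNotOnHeapWithBreathing() {
    long time = System.currentTimeMillis();
    long result = 0;
    for (int j = 0; j < 20_000; j++) {
        for (int i = 0; i < 100_000_000; i++) {
            result += new TestObject(5, 5).getResult();
            // as described above: an occasional System.out call in the loop
            // is enough to let jstack/jconsole attach again
            if (i % 10_000_000 == 0) {
                System.out.print("");
            }
        }
    }
    System.out.println("Took " + (System.currentTimeMillis() - time) + " with result " + result);
}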

Related

Groovy script reloading at runtime

I want to be able to execute a Groovy script from my Java application, and I want to reload the Groovy script at runtime if needed. According to their tutorials, I can do something like this:
long now = System.currentTimeMillis();
for (int i = 0; i < 100000; i++) {
    try {
        GroovyScriptEngine groovyScriptEngine = new GroovyScriptEngine("");
        System.out.println(groovyScriptEngine.run("myScript.groovy", new Binding()));
    } catch (Exception e) {
        e.printStackTrace();
    }
}
long end = System.currentTimeMillis();
System.out.println("time " + (end - now)); // 24 secs
myScript.groovy
"Hello-World"
This works fine and the script is reloaded every time I change a line in myScript.groovy.
The problem is that this is not time efficient: it re-parses the script from the file every time.
Is there any alternative? For example, something smarter that checks whether the script has already been parsed and, if it has not changed since the last parse, does not parse it again.
<< edited due to comments >>
As mentioned in one of the comments, separating parsing (which is slow) from execution (which is fast) is mandatory if you need performance.
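Before adding any file watching, a minimal Java sketch of that idea could look like the following: parse once, reuse the compiled Script, and re-parse only when the file's lastModified timestamp changes (the class name and the timestamp check are my illustration, not part of the answer below; the file name is the one from the question):
import groovy.lang.Binding;
import groovy.lang.GroovyShell;
import groovy.lang.Script;
import java.io.File;
import java.io.IOException;

public class CachedScriptRunner {
    private final File source = new File("myScript.groovy");
    private final GroovyShell shell = new GroovyShell();
    private Script script;          // parsed once, reused across runs
    private long lastParsed = -1;   // lastModified value at the time of the last parse

    Object run(Binding binding) throws IOException {
        long modified = source.lastModified();
        if (script == null || modified != lastParsed) {   // re-parse only when the file changed
            script = shell.parse(source);
            lastParsed = modified;
        }
        script.setBinding(binding);
        return script.run();
    }
}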
For reactive reloading of the script source we can, for example, use the Java NIO watch service:
import groovy.lang.*
import java.nio.file.*

def source = new File('script.groovy')
def shell = new GroovyShell()
def script = shell.parse(source.text)

def watchService = FileSystems.default.newWatchService()
source.canonicalFile.parentFile.toPath().register(watchService, StandardWatchEventKinds.ENTRY_MODIFY)

boolean keepWatching = true
Thread.start {
    while (keepWatching) {
        def key = watchService.take()
        if (key.pollEvents()?.any { it.context() == source.toPath() }) {
            script = shell.parse(source.text)
            println "source reloaded..."
        }
        key.reset()
    }
}

def binding = new Binding()
def start = System.currentTimeMillis()
for (i = 0; i < 100; i++) {
    script.setBinding(binding)
    def result = script.run()
    println "script ran: $result"
    Thread.sleep(500)
}
def delta = System.currentTimeMillis() - start
println "took ${delta}ms"
keepWatching = false
The above starts a separate watcher thread which uses the Java watch service to monitor the parent directory for file modifications and reloads the script source whenever a modification is detected. This assumes Java 7 or later. The sleep is only there to make it easier to play around with the code and should naturally be removed when measuring performance.
Storing the expression System.currentTimeMillis() in script.groovy and running the above code leaves it looping twice a second. Making modifications to script.groovy during the loop results in:
~> groovy solution.groovy
script ran: 1557302307784
script ran: 1557302308316
script ran: 1557302308816
script ran: 1557302309317
script ran: 1557302309817
source reloaded...
script ran: 1557302310318
script ran: 1557302310819
script ran: 1557302310819
source reloaded...
where the source reloaded... lines are printed whenever a change was made to the source file.
I'm not sure about Windows, but I believe that at least on Linux Java uses the fsnotify machinery under the covers, which should make the file-monitoring part performant.
It should be noted that if we are really unlucky, the script variable will be reset by the watcher thread between the two lines:
script.setBinding(binding)
def result = script.run()
which would break the code as the reloaded script instance would not have the binding set. To fix this, we can for example use a lock:
import groovy.lang.*
import java.nio.file.*
import java.util.concurrent.locks.ReentrantLock

def source = new File('script.groovy')
def shell = new GroovyShell()
def script = shell.parse(source.text)

def watchService = FileSystems.default.newWatchService()
source.canonicalFile.parentFile.toPath().register(watchService, StandardWatchEventKinds.ENTRY_MODIFY)

lock = new ReentrantLock()

boolean keepWatching = true
Thread.start {
    while (keepWatching) {
        def key = watchService.take()
        if (key.pollEvents()?.any { it.context() == source.toPath() }) {
            withLock {
                script = shell.parse(source.text)
            }
            println "source reloaded..."
        }
        key.reset()
    }
}

def binding = new Binding()
def start = System.currentTimeMillis()
for (i = 0; i < 100; i++) {
    withLock {
        script.setBinding(binding)
        def result = script.run()
        println "script ran: $result"
    }
    Thread.sleep(500)
}
def delta = System.currentTimeMillis() - start
println "took ${delta}ms"
keepWatching = false

def withLock(Closure c) {
    def result
    lock.lock()
    try {
        result = c()
    } finally {
        lock.unlock()
    }
    result
}
which convolutes the code somewhat but keeps us safe against concurrency issues which tend to be hard to track down.
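As a side note, an alternative sketch of the same idea in Java (purely an illustration, not part of the original answer): keep the current Script in an AtomicReference and capture it into a local variable before setting the binding, so a swap by the watcher thread between the two calls no longer matters. The trade-off is that a reload only takes effect on the next iteration.
import groovy.lang.Binding;
import groovy.lang.GroovyShell;
import groovy.lang.Script;
import java.io.File;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

public class AtomicScriptHolder {
    private final GroovyShell shell = new GroovyShell();
    private final AtomicReference<Script> scriptRef = new AtomicReference<>();

    // called once at startup and again from the watcher thread on every modification
    void reload(File source) throws IOException {
        scriptRef.set(shell.parse(source));
    }

    Object runOnce(Binding binding) {
        Script current = scriptRef.get();  // local capture: a concurrent swap of the
        current.setBinding(binding);       // reference no longer affects this run
        return current.run();
    }
}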

BigQueue disk space not clearing

I am using a Java persistence queue named BigQueue. It stores its data on disk, and bigQueue.gc() is supposed to clear the used disk space. However, bigQueue.gc() is not clearing the used disk space, and the disk usage keeps increasing.
IBigQueue bigQueue = new BigQueueImpl("/home/test/BigQueueNew", "demo1");
for (int i = 0; i < 10000; i++) {
    ManagedObject mo = new ManagedObject();
    mo.setName("Aravind " + i);
    bigQueue.enqueue(serialize(mo));
}
while (!bigQueue.isEmpty()) {
    ManagedObject mo = (ManagedObject) deserialize(bigQueue.dequeue());
    System.out.println("Key Dqueue ME");
}
bigQueue.close();
// bigQueue.removeAll();
// bigQueue.gc();
// System.out.println("Big Queue is " + bigQueue.isEmpty() + " Size is " + bigQueue.size());
In case someone else is looking at this as well: if you are using Java 11 on Ubuntu, this could be a known issue; refer to the link below.
Unless it has been fixed upstream, you could download the source and patch it yourself.
https://github.com/bulldog2011/bigqueue/issues/39
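For what it's worth, a minimal sketch of the usage pattern the question describes, with gc() called after the queue has been drained and before close() (in the question's snippet the gc() call is commented out and sits after close()). ManagedObject and the serialize/deserialize helpers are the question's own; whether gc() actually reclaims the space here on Java 11/Ubuntu is exactly what the linked issue is about.
// assumes the surrounding method declares throws IOException
IBigQueue bigQueue = new BigQueueImpl("/home/test/BigQueueNew", "demo1");
try {
    for (int i = 0; i < 10000; i++) {
        ManagedObject mo = new ManagedObject();
        mo.setName("Aravind " + i);
        bigQueue.enqueue(serialize(mo));
    }
    while (!bigQueue.isEmpty()) {
        ManagedObject mo = (ManagedObject) deserialize(bigQueue.dequeue());
    }
    bigQueue.gc();   // attempt to reclaim space for consumed entries while the queue is still open
} finally {
    bigQueue.close();
}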

Eclipse can't find/open text files from path

I'm currently making an app which reads from text files and then does cool stuff with the words inside them. Unfortunately, Eclipse can't seem to find/open the text files. Since this is my first app, I'm not 100% sure I did the whole "putting-files-in-Eclipse" thing correctly.
Here are two screenshots that pretty much sum up the whole problem:
Error message when the method is executed
My directories look like this.
I already wrote another program where I used similar pathing and everything worked fine.
Here's the code: (elemArray contains "wi", "wa", "f", "l", "d")
String[] elemArray = elems.toArray(new String[0]);
for (int i = 0; i < 5; ++i) {
    for (int l = 3; l < 6; ++l) {
        checkFile = new Scanner(new File("texts/" + elemArray[i] + "monster" + l + ".txt")).useDelimiter(",\\s*");
        // ... does some other irrelevant stuff here
What am I doing wrong?
From the available information I suspect a working directory mismatch.
Working Directory
The working directory when launching your Java program is not what you expect. The new File("texts/" [...] call creates a relative path, which is resolved against that working directory.
You can specify the working directory in the Launch Configuration in the Arguments tab near the bottom in the Working directory: section:
Test/Debug
Extract the new File("texts/" [...] call into a variable (it is quite a long line as it is). You can then inspect f.getAbsoluteFile() to check that it resolves to the location you expect.
i.e. rewrite it like this (I would probably extract the string passed to new File() too, by the way):
String[] elemArray = elems.toArray(new String[0]);
for (int i = 0; i < 5; ++i) {
    for (int l = 3; l < 6; ++l) {
        File f = new File("texts/" + elemArray[i] + "monster" + l + ".txt");
        checkFile = new Scanner(f).useDelimiter(",\\s*");
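To see where the relative path actually ends up, a couple of diagnostic printlns could be dropped in right after the File is created (and removed again once the path resolves correctly); user.dir is the JVM's working directory:
System.out.println("working dir: " + System.getProperty("user.dir"));
System.out.println("resolved to: " + f.getAbsolutePath() + " (exists: " + f.exists() + ")");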

Implement Multi-threading on Java Program

I'm writing a little Java program which uses PsExec.exe, launched from cmd via ProcessBuilder, to copy and install an application on networked PCs (the number of PCs that need to be installed can vary from 5 to 50).
The program works fine if I launch ProcessBuilder for each PC sequentially.
However, to speed things up I would like to implement some form of multithreading which would allow me to install 5 PCs at a time concurrently (one "batch" of 5 ProcessBuilder processes until all PCs have been installed).
I was thinking of using a fixed thread pool in combination with the Callable interface (each execution of PsExec returns a value which indicates whether the execution was successful and which I have to evaluate).
The code used for the ProcessBuilder is:
// Start iterating over all PC in the list:
for (String pc : pcList)
{
    counter++;
    logger.info("Starting the installation of remote pc: " + pc);
    updateMessage("Starting the installation of remote pc: " + pc);
    int exitVal = 99;
    logger.debug("Exit Value set to 99");
    try
    {
        ProcessBuilder pB = new ProcessBuilder();
        pB.command("cmd", "/c",
                "\"" + psExecPath + "\"" + " \\\\" + pc + userName + userPassword + " -c" + " -f" + " -h" + " -n 60 " +
                "\"" + forumViewerPath + "\"" + " -q " + forumAddress + remotePath + "-overwrite");
        logger.debug(pB.command().toString());
        pB.redirectError();
        Process p = pB.start();
        InputStream stErr = p.getErrorStream();
        InputStreamReader esr = new InputStreamReader(stErr);
        BufferedReader bre = new BufferedReader(esr);
        String line = null;
        line = bre.readLine();
        while (line != null)
        {
            if (!line.equals(""))
                logger.info(line);
            line = bre.readLine();
        }
        exitVal = p.waitFor();
    } catch (IOException ex)
    {
        logger.info("Exception occurred during installation of PC: \n" + pc + "\n " + ex);
        notInstalledPc.add(pc);
    }
    if (exitVal != 0)
    {
        notInstalledPc.add(pc);
        ret = exitVal;
        updateMessage("");
        updateMessage("The remote pc: " + pc + " was not installed");
        logger.info("The remote pc: " + pc + " was not installed. The error message returned was: \n" + getError(exitVal) + "\nProcess exit code was: " + exitVal);
    }
    else
    {
        updateMessage("");
        updateMessage("The remote pc: " + pc + " was succesfully installed");
        logger.info("The remote pc: " + pc + " was succesfully installed");
    }
Now I've read some material on how to implement Callable, and I would like to wrap my ProcessBuilder call in a Callable and then submit all the tasks from within the for loop.
Am I on the right track?
You can surely do that. I suppose you want to use Callable instead of Runnable so that you can get the result of your exitVal?
It doesn't seem like you have any shared data between your threads, so I think you should be fine. Since you even know how many Callables you are going to create, you could build a collection of your Callables and then do
List<Future<SomeType>> results = pool.invokeAll(collection)
This would allow for easier handling of your results. Probably the most important thing you need to figure out when deciding whether or not to use a thread pool is what to do if your program terminates while threads are still running: do you HAVE to finish what you're doing in the threads, do you need seamless handling of errors, etc.?
Check out the Java thread pools documentation: https://docs.oracle.com/javase/tutorial/essential/concurrency/pools.html
or search the web; there are tons of posts/blogs about when to use thread pools and when not to.
But seems like you're on the right track!
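A rough sketch of what that could look like here (PsExecTask, buildCommand() and installAll() are illustrative names I made up; the actual command string would be the concatenation already shown in the question):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PsExecRunner {

    // hypothetical wrapper: one Callable per remote PC, returning the PsExec exit value
    static class PsExecTask implements Callable<Integer> {
        private final String pc;
        PsExecTask(String pc) { this.pc = pc; }

        @Override
        public Integer call() throws IOException, InterruptedException {
            ProcessBuilder pB = new ProcessBuilder();
            // buildCommand() stands in for the psExecPath/userName/... concatenation from the question
            pB.command("cmd", "/c", buildCommand(pc));
            Process p = pB.start();
            try (BufferedReader bre = new BufferedReader(new InputStreamReader(p.getErrorStream()))) {
                String line;
                while ((line = bre.readLine()) != null) {
                    if (!line.isEmpty()) System.out.println(pc + ": " + line);
                }
            }
            return p.waitFor();
        }

        private String buildCommand(String pc) {
            return "psexec \\\\" + pc + " ..."; // placeholder only
        }
    }

    static void installAll(List<String> pcList) throws InterruptedException, ExecutionException {
        List<Callable<Integer>> tasks = new ArrayList<>();
        for (String pc : pcList) {
            tasks.add(new PsExecTask(pc));
        }
        ExecutorService pool = Executors.newFixedThreadPool(5);   // at most 5 installs at a time
        List<Future<Integer>> results = pool.invokeAll(tasks);    // blocks until every task is done
        pool.shutdown();
        for (int i = 0; i < results.size(); i++) {
            int exitVal = results.get(i).get();                   // 0 would mean the install succeeded
            System.out.println(pcList.get(i) + " exit value: " + exitVal);
        }
    }
}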
Thank you for your reply! It definitely put me on the right track. I ended up implementing it like this:
ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(5); // NEW
List<Future<List<String>>> resultList = new ArrayList<>();

updateMessage("Starting the installation of all remote pc entered...");

// Start iterating over all PC in the list:
for (String pc : pcList) {
    counter++;
    logger.debug("Starting the installation of remote pc: " + pc);
    psExe p = new psExe(pc);
    Future<List<String>> result = executor.submit(p); // NEW
    resultList.add(result);
}

for (Future<List<String>> future : resultList) {
    .......
In the last for loop I read the result of each operation and either write it on screen or act according to the value returned.
I still have a couple of questions, as it is not really clear to me:
1 - If I have 20 PCs and submit all the Callable tasks to the pool in my first for loop, do I understand correctly that only 5 threads will be started (thread pool size = 5), but all tasks will already be created and put in a waiting state, and as soon as a running thread completes and returns a result the next task will automatically start, until all PCs are finished?
2 - What is the difference (advantage) of using invokeAll() as you suggested, compared to the method I used (submit() inside the for loop)?
Thank you once more for your help... I really love this Java stuff!! ;-)

Java clipboard access 10 second hang

When I get the contents of the system clipboard, there is a small chance that the call will hang for exactly 10 seconds.
I ran into this because my popup-menu creation would occasionally hang: I was checking the state of the clipboard to enable/disable a paste option.
After narrowing it down to getSystemClipboard().getContents(), I wrote a console app that just loops and queries it. Under Windows it runs fine, but under Linux and X I get quite a few hangs.
In my GUI app, the 10 second hang will be cut short if there are any UI events (i.e. mouse movement).
I've also run my console app pointing at Xvfb (a headless X virtual frame buffer); if there is only one console app connected to it hitting the clipboard it usually succeeds, but with a few copies of the app hitting the same X display the errors start happening.
Below is an example app. I was wondering whether it shows the same problem for you.
import java.awt.Toolkit;

public class main {
    public static void main(String[] args) {
        System.out.println("Clipboard class: " + Toolkit.getDefaultToolkit().getSystemClipboard().getClass());
        long progstart_ts = System.currentTimeMillis();
        int slowcount = 0;
        int MAXLOOP = 1000;
        for (int i = 0; i < MAXLOOP; i++) {
            long start_ts = System.currentTimeMillis();
            Toolkit.getDefaultToolkit().getSystemClipboard().getContents(null);
            long end_ts = System.currentTimeMillis();
            long elapsed = end_ts - start_ts;
            if (elapsed > 1000) {
                System.out.println("Loop " + i + ", " + elapsed);
                slowcount++;
            }
        }
        long progend_ts = System.currentTimeMillis();
        long progelapsed = progend_ts - progstart_ts;
        System.out.println("Total time: " + progelapsed);
        System.out.println("Good/bad: " + (MAXLOOP - slowcount) + "/" + slowcount);
    }
}
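Not an answer to the underlying X11 issue, but as a mitigation sketch for the popup-menu case described above: the clipboard query can be moved off the calling thread and given a timeout, so a hang only costs the configured wait (the class name and the 500 ms value are arbitrary choices of mine):
import java.awt.Toolkit;
import java.awt.datatransfer.Transferable;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ClipboardWithTimeout {
    private static final ExecutorService EXEC = Executors.newSingleThreadExecutor();

    /** Returns the clipboard contents, or null if the query did not answer within the timeout. */
    static Transferable getContentsOrNull(long timeoutMillis) {
        Callable<Transferable> query =
                () -> Toolkit.getDefaultToolkit().getSystemClipboard().getContents(null);
        Future<Transferable> future = EXEC.submit(query);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);   // give up on this query; the worker thread may still be stuck
            return null;
        } catch (Exception e) {
            return null;
        }
    }
}
Example use when building the menu: Transferable t = ClipboardWithTimeout.getContentsOrNull(500); and treat null as "paste unavailable".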
