I have one CompletableFuture that just runs another CompletableFuture (which always takes about 2 seconds and has a timeout of 50 ms) and waits for it to complete with a timeout of 1 second.
The problem is that the timeout of the inner future never works: its get blocks for about two seconds even though it has a timeout of 50 ms, and consequently the outer CompletableFuture times out.
sleepFor2000Ms calls Thread.sleep(2000).
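For reference, a minimal sketch of what that helper might look like (the exact body isn't shown here, only that it calls Thread.sleep(2000); it has to swallow the InterruptedException because it is used as a Runnable):

private static void sleepFor2000Ms() {
    try {
        Thread.sleep(2000); // simulate two seconds of work
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
    }
}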
private static void oneCompletableFutureInsideAnother() throws InterruptedException, ExecutionException {
    long time = System.nanoTime();
    try {
        System.out.println("2 started");
        CompletableFuture.runAsync(() -> {
            long innerTime = System.nanoTime();
            try {
                System.out.println("inner started");
                CompletableFuture.runAsync(TestApplication::sleepFor2000Ms)
                    .get(50, TimeUnit.MILLISECONDS); // this get doesn't work
                    // it waits way longer, until the future completes successfully
                System.out.println("inner completed successfully");
            } catch (InterruptedException | ExecutionException | TimeoutException e) {
                System.out.println("inner timed out");
            }
            long innerTimeEnd = System.nanoTime();
            System.out.println("inner took " + (innerTimeEnd - innerTime) / 1_000_000 + " ms");
        }).get(1, TimeUnit.SECONDS);
        System.out.println("2 completed successfully");
    } catch (TimeoutException e) {
        System.out.println("2 timed out");
    }
    long endTime = System.nanoTime();
    System.out.println("2 took " + (endTime - time) / 1_000_000 + " ms");
}
Expected output looks like this (and I get this output on Java 8):
2 started
inner started
inner timed out
inner took 61 ms
2 completed successfully
2 took 62 ms
Actual output is (I get it on Java 9 and higher):
2 started
inner started
2 timed out
2 took 1004 ms
inner completed successfully
inner took 2013 ms
If I do the same job inside a single CompletableFuture, it times out correctly:
private static void oneCompletableFuture() throws InterruptedException, ExecutionException {
    long time = System.nanoTime();
    try {
        System.out.println("1 started");
        CompletableFuture.runAsync(TestApplication::sleepFor2000Ms)
            .get(50, TimeUnit.MILLISECONDS); // this get works ok
            // it waits for 50 ms and then throws TimeoutException
        System.out.println("1 completed successfully");
    } catch (TimeoutException e) {
        System.out.println("1 timed out");
    }
    long endTime = System.nanoTime();
    System.out.println("1 took " + (endTime - time) / 1_000_000 + " ms");
}
Is it intended to work this way, am I doing something wrong, or is it perhaps a bug in the Java library?
Unlike the Java 8 version, the .get(50, TimeUnit.MILLISECONDS) call in newer versions tries to perform other pending tasks instead of blocking the caller thread, without considering that it can't predict how long those tasks may take and hence by what margin it may miss the timeout. When it happens to pick up the very task it's waiting for, the result is like having no timeout at all.
When I add a Thread.dumpStack(); to sleepFor2000Ms(), the affected environments print something like
java.lang.Exception: Stack trace
at java.base/java.lang.Thread.dumpStack(Thread.java:1380)
at TestApplication.sleepFor2000Ms(TestApplication.java:36)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1796)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.helpAsyncBlocker(ForkJoinPool.java:1253)
at java.base/java.util.concurrent.ForkJoinPool.helpAsyncBlocker(ForkJoinPool.java:2237)
at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1933)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2095)
at TestApplication.lambda$0(TestApplication.java:15)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1796)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
but note that this is a race. It does not always happen. And when I change the inner code to
CompletableFuture<Void> inner
= CompletableFuture.runAsync(TestApplication::sleepFor2000Ms);
LockSupport.parkNanos(1_000_000);
inner.get(50, TimeUnit.MILLISECONDS);
the timeout reproducibly works (this may still fail under heavy load though).
I could not find a matching bug report; however, there's a similar problem with ForkJoinTask: ForkJoinTask.get(timeout) Might Wait Forever. That one hasn't been fixed yet either.
I would expect that when Virtual Threads (a.k.a. Project Loom) become reality, such problems will disappear, as there will then be no reason to avoid blocking threads: the underlying native thread can be reused without such quirks.
Until then, you should rather avoid blocking worker threads in general. Java 8's strategy of starting compensation threads when worker threads get blocked doesn't scale well, so you'd be exchanging one problem for another.
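For example, instead of blocking inside the outer task with get, the timeout could be attached to the inner stage itself. A minimal sketch using the Java 9+ orTimeout method (not part of the original answer; it assumes the same java.util.concurrent imports as the code above):

CompletableFuture<Void> inner = CompletableFuture
        .runAsync(TestApplication::sleepFor2000Ms)
        .orTimeout(50, TimeUnit.MILLISECONDS);   // completes exceptionally after 50 ms
inner.whenComplete((ignored, failure) ->
        System.out.println(failure == null
                ? "inner completed successfully"
                : "inner timed out"));           // no worker thread is ever blocked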
Related
I am trying to measure the time that a thread takes to execute.
I have created a sample program:
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
public class Sample {
    public static void main(String[] args) {
        ThreadPoolExecutor exec = (ThreadPoolExecutor) Executors.newFixedThreadPool(1000);
        for (int i = 0; i < 1000; i++) {
            exec.execute(new Runnable() {
                @Override
                public void run() {
                    long start = System.currentTimeMillis();
                    try {
                        Thread.sleep(10000);
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                    long end = System.currentTimeMillis();
                    System.out.println("[Sample] Thread id : " + Thread.currentThread().getId() + ", time : " + (end - start));
                }
            });
        }
    }
}
Each thread sleeps for 10 seconds, so the duration (end - start) should be 10000. But some threads are taking more time than expected. I am guessing this also includes thread switching time and blocking time. Is there a way to measure the execution time in a threaded program in Java?
The thing is that I have a program that makes network calls in threads. So even if the socket timeout is, say, 60 seconds, the thread execution time is close to 2 minutes. My guess is that the above way of calculating execution time also accounts for thread switching time and blocking time; it is not measuring the actual thread execution time.
Thanks.
For such cases I use Java VisualVM; it is a very handy tool for finding concurrency issues.
I ran your code locally and this is what VisualVM shows me.
All of the threads have exactly the same sleep time, so the differences we are seeing in the console log are probably misleading.
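If the goal is to measure only the time a thread actually spends on a CPU, excluding sleeping, blocking, and waiting to be scheduled, one option (not shown in the original answer) is the per-thread CPU time from ThreadMXBean; a minimal sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimeSample {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long cpuStart = threads.getCurrentThreadCpuTime(); // nanoseconds of CPU time (-1 if unsupported)
        long wallStart = System.nanoTime();

        Thread.sleep(10000); // sleeping consumes wall-clock time but almost no CPU time

        long cpuMs = (threads.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        System.out.println("CPU time: " + cpuMs + " ms, wall-clock time: " + wallMs + " ms");
    }
}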
I wanted to launch a bunch of processes to work simultaneously, so I created an array of processes and then launched them like this:
Process[] p_ids = new Process[ids.length];
int index = 0;
// launching the processes
for (int id : ids) {
    String runProcessCommand = createCommandForProcess();
    System.out.println(runProcessCommand);
    try {
        p_ids[index] = Runtime.getRuntime().exec(runProcessCommand);
        index++;
    } catch (IOException e) {
    }
}
After that I wanted to wait for all of them to finish, so I iterate over the same array of processes; in each iteration I wait for the current process to finish, or for a specific timeout to pass,
like this:
for (Process p_id : p_ids) {
    try {
        // timeout for waiting a process
        p_id.waitFor(timeoutForSingleProcess, TimeUnit.HOURS);
        // error handling when reaching timeout
    } catch (InterruptedException e) {
        System.err.println("One of the process's execution time exceeded the timeout limit");
        e.printStackTrace();
    }
}
The problem is that I want to give a total_time_out, meaning one fixed timeout measured from launch for every process.
Say I have process1, process2, and process3 and I want to give a timeout of 1 hour. If any one of the processes (1, 2, or 3) takes more than an hour to finish, I want the timeout to kick in.
The problem with my code is that each timeout only starts counting down when its turn arrives in the loop (not at the same time as the other processes). I.e., if process1 takes half an hour and process2 takes 1 hour, the two processes are launched at the same time, but process2's timeout starts counting down half an hour after its launch (because we waited half an hour for process1 before moving on to process2). That way, a timeout that should have been triggered is missed.
Is there any process pool or something like that which could help me?
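One way to get that behaviour without a separate pool is to record a single deadline when the processes are launched and pass only the remaining time to each waitFor call. A minimal sketch, reusing the p_ids and timeoutForSingleProcess names from the code above (here treated as the total timeout in hours):

long deadline = System.nanoTime() + TimeUnit.HOURS.toNanos(timeoutForSingleProcess);
for (Process p_id : p_ids) {
    try {
        long remaining = deadline - System.nanoTime();
        // waitFor returns false if the process is still alive when the timeout elapses
        if (remaining <= 0 || !p_id.waitFor(remaining, TimeUnit.NANOSECONDS)) {
            System.err.println("Total timeout exceeded, killing the process");
            p_id.destroyForcibly();
        }
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}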
I have a thread that takes an object from an ArrayBlockingQueue named connectionPool. The thread may block if the queue is empty. To measure the time for which the calling thread is blocked, I use the following code:
long start = System.nanoTime();
DataBaseEndPoint dbep = connectionPool.take();
long end = System.nanoTime();
long elapsed = (end - start)/1000000;
Now, my concern is that the unblocked thread may start running on a different processor in a multi-processor machine. In that case, I am not entirely sure if the 'System Timer' used is the same on different processors.
This blog post (http://www.javacodegeeks.com/2012/02/what-is-behind-systemnanotime.html) suggests that Linux uses a different time-stamp counter (TSC) for each processor (also used for System.nanoTime()), which can really mess up the elapsed-time calculation in the above example.
The value is read from clock_gettime with CLOCK_MONOTONIC flag, which uses either TSC or HPET. The only difference with Windows is that Linux not even trying to sync values of TSC read from different CPUs, it just returns it as it is. It means that value can leap back and jump forward with dependency of CPU where it is read.
This link (http://lwn.net/Articles/209101/), however, suggests that the TSC is no longer used for high-resolution timers.
... the recently-updated high-resolution timers and dynamic tick patch set includes a change which disables use of the TSC. It seems that the high-resolution timers and dynamic tick features are incompatible with the TSC...
So, the question is: what is used by a Linux machine to return a value for System.nanoTime() currently? And is using System.nanoTime() safe for measuring elapsed time in the above case (a blocked thread resuming on another processor)? If it isn't safe, what's the alternative?
One thing invaluable about virtual machines (and life in general) is abstraction. A thread's execution time does not differ based on the number of cores; not in Linux, nor in Windows, etc. I hope I am not misunderstanding your question.
(Although I am using currentTimeMillis(), nanoTime() behaves the same, just at a different scale, of course.)
Check the following example I crafted:
public class SynchThreads {

    public static void main(String[] args) throws InterruptedException {
        GreedyTask gtA = new GreedyTask("A");
        GreedyTask gtB = new GreedyTask("B");
        Thread a = new Thread(gtA);
        Thread b = new Thread(gtB);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(gtA.toString() + " running time: " + gtA.getRunningTime());
        System.out.println(gtB.toString() + " running time: " + gtB.getRunningTime());
    }

    private static class GreedyTask implements Runnable {
        private long startedTime, finishedTime, totalRunTime;
        private String myName;

        public GreedyTask(String pstrName) {
            myName = pstrName;
        }

        public void run() {
            try {
                startedTime = System.currentTimeMillis();
                randomPowerNap(this);
                finishedTime = System.currentTimeMillis();
                totalRunTime = finishedTime - startedTime;
            } catch (Exception e) {
                System.err.println(e.getMessage());
            }
        }

        public String toString() { return ("Task: " + myName); }

        public long getRunningTime() { return this.totalRunTime; }
    }

    private static synchronized void randomPowerNap(GreedyTask gt) throws InterruptedException {
        System.out.println("Executing: " + gt.toString());
        long random = Math.round(Math.random() * 15000);
        System.out.println("Random time for " + gt + " is: " + random);
        Thread.sleep(random);
    }
}
The following is the output of a run on a 4-core Windows machine:
Executing: Task: A
Random time for Task: A is: 1225
Executing: Task: B
Random time for Task: B is: 4383
Task: A running time: 1226
Task: B running time: 5609 // what's funny about this? it is equal to Atime + Btime
This was run on a 4-core Linux machine:
Executing: Task: A
Random time for Task: A is: 13577
Executing: Task: B
Random time for Task: B is: 5340
Task: A running time: 13579
Task: B running time: 18920 // same results
Conclusion: B's total time includes the time it had to wait while randomPowerNap was held by A. Thanks to the hardware abstraction of the virtual machine, threads see no difference in their running times based on the number of cores; they all effectively run on one 'virtual big core', if you know what I mean.
I hope this helped.
I'm trying to perform a task every 5 minutes.
The task needs to start at xx:00, xx:05, xx:10, xx:15 and so on, so if the time is xx:37 the task will start at xx:40.
I used the following code to do that:
Date d1 = new Date();
d1.setMinutes(d1.getMinutes() + 5 - d1.getMinutes()%5);
d1.setSeconds(0);
this.timer.schedule(new Send(), d1, TEN_MINUTES/2);
Send looks like this:
class Send extends TimerTask {
    public void run() {
        if (SomeCondition) {
            Timestamp ts1 = new Timestamp(new java.util.Date().getTime());
            SendToDB(ts1);
        }
    }
}
So the result should be records whose minutes are divisible by 5 (minutes % 5 == 0).
But the record times I have are:
*05:35:00
*07:44:40
*07:54:40
*09:05:31
*09:50:00
As you can see, the first task starts perfectly, but then something goes wrong.
My guess is that the timer calculates the next 5-minute jump after the previous task has finished, so the task's run time affects the schedule, but it's just a guess.
The time a task takes to execute will delay the schedule. From the docs for schedule:
If an execution is delayed for any reason (such as garbage collection or other background activity), subsequent executions will be delayed as well.
You will be better off using scheduleAtFixedRate.
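Keeping the rest of your code the same, that is a one-line change (a sketch reusing the d1 and TEN_MINUTES names from the question):

// subsequent runs stay aligned to the original schedule instead of
// drifting by however long the previous run was delayed
this.timer.scheduleAtFixedRate(new Send(), d1, TEN_MINUTES / 2);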
Alternatively, you might try using a simple Thread with a loop to repeatedly perform the task. The last step in the loop can be to sleep the necessary time until you want to start the task again. Assuming that no one iteration of the loop takes five minutes, this will eliminate cumulative delays.
public void run() {
    long start = System.currentTimeMillis();
    while (shouldRun()) {
        doTask();
        long next = start + FIVE_MINUTES;
        try {
            Thread.sleep(next - System.currentTimeMillis());
            start = next;
        } catch (InterruptedException e) {
            . . .
        }
    }
}
This will start each iteration at the next five-minute interval and will not accumulate delays due to the running time of doTask() or any system delays. I haven't looked at the sources, but I suspect that this is close to what's in Timer.scheduleAtFixedRate.
Why don't you use a task scheduler, or simply a sleep call in a loop which lets the thread sleep for 5 minutes and then continues?
An alternative would be to use the Timer class.
I would probably make use of ScheduledExecutorService.scheduleAtFixedRate, which is a more modern approach than using a Timer and allows for multiple worker threads in case there are many tasks being scheduled.
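A minimal sketch of that approach, assuming the usual java.util.concurrent imports and reusing the Send task and the d1 start time computed in the question to align the first run:

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
long initialDelay = d1.getTime() - System.currentTimeMillis(); // wait until the next 5-minute mark
scheduler.scheduleAtFixedRate(new Send(), initialDelay, 5 * 60 * 1000, TimeUnit.MILLISECONDS);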
I adopted the concurrency strategy from this post. However, mine looks like this:
ExecutorService executorService = Executors.newFixedThreadPool(NUMBER_OF_CREATE_KNOWLEDGE_THREADS);
List<Callable<Collection<Triple>>> todo = new ArrayList<Callable<Collection<Triple>>>(this.patternMappingList.size());

for (PatternMapping mapping : this.patternMappingList) {
    todo.add(new CreateKnowledgeCallable(mapping, i++));
}

try {
    List<Future<Collection<Triple>>> answers = executorService.invokeAll(todo);
    for (Future<Collection<Triple>> future : answers) {
        Collection<Triple> triples = future.get();
        this.writeNTriplesFile(triples);
    }
}
catch (InterruptedException e) { ... }
catch (ExecutionException e) { ... }

executorService.shutdown();
executorService.shutdownNow();
But the ExecutorService never shuts down. I tried to debug how many of the CreateKnowledgeCallable instances are finished, but this number seems to vary (after a while no new threads/callables are executed, yet the service keeps running). I am sure I logged and printed every possible exception, but I can't see one happening. It also seems that after a while nothing happens anymore except that NUMBER_OF_CREATE_KNOWLEDGE_THREADS CPUs are spinning at 100% forever. What am I doing wrong?
If you need more specific information, I would be happy to provide it!
Kind regards,
Daniel
When you perform a shutdownNow() it interrupts all the threads in the pool. However, if your code ignores interrupts, they won't stop. You need to make your tasks honour interrupts with tests like
while (!Thread.currentThread().isInterrupted()) {
    // do a bounded unit of work, then re-check the interrupt flag
}
or
Thread.sleep(0); // throws InterruptedException as soon as the thread has been interrupted
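As a slightly fuller illustration: if CreateKnowledgeCallable iterates over some work internally (hypothetical; its body isn't shown in the question), the loop could check the interrupt flag on each iteration and bail out:

public Collection<Triple> call() throws Exception {
    Collection<Triple> result = new ArrayList<>();
    for (PatternMapping item : workItems) {      // hypothetical per-callable work list
        if (Thread.currentThread().isInterrupted()) {
            throw new InterruptedException("cancelled by shutdownNow()");
        }
        result.addAll(createTriplesFor(item));   // hypothetical unit of work
    }
    return result;
}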
executorService.invokeAll should return only when all tasks are finished, and so should future.get(). Are you sure that the call to executorService.invokeAll(todo) ever returns and doesn't block forever waiting for the tasks to complete?
Are you sure that the tasks you submitted actually finish? If you check the API for shutdownNow() and shutdown(), you'll see that they do not guarantee termination.
Have you tried a call to awaitTermination(long timeout, TimeUnit unit) with a reasonable amount of time as the timeout parameter? (Edit: a "reasonable amount of time" depends, of course, on the mean processing time of your tasks as well as the number of tasks executing at the time you call for termination.)
Edit 2: I hope the following example from my own code might help you out (note that it probably isn't the optimal, or most graceful, way to solve this problem):
try {
    this.started = true;
    pool.execute(new QueryingAction(pcqs));
    for (;;) {
        MyObj p = bq.poll(timeout, TimeUnit.MINUTES); // poll from a blocking queue
        if (p != null) {
            if (p.getId().equals("0"))
                break;
            pool.submit(new AnalysisAction(ds, p, analyzedObjs));
        } else
            drc.log("Timed out while waiting...");
    }
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    drc.log("--DEBUG: Termination criteria found, shutdown initiated..");
    pool.shutdown();

    int mins = 2;
    int nCores = poolSize - 1;
    long totalTasks = pool.getTaskCount(),
         compTasks = pool.getCompletedTaskCount(),
         tasksRemaining = totalTasks - compTasks,
         timeout = mins * tasksRemaining / nCores;

    drc.log("--DEBUG: Shutdown commenced, thread pool will terminate once all objects are processed, " +
            "or will timeout in : " + timeout + " minutes... \n" + compTasks + " of " + (totalTasks - 1) +
            " objects have been analyzed so far, " + "mean process time is: " +
            drc.getMeanProcTimeAsString() + " milliseconds.");

    pool.awaitTermination(timeout, TimeUnit.MINUTES);
}
Everyone with this sort of problem should try to implement the same algorithm without concurrency. With the help of this method, I found that a component had thrown a runtime exception which was swallowed.
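Applied to the code above, that can be as simple as temporarily replacing the invokeAll call with a plain sequential loop over the same todo list (a debugging sketch, not a fix; the enclosing method must handle the Exception that call() declares):

// Debug aid: run the Callables one by one on the calling thread, so any
// RuntimeException propagates immediately instead of being captured by a Future.
for (Callable<Collection<Triple>> task : todo) {
    Collection<Triple> triples = task.call(); // call() is declared to throw Exception
    this.writeNTriplesFile(triples);
}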