What is the most efficient way to simulate a countdown in Java?

I want to print to the console every second. So far, I've been able to think of two ways to do this.
long start_time = System.currentTimeMillis();
while (true) {
    if ((System.currentTimeMillis() - start_time) >= 1000) {
        System.out.println("One second passed");
        start_time = System.currentTimeMillis();
    }
}
And this:
while (true) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println("One second passed");
}
Is either method safer, more reliable, or more efficient than the other? Or are there use cases for each?

This is not as simple as it seems.
The problem with your solutions is that the first is very inefficient (it busy-waits, burning CPU the whole time), and the second is liable to drift, because sleep(1000) does not guarantee to sleep for exactly one second. (The javadoc says that sleep(1000) sleeps for at least one second.)
One possibility is to use ScheduledExecutorService.scheduleWithFixedDelay and compute the delay each time by looking at the difference between the millisecond clock and your start time (or end time). That will keep your messages in sync with real time (more or less).
Or maybe you can simplify that using scheduleAtFixedRate, because it looks like ScheduledThreadPoolExecutor will take care of the clock syncing problem.
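For illustration, here is a minimal sketch of the scheduleAtFixedRate approach (the printed message and one-second period mirror the question; the class name and everything else is an assumption):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Countdown {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Fires every second; the executor schedules against the clock,
        // so individual wake-up delays don't accumulate into drift.
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("One second passed"),
                1, 1, TimeUnit.SECONDS);
    }
}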
Another possibility is to look for a third-party library that implements a "cron-like" scheduler in Java (Quartz, for example).

What is the point of Apache Lang3 StopWatch.split()?

I am currently evaluating Apache's StopWatch and Guava's Stopwatch, and the split functionality in the former intrigued me, but I am struggling to understand what exactly it does and what value it has.
According to the documentation for StopWatch: https://commons.apache.org/proper/commons-lang/javadocs/api-3.9/org/apache/commons/lang3/time/StopWatch.html
split() the watch to get the time whilst the watch continues in the background. unsplit() will remove the effect of the split. At this point, these three options are available again.
And I found some examples, such as this, which offer very little, since it appears split is just cumulative. The page says the method is for "splitting the time", which I figured as much based on the method name, but it makes no mention of what that actually means. It would even appear that this example is utterly wrong, because the docs suggest that you should unsplit before you split again.
I initially thought it was for the following use case:
StopWatch stopwatch = StopWatch.createStarted();
// do something for 5 seconds
stopwatch.split();
// do something for 10 seconds
stopwatch.stop();
System.out.println(stopwatch.getTime());
System.out.println(stopwatch.getSplitTime());
I thought that stopwatch total time would read as 15 seconds, and stopwatch split time would read as either 10 or 5 seconds, but it appears that both methods output 15 seconds.
Next, I thought maybe the split value is a delta you can take, and then remove from the total timer, something like:
StopWatch stopwatch = StopWatch.createStarted();
// do something for 5 seconds
stopwatch.split();
// do something for 10 seconds
stopwatch.unsplit();
stopwatch.stop();
System.out.println(stopwatch.getTime());
// System.out.println(stopwatch.getSplitTime()); errors because of unsplit
My thought here was that the split time would be 10 seconds, and when unsplit from the main timer, the main timer would read as 5 seconds... but this seems no different from a suspend() call. I also tried this, and the timings remained the same for me regardless.
Am I missing something here, or is my interpretation of what this is supposed to do all wrong?
This is the source code for getSplitTime() (it delegates to this method internally):
public long getSplitNanoTime() {
    if (this.splitState != SplitState.SPLIT) {
        throw new IllegalStateException("Stopwatch must be split to get the split time. ");
    }
    return this.stopTime - this.startTime;
}
So it returns stopTime - startTime. Beware of stopTime: it's the liar that's confusing you.
This is the code for stop():
public void stop() {
    if (this.runningState != State.RUNNING && this.runningState != State.SUSPENDED) {
        throw new IllegalStateException("Stopwatch is not running. ");
    }
    if (this.runningState == State.RUNNING) {
        // is this the same stopTime getSplitTime() uses? Yep, it is
        this.stopTime = System.nanoTime();
    }
    this.runningState = State.STOPPED;
}
What's happening, then?
Calling stop() updates the stopTime variable and makes the stopwatch "forget" the last time it was split.
Both split() and stop() modify the same variable, stopTime, which is overwritten when you call stop() at the end of your process.
Although sharing the same variable may look weird, it really makes sense, as the split time of a StopWatch should never be bigger than the total elapsed time. So it's a game of ordering the calls you make on the StopWatch.
This is the code for split(); note that both methods do use stopTime:
public void split() {
    if (this.runningState != State.RUNNING) {
        throw new IllegalStateException("Stopwatch is not running. ");
    }
    this.stopTime = System.nanoTime(); // there it is again: the same stopTime
    this.splitState = SplitState.SPLIT;
}
That's why this adorable little Apache liar shows you 15 seconds for the split time: stop() updated the stopTime variable that getSplitTime() (the first code snippet) uses to compute its return value.
Note the simplicity of the split() function (this also partly answers the OP's question). It is responsible for:
Checking whether the StopWatch is running.
Marking a new stopTime.
Setting the splitState to SPLIT.
TL;DR
Calling getSplitTime() before stopping the StopWatch shows you the desired value:
stopTime won't have been updated by stop() yet.
The returned value will match the time elapsed between startTime and the last split().
Some examples:
StopWatch stopwatch = StopWatch.createStarted();
// do something for 5 seconds
stopwatch.split();                            // stopTime is updated [by split()]
System.out.println(stopwatch.getSplitTime()); // will show 5 seconds
// do something for 10 seconds
System.out.println(stopwatch.getSplitTime()); // will also show 5 seconds
stopwatch.stop();                             // stopTime is updated again [by stop()]
System.out.println(stopwatch.getTime());      // 15s
System.out.println(stopwatch.getSplitTime()); // 15s
Another one:
StopWatch stopwatch = StopWatch.createStarted();
// do something for 5 seconds
stopwatch.split();
System.out.println(stopwatch.getSplitTime()); // 5s
// do something for 10 seconds
stopwatch.split();
System.out.println(stopwatch.getSplitTime()); // 15s
// do something for 1 second
stopwatch.stop();
System.out.println(stopwatch.getTime());      // 16s
And a last one. Here the delays are real Thread.sleep() calls; I imported the Apache jar and tested this locally:
StopWatch stopwatch = StopWatch.createStarted();
Thread.sleep(5000);
stopwatch.split();
System.out.println(stopwatch.getSplitTime()); // 5s
Thread.sleep(10000);
stopwatch.split();
System.out.println(stopwatch.getSplitTime()); // 15s
stopwatch.reset(); // allows the stopwatch to be started again
stopwatch.start(); // updates startTime
Thread.sleep(2000);
stopwatch.split();
System.out.println(stopwatch.getSplitTime()); // 2s
Thread.sleep(1000);
stopwatch.stop();
System.out.println(stopwatch.getTime());      // 3s
System.out.println(stopwatch.getSplitTime()); // 3s
Note that calling getSplitTime() on a stopped watch won't throw any exception, because the method only checks whether the splitState is SPLIT.
The confusion may be caused by these two facts:
The code allows you to stop() regardless of the SplitState, making your last split() futile without you being aware of it.
It also allows you to check the split time on a stopped watch (if it is still in SPLIT state), when it will really just return the total elapsed time between the last start() and the stopping time. (little liar)
In this scenario, where the stopwatch is stopped and split at the same time, getTime() and getSplitTime() will always show the same value when called after stop().
[Personal and subjective opinion]
Let's say you have a Counters class with different variables to track elapsed times, and you want to output the total elapsed time for each operation every 60 seconds. In this example, counters is an instance of a Counters class that owns two long fields, fileTime and sendTime, which accumulate the elapsed time of each operation during a specific interval (60s). The example assumes each iteration takes less than 1000 ms (so it always shows 60 seconds as the elapsed time):
long statsTimer = System.currentTimeMillis();
while (mustWork) {
    long elapsedStatsTimer = System.currentTimeMillis() - statsTimer; // hits 60185
    if (elapsedStatsTimer > 60000) {
        // counters.showTimes()
        System.out.println("Showing elapsed times for the last "
                + (elapsedStatsTimer / 1000) + " secs");                             // (60185/1000) - 60 secs
        System.out.println("Files time: " + counters.fileTime + " ms");              // 23695 ms
        System.out.println("Send time : " + counters.sendTime + " ms");              // 36280 ms
        long workTime = counters.sendTime + counters.fileTime;
        System.out.println("Work time : " + workTime + " ms");                       // 59975 ms
        System.out.println("Others    : " + (elapsedStatsTimer - workTime) + " ms"); // 210 ms
        // counters.reset()
        counters.fileTime = 0;
        counters.sendTime = 0;
        statsTimer = System.currentTimeMillis();
    }
    long timer = System.currentTimeMillis();
    // do something with a file
    counters.fileTime += System.currentTimeMillis() - timer;
    timer = System.currentTimeMillis();
    // send a message
    counters.sendTime += System.currentTimeMillis() - timer;
}
That Counters class could implement the reset() and showTimes() methods to clean up the code above; it could also manage the elapsedStatsTimer variable. This is just an example, kept inline to show the behaviour.
For this use case, in which you need to measure different operations persistently, I think this way is easier to use and seems to have similar performance, as the StopWatch internally does exactly the same thing. But hey, it's just my way of doing it :).
I will accept downvotes for this opinion in an honorable and futile way.
I would love to finish with a minute of silence in honour of unsplit(), which may be one of the most irrelevant methods to ever exist.
[/Personal and subjective opinion]
Just noticed the TL;DR section is actually bigger than the section before it :_)

Java - repeatedly run a function in a given number of milliseconds accurately?

Does anyone have a fairly effective way of running a function repeatedly at a precise and accurate interval in milliseconds? I have tried to accomplish this by using the code below to run a function called wave() once a second for 30 seconds:
startTime = System.nanoTime();
wholeTime = System.nanoTime();
while (loop) {
    if (startTime >= time2) {
        startTime = System.nanoTime();
        wave();
        sec++;
    }
    if (sec == 30) {
        loop = false;
        endTime = System.nanoTime();
        System.out.println(wholeTime - System.nanoTime());
    }
}
This code did not work, and I am wondering why, and whether there is a better approach to the problem. Any ideas on how to fix the above code, or other successful ways of accomplishing this, are welcome. Thank you for your help!
Simpler:
long start = System.currentTimeMillis(); // not very, very accurate
while (System.currentTimeMillis() - start < 30000) {
    wave();
    // count something
}
You can use a Timer+TimerTask: https://docs.oracle.com/javase/7/docs/api/java/util/Timer.html
https://docs.oracle.com/javase/7/docs/api/java/util/TimerTask.html
http://bioportal.weizmann.ac.il/course/prog2/tutorial/essential/threads/timer.html
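For instance, a minimal Timer/TimerTask sketch (the wave() stub and class name are assumptions standing in for the question's code; the 30-tick cutoff matches the question):

import java.util.Timer;
import java.util.TimerTask;

public class WaveTimer {
    static int sec = 0;

    static void wave() { System.out.println("wave " + sec); } // stub for the question's wave()

    public static void main(String[] args) {
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                wave();
                if (++sec >= 30) {
                    timer.cancel(); // stop after 30 ticks
                }
            }
        }, 0, 1000); // first run immediately, then once a second
    }
}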
You may use Thread.sleep():
public static void main(String[] args) throws InterruptedException {
    int count = 30;
    long start = System.currentTimeMillis();
    for (int i = 0; i < count; i++) {
        wave();
        // how many milliseconds till the end of the second?
        long sleep = start + (i + 1) * 1000 - System.currentTimeMillis();
        if (sleep > 0) // condition might be false if wave() runs longer than a second
            Thread.sleep(sleep);
    }
}
Does anyone have a fairly effective way of running a function repeatedly at a precise and accurate interval in milliseconds?
There is no way to do this kind of thing reliably and accurately in standard Java. The problem is that there is no way to guarantee that your thread will run when you want it to run. For example:
your thread could be suspended to allow the GC to run
your thread could be preempted to allow another thread in your application to run
your thread could be suspended by the OS while pages used by the JVM are fetched back from disk.
You can only get reliable behavior for this kind of code if you run on a hard realtime OS with a realtime Java.
Note that this is not an issue with clock accuracy. The real problem is that the scheduler does not give you the kind of guarantees you need. For instance, none of the "sleep until X" functionality in a JVM can guarantee that your thread will wake up at time X exactly ... for any useful meaning of "exactly".
The other answers suggest various ways to do this, but beware that they are not (and cannot be) reliable and accurate in all circumstances... or even on a typical machine running other things as well as your application.

How to use the current time in a Java Program?

Say, for example, I want to run the following program
double x = 15.6;
System.out.println(x);
But I wanted to repeat the program until a certain time has elapsed, such as the following:
do {
    double x = 15.6;
    System.out.println(x);
} while (current time is earlier than 12.00pm);
Even though the example is completely hypothetical, how would I make that do-while loop keep the program running over and over again until a certain time, say 3pm or 9.30pm?
If this is not possible, is there any way I can simulate this by running the program every so many seconds, until that time has been reached?
a) You usually don't need the code to actually run until a time has come - you wouldn't have any control over the number of times the code executed this way. Regular code has to sleep sometimes, to give control to the OS and other processes so that they don't clog the system with 100% CPU load. As such, actually running the code constantly is the wrong approach to 99% of the possible problems related to timings. Please describe the exact problem you want to solve using this code.
b) For those 99% of problems, use a Timer instance. Simple as that. https://docs.oracle.com/javase/7/docs/api/java/util/Timer.html - schedule the task to run e.g. 1000 times a second, and check the time in each event, terminating the Timer instance when the time threshold has been exceeded.
For example, the code below will give you continuous execution of the Do something part, every 1 second, until 16.11.2014 20:00 GMT. By changing delayMs you can easily achieve higher/lower time granularity. If you expect your code to run more often than 1000 times/sec, you should probably use JNI anyway, since Java timers/clocks are known to have <10ms granularity on some (older) platforms; see How can I measure time with microsecond precision in Java? etc.
Timer timer = new Timer();
int delayMs = 1000; // check time every one second
long timeToStop;
try {
    // note the lowercase dd/yyyy: DD (day-of-year) and YYYY (week-year) would parse the wrong date
    timeToStop = new SimpleDateFormat("dd.MM.yyyy HH:mm").parse("16.11.2014 20:00").getTime(); // GMT time, needs to be offset by TZ
} catch (ParseException ex) {
    throw new RuntimeException(ex);
}
timer.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        if (System.currentTimeMillis() < timeToStop) {
            System.out.println("Do something every " + delayMs + " milliseconds");
        } else {
            timer.cancel();
        }
    }
}, 0, delayMs);
Or you can use e.g. ExecutorService service = Executors.newSingleThreadExecutor(); etc. - but it's virtually impossible to give you a good way to solve your problem without explicitly knowing what the problem is.
Something like this:
// get a Date object for the time to stop, then get milliseconds
// (lowercase dd: uppercase DD would mean day-of-year; parse() also throws a checked ParseException)
long timeToStop = new SimpleDateFormat("dd:MM:HH:mm").parse("16:11:12:00").getTime();
// get milliseconds now, and compare to milliseconds from before
do {
    // do stuff
} while (System.currentTimeMillis() < timeToStop);

Run method for all values in Array at once

I am currently trying to run multiple invocations of the same method at the same time. Right now it runs them one at a time and then sleeps once it has looped through all of them. I need it to run the method for all the values in the array at the same time. Here is my current code:
public static void checkTimer(TS3Api api) {
    for (String keys : admins) {
        // What I need: check groups for all values in the array AT THE SAME TIME
        checkGroup(api, keys);
    }
    try {
        // sleep for 10 seconds
        Thread.sleep(10000);
    } catch (InterruptedException e) {
        // do nothing
    }
}
Thread.sleep(10000) causes the current thread to sleep for 10 seconds. This would be the primary thread. You have not split off any threads from the primary one, so this is working as you wrote it.
Take a look through the Java documentation http://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html
There are some examples of splitting off threads. This should get you on your way to a solution.
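As a minimal sketch under the question's own names (TS3Api, admins, and checkGroup come from the question; everything else is an assumption), one thread per array value:

import java.util.ArrayList;
import java.util.List;

public static void checkTimer(TS3Api api) throws InterruptedException {
    List<Thread> workers = new ArrayList<>();
    for (String keys : admins) {
        Thread t = new Thread(() -> checkGroup(api, keys)); // one thread per value
        t.start();
        workers.add(t);
    }
    for (Thread t : workers) {
        t.join(); // wait for every check to finish
    }
    Thread.sleep(10000); // then sleep for 10 seconds, as before
}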
In Java 8 you can write something like:
admins.parallelStream().forEach(keys -> checkGroup(api, keys));
The number of items it will process in parallel is system-dependent, however. In any case, it is unlikely you can do all of them in parallel unless your system has at least as many processors as there are items in admins, no matter what approach you take.

Java BlockingQueue latency high on Linux

I am using BlockingQueues (trying both ArrayBlockingQueue and LinkedBlockingQueue) to pass objects between different threads in an application I'm currently working on. Performance and latency are relatively important in this application, so I was curious how much time it takes to pass objects between two threads using a BlockingQueue. In order to measure this, I wrote a simple program with two threads (one consumer and one producer), where I let the producer pass a timestamp (taken using System.nanoTime()) to the consumer; see the code below.
I recall reading somewhere on some forum that it took about 10 microseconds for someone else who tried this (don't know on what OS and hardware that was), so I was not too surprised when it took ~30 microseconds for me on my Windows 7 box (Intel E7500 Core 2 Duo CPU, 2.93GHz), whilst running a lot of other applications in the background. However, I was quite surprised when I did the same test on our much faster Linux server (two Intel X5677 3.46GHz quad-core CPUs, running Debian 5 with kernel 2.6.26-2-amd64). I expected the latency to be lower than on my Windows box, but on the contrary it was much higher - ~75-100 microseconds! Both tests were done with Sun's Hotspot JVM version 1.6.0-23.
Has anyone else done any similar tests with similar results on Linux? Or does anyone know why it is so much slower on Linux (with better hardware)? Could it be that thread switching simply is this much slower on Linux compared to Windows? If that's the case, it seems like Windows is actually much better suited for some kinds of applications. Any help in understanding these relatively high figures is much appreciated.
Edit:
After a comment from DaveC, I also did a test where I restricted the JVM (on the Linux machine) to a single core (i.e. all threads running on the same core). This changed the results dramatically - the latency went down to below 20 microseconds, i.e. better than the results on the Windows machine. I also did some tests where I restricted the producer thread to one core and the consumer thread to another (trying both to have them on the same socket and on different sockets), but this did not seem to help - the latency was still ~75 microseconds. Btw, this test application is pretty much all I'm running on the machine while performing the tests.
Does anyone know if these results make sense? Should it really be that much slower if the producer and the consumer are running on different cores? Any input is really appreciated.
Edited again (6 January):
I experimented with different changes to the code and running environment:
I upgraded the Linux kernel to 2.6.36.2 (from 2.6.26.2). After the kernel upgrade, the measured time changed to 60 microseconds with very small variations, from 75-100 before the upgrade. Setting CPU affinity for the producer and consumer threads had no effect, except when restricting them to the same core. When running on the same core, the latency measured was 13 microseconds.
In the original code, I had the producer go to sleep for 1 second between every iteration, in order to give the consumer enough time to calculate the elapsed time and print it to the console. If I remove the call to Thread.sleep() and instead let both the producer and consumer call barrier.await() in every iteration (the consumer calls it after having printed the elapsed time to the console), the measured latency is reduced from 60 microseconds to below 10 microseconds. If running the threads on the same core, the latency gets below 1 microsecond. Can anyone explain why this reduced the latency so significantly? My first guess was that the change had the effect that the producer called queue.put() before the consumer called queue.take(), so the consumer never had to block, but after playing around with a modified version of ArrayBlockingQueue, I found this guess to be false - the consumer did in fact block. If you have some other guess, please let me know. (Btw, if I let the producer call both Thread.sleep() and barrier.await(), the latency remains at 60 microseconds.)
I also tried another approach - instead of calling queue.take(), I called queue.poll() with a timeout of 100 micros. This reduced the average latency to below 10 microseconds, but is of course much more CPU intensive (though probably less CPU intensive than busy waiting?).
Edited again (10 January) - Problem solved:
ninjalj suggested that the latency of ~60 microseconds was due to the CPU having to wake up from deeper sleep states - and he was completely right! After disabling C-states in BIOS, the latency was reduced to <10 microseconds. This explains why I got so much better latency under point 2 above - when I sent objects more frequently the CPU was kept busy enough not to go to the deeper sleep states. Many thanks to everyone who has taken time to read my question and shared your thoughts here!
...
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CyclicBarrier;

public class QueueTest {

    ArrayBlockingQueue<Long> queue = new ArrayBlockingQueue<Long>(10);
    Thread consumerThread;
    CyclicBarrier barrier = new CyclicBarrier(2);
    static final int RUNS = 500000;
    volatile int sleep = 1000;

    public void start() {
        consumerThread = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    barrier.await();
                    for (int i = 0; i < RUNS; i++) {
                        consume();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        consumerThread.start();
        try {
            barrier.await();
        } catch (Exception e) {
            e.printStackTrace();
        }
        for (int i = 0; i < RUNS; i++) {
            try {
                if (sleep > 0)
                    Thread.sleep(sleep);
                produce();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public void produce() {
        try {
            queue.put(System.nanoTime());
        } catch (InterruptedException e) {
        }
    }

    public void consume() {
        try {
            long t = queue.take();
            long now = System.nanoTime();
            long time = (now - t) / 1000; // divide by 1000 to get the result in microseconds
            if (sleep > 0) {
                System.out.println("Time: " + time);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        QueueTest test = new QueueTest();
        System.out.println("Starting...");
        // run first once, ignoring results
        test.sleep = 0;
        test.start();
        // run again, printing the results
        System.out.println("Starting again...");
        test.sleep = 1000;
        test.start();
    }
}
Your test is not a good measure of queue handoff latency, because you have a single thread reading off the queue which writes synchronously to System.out (doing a String and long concatenation while it is at it) before it takes again. To measure this properly you need to move this activity out of this thread, and do as little work as possible in the taking thread.
You'd be better off just doing the calculation (then - now) in the taker and adding the result to some other collection which is periodically drained by another thread that outputs the results. I tend to do this by adding to an appropriately presized array-backed structure accessed via an AtomicReference (hence the reporting thread just has to getAndSet on that reference with another instance of that storage structure in order to grab the latest batch of results; e.g. make 2 lists, set one as active, and every x seconds a thread wakes up and swaps the active and the passive ones), as sketched below. You can then report some distribution instead of every single result (e.g. a decile range), which means you don't generate vast log files with every run and still get useful information printed for you.
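A rough sketch of that double-buffer idea (the names and the buffer size are assumptions, not the answerer's actual code; it assumes a single taking thread does the writing):

import java.util.concurrent.atomic.AtomicReference;

class LatencyRecorder {
    static final class Buffer {
        final long[] values = new long[1_000_000]; // presized: no allocation on the hot path
        int count = 0;
    }

    private final AtomicReference<Buffer> active = new AtomicReference<>(new Buffer());

    // called from the taking thread; minimal work, no I/O
    void record(long nanos) {
        Buffer b = active.get();
        if (b.count < b.values.length) {
            b.values[b.count++] = nanos;
        }
    }

    // called from the reporting thread every x seconds; swaps in a fresh
    // buffer and returns the previous batch for printing/percentiles
    Buffer drain() {
        return active.getAndSet(new Buffer());
    }
}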
FWIW I concur with the times Peter Lawrey stated, and if latency is really critical then you need to think about busy waiting with appropriate CPU affinity (i.e. dedicate a core to that thread).
EDIT after Jan 6
If I remove the call to Thread.sleep () and instead let both the producer and consumer call barrier.await() in every iteration (the consumer calls it after having printed the elapsed time to the console), the measured latency is reduced from 60 microseconds to below 10 microseconds. If running the threads on the same core, the latency gets below 1 microsecond. Can anyone explain why this reduced the latency so significantly?
You're looking at the difference between java.util.concurrent.locks.LockSupport#park (and the corresponding unpark) and Thread#sleep. Most j.u.c. stuff is built on LockSupport (often via an AbstractQueuedSynchronizer that ReentrantLock provides, or directly), and this (in Hotspot) resolves down to sun.misc.Unsafe#park (and unpark), which tends to end up in the hands of the pthread (POSIX threads) lib - typically pthread_cond_broadcast to wake up, and pthread_cond_wait or pthread_cond_timedwait for things like BlockingQueue#take.
I can't say I've ever looked at how Thread#sleep is actually implemented (because I've never come across something low latency that isn't a condition-based wait), but I would imagine that it causes the thread to be demoted by the scheduler in a more aggressive way than the pthread signalling mechanism, and that is what accounts for the latency difference.
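To make the primitive concrete, here is a tiny park/unpark demo (my own illustration, not code from the answer above):

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            long t0 = System.nanoTime();
            LockSupport.park(); // blocks here until another thread unparks us
            System.out.println("Woke after " + (System.nanoTime() - t0) / 1000 + " micros");
        });
        waiter.start();
        Thread.sleep(100);          // let the waiter actually park
        LockSupport.unpark(waiter); // the direct wakeup most of j.u.c. builds on
    }
}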
I would use just an ArrayBlockingQueue if you can. When I have used it, the latency was between 8-18 microseconds on Linux. Some points of note:
The cost is largely the time it takes to wake up the thread. When you wake up a thread, its data/code won't be in cache, so you will find that if you time what happens after a thread has woken, that can take 2-5x longer than if you were to run the same thing repeatedly.
Certain operations use OS calls (such as locking/cyclic barriers); these are often more expensive in a low-latency scenario than busy waiting. I suggest busy-waiting in your producer rather than using a CyclicBarrier, as sketched below. You could busy-wait your consumer as well, but this could be unreasonably expensive on a real system.
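A minimal busy-wait pacing sketch for the producer side (the 100-microsecond interval is an assumption; produce() and RUNS come from the question's code):

long next = System.nanoTime();
for (int i = 0; i < RUNS; i++) {
    next += 100_000; // aim to produce every 100 microseconds
    while (System.nanoTime() < next) {
        // spin: keeps the core and its caches hot, at the cost of 100% CPU
    }
    produce();
}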
@Peter Lawrey:
Certain operations use OS calls (such as locking/cyclic barriers)
Those are NOT OS (kernel) calls. They are implemented via a simple CAS (which on x86 comes with a free memory fence as well).
One more: don't use ArrayBlockingQueue unless you know why you're using it.
@OP:
Look at ThreadPoolExecutor; it offers an excellent producer/consumer framework.
Edit:
To reduce the latency (barring busy waiting), change the queue to a SynchronousQueue and add the following line before starting the consumer:
...
consumerThread.setPriority(Thread.MAX_PRIORITY);
consumerThread.start();
This is the best you can get.
Edit 2:
Here it is with a SynchronousQueue, and without printing the results.
package t1;

import java.math.BigDecimal;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SynchronousQueue;

public class QueueTest {

    static final int RUNS = 250000;

    final SynchronousQueue<Long> queue = new SynchronousQueue<Long>();
    int sleep = 1000;
    long[] results = new long[0];

    public void start(final int runs) throws Exception {
        results = new long[runs];
        final CountDownLatch barrier = new CountDownLatch(1);
        Thread consumerThread = new Thread(new Runnable() {
            @Override
            public void run() {
                barrier.countDown();
                try {
                    for (int i = 0; i < runs; i++) {
                        results[i] = consume();
                    }
                } catch (Exception e) {
                    return;
                }
            }
        });
        consumerThread.setPriority(Thread.MAX_PRIORITY);
        consumerThread.start();
        barrier.await();

        final long sleep = this.sleep;
        for (int i = 0; i < runs; i++) {
            try {
                doProduce(sleep);
            } catch (Exception e) {
                return;
            }
        }
    }

    private void doProduce(final long sleep) throws InterruptedException {
        produce();
    }

    public void produce() throws InterruptedException {
        queue.put(new Long(System.nanoTime())); // new Long() is faster than valueOf
    }

    public long consume() throws InterruptedException {
        long t = queue.take();
        long now = System.nanoTime();
        return now - t;
    }

    public static void main(String[] args) throws Throwable {
        QueueTest test = new QueueTest();
        System.out.println("Starting + warming up...");
        // run first once, ignoring results
        test.sleep = 0;
        test.start(15000); // 10k is the normal warm-up for -server hotspot
        // run again, printing the results
        System.gc();
        System.out.println("Starting again...");
        test.sleep = 1000; // ignored now
        Thread.yield();
        test.start(RUNS);
        long sum = 0;
        for (long elapsed : test.results) {
            sum += elapsed;
        }
        BigDecimal elapsed = BigDecimal.valueOf(sum, 3)
                .divide(BigDecimal.valueOf(test.results.length), BigDecimal.ROUND_HALF_UP);
        System.out.printf("Avg: %1.3f micros%n", elapsed);
    }
}
If latency is critical and you do not require strict FIFO semantics, then you may want to consider JSR-166's LinkedTransferQueue. It enables elimination so that opposing operations can exchange values instead of synchronizing on the queue data structure. This approach helps reduce contention, enables parallel exchanges, and avoids thread sleep/wake-up penalties.
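A minimal handoff sketch with LinkedTransferQueue (the thread bodies and class name are my own illustration; LinkedTransferQueue has been in java.util.concurrent since Java 7):

import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

public class TransferDemo {
    public static void main(String[] args) throws InterruptedException {
        TransferQueue<Long> queue = new LinkedTransferQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                long t = queue.take();
                System.out.println("Handoff took " + (System.nanoTime() - t) / 1000 + " micros");
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();

        Thread.sleep(100); // give the consumer time to block on take()
        // transfer() completes by handing the element directly to the
        // waiting consumer, rather than parking it in the queue first
        queue.transfer(System.nanoTime());
        consumer.join();
    }
}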
