I need a component/class that throttles execution of some method to maximum M calls in N seconds (or ms or nanos, does not matter).
In other words I need to make sure that my method is executed no more than M times in a sliding window of N seconds.
If you don't know of an existing class, feel free to post your own solutions or ideas for how you would implement this.
I'd use a ring buffer of timestamps with a fixed size of M. Each time the method is called, you check the oldest entry; if it's more than N seconds in the past, you execute and overwrite it with the current time, otherwise you sleep for the time difference and then execute.
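A minimal sketch of that idea (the class and method names are my own; note that it sleeps while holding the monitor, which is fine for a single caller but not ideal under heavy contention):
import java.util.concurrent.TimeUnit;
public class SlidingWindowThrottle {
    private final long[] timestamps; // ring buffer holding the last M call times
    private final long windowMillis; // N, expressed in milliseconds
    private int index = 0;           // points at the oldest slot
    public SlidingWindowThrottle(int maxCalls, long windowMillis) {
        this.timestamps = new long[maxCalls];
        this.windowMillis = windowMillis;
    }
    public synchronized void acquire() throws InterruptedException {
        long now = System.currentTimeMillis();
        long oldest = timestamps[index];
        long waitFor = oldest + windowMillis - now;
        if (waitFor > 0) {
            // the oldest of the last M calls is still inside the window, so wait until it leaves
            TimeUnit.MILLISECONDS.sleep(waitFor);
            now = oldest + windowMillis;
        }
        timestamps[index] = now;                  // overwrite the oldest slot with this call
        index = (index + 1) % timestamps.length;  // advance to the new oldest slot
    }
}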
What worked out of the box for me was Google Guava RateLimiter.
// Allow one request per second
private RateLimiter throttle = RateLimiter.create(1.0);
private void someMethod() {
throttle.acquire();
// Do something
}
In concrete terms, you should be able to implement this with a DelayQueue. Initialize the queue with M Delayed instances with their delay initially set to zero. As requests to the method come in, take a token, which causes the method to block until the throttling requirement has been met. When a token has been taken, add a new token to the queue with a delay of N.
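A minimal sketch of that DelayQueue idea (the class and method names are illustrative, not an existing API):
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;
public class DelayQueueThrottle {
    // One of the M tokens; it matures (becomes takeable) after the given delay.
    private static class Token implements Delayed {
        private final long readyAtNanos;
        Token(long delayMillis) {
            this.readyAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }
        @Override public long getDelay(TimeUnit unit) {
            return unit.convert(readyAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }
        @Override public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
        }
    }
    private final DelayQueue<Token> tokens = new DelayQueue<>();
    private final long windowMillis;
    public DelayQueueThrottle(int maxCalls, long windowMillis) {
        this.windowMillis = windowMillis;
        for (int i = 0; i < maxCalls; i++) {
            tokens.add(new Token(0)); // all M tokens start out immediately available
        }
    }
    public void acquire() throws InterruptedException {
        tokens.take();                       // blocks until a token has matured
        tokens.add(new Token(windowMillis)); // its replacement becomes available N ms from now
    }
}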
Read up on the Token bucket algorithm. Basically, you have a bucket with tokens in it. Every time you execute the method, you take a token. If there are no more tokens, you block until you get one. Meanwhile, there is some external actor that replenishes the tokens at a fixed interval.
I'm not aware of a library to do this (or anything similar). You could write this logic into your code or use AspectJ to add the behavior.
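A minimal sketch of the token bucket described above, with a Semaphore as the bucket and a scheduled task as the replenishing actor (all names here are illustrative, and the refill is only approximate under heavy contention):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
public class TokenBucketThrottle {
    private final Semaphore tokens;
    private final int capacity;
    public TokenBucketThrottle(int capacity, long refillPeriodMillis) {
        this.capacity = capacity;
        this.tokens = new Semaphore(capacity);
        ScheduledExecutorService refiller = Executors.newSingleThreadScheduledExecutor();
        refiller.scheduleAtFixedRate(this::refill, refillPeriodMillis, refillPeriodMillis, TimeUnit.MILLISECONDS);
    }
    private void refill() {
        // top the bucket back up to its capacity; the read and the release are not atomic,
        // so this can be off by a token under contention, which is acceptable for a sketch
        tokens.release(Math.max(0, capacity - tokens.availablePermits()));
    }
    public void execute(Runnable task) throws InterruptedException {
        tokens.acquire(); // blocks while the bucket is empty
        task.run();
    }
}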
If you need a Java based sliding window rate limiter that will operate across a distributed system you might want to take a look at the https://github.com/mokies/ratelimitj project.
A Redis backed configuration, to limit requests by IP to 50 per minute would look like this:
import com.lambdaworks.redis.RedisClient;
import es.moki.ratelimitj.core.LimitRule;
RedisClient client = RedisClient.create("redis://localhost");
Set<LimitRule> rules = Collections.singleton(LimitRule.of(1, TimeUnit.MINUTES, 50)); // 50 requests per minute, per key
RedisRateLimit requestRateLimiter = new RedisRateLimit(client, rules);
boolean overLimit = requestRateLimiter.overLimit("ip:127.0.0.2");
See https://github.com/mokies/ratelimitj/tree/master/ratelimitj-redis for further details on Redis configuration.
This depends on the application.
Imagine the case in which multiple threads want a token to do some globally rate-limited action with no burst allowed (i.e. you want to limit to 10 actions per 10 seconds, but you don't want 10 actions to happen in the first second and then stay stopped for the remaining 9 seconds).
The DelayQueue has a disadvantage: the order in which threads request tokens might not be the order in which their requests are fulfilled. If multiple threads are blocked waiting for a token, it is not clear which one will take the next available token; in my view, a thread could even end up waiting forever.
One solution is to have a minimum interval of time between two consecutive actions, and take actions in the same order as they were requested.
Here is an implementation:
public class LeakyBucket {
protected float maxRate;
protected long minTime;
//holds time of last action (past or future!)
protected long lastSchedAction = System.currentTimeMillis();
public LeakyBucket(float maxRate) throws Exception {
if(maxRate <= 0.0f) {
throw new Exception("Invalid rate");
}
this.maxRate = maxRate;
this.minTime = (long)(1000.0f / maxRate);
}
public void consume() throws InterruptedException {
long curTime = System.currentTimeMillis();
long timeLeft;
//calculate when can we do the action
synchronized(this) {
timeLeft = lastSchedAction + minTime - curTime;
if(timeLeft > 0) {
lastSchedAction += minTime;
}
else {
lastSchedAction = curTime;
}
}
//If needed, wait for our time
if(timeLeft <= 0) {
return;
}
else {
Thread.sleep(timeLeft);
}
}
}
My implementation below can handle arbitrary request time precision. It has O(1) time complexity per request and O(1) space complexity (no additional buffer is required), and it does not need a background thread to release tokens; instead, tokens are released based on the time elapsed since the last request.
class RateLimiter {
int limit;
double available;
long interval;
long lastTimeStamp;
RateLimiter(int limit, long interval) {
this.limit = limit;
this.interval = interval;
available = 0;
lastTimeStamp = System.currentTimeMillis();
}
synchronized boolean canAdd() {
long now = System.currentTimeMillis();
// more tokens have been released since the last request
available += (now-lastTimeStamp)*1.0/interval*limit;
if (available>limit)
available = limit;
lastTimeStamp = now;
if (available<1)
return false;
else {
available--;
return true;
}
}
}
Although it's not exactly what you asked for, ThreadPoolExecutor, which caps concurrency at M simultaneous requests rather than M requests in N seconds, could also be useful.
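For illustration, a minimal sketch of that difference (the pool size and class name are placeholders):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class ConcurrencyCap {
    private static final int M = 4; // at most M tasks run at the same time; the rest queue up
    private final ExecutorService pool = Executors.newFixedThreadPool(M);
    public void submitThrottled(Runnable task) {
        pool.submit(task); // caps simultaneous executions, not executions per N seconds
    }
}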
I have implemented a simple throttling algorithm. Try this link:
http://krishnaprasadas.blogspot.in/2012/05/throttling-algorithm.html
A brief outline of the algorithm:
This algorithm uses the capabilities of Java's DelayQueue.
Create a Delayed object with the expected delay (here 1000/M for millisecond TimeUnit).
Put the same object into the delay queue, which in turn provides the moving window for us.
Then, before each method call, take the object from the queue; take is a blocking call which returns only after the specified delay. After the method call, don't forget to put the object back into the queue with an updated time (here, the current milliseconds).
Here we can also have multiple delayed objects with different delays. This approach will also provide high throughput.
Try to use this simple approach:
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class SimpleThrottler {
private static final int T = 1; // min
private static final int N = 345;
private Lock lock = new ReentrantLock();
private Condition newFrame = lock.newCondition();
private volatile boolean currentFrame = true;
public SimpleThrottler() {
handleForGate();
}
/**
* Payload
*/
private void job() {
try {
Thread.sleep(Math.abs(ThreadLocalRandom.current().nextLong(12, 98)));
} catch (InterruptedException e) {
e.printStackTrace();
}
System.err.print(" J. ");
}
public void doJob() throws InterruptedException {
lock.lock();
try {
while (true) {
int count = 0;
while (count < N && currentFrame) {
job();
count++;
}
newFrame.await();
currentFrame = true;
}
} finally {
lock.unlock();
}
}
public void handleForGate() {
Thread handler = new Thread(() -> {
while (true) {
try {
Thread.sleep(1 * 900);
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
currentFrame = false;
lock.lock();
try {
newFrame.signal();
} finally {
lock.unlock();
}
}
}
});
handler.start();
}
}
Apache Camel also comes with a Throttler mechanism, which can be used as follows:
from("seda:a").throttle(100).asyncDelayed().to("seda:b");
This is an update to the LeakyBucket code above.
It works for more than 1,000 requests per second.
import lombok.SneakyThrows;
import java.util.concurrent.TimeUnit;
class LeakyBucket {
private long minTimeNano; // sec / billion
private long sched = System.nanoTime();
/**
* Create a rate limiter using the leaky bucket algorithm.
* @param perSec the number of requests per second
*/
public LeakyBucket(double perSec) {
if (perSec <= 0.0) {
throw new RuntimeException("Invalid rate " + perSec);
}
this.minTimeNano = (long) (1_000_000_000.0 / perSec);
}
@SneakyThrows public void consume() {
long curr = System.nanoTime();
long timeLeft;
synchronized (this) {
timeLeft = sched - curr + minTimeNano;
sched += minTimeNano;
}
if (timeLeft <= minTimeNano) {
return;
}
TimeUnit.NANOSECONDS.sleep(timeLeft);
}
}
and the unittest for above:
import com.google.common.base.Stopwatch;
import org.junit.Ignore;
import org.junit.Test;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;
public class LeakyBucketTest {
@Test @Ignore public void t() {
double numberPerSec = 10000;
LeakyBucket b = new LeakyBucket(numberPerSec);
Stopwatch w = Stopwatch.createStarted();
IntStream.range(0, (int) (numberPerSec * 5)).parallel().forEach(
x -> b.consume());
System.out.printf("%,d ms%n", w.elapsed(TimeUnit.MILLISECONDS));
}
}
Here is a slightly more advanced version of the simple rate limiter:
/**
* Simple request limiter based on the Thread.sleep method.
* Create a limiter instance via {@link #create(float)} and call {@link #consume()} before making any request.
* If the limit is exceeded, the consume method blocks and waits for the current call rate to fall below the limit.
*/
import java.time.LocalTime;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import lombok.AllArgsConstructor;
import lombok.Getter;
public class RequestRateLimiter {
private long minTime;
private long lastSchedAction;
private double avgSpent = 0;
ArrayList<RatePeriod> periods;
@AllArgsConstructor
public static class RatePeriod{
@Getter
private LocalTime start;
@Getter
private LocalTime end;
@Getter
private float maxRate;
}
/**
* Create request limiter with maxRate - maximum number of requests per second
* @param maxRate - maximum number of requests per second
* @return
*/
public static RequestRateLimiter create(float maxRate){
return new RequestRateLimiter(Arrays.asList( new RatePeriod(LocalTime.of(0,0,0),
LocalTime.of(23,59,59), maxRate)));
}
/**
* Create request limiter with ratePeriods calendar - maximum number of requests per second in every period
* @param ratePeriods - rate calendar
* @return
*/
public static RequestRateLimiter create(List<RatePeriod> ratePeriods){
return new RequestRateLimiter(ratePeriods);
}
private void checkArgs(List<RatePeriod> ratePeriods){
for (RatePeriod rp: ratePeriods ){
if ( null == rp || rp.maxRate <= 0.0f || null == rp.start || null == rp.end )
throw new IllegalArgumentException("list contains null or rate is less than zero or period is zero length");
}
}
private float getCurrentRate(){
LocalTime now = LocalTime.now();
for (RatePeriod rp: periods){
if ( now.isAfter( rp.start ) && now.isBefore( rp.end ) )
return rp.maxRate;
}
return Float.MAX_VALUE;
}
private RequestRateLimiter(List<RatePeriod> ratePeriods){
checkArgs(ratePeriods);
periods = new ArrayList<>(ratePeriods.size());
periods.addAll(ratePeriods);
this.minTime = (long)(1000.0f / getCurrentRate());
this.lastSchedAction = System.currentTimeMillis() - minTime;
}
/**
* Call this method before making actual request.
* Method call locks until current rate falls down below the limit
* @throws InterruptedException
*/
public void consume() throws InterruptedException {
long timeLeft;
synchronized(this) {
long curTime = System.currentTimeMillis();
minTime = (long)(1000.0f / getCurrentRate());
timeLeft = lastSchedAction + minTime - curTime;
long timeSpent = curTime - lastSchedAction + timeLeft;
avgSpent = (avgSpent + timeSpent) / 2;
if(timeLeft <= 0) {
lastSchedAction = curTime;
return;
}
lastSchedAction = curTime + timeLeft;
}
Thread.sleep(timeLeft);
}
public synchronized float getCurRate(){
return (float) ( 1000d / avgSpent);
}
}
And unit tests
import org.junit.Assert;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class RequestRateLimiterTest {
@Test(expected = IllegalArgumentException.class)
public void checkSingleThreadZeroRate(){
// Zero rate
RequestRateLimiter limiter = RequestRateLimiter.create(0);
try {
limiter.consume();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
@Test
public void checkSingleThreadUnlimitedRate(){
// Unlimited
RequestRateLimiter limiter = RequestRateLimiter.create(Float.MAX_VALUE);
long started = System.currentTimeMillis();
for ( int i = 0; i < 1000; i++ ){
try {
limiter.consume();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
long ended = System.currentTimeMillis();
System.out.println( "Current rate:" + limiter.getCurRate() );
Assert.assertTrue( ((ended - started) < 1000));
}
@Test
public void checkSingleThreadRate(){
// 3 requests per minute
RequestRateLimiter limiter = RequestRateLimiter.create(3f/60f);
long started = System.currentTimeMillis();
for ( int i = 0; i < 3; i++ ){
try {
limiter.consume();
Thread.sleep(20000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
long ended = System.currentTimeMillis();
System.out.println( "Current rate:" + limiter.getCurRate() );
Assert.assertTrue( ((ended - started) >= 60000 ) & ((ended - started) < 61000));
}
@Test
public void checkSingleThreadRateLimit(){
// 100 requests per second
RequestRateLimiter limiter = RequestRateLimiter.create(100);
long started = System.currentTimeMillis();
for ( int i = 0; i < 1000; i++ ){
try {
limiter.consume();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
long ended = System.currentTimeMillis();
System.out.println( "Current rate:" + limiter.getCurRate() );
Assert.assertTrue( (ended - started) >= ( 10000 - 100 ));
}
@Test
public void checkMultiThreadedRateLimit(){
// 100 requests per second
RequestRateLimiter limiter = RequestRateLimiter.create(100);
long started = System.currentTimeMillis();
List<Future<?>> tasks = new ArrayList<>(10);
ExecutorService exec = Executors.newFixedThreadPool(10);
for ( int i = 0; i < 10; i++ ) {
tasks.add( exec.submit(() -> {
for (int i1 = 0; i1 < 100; i1++) {
try {
limiter.consume();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}) );
}
tasks.stream().forEach( future -> {
try {
future.get();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
});
long ended = System.currentTimeMillis();
System.out.println( "Current rate:" + limiter.getCurRate() );
Assert.assertTrue( (ended - started) >= ( 10000 - 100 ) );
}
@Test
public void checkMultiThreaded32RateLimit(){
// 0.2 requests per second
RequestRateLimiter limiter = RequestRateLimiter.create(0.2f);
long started = System.currentTimeMillis();
List<Future<?>> tasks = new ArrayList<>(8);
ExecutorService exec = Executors.newFixedThreadPool(8);
for ( int i = 0; i < 8; i++ ) {
tasks.add( exec.submit(() -> {
for (int i1 = 0; i1 < 2; i1++) {
try {
limiter.consume();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}) );
}
tasks.stream().forEach( future -> {
try {
future.get();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
});
long ended = System.currentTimeMillis();
System.out.println( "Current rate:" + limiter.getCurRate() );
Assert.assertTrue( (ended - started) >= ( 10000 - 100 ) );
}
@Test
public void checkMultiThreadedRateLimitDynamicRate(){
// 100 requests per second
RequestRateLimiter limiter = RequestRateLimiter.create(100);
long started = System.currentTimeMillis();
List<Future<?>> tasks = new ArrayList<>(10);
ExecutorService exec = Executors.newFixedThreadPool(10);
for ( int i = 0; i < 10; i++ ) {
tasks.add( exec.submit(() -> {
Random r = new Random();
for (int i1 = 0; i1 < 100; i1++) {
try {
limiter.consume();
Thread.sleep(r.nextInt(1000));
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}) );
}
tasks.stream().forEach( future -> {
try {
future.get();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
});
long ended = System.currentTimeMillis();
System.out.println( "Current rate:" + limiter.getCurRate() );
Assert.assertTrue( (ended - started) >= ( 10000 - 100 ) );
}
}
My solution: a simple utility method; you can modify it to create a wrapper class.
public static Runnable throttle (Runnable realRunner, long delay) {
Runnable throttleRunner = new Runnable() {
// whether is waiting to run
private boolean _isWaiting = false;
// target time to run realRunner
private long _timeToRun;
// specified delay time to wait
private long _delay = delay;
// Runnable that has the real task to run
private Runnable _realRunner = realRunner;
@Override
public void run() {
// current time
long now;
synchronized (this) {
// another thread is waiting, skip
if (_isWaiting) return;
now = System.currentTimeMillis();
// update time to run
// do not update it each time since
// you do not want to postpone it unlimited
_timeToRun = now+_delay;
// set waiting status
_isWaiting = true;
}
try {
Thread.sleep(_timeToRun-now);
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
// clear waiting status before run
_isWaiting = false;
// do the real task
_realRunner.run();
}
}};
return throttleRunner;
}
Taken from JAVA Thread Debounce and Throttle
Here is a rate limiter implementation based on @tonywl's (and somewhat related to Duarte Meneses's leaky bucket). The idea is the same - use a "token pool" to allow both rate limiting and bursting (making multiple calls in a short time after idling for a bit).
This implementation offers two main differences:
Lock-less concurrent access using atomic operations.
Instead of blocking a request, it calculates the delay needed to enforce the rate limit and returns that as the response, allowing the caller to enforce the delay - this works better with the asynchronous programming found in modern networking frameworks.
The full implementation with documentation can be found in this Github Gist, which is where I'll also post updates, but here's the gist of it:
import java.util.concurrent.atomic.AtomicLong;
public class RateLimiter {
private final static long TOKEN_SIZE = 1_000_000 /* tockins per token */;
private final double tokenRate; // measured in tokens per ms
private final double tockinRate; // measured in tockins per ms
private final long tockinsLimit;
private AtomicLong available;
private AtomicLong lastTimeStamp;
public RateLimiter(int prefill, int limit, int fill, long interval) {
this.tokenRate = (double)fill / interval;
this.tockinsLimit = TOKEN_SIZE * limit;
this.tockinRate = tokenRate * TOKEN_SIZE;
this.lastTimeStamp = new AtomicLong(System.nanoTime());
this.available = new AtomicLong(Math.max(prefill, limit) * TOKEN_SIZE);
}
public boolean allowRequest() {
return whenNextAllowed(1, false) == 0;
}
public boolean allowRequest(int cost) {
return whenNextAllowed(cost, false) == 0;
}
public long whenNextAllowed(boolean alwaysConsume) {
return whenNextAllowed(1, alwaysConsume);
}
/**
* Check when will the next call be allowed, according to the specified rate.
* The value returned is in milliseconds. If the result is 0 - or if {@code alwaysConsume} was
* specified then the RateLimiter has recorded that the call has been allowed.
* @param cost How costly is the requested action. The base rate is 1 token per request,
* but the client can declare a more costly action that consumes more tokens.
* @param alwaysConsume if set to {@code true} this method assumes that the caller will delay
* the action that is rate limited but will perform it without checking again - so it will
* consume the specified number of tokens as if the action has gone through. This means that
* the pool can get into a deficit, which will further delay additional actions.
* @return how long before this request should be let through.
*/
public long whenNextAllowed(int cost, boolean alwaysConsume) {
var now = System.nanoTime();
var last = lastTimeStamp.getAndSet(now);
// calculate how many tockins we got since last call
// if the previous call was less than a microsecond ago, we still accumulate at least
// one tockin, which is probably more than we should, but this is too small to matter - right?
var add = (long)Math.ceil(tokenRate * (now - last));
var nowAvailable = available.addAndGet(add);
while (nowAvailable > tockinsLimit) {
available.compareAndSet(nowAvailable, tockinsLimit);
nowAvailable = available.get();
}
// answer the question
var toWait = (long)Math.ceil(Math.max(0, (TOKEN_SIZE - nowAvailable) / tockinRate));
if (alwaysConsume || toWait == 0) // the caller will let the request go through, so consume a token now
available.addAndGet(-TOKEN_SIZE);
return toWait;
}
}
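A possible usage sketch for the class above, in the non-blocking style described earlier (the dispatcher class, its rate, and the scheduler choice are my own illustration, not part of the Gist):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
public class RateLimitedDispatcher {
    // prefill 10, burst limit 10, refill 10 tokens per 1000 ms, i.e. roughly 10 requests per second
    private final RateLimiter limiter = new RateLimiter(10, 10, 10, 1000);
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    public void dispatch(Runnable request) {
        // consume a token up front and learn how long to delay, instead of blocking the caller
        long delayMillis = limiter.whenNextAllowed(1, true);
        if (delayMillis == 0) {
            request.run();
        } else {
            scheduler.schedule(request, delayMillis, TimeUnit.MILLISECONDS);
        }
    }
}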
Related
This TestCode is supposed to create a stream of numbers (the current seconds value).
Collect 10 samples, and average the time at which each sample comes out.
I did try to use if-else, but a variable declared in the if branch is not visible in the else branch.
Please correct me if I'm wrong.
I don't understand lambdas just yet.
public class TestCode {
private int eachTwoSec;
// supposed to aList.add 10 items
// average the time needed in between each aList.add (2 obviously)
public void avgTimeTaken() {
ArrayList aList = new ArrayList();
for (int i = 0; i < 10; i++) {
aList.add(eachTwoSec);
}
}
// return a number every two seconds (endless stream of samples)
// samples 50,52,54,56,58,60,2,4,6,8,10
public void twoSecTime() {
try {
Thread.sleep(2000);
} catch (InterruptedException ex) {
Logger.getLogger(Dummies.class.getName()).log(Level.SEVERE, null, ex);
}
LocalDateTime ldt = LocalDateTime.now();
DateTimeFormatter dtf = DateTimeFormatter.ofPattern("ss");
eachTwoSec = Integer.parseInt(ldt.format(dtf));
System.out.println(eachTwoSec);
twoSecTime();
}
public TestCode() {
// construct
avgTimeTaken();
new Thread(this::twoSecTime).start();
}
public static void main(String[] args) {
// just a start point
new TestCode();
}
}
The literal answer to the question "How do I average the contents in ArrayList?" for a List<Integer> is:
list.stream().mapToInt(Integer::intValue).average();
Though I suspect that's not really what you need to know given the concurrency issues in your code.
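For example (the list contents here are just an illustration):
import java.util.Arrays;
import java.util.List;
public class AverageExample {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(2, 4, 6);
        double avg = list.stream()
                .mapToInt(Integer::intValue)
                .average()     // returns an OptionalDouble
                .orElse(0.0);  // pick a default for an empty list
        System.out.println(avg); // prints 4.0
    }
}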
This may help to do what you want (or give you a place from which to proceed).
I use a timer to take action every 2000 ms. I prefer using the Swing timer and not messing around with TimerTasks.
I don't just add 2 seconds but grab the current nanosecond time.
This captures the latency introduced by various parts of the code and by scheduling jitter.
I add the microseconds to the ArrayList, as the delta between the most recent and the previously recorded value.
When count == 10 I stop the timer and invoke the averaging method.
Most of the work is done on the EDT (normally a bad thing but okay for this exercise). If that were a problem, another thread could be started to handle the load.
I then have the original main thread wait, and notify it when done, before leaving the JVM. IMO this is preferable to System.exit(0).
The gathered data and final average are all in microseconds.
import java.util.ArrayList;
import javax.swing.Timer;
public class TestCode {
Timer timer;
int delay = 2000; // milliseconds
int count = 0;
long last;
ArrayList<Integer> aList = new ArrayList<>();
Object mainThread;
public void avgTimeTaken() {
double sum = 0;
for (Integer secs : aList) {
sum += secs;
}
System.out.println("Avg = " + sum / aList.size());
}
public void twoSecTime() {
long now = System.nanoTime();
int delta = (int) (now / 1000 - last / 1000); // microseconds
last = now;
aList.add(delta);
System.out.println(delta);
count++;
if (count == 10) {
// stop the time
timer.stop();
// get the averages
avgTimeTaken();
// wake up the wait to exit the JVM
// twoSecTime is run on the EDT via timer
// so need to use mainThread
synchronized (mainThread) {
mainThread.notifyAll();
}
}
}
public static void main(String[] args) {
new TestCode().start();
}
public void start() {
mainThread = this;
timer = new Timer(2000, (ae) -> twoSecTime());
last = System.nanoTime(); // initialize last
timer.start();
synchronized (this) {
try {
wait(); // main thread waiting until all is done
} catch (InterruptedException ie) {
ie.printStackTrace();
}
}
}
}
I have a List of "Cell" objects which represents a current status.
List<Cell> currentCells = new ArrayList<Cell>();
I want to update the current status in a loop by calculating a future status (a new List of Cells) and then replacing via "current status = future status", and so on. For that, every Cell has a method which returns a new Cell object representing its future. This method performs simple math operations. There are no dependencies on other calculations, so in theory future Cells could be computed in parallel threads.
My problem is to find the fastest way of computing the future status, because the "List<Cell> cells" is a very large ArrayList (4,000,000 elements) and I want to reach up to 25 loops per second.
If I compute it sequentially, my loop repeats 3 times per second. If I use heavy parallelisation by making each Cell a Callable and putting them all in an ExecutorService, my loop repeats only 0.5 times per second:
List<Future<Cell>> futures = taskExecutor.invokeAll(cells);
I know that solving a problem in parallel always carries some overhead. In this case it slows down my loop. What method can you recommend to speed up my loop?
There also seems to be no difference in speed if I change numberOfThreads from 1 up to 12 (I have a maximum of 12 processors). The code of my loop is:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import view.ConsolePrinter;
public class Engine implements Runnable {
private int numberOfThreads = 8;
private boolean shouldRun = false;
private ExecutorService taskExecutor;
private Thread loopThread;
private List<Cell> cells;
private World world;
private int numberOfRunningTimeSteps = 0;
private int FPS = 30;
private double averageFPS;
public Engine(List<Cell> cells, World world) {
this.cells = cells;
taskExecutor = Executors.newFixedThreadPool(numberOfThreads);
this.world = world;
}
@Override
public void run() {
world.resetTime();
shouldRun = true;
long startTime;
long estimatedTimeMillis;
long waitTime;
long totalTime = 0;
long targetTime = 1000 / FPS;
int frameCount = 0;
int maxFrameCount = 1;
while (shouldRun) {
startTime = System.nanoTime();
try {
List<Future<Cell>> futures = taskExecutor.invokeAll(cells);
List<Cell> futureCells = new ArrayList<Cell>();
futures.forEach((future) -> {
try {
futureCells.add(future.get());
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
});
world.setCells(futureCells);
} catch (InterruptedException e) {
e.printStackTrace();
}
world.timeProceeded();
if (numberOfRunningTimeSteps != 0 && world.getTime() == numberOfRunningTimeSteps) {
shouldRun = false;
}
estimatedTimeMillis = (System.nanoTime() - startTime) / 1000000;
waitTime = targetTime - estimatedTimeMillis;
try {
if (waitTime > 0.0) {
Thread.sleep(waitTime);
}
} catch (InterruptedException e) {
e.printStackTrace();
}
totalTime += System.nanoTime() - startTime;
frameCount++;
if (frameCount == maxFrameCount) {
averageFPS = 1000.0 / ((totalTime / frameCount) / 1000000);
frameCount = 0;
totalTime = 0;
}
}
}
public void start(int n) {
loopThread = new Thread(this);
loopThread.start();
numberOfRunningTimeSteps = n;
}
public void stop() {
shouldRun = false;
}
}
I need to know how frequently different events occur. For example, how many HTTP requests have occurred in the last 15 minutes. Because there can be a large count of events (millions), this must use a limited amount of memory.
Is there any util class in Java that can do this?
How can I implement this myself in Java?
Theoretical usage code can look like:
FrequencyCounter counter = new FrequencyCounter( 15, TimeUnit.Minutes );
...
counter.add();
...
int count = counter.getCount();
Edit: It must be a real-time value which can change thousands of times per minute and will be queried thousands of times per minute, so a database or file based solution is not possible.
Here is my implementation of such a counter. The memory usage with the default precision is less than 100 bytes, and it is independent of the event count.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
/**
* A counter that counts events within the past time interval. All events that occurred before this interval will be
* removed from the counter.
*/
public class FrequencyCounter {
private final long monitoringInterval;
private final int[] details;
private final AtomicInteger currentCount = new AtomicInteger();
private long startInterval;
private int total;
/**
* Create a new instance of the counter for the given interval.
*
* @param interval the time to monitor/count the events.
* @param unit the time unit of the {@code interval} argument
*/
FrequencyCounter( long interval, TimeUnit unit ) {
this( interval, unit, 16 );
}
/**
* Create a new instance of the counter for the given interval.
*
* @param interval the time to monitor/count the events.
* @param unit the time unit of the {@code interval} argument
* @param precision the count of time slices for the measurement
*/
FrequencyCounter( long interval, TimeUnit unit, int precision ) {
monitoringInterval = unit.toMillis( interval );
if( monitoringInterval <= 0 ) {
throw new IllegalArgumentException( "Interval must be a positive value:" + interval );
}
details = new int[precision];
startInterval = System.currentTimeMillis() - monitoringInterval;
}
/**
* Count a single event.
*/
public void increment() {
checkInterval( System.currentTimeMillis() );
currentCount.incrementAndGet();
}
/**
* Get the current value of the counter.
*
* #return the counter value
*/
public int getCount() {
long currentTime = System.currentTimeMillis();
checkInterval( currentTime );
long diff = currentTime - startInterval - monitoringInterval;
double partFactor = (diff * details.length / (double)monitoringInterval);
int part = (int)(details[0] * partFactor);
return total + currentCount.get() - part;
}
/**
* Check the interval of the detail counters and move the interval if needed.
*
* #param time the current time
*/
private void checkInterval( final long time ) {
if( (time - startInterval - monitoringInterval) > monitoringInterval / details.length ) {
synchronized( details ) {
long detailInterval = monitoringInterval / details.length;
while( (time - startInterval - monitoringInterval) > detailInterval ) {
int currentValue = currentCount.getAndSet( 0 );
if( (total | currentValue) == 0 ) {
// for the case that the counter was not used for a long time
startInterval = time - monitoringInterval;
return;
}
int size = details.length - 1;
total += currentValue - details[0];
System.arraycopy( details, 1, details, 0, size );
details[size] = currentValue;
startInterval += detailInterval;
}
}
}
}
}
The best way I can think of to implement this is to use another "time counting" thread.
If you're concerned about the amount of memory, you can add a threshold for the size of eventsCounter (Integer.MAX_VALUE seems like the natural choice).
Here's an example for an implementation, that is also thread-safe:
public class FrequencyCounter {
private AtomicInteger eventsCounter = new AtomicInteger(0);
private int timeCounter;
private boolean active;
public FrequencyCounter(int timeInSeconds) {
timeCounter = timeInSeconds;
active = true;
}
// Call this method whenever an interesting event occurs
public int add() {
if(active) {
int current;
do {
current = eventsCounter.get();
} while (!eventsCounter.compareAndSet(current, current + 1));
return current + 1;
}
else return -1;
}
// Get current number of events
public int getCount() {
return eventsCounter.get();
}
// Start the FrequencyCounter
public void run() {
Thread timer = new Thread(() -> {
while(timeCounter > 0) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
timeCounter --;
}
active = false;
});
timer.start();
}
}
How about a scheduled executor service.
class TimedValue{
int startValue;
int finishedValue;
TimedValue(int start){
startValue = start;
}
}
List<TimedValue> intervals = new CopyOnWriteArrayList<>();
//then when starting a measurement.
TimedValue value = new TimedValue(getCount());
//set the start value.
Callable<TimedValue> callable = () -> {
//performs the task.
value.finishedValue = getCount();
return value;
};
ScheduledExecutorService executor = Executors.newScheduledThreadPool(2);
ScheduledFuture<TimedValue> future = executor.schedule(
callable,
15,
TimeUnit.MINUTES);
executor.schedule(() -> {
try {
intervals.add(future.get());
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
}, future.getDelay(TimeUnit.MINUTES), TimeUnit.MINUTES);
This is a bit of a complicated method.
I would probably just have a List<LoggedValues> and accumulate values in that list at a fixed rate. Then it could be inspected whenever you want to know an interval's count.
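A rough sketch of that simpler idea (the class name, the one-minute sampling period, and the query method are my own choices):
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
public class SampledCounter {
    private final AtomicLong total = new AtomicLong();
    private final List<Long> samples = new CopyOnWriteArrayList<>();
    public SampledCounter() {
        // record the running total once a minute (one long per minute, so memory stays small)
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> samples.add(total.get()), 1, 1, TimeUnit.MINUTES);
    }
    public void add() {
        total.incrementAndGet();
    }
    public long countLastMinutes(int minutes) {
        int idx = samples.size() - minutes;               // sample taken roughly `minutes` ago
        long baseline = idx >= 0 ? samples.get(idx) : 0L; // before the first sample, compare against zero
        return total.get() - baseline;
    }
}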
I am currently working on a hypothetical producer-consumer problem using Java. The objective is to have an operating system which is 1000 bytes, but only 500 bytes are available to use for threads, as 500 bytes have already been consumed by drivers and other operations. The threads are as follows:
A thread to start a BubbleWitch2 session of 10 seconds, which requires 100 bytes of RAM per
second
A thread to start a Spotify stream of 20 seconds, which requires 250 bytes of RAM per second
You should also take into account the fact that the operating system is simultaneously supporting system
activity and managing the processor, memory and disk space of the device on which it is installed.
Therefore, additionally create:
System and management threads, which, together, require 50 bytes of RAM per second, and
execute for a random length of time, once invoked.
A thread to install a new security update of 2 KB, which will be stored to disk, and requires 150
bytes of RAM per second while installing. Assume sufficient disk capacity in the system to support
this thread.
The operating system has capacity for only 200 bytes per second, therefore a larger thread such as Spotify will experience delays or be forced to wait. I have used code which, as far as I can tell, implements this. I am also required to generate exit times, which I have done with timestamps, and to calculate average waiting times for threads.
I have included code in my solution for the average waiting times with System.out.print, but no matter what I do, it is not actually outputting the times at all, as if they did not exist.
I am also not sure if the buffer size limitations are working, as it is a matter of milliseconds. Is there any way to tell if this is working from the code below?
My main method.
public class ProducerConsumerTest {
public static void main(String[] args) throws InterruptedException {
Buffer c = new Buffer();
BubbleWitch2 p1 = new BubbleWitch2(c,1);
Processor c1 = new Processor(c, 1);
Spotify p2 = new Spotify(c, 2);
SystemManagement p3 = new SystemManagement(c, 3);
SecurityUpdate p4 = new SecurityUpdate(c, 4, p1, p2, p3);
p1.setName("BubbleWitch2 ");
p2.setName("Spotify ");
p3.setName("System Management ");
p4.setName("Security Update ");
p1.setPriority(10);
p2.setPriority(10);
p3.setPriority(10);
p4.setPriority(5);
c1.start();
p1.start();
p2.start();
p3.start();
p4.start();
p2.join();
p3.join();
p4.join();
System.exit(0);
}
}
My buffer class
import java.text.DateFormat;
import java.text.SimpleDateFormat;
/**
* Created by Rory on 10/08/2014.
*/
class Buffer {
private int contents, count = 0, process = 0;
private boolean available = false;
private long start, end, wait, request= 0;
private DateFormat time = new SimpleDateFormat("mm:ss:SSS");
public synchronized int get() {
while (process <= 500) {
try {
wait();
} catch (InterruptedException e) {
}
}
process -= 200;
System.out.println("CPU After Process " + process);
notifyAll();
return contents;
}
public synchronized void put(int value) {
while (process >= 1000) {
start = System.currentTimeMillis();
try {
wait();
} catch (InterruptedException e) {
}
end = System.currentTimeMillis();
wait = end - start;
count++;
request += wait;
System.out.println("Application Request Wait Time: " + time.format(wait));
process += value;
contents = value;
notifyAll();
}
}
}
My security update class
import java.lang.*;
import java.lang.System;
/**
* Created by Rory on 11/08/2014.
*/
class SecurityUpdate extends Thread {
private Buffer buffer;
private int number;
private int bytes = 150;
private int process = 0;
public SecurityUpdate(Buffer c, int number, BubbleWitch2 bubbleWitch2, Spotify spotify, SystemManagement systemManagement) throws InterruptedException {
buffer = c;
this.number = number;
bubbleWitch2.join();
spotify.join();
systemManagement.join();
}
public void run() {
for (int i = 0; i < 15; i++) {
buffer.put(i);
System.out.println(getName() + this.number
+ " put: " + i);
try {
sleep(1500);
} catch (InterruptedException e) {
}
}
System.out.println("-----------------------------");
System.out.println("Security Update has finished executing.");
System.out.println("------------------------------");
}
}
My processor class
class Processor extends Thread {
private Buffer processor;
private int number;
public Processor(Buffer c, int number) {
processor = c;
this.number = number;
}
public void run() {
int value = 0;
for (int i = 0; i < 60; i++) {
value = processor.get();
System.out.println("Processor #"
+ this.number
+ " got: " + value);
}
}
}
My bubblewitch class
import java.lang.*;
import java.lang.System;
import java.sql.Timestamp;
/**
* Created by Rory on 10/08/2014.
*/
class BubbleWitch2 extends Thread {
private Buffer buffer;
private int number;
private int bytes = 100;
private int duration;
public BubbleWitch2(Buffer c, int pduration) {
buffer = c;
duration = pduration;
}
long startTime = System.currentTimeMillis();
public void run() {
for (int i = 0; i < 10; i++) {
buffer.put(bytes);
System.out.println(getName() + this.number
+ " put: " + i);
try {
sleep(1000);
} catch (InterruptedException e) {
}
}
long endTime = System.currentTimeMillis();
long timeTaken = endTime - startTime;
java.util.Date date = new java.util.Date();
System.out.println("-----------------------------");
System.out.println("BubbleWitch2 has finished executing.");
System.out.println("Time taken to execute was " +timeTaken+ " milliseconds");
System.out.println("Time Bubblewitch2 thread exited Processor was " + new Timestamp(date.getTime()));
System.out.println("-----------------------------");
}
}
My system management
class SystemManagement extends Thread {
private Buffer buffer;
private int number, min = 1, max = 15;
private int loopCount = (int) (Math.random() * (max - min));
private int bytes = 50;
private int process = 0;
public SystemManagement(Buffer c, int number) {
buffer = c;
this.number = number;
}
public void run() {
for (int i = 0; i < loopCount; i++) {
buffer.put(50);
System.out.println(getName() + this.number
+ " put: " + i);
try {
sleep(1000);
} catch (InterruptedException e) {
}
}
System.out.println("-----------------------------");
System.out.println("System Management has finished executing.");
System.out.println("-----------------------------");
}
}
My spotify class
import java.sql.Timestamp;
/**
* Created by Rory on 11/08/2014.
*/
class Spotify extends Thread {
private Buffer buffer;
private int number;
private int bytes = 250;
public Spotify(Buffer c, int number) {
buffer = c;
this.number = number;
}
long startTime = System.currentTimeMillis();
public void run() {
for (int i = 0; i < 20; i++) {
buffer.put(bytes);
System.out.println(getName() + this.number
+ " put: " + i);
try {
sleep(1000);
} catch (InterruptedException e) {
}
}
long endTime = System.currentTimeMillis();
long timeTaken = endTime - startTime;
java.util.Date date = new java.util.Date();
System.out.println(new Timestamp(date.getTime()));
System.out.println("-----------------------------");
System.out.println("Spotify has finished executing.");
System.out.println("Time taken to execute was " + timeTaken + " milliseconds");
System.out.println("Time that Spotify thread exited Processor was " + date);
System.out.println("-----------------------------");
}
}
I may need to add timestamps to one or two classes yet, but does anyone have any idea how to get my average times to actually print out? Or what is preventing it, and whether the buffer limitation is effectively being shown here (given that we are talking about milliseconds)?
Thanks.
The reason the System.out calls are not printing is the following condition in your Buffer class:
public synchronized void put(int value) {
while (process >= 1000) {
.....
notifyAll();
}
}
This condition never gets satisfied, as process is never greater than 1000.
This is also the reason your Processor thread gets stuck: when it calls get() it finds that process is less than 500 and hence waits indefinitely when it reaches the wait() line of code.
Rectifying the process condition appropriately in your put should let your missing output get printed:
public synchronized void put(int value) {
if(process <= 500) {
process+=value;
} else {
//while (process >= 1000) {
start = System.currentTimeMillis();
try {
wait();
} catch (InterruptedException e) {
}
end = System.currentTimeMillis();
wait = end - start;
count++;
request += wait;
System.out.println("Application Request Wait Time: " + time.format(wait));
process += value;
contents = value;
//}
}
notifyAll();
}
If you want the SecurityUpdate thread to always run last, then the correct way of using join within that thread is as below:
class SecurityUpdate extends Thread {
private Buffer buffer;
private int number;
private int bytes = 150;
private int process = 0;
private BubbleWitch2 bubbleWitch2;
private Spotify spotify;
private SystemManagement systemManagement;
public SecurityUpdate(Buffer c, int number, BubbleWitch2 bubbleWitch2, Spotify spotify, SystemManagement systemManagement) throws InterruptedException {
buffer = c;
this.number = number;
this.bubbleWitch2 = bubbleWitch2;
this.spotify = spotify;
this.systemManagement = systemManagement;
}
public void run() {
try {
bubbleWitch2.join();
spotify.join();
systemManagement.join();
} catch (InterruptedException e) {
}
System.out.println("Finally starting the security update");
for (int i = 0; i < 15; i++) {
buffer.put(bytes); // Paul check if it should be i or bytes
System.out.println(getName() + this.number
+ " put: " + i);
try {
sleep(1500); // Paul why is this made to sleep 1500 seconds?
} catch (InterruptedException e) {
}
}
System.out.println("-----------------------------");
System.out.println("Security Update has finished executing.");
System.out.println("------------------------------");
}
}
I am trying to understand the utilities in the java.util.concurrent package and have learnt that we can submit Callable objects to an ExecutorService, which returns a Future that is filled with the value returned by the Callable after successful completion of the task within the call() method.
I understand that all the Callables are executed concurrently using multiple threads.
When I wanted to see how much improvement the ExecutorService gives over sequential execution, I thought of capturing the time.
Following is the code which I tried to execute -
package concurrency;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class ExecutorExample {
private static Callable<String> callable = new Callable<String>() {
@Override
public String call() throws Exception {
StringBuilder builder = new StringBuilder();
for(int i=0; i<5; i++) {
builder.append(i);
}
return builder.toString();
}
};
public static void main(String [] args) {
long start = System.currentTimeMillis();
ExecutorService service = Executors.newFixedThreadPool(5);
List<Future<String>> futures = new ArrayList<Future<String>>();
for(int i=0; i<5; i++) {
Future<String> value = service.submit(callable);
futures.add(value);
}
for(Future<String> f : futures) {
try {
System.out.println(f.isDone() + " " + f.get());
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (ExecutionException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
long end = System.currentTimeMillis();
System.out.println("Executer callable time - " + (end - start));
service.shutdown();
start = System.currentTimeMillis();
for(int i=0; i<5; i++) {
StringBuilder builder = new StringBuilder();
for(int j=0; j<5; j++) {
builder.append(j);
}
System.out.println(builder.toString());
}
end = System.currentTimeMillis();
System.out.println("Normal time - " + (end - start));
}
}
and here is the output of this -
true 01234
true 01234
true 01234
true 01234
true 01234
Executer callable time - 5
01234
01234
01234
01234
01234
Normal time - 0
Please let me know if I am missing something OR understanding something in a wrong way.
Thanks in advance for your time and help for this thread.
If your task in the Callable is too small, you won't get benefits from concurrency due to task switching and initialisation overhead. Try a heavier loop in the callable, say 1,000,000 iterations, and you will see the difference.
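For example, something along these lines (the loop bound is arbitrary):
import java.util.concurrent.Callable;
public class HeavierTask {
    // A heavier task, so the per-task work dwarfs the submission and scheduling overhead.
    static final Callable<String> HEAVY_CALLABLE = () -> {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < 1_000_000; i++) {
            builder.append(i % 10);
        }
        return builder.toString();
    };
}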
When you run any code, especially for the first time, it takes time. If you pass a task to another thread it can take 1-10 microseconds, and if your task takes less time than this, the overhead can be greater than the benefit; i.e. using multiple threads can be much slower than using a single thread if the overhead is high enough.
I suggest you
increase the cost of the task to 1000 iterations.
make sure the result is not discarded in the single threaded example
run both tests for at least a couple of seconds to ensure the code has warmed up.
Not an answer (but I am not sure the code will fit a comment). To expand a bit on what Peter said, there is usually a sweet spot for the size of your jobs (measured in execution time), to balance pool/queue overhead with fair work distribution among workers. The code example helps find an estimate for that sweet spot. Run on your target hardware.
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
public class FibonacciFork extends RecursiveTask<Long> {
private static final long serialVersionUID = 1L;
public FibonacciFork( long n) {
super();
this.n = n;
}
static ForkJoinPool fjp = new ForkJoinPool( Runtime.getRuntime().availableProcessors());
static long fibonacci0( long n) {
if ( n < 2) {
return n;
}
return fibonacci0( n - 1) + fibonacci0( n - 2);
}
static int rekLimit = 8;
private static long stealCount;
long n;
private long forkCount;
private static AtomicLong forks = new AtomicLong( 0);
public static void main( String[] args) {
int n = 45;
long times[] = getSingleThreadNanos( n);
System.out.println( "Single Thread Times complete");
for ( int r = 2; r <= n; r++) {
runWithRecursionLimit( r, n, times[ r]);
}
}
private static long[] getSingleThreadNanos( int n) {
final long times[] = new long[ n + 1];
ExecutorService es = Executors.newFixedThreadPool( Math.max( 1, Runtime.getRuntime().availableProcessors() / 2));
for ( int i = 2; i <= n; i++) {
final int arg = i;
Runnable runner = new Runnable() {
@Override
public void run() {
long start = System.nanoTime();
final int minRuntime = 1000000000;
long runUntil = start + minRuntime;
long result = fibonacci0( arg);
long end = System.nanoTime();
int ntimes = Math.max( 1, ( int) ( minRuntime / ( end - start)));
if ( ntimes > 1) {
start = System.nanoTime();
for ( int i = 0; i < ntimes; i++) {
result = fibonacci0( arg);
}
end = System.nanoTime();
}
times[ arg] = ( end - start) / ntimes;
}
};
es.execute( runner);
}
es.shutdown();
try {
es.awaitTermination( 1, TimeUnit.HOURS);
} catch ( InterruptedException e) {
System.out.println( "Single Timeout");
}
return times;
}
private static void runWithRecursionLimit( int r, int arg, long singleThreadNanos) {
rekLimit = r;
long start = System.currentTimeMillis();
long result = fibonacci( arg);
long end = System.currentTimeMillis();
// count the steals
long currentSteals = fjp.getStealCount();
long newSteals = currentSteals - stealCount;
stealCount = currentSteals;
long forksCount = forks.getAndSet( 0);
System.out.println( "Fib(" + arg + ")=" + result + " in " + ( end-start) + "ms, recursion limit: " + r +
" at " + ( singleThreadNanos / 1e6) + "ms, steals: " + newSteals + " forks " + forksCount);
}
static long fibonacci( final long arg) {
FibonacciFork task = new FibonacciFork( arg);
long result = fjp.invoke( task);
forks.set( task.forkCount);
return result;
}
@Override
protected Long compute() {
if ( n <= rekLimit) {
return fibonacci0( n);
}
FibonacciFork ff1 = new FibonacciFork( n-1);
FibonacciFork ff2 = new FibonacciFork( n-2);
ff1.fork();
long r2 = ff2.compute();
long r1 = ff1.join();
forkCount = ff2.forkCount + ff1.forkCount + 1;
return r1 + r2;
}
}