This TestCode is supposed to create a stream of numbers (in seconds): collect 10 samples and average the time between each sample.
I tried using if-else, but a variable declared in the if block isn't visible in the else block.
Please correct me if I'm wrong.
I don't understand lambdas just yet.
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.logging.Level;
import java.util.logging.Logger;

public class TestCode {

    private int eachTwoSec;

    // supposed to aList.add 10 items
    // average the time needed in between each aList.add (2 obviously)
    public void avgTimeTaken() {
        ArrayList aList = new ArrayList();
        for (int i = 0; i < 10; i++) {
            aList.add(eachTwoSec);
        }
    }

    // return a number every two seconds (endless stream of samples)
    // samples 50,52,54,56,58,60,2,4,6,8,10
    public void twoSecTime() {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException ex) {
            Logger.getLogger(TestCode.class.getName()).log(Level.SEVERE, null, ex);
        }
        LocalDateTime ldt = LocalDateTime.now();
        DateTimeFormatter dtf = DateTimeFormatter.ofPattern("ss");
        eachTwoSec = Integer.parseInt(ldt.format(dtf));
        System.out.println(eachTwoSec);
        twoSecTime();
    }

    public TestCode() {
        // construct
        avgTimeTaken();
        new Thread(this::twoSecTime).start();
    }

    public static void main(String[] args) {
        // just a start point
        new TestCode();
    }
}
The literal answer to the question "How do I average the contents in ArrayList?" for a List<Integer> is:
list.stream().mapToInt(Integer::intValue).average();
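Note that average() returns an OptionalDouble rather than a double (it is empty for an empty list), so you will usually unwrap it. A minimal sketch, using a hypothetical list just for illustration:

import java.util.List;
import java.util.OptionalDouble;

class AverageExample {
    public static void main(String[] args) {
        // hypothetical data, only to show the unwrapping
        List<Integer> list = List.of(2, 2, 2);
        OptionalDouble avg = list.stream().mapToInt(Integer::intValue).average();
        // average() is empty for an empty list, so supply a default
        System.out.println(avg.orElse(0.0)); // prints 2.0
    }
}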
Though I suspect that's not really what you need to know given the concurrency issues in your code.
This may help to do what you want (or give you a place from which to proceed).
I use a timer to take action every 2000 ms. I prefer the Swing timer over messing around with TimerTasks.
Rather than just adding 2 seconds, I grab the current nanosecond time.
This captures latency introduced by various parts of the code and by scheduling/asynchronicity.
I add the values to the ArrayList in microseconds, each entry being the delta between the most recent reading and the previously recorded value.
When count == 10 I stop the timer and invoke the averaging method.
Most of the work is done on the EDT (normally a bad thing but okay for this exercise). If that were a problem, another thread could be started to handle the load.
I then have the original main thread wait, and signal it when done, to leave the JVM. IMO this is preferable to System.exit(0).
The gathered data and final average are all in microseconds.
import java.util.ArrayList;
import javax.swing.Timer;

public class TestCode {

    Timer timer;
    int delay = 2000; // milliseconds
    int count = 0;
    long last;
    ArrayList<Integer> aList = new ArrayList<>();
    Object mainThread;

    public void avgTimeTaken() {
        double sum = 0;
        for (Integer secs : aList) {
            sum += secs;
        }
        System.out.println("Avg = " + sum / aList.size());
    }

    public void twoSecTime() {
        long now = System.nanoTime();
        int delta = (int) (now / 1000 - last / 1000); // microseconds
        last = now;
        aList.add(delta);
        System.out.println(delta);
        count++;
        if (count == 10) {
            // stop the timer
            timer.stop();
            // get the average
            avgTimeTaken();
            // wake up the wait to exit the JVM
            // twoSecTime is run on the EDT via the timer,
            // so we need to notify on mainThread
            synchronized (mainThread) {
                mainThread.notifyAll();
            }
        }
    }

    public static void main(String[] args) {
        new TestCode().start();
    }

    public void start() {
        mainThread = this;
        timer = new Timer(2000, (ae) -> twoSecTime());
        last = System.nanoTime(); // initialize last
        timer.start();
        synchronized (this) {
            try {
                wait(); // main thread waits until all is done
            } catch (InterruptedException ie) {
                ie.printStackTrace();
            }
        }
    }
}
I'm a beginner in Java, and I just started using threads yesterday. I'm having a problem accessing variables in a thread, which I want to display.
Long story short, I made a (shabby) clock loop that runs when the program starts. After certain actions, I have a method that checks the time, but I don't know how to access the variables in the thread to make this possible.
This is my main method:
public static void main(String[] args) {
    Thread timeRunning = new Thread(new Clock());
    Clock clock = new Clock();
    Commands command = new Commands();

    timeRunning.start();

    command.whatsTheTime();
    try {
        Thread.sleep(1000);
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
    }
    command.whatsTheTime();
}
This is my clock loop:
public class Clock implements Runnable {

    public void run() {
        // Time
        int seconds = 0;
        int minutes = 0;
        int hours = 0;

        // "Clock" loop
        while (hours < 24) {
            while (minutes < 59) {
                while (seconds < 59) {
                    seconds++;
                    try {
                        Thread.sleep(30);
                    } catch (InterruptedException ex) {
                        Thread.currentThread().interrupt();
                    }
                    if (seconds == 59) {
                        minutes++;
                        seconds = 0;
                    }
                    if (minutes == 59) {
                        hours++;
                        minutes = 0;
                    }
                }
            }
        }
    }
}
This is the "whatsTheTime()" method:
public void whatsTheTime() {
    System.out.println("The time is: " + clock.hours + ":" + clock.minutes + ":" + clock.seconds);
}
But it doesn't work. What I want to know is: how can I access the seconds, minutes and hours from run() in the Clock class?
I'm sorry if this is a very basic question, but I don't know how to access them. I tried googling the solution, but can't seem to find it. I might be searching for the wrong thing, but as I said, I'm only a few months into Java, so searching for the solution is one of the things I need to get better at as well.
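One common fix, sketched below under assumptions about the rest of the program: make seconds, minutes and hours fields of Clock instead of local variables in run(), mark them volatile so the reading thread sees the updates, and hand the same Clock instance to both the Thread and the code that prints the time. The class and field names simply mirror the question; this is not the only possible approach.

public class Clock implements Runnable {
    // volatile so a thread reading these values sees the latest write
    volatile int seconds = 0;
    volatile int minutes = 0;
    volatile int hours = 0;

    public void run() {
        while (hours < 24) {
            try {
                Thread.sleep(30);
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
                return;
            }
            seconds++;
            if (seconds == 60) { seconds = 0; minutes++; }
            if (minutes == 60) { minutes = 0; hours++; }
        }
    }
}

class ClockDemo {
    public static void main(String[] args) throws InterruptedException {
        // share one Clock instance between the thread and the reader
        Clock clock = new Clock();
        new Thread(clock).start();
        Thread.sleep(1000);
        System.out.println("The time is: " + clock.hours + ":" + clock.minutes + ":" + clock.seconds);
    }
}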
My code is about the game "Minecraft". I want an item list (array) to drop random items, which works fine.
I am trying to set up a kind of scheduler for an EventHandler.
I want the EventHandler to be executed only 5 times a minute, i.e. every 12 seconds or so.
If I use Bukkit's runTaskLater function, the code is executed with a delay, but after the delay it keeps running permanently.
Here is my raw code without any scheduler.
@EventHandler
public void on(PlayerMoveEvent e) {
    Player p = e.getPlayer();
    if (p.getLocation().getBlock().getType() == Material.STONE_PLATE) {
        if (p.getLocation().subtract(0D, 1D, 0D).getBlock().getType() == Material.STAINED_CLAY) {
            Block block = p.getLocation().getBlock();
            Random ran = new Random();
            int auswahl = ran.nextInt(2);
            int zahl = ran.nextInt(main.Drops.size());
            ItemStack itemstack = main.Drops.get(zahl);
            block.getWorld().dropItemNaturally(p.getLocation(), itemstack);
        }
    }
}
and now this handler should run only every 12 seconds.
Does anyone have a solution for me?
Thanks a lot!
As I understand it, you want a cooldown. Just store the time of the last event in a variable and check whether the current time is at least 12 seconds later:
private long lastTime = System.currentTimeMillis();

@EventHandler
public void on(PlayerMoveEvent e) {
    if (lastTime < System.currentTimeMillis() - 12000) {
        Player p = e.getPlayer();
        if (p.getLocation().getBlock().getType() == Material.STONE_PLATE) {
            if (p.getLocation().subtract(0D, 1D, 0D).getBlock().getType() == Material.STAINED_CLAY) {
                Block block = p.getLocation().getBlock();
                Random ran = new Random();
                int auswahl = ran.nextInt(2);
                int zahl = ran.nextInt(main.Drops.size());
                ItemStack itemstack = main.Drops.get(zahl);
                block.getWorld().dropItemNaturally(p.getLocation(), itemstack);
            }
        }
        lastTime = System.currentTimeMillis();
    }
}
If it does not work, please comment :)
I would like to write a test for a method that notifies observers at a specific interval so that they execute a method. The timer object runs in its own thread.
Method of the timer to be tested:
private long waitTime;

public Metronome(int bpm) {
    this.bpm = bpm;
    this.waitTime = calculateWaitTime();
    this.running = false;
}

public void run() {
    long startTime = 0, estimatedTime = 0, threadSleepTime = 0;
    running = true;
    while (running) {
        startTime = System.nanoTime();
        tick(); // notify observers here
        estimatedTime = System.nanoTime() - startTime;
        threadSleepTime = waitTime - estimatedTime;
        threadSleepTime = threadSleepTime < 0 ? 0 : threadSleepTime;
        try {
            Thread.sleep(threadSleepTime / 1000000L);
        } catch (InterruptedException e) {
            // sth went wrong
        }
    }
}
Snippet from my testclass
private int ticks;
private long startTime;
private long stopTime;

@Test
public void tickTest() {
    metronome.setBpm(600);
    startTime = System.nanoTime();
    metronome.run();

    long duration = stopTime - startTime;
    long lowThreshold = 800000000;
    long highThreshold = 900000000;

    System.out.println(duration);
    assertTrue(lowThreshold < duration);
    assertTrue(duration <= highThreshold);
}

@Override
public void update(Observable o, Object arg) {
    ticks++;
    if (ticks == 10) {
        metronome.stop();
        stopTime = System.nanoTime();
    }
}
Right now, my test class registers as an observer at the object in question, so that I can count the number of times tick() was executed. The test measures the time before and after the execution, but it feels awkward to me to test the behaviour this way.
Any suggestions for improving the test?
Sometimes the solution is to use something from a standard library that is simple enough not to need testing itself. I think ScheduledExecutorService will do the trick for replacing the home-made timer being tested here. Note that it is pretty rare to be bitten by a bug in library code, but such bugs do exist.
In general, though, I think it is okay to create a helper class or use a mocking framework (Mockito) to do something simple like counting "ticks".
P.S. You can replace Thread.sleep(threadSleepTime / 1000000L) with TimeUnit.NANOSECONDS.sleep(threadSleepTime), which moves some logic from your code into the standard library.
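For illustration only, a minimal sketch of what replacing the hand-rolled loop with ScheduledExecutorService.scheduleAtFixedRate might look like; the class name, the tick() call and the bpm-to-period conversion are assumptions based on the code above, not your actual Metronome API:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class ScheduledMetronome {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void start(int bpm) {
        long periodNanos = TimeUnit.MINUTES.toNanos(1) / bpm; // one tick per beat
        // scheduleAtFixedRate maintains the average rate even if a tick occasionally runs long
        scheduler.scheduleAtFixedRate(this::tick, 0, periodNanos, TimeUnit.NANOSECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }

    private void tick() {
        // notify observers here
    }
}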
Based on your comments I changed my code. Instead of implementing the Observer interface in my test class, I now created a private class that implements the interface and registers at my timer.
Thanks for your time and thoughts.
Here is what the code looks like now:
Revised test code:
@Test(timeout = 2000)
public void tickTest() {
    long lowThreshold = 400000000;
    long highThreshold = 600000000;

    TickCounter counter = new TickCounter();
    metronome.addObserver(counter);

    metronome.setBpm(600);
    startTime = System.nanoTime();
    metronome.run();
    long duration = System.nanoTime() - startTime;

    assertTrue(lowThreshold <= duration);
    assertTrue(duration <= highThreshold);
}

private class TickCounter implements Observer {
    private int ticks;

    public TickCounter() {
        ticks = 0;
    }

    @Override
    public void update(Observable o, Object arg) {
        ticks++;
        if (ticks == 5) {
            metronome.stop();
        }
    }
}
snippet from my revised timer
private long expectedTime; // calculated when bpm of timer is set

@Override
public void run() {
    long startTime = 0, elapsedTime = 0, threadSleepTime = 0;
    running = true;
    while (running) {
        startTime = System.nanoTime();
        tick();
        elapsedTime = System.nanoTime() - startTime;
        threadSleepTime = expectedTime - elapsedTime;
        threadSleepTime = threadSleepTime < 0 ? 0 : threadSleepTime;
        try {
            TimeUnit.NANOSECONDS.sleep(threadSleepTime);
        } catch (Exception e) {
            // ignored
        }
    }
}
My biggest issue might have been that I implemented the Observer interface in my JUnit test case. So I created a private observer that specifically counts the number of times tick() was executed. The counter then stops my timer.
The test method measures the timing and asserts that the elapsed time lies between my defined limits.
It depends on how accurately you need to measure the time.
If you feel that it's "awkward" is that because you're not sure that the measurement is accurate enough? Do you fear that the OS is getting in the way with overhead?
If so, you may need an external timing board that's synchronized to an accurate source (GPS, atomic standard, etc.) to either test your code, or possibly to provide the trigger for your firing event.
Try this. You also need the time you are expecting: the expected time will be 1000000000/n nanoseconds, where n is the number of times your timer needs to tick() per second.
public void run() {
    long time = System.nanoTime();
    long elapsedTime = 0;
    // Hope you need to tick 30 times per second
    long expectedTime = 1000000000 / 30;
    long waitTime = 0;
    while (running) {
        tick();
        elapsedTime = System.nanoTime() - time;
        waitTime = expectedTime - elapsedTime;
        if (waitTime > 0) {
            // waitTime is in nanoseconds, so sleep with nanosecond resolution
            try { TimeUnit.NANOSECONDS.sleep(waitTime); } catch (Exception e) { }
        }
        time = System.nanoTime();
    }
}
I need a component/class that throttles execution of some method to maximum M calls in N seconds (or ms or nanos, does not matter).
In other words I need to make sure that my method is executed no more than M times in a sliding window of N seconds.
If you don't know of an existing class, feel free to post your solutions/ideas on how you would implement this.
I'd use a ring buffer of timestamps with a fixed size of M. Each time the method is called, you check the oldest entry: if it's more than N seconds in the past, you execute and overwrite it with the current timestamp; otherwise you sleep for the time difference.
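A minimal sketch of that idea (class and method names are mine, not from the answer); it blocks the caller until the call is allowed, then records the new timestamp in the oldest slot:

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

class RingBufferThrottle {
    private final long[] timestamps; // last M call times, in nanoseconds
    private final long windowNanos;
    private int oldest = 0;

    RingBufferThrottle(int maxCalls, long window, TimeUnit unit) {
        this.timestamps = new long[maxCalls];
        this.windowNanos = unit.toNanos(window);
        // initialize so the first M calls pass immediately
        Arrays.fill(timestamps, System.nanoTime() - windowNanos);
    }

    // note: sleeping inside synchronized serializes callers, which is acceptable for a sketch
    synchronized void acquire() throws InterruptedException {
        long now = System.nanoTime();
        long waitNanos = windowNanos - (now - timestamps[oldest]);
        if (waitNanos > 0) {
            TimeUnit.NANOSECONDS.sleep(waitNanos); // oldest call is still inside the window
        }
        timestamps[oldest] = System.nanoTime(); // overwrite the oldest slot with this call
        oldest = (oldest + 1) % timestamps.length;
    }
}

Usage would be something like new RingBufferThrottle(M, N, TimeUnit.SECONDS) and a call to acquire() at the top of the throttled method.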
What worked out of the box for me was Google Guava RateLimiter.
// Allow one request per second
private RateLimiter throttle = RateLimiter.create(1.0);

private void someMethod() {
    throttle.acquire();
    // Do something
}
In concrete terms, you should be able to implement this with a DelayQueue. Initialize the queue with M Delayed instances with their delay initially set to zero. As requests to the method come in, take a token, which causes the method to block until the throttling requirement has been met. When a token has been taken, add a new token to the queue with a delay of N.
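A rough sketch of that DelayQueue idea (the class and field names are mine); each taken token is replaced by one that becomes available again after N, so at most M tokens can be taken in any window of N:

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class DelayQueueThrottle {
    private static class Token implements Delayed {
        final long readyAtNanos;

        Token(long readyAtNanos) { this.readyAtNanos = readyAtNanos; }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(readyAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    private final DelayQueue<Token> tokens = new DelayQueue<>();
    private final long windowNanos;

    DelayQueueThrottle(int maxCalls, long window, TimeUnit unit) {
        this.windowNanos = unit.toNanos(window);
        for (int i = 0; i < maxCalls; i++) {
            tokens.add(new Token(System.nanoTime())); // initially ready, i.e. delay of zero
        }
    }

    void acquire() throws InterruptedException {
        tokens.take(); // blocks until some token's delay has expired
        tokens.add(new Token(System.nanoTime() + windowNanos)); // replace it, ready again in N
    }
}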
Read up on the Token bucket algorithm. Basically, you have a bucket with tokens in it. Every time you execute the method, you take a token. If there are no more tokens, you block until you get one. Meanwhile, there is some external actor that replenishes the tokens at a fixed interval.
I'm not aware of a library to do this (or anything similar). You could write this logic into your code or use AspectJ to add the behavior.
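For illustration only, a small token-bucket sketch (my own names, not a library API): a Semaphore holds the tokens and a scheduler plays the role of the external actor, replenishing one token per interval up to the bucket capacity:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

class TokenBucket {
    private final Semaphore tokens;
    private final int capacity;

    TokenBucket(int capacity, long refillPeriod, TimeUnit unit) {
        this.capacity = capacity;
        this.tokens = new Semaphore(capacity);
        ScheduledExecutorService refiller = Executors.newSingleThreadScheduledExecutor();
        // external actor that adds one token per period, never exceeding capacity
        refiller.scheduleAtFixedRate(this::refill, refillPeriod, refillPeriod, unit);
    }

    private void refill() {
        if (tokens.availablePermits() < capacity) {
            tokens.release();
        }
    }

    void acquire() throws InterruptedException {
        tokens.acquire(); // blocks until a token is available
    }
}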
If you need a Java based sliding window rate limiter that will operate across a distributed system you might want to take a look at the https://github.com/mokies/ratelimitj project.
A Redis backed configuration, to limit requests by IP to 50 per minute would look like this:
import com.lambdaworks.redis.RedisClient;
import es.moki.ratelimitj.core.LimitRule;
RedisClient client = RedisClient.create("redis://localhost");
Set<LimitRule> rules = Collections.singleton(LimitRule.of(1, TimeUnit.MINUTES, 50)); // 50 request per minute, per key
RedisRateLimit requestRateLimiter = new RedisRateLimit(client, rules);
boolean overLimit = requestRateLimiter.overLimit("ip:127.0.0.2");
See https://github.com/mokies/ratelimitj/tree/master/ratelimitj-redis for further details on Redis configuration.
This depends on the application.
Imagine the case in which multiple threads want a token to perform some globally rate-limited action with no burst allowed (i.e. you want to limit to 10 actions per 10 seconds, but you don't want 10 actions to happen in the first second and then nothing for the remaining 9 seconds).
The DelayQueue has a disadvantage: the order in which threads request tokens might not be the order in which their requests are fulfilled. If multiple threads are blocked waiting for a token, it is not clear which one will take the next available token. In my view, threads could even end up waiting forever.
One solution is to enforce a minimum interval of time between two consecutive actions and to take actions in the same order as they were requested.
Here is an implementation:
public class LeakyBucket {
    protected float maxRate;
    protected long minTime;
    // holds time of last action (past or future!)
    protected long lastSchedAction = System.currentTimeMillis();

    public LeakyBucket(float maxRate) throws Exception {
        if (maxRate <= 0.0f) {
            throw new Exception("Invalid rate");
        }
        this.maxRate = maxRate;
        this.minTime = (long) (1000.0f / maxRate);
    }

    public void consume() throws InterruptedException {
        long curTime = System.currentTimeMillis();
        long timeLeft;

        // calculate when we can do the action
        synchronized (this) {
            timeLeft = lastSchedAction + minTime - curTime;
            if (timeLeft > 0) {
                lastSchedAction += minTime;
            } else {
                lastSchedAction = curTime;
            }
        }

        // if needed, wait for our turn
        if (timeLeft <= 0) {
            return;
        } else {
            Thread.sleep(timeLeft);
        }
    }
}
My implementation below can handle arbitrary request-time precision. It has O(1) time complexity for each request and does not require any additional buffer (i.e. O(1) space complexity). In addition, it does not require a background thread to release tokens; instead, tokens are released based on the time elapsed since the last request.
class RateLimiter {
    int limit;
    double available;
    long interval;
    long lastTimeStamp;

    RateLimiter(int limit, long interval) {
        this.limit = limit;
        this.interval = interval;
        available = 0;
        lastTimeStamp = System.currentTimeMillis();
    }

    synchronized boolean canAdd() {
        long now = System.currentTimeMillis();
        // more tokens are released since the last request
        available += (now - lastTimeStamp) * 1.0 / interval * limit;
        if (available > limit)
            available = limit;
        lastTimeStamp = now;
        if (available < 1)
            return false;
        else {
            available--;
            return true;
        }
    }
}
Although it's not what you asked for, ThreadPoolExecutor, which is designed to cap the number of simultaneous requests to M rather than M requests in N seconds, could also be useful.
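For comparison, a minimal sketch of capping concurrency (not rate) with a fixed-size pool; the pool size and the dummy task are placeholders of my own:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class BoundedConcurrencyExample {
    public static void main(String[] args) {
        int m = 4; // at most M tasks run at the same time; extra tasks queue up
        ExecutorService pool = Executors.newFixedThreadPool(m);
        for (int i = 0; i < 20; i++) {
            int id = i;
            pool.submit(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}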
I have implemented a simple throttling algorithm. Try this link:
http://krishnaprasadas.blogspot.in/2012/05/throttling-algorithm.html
A brief summary of the algorithm:
This algorithm utilizes the capability of the Java DelayQueue.
Create a Delayed object with the expected delay (here 1000/M for a millisecond TimeUnit).
Put that object into the delay queue, which in turn provides the moving window for us.
Then, before each method call, take the object from the queue; take is a blocking call which returns only after the specified delay has elapsed. After the method call, don't forget to put the object back into the queue with an updated time (here, the current milliseconds).
Here we can also have multiple Delayed objects with different delays. This approach also provides high throughput.
Try to use this simple approach:
public class SimpleThrottler {

    private static final int T = 1; // min
    private static final int N = 345;

    private Lock lock = new ReentrantLock();
    private Condition newFrame = lock.newCondition();
    private volatile boolean currentFrame = true;

    public SimpleThrottler() {
        handleForGate();
    }

    /**
     * Payload
     */
    private void job() {
        try {
            Thread.sleep(Math.abs(ThreadLocalRandom.current().nextLong(12, 98)));
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.err.print(" J. ");
    }

    public void doJob() throws InterruptedException {
        lock.lock();
        try {
            while (true) {
                int count = 0;
                while (count < N && currentFrame) {
                    job();
                    count++;
                }
                newFrame.await();
                currentFrame = true;
            }
        } finally {
            lock.unlock();
        }
    }

    public void handleForGate() {
        Thread handler = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(1 * 900);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    currentFrame = false;
                    lock.lock();
                    try {
                        newFrame.signal();
                    } finally {
                        lock.unlock();
                    }
                }
            }
        });
        handler.start();
    }
}
Apache Camel also comes with a Throttler mechanism, as follows:
from("seda:a").throttle(100).asyncDelayed().to("seda:b");
This is an update to the LeakyBucket code above.
This works for more than 1000 requests per second.
import lombok.SneakyThrows;

import java.util.concurrent.TimeUnit;

class LeakyBucket {
    private long minTimeNano; // sec / billion
    private long sched = System.nanoTime();

    /**
     * Create a rate limiter using the leaky bucket algorithm.
     *
     * @param perSec the number of requests per second
     */
    public LeakyBucket(double perSec) {
        if (perSec <= 0.0) {
            throw new RuntimeException("Invalid rate " + perSec);
        }
        this.minTimeNano = (long) (1_000_000_000.0 / perSec);
    }

    @SneakyThrows
    public void consume() {
        long curr = System.nanoTime();
        long timeLeft;

        synchronized (this) {
            timeLeft = sched - curr + minTimeNano;
            sched += minTimeNano;
        }
        if (timeLeft <= minTimeNano) {
            return;
        }
        TimeUnit.NANOSECONDS.sleep(timeLeft);
    }
}
and the unit test for the above:
import com.google.common.base.Stopwatch;
import org.junit.Ignore;
import org.junit.Test;

import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

public class LeakyBucketTest {
    @Test
    @Ignore
    public void t() {
        double numberPerSec = 10000;
        LeakyBucket b = new LeakyBucket(numberPerSec);
        Stopwatch w = Stopwatch.createStarted();
        IntStream.range(0, (int) (numberPerSec * 5)).parallel().forEach(
                x -> b.consume());
        System.out.printf("%,d ms%n", w.elapsed(TimeUnit.MILLISECONDS));
    }
}
Here is a slightly more advanced version of the simple rate limiter:
import lombok.AllArgsConstructor;
import lombok.Getter;

import java.time.LocalTime;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Simple request limiter based on the Thread.sleep method.
 * Create a limiter instance via {@link #create(float)} and call {@link #consume()} before making any request.
 * If the limit is exceeded, consume locks and waits for the current call rate to fall below the limit.
 */
public class RequestRateLimiter {

    private long minTime;
    private long lastSchedAction;
    private double avgSpent = 0;

    ArrayList<RatePeriod> periods;

    @AllArgsConstructor
    public static class RatePeriod {

        @Getter
        private LocalTime start;

        @Getter
        private LocalTime end;

        @Getter
        private float maxRate;
    }

    /**
     * Create a request limiter with maxRate - maximum number of requests per second
     * @param maxRate - maximum number of requests per second
     * @return
     */
    public static RequestRateLimiter create(float maxRate) {
        return new RequestRateLimiter(Arrays.asList(new RatePeriod(LocalTime.of(0, 0, 0),
                LocalTime.of(23, 59, 59), maxRate)));
    }

    /**
     * Create a request limiter with a ratePeriods calendar - maximum number of requests per second in every period
     * @param ratePeriods - rate calendar
     * @return
     */
    public static RequestRateLimiter create(List<RatePeriod> ratePeriods) {
        return new RequestRateLimiter(ratePeriods);
    }

    private void checkArgs(List<RatePeriod> ratePeriods) {
        for (RatePeriod rp : ratePeriods) {
            if (null == rp || rp.maxRate <= 0.0f || null == rp.start || null == rp.end)
                throw new IllegalArgumentException("list contains null or rate is less than zero or period is zero length");
        }
    }

    private float getCurrentRate() {
        LocalTime now = LocalTime.now();
        for (RatePeriod rp : periods) {
            if (now.isAfter(rp.start) && now.isBefore(rp.end))
                return rp.maxRate;
        }
        return Float.MAX_VALUE;
    }

    private RequestRateLimiter(List<RatePeriod> ratePeriods) {
        checkArgs(ratePeriods);
        periods = new ArrayList<>(ratePeriods.size());
        periods.addAll(ratePeriods);

        this.minTime = (long) (1000.0f / getCurrentRate());
        this.lastSchedAction = System.currentTimeMillis() - minTime;
    }

    /**
     * Call this method before making the actual request.
     * The call blocks until the current rate falls below the limit.
     * @throws InterruptedException
     */
    public void consume() throws InterruptedException {
        long timeLeft;

        synchronized (this) {
            long curTime = System.currentTimeMillis();

            minTime = (long) (1000.0f / getCurrentRate());
            timeLeft = lastSchedAction + minTime - curTime;

            long timeSpent = curTime - lastSchedAction + timeLeft;
            avgSpent = (avgSpent + timeSpent) / 2;

            if (timeLeft <= 0) {
                lastSchedAction = curTime;
                return;
            }

            lastSchedAction = curTime + timeLeft;
        }

        Thread.sleep(timeLeft);
    }

    public synchronized float getCurRate() {
        return (float) (1000d / avgSpent);
    }
}
And unit tests
import org.junit.Assert;
import org.junit.Test;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RequestRateLimiterTest {

    @Test(expected = IllegalArgumentException.class)
    public void checkSingleThreadZeroRate() {
        // Zero rate
        RequestRateLimiter limiter = RequestRateLimiter.create(0);
        try {
            limiter.consume();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void checkSingleThreadUnlimitedRate() {
        // Unlimited
        RequestRateLimiter limiter = RequestRateLimiter.create(Float.MAX_VALUE);

        long started = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            try {
                limiter.consume();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        long ended = System.currentTimeMillis();
        System.out.println("Current rate:" + limiter.getCurRate());
        Assert.assertTrue(((ended - started) < 1000));
    }

    @Test
    public void rcheckSingleThreadRate() {
        // 3 requests per minute
        RequestRateLimiter limiter = RequestRateLimiter.create(3f / 60f);

        long started = System.currentTimeMillis();
        for (int i = 0; i < 3; i++) {
            try {
                limiter.consume();
                Thread.sleep(20000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        long ended = System.currentTimeMillis();

        System.out.println("Current rate:" + limiter.getCurRate());
        Assert.assertTrue(((ended - started) >= 60000) & ((ended - started) < 61000));
    }

    @Test
    public void checkSingleThreadRateLimit() {
        // 100 requests per second
        RequestRateLimiter limiter = RequestRateLimiter.create(100);

        long started = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            try {
                limiter.consume();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        long ended = System.currentTimeMillis();

        System.out.println("Current rate:" + limiter.getCurRate());
        Assert.assertTrue((ended - started) >= (10000 - 100));
    }

    @Test
    public void checkMultiThreadedRateLimit() {
        // 100 requests per second
        RequestRateLimiter limiter = RequestRateLimiter.create(100);
        long started = System.currentTimeMillis();

        List<Future<?>> tasks = new ArrayList<>(10);
        ExecutorService exec = Executors.newFixedThreadPool(10);

        for (int i = 0; i < 10; i++) {
            tasks.add(exec.submit(() -> {
                for (int i1 = 0; i1 < 100; i1++) {
                    try {
                        limiter.consume();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }));
        }

        tasks.stream().forEach(future -> {
            try {
                future.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        });

        long ended = System.currentTimeMillis();
        System.out.println("Current rate:" + limiter.getCurRate());
        Assert.assertTrue((ended - started) >= (10000 - 100));
    }

    @Test
    public void checkMultiThreaded32RateLimit() {
        // 0.2 requests per second
        RequestRateLimiter limiter = RequestRateLimiter.create(0.2f);
        long started = System.currentTimeMillis();

        List<Future<?>> tasks = new ArrayList<>(8);
        ExecutorService exec = Executors.newFixedThreadPool(8);

        for (int i = 0; i < 8; i++) {
            tasks.add(exec.submit(() -> {
                for (int i1 = 0; i1 < 2; i1++) {
                    try {
                        limiter.consume();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }));
        }

        tasks.stream().forEach(future -> {
            try {
                future.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        });

        long ended = System.currentTimeMillis();
        System.out.println("Current rate:" + limiter.getCurRate());
        Assert.assertTrue((ended - started) >= (10000 - 100));
    }

    @Test
    public void checkMultiThreadedRateLimitDynamicRate() {
        // 100 requests per second
        RequestRateLimiter limiter = RequestRateLimiter.create(100);
        long started = System.currentTimeMillis();

        List<Future<?>> tasks = new ArrayList<>(10);
        ExecutorService exec = Executors.newFixedThreadPool(10);

        for (int i = 0; i < 10; i++) {
            tasks.add(exec.submit(() -> {
                Random r = new Random();
                for (int i1 = 0; i1 < 100; i1++) {
                    try {
                        limiter.consume();
                        Thread.sleep(r.nextInt(1000));
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }));
        }

        tasks.stream().forEach(future -> {
            try {
                future.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        });

        long ended = System.currentTimeMillis();
        System.out.println("Current rate:" + limiter.getCurRate());
        Assert.assertTrue((ended - started) >= (10000 - 100));
    }
}
My solution: A simple util method, you can modify it to create a wrapper class.
public static Runnable throttle(Runnable realRunner, long delay) {
    Runnable throttleRunner = new Runnable() {
        // whether it is waiting to run
        private boolean _isWaiting = false;
        // target time to run realRunner
        private long _timeToRun;
        // specified delay time to wait
        private long _delay = delay;
        // Runnable that has the real task to run
        private Runnable _realRunner = realRunner;

        @Override
        public void run() {
            // current time
            long now;
            synchronized (this) {
                // another thread is waiting, skip
                if (_isWaiting) return;
                now = System.currentTimeMillis();
                // update time to run
                // do not update it each time since
                // you do not want to postpone it unlimited
                _timeToRun = now + _delay;
                // set waiting status
                _isWaiting = true;
            }
            try {
                Thread.sleep(_timeToRun - now);
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                // clear waiting status before run
                _isWaiting = false;
                // do the real task
                _realRunner.run();
            }
        }
    };
    return throttleRunner;
}
Taken from JAVA Thread Debounce and Throttle.
Here is a rate limiter implementation based on @tonywl's (and somewhat related to Duarte Meneses's leaky bucket). The idea is the same - use a "token pool" to allow both rate limiting and bursting (making multiple calls in a short time after idling for a bit).
This implementation offers two main differences:
Lock-less concurrent access using atomic operations.
Instead of blocking a request, it calculates the delay needed to enforce the rate limit and offers that as the response, allowing the caller to enforce the delay; this works better with the asynchronous programming found in modern networking frameworks.
The full implementation with documentation can be found in this GitHub Gist, which is where I'll also post updates, but here's the gist of it:
import java.util.concurrent.atomic.AtomicLong;

public class RateLimiter {
    private final static long TOKEN_SIZE = 1_000_000 /* tockins per token */;
    private final double tokenRate; // measured in tokens per ms
    private final double tockinRate; // measured in tockins per ms
    private final long tockinsLimit;

    private AtomicLong available;
    private AtomicLong lastTimeStamp;

    public RateLimiter(int prefill, int limit, int fill, long interval) {
        this.tokenRate = (double) fill / interval;
        this.tockinsLimit = TOKEN_SIZE * limit;
        this.tockinRate = tokenRate * TOKEN_SIZE;
        this.lastTimeStamp = new AtomicLong(System.nanoTime());
        this.available = new AtomicLong(Math.max(prefill, limit) * TOKEN_SIZE);
    }

    public boolean allowRequest() {
        return whenNextAllowed(1, false) == 0;
    }

    public boolean allowRequest(int cost) {
        return whenNextAllowed(cost, false) == 0;
    }

    public long whenNextAllowed(boolean alwaysConsume) {
        return whenNextAllowed(1, alwaysConsume);
    }

    /**
     * Check when the next call will be allowed, according to the specified rate.
     * The value returned is in milliseconds. If the result is 0 - or if {@code alwaysConsume} was
     * specified - then the RateLimiter has recorded that the call has been allowed.
     * @param cost How costly is the requested action. The base rate is 1 token per request,
     *   but the client can declare a more costly action that consumes more tokens.
     * @param alwaysConsume if set to {@code true} this method assumes that the caller will delay
     *   the action that is rate limited but will perform it without checking again - so it will
     *   consume the specified number of tokens as if the action has gone through. This means that
     *   the pool can get into a deficit, which will further delay additional actions.
     * @return how long before this request should be let through.
     */
    public long whenNextAllowed(int cost, boolean alwaysConsume) {
        var now = System.nanoTime();
        var last = lastTimeStamp.getAndSet(now);
        // calculate how many tockins we got since the last call
        // if the previous call was less than a microsecond ago, we still accumulate at least
        // one tockin, which is probably more than we should, but this is too small to matter - right?
        var add = (long) Math.ceil(tokenRate * (now - last));
        var nowAvailable = available.addAndGet(add);
        while (nowAvailable > tockinsLimit) {
            available.compareAndSet(nowAvailable, tockinsLimit);
            nowAvailable = available.get();
        }

        // answer the question
        var toWait = (long) Math.ceil(Math.max(0, (TOKEN_SIZE - nowAvailable) / tockinRate));
        if (alwaysConsume || toWait == 0) // the caller will let the request go through, so consume a token now
            available.addAndGet(-TOKEN_SIZE);
        return toWait;
    }
}
}