I'm building a Java REST API that requires clients to use API keys for access. I want to implement rate limiting based on the API key, so that a single key cannot call the API too many times within a given timeframe. I've been looking into resilience4j for this, but I can't tell whether resilience4j supports rate limiting based on criteria such as IP address or API key, rather than just rate limiting in general. Does anyone know if this is possible, or know of any resources that show how to do this? Thanks.
You need a RateLimiter instance per IP/apiKey instead of a single RateLimiter instance shared by all IPs/apiKeys.
See the guide below:
// Define one instance of LimiterManager (Singleton | ApplicationScoped)
// NOTE: Reuse this instance
LimiterManager limiterManager = new LimiterManager();

String apiKey = "abc"; // Get apiKey from the current client request
final RateLimiter rateLimiter = limiterManager.getLimiter(apiKey);

// You can use other RateLimiter.decorateXXX methods depending on your logic
Runnable runnable = RateLimiter.decorateRunnable(rateLimiter, new Runnable() {
    @Override
    public void run() {
        // TODO: Your rate-limited code here
    }
});

Try.runRunnable(runnable).onFailure(
    error -> System.out.print(error)
);
// Define the LimiterManager utility class
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import java.time.Duration;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class LimiterManager {

    final ConcurrentMap<String, RateLimiter> keyRateLimiters = new ConcurrentHashMap<>();

    final RateLimiterConfig config = RateLimiterConfig.custom()
            .timeoutDuration(Duration.ofMillis(100))
            .limitRefreshPeriod(Duration.ofSeconds(1))
            .limitForPeriod(3) // Max 3 accesses per second
            .build();

    public RateLimiter getLimiter(String apiKey) {
        return keyRateLimiters.computeIfAbsent(apiKey,
                key -> RateLimiter.of(apiKey, config));
    }
}
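To tie this back to the question, below is a minimal sketch of enforcing the per-key limit on incoming REST requests. It assumes the Servlet 4.0 API and a hypothetical X-Api-Key request header; neither is prescribed by resilience4j.

import io.github.resilience4j.ratelimiter.RateLimiter;
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Looks up the caller's limiter and rejects the request with HTTP 429
// when no permit is currently available.
public class ApiKeyRateLimitFilter implements Filter {

    private final LimiterManager limiterManager = new LimiterManager();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String apiKey = ((HttpServletRequest) req).getHeader("X-Api-Key"); // assumed header name
        RateLimiter rateLimiter = limiterManager.getLimiter(apiKey);
        if (rateLimiter.acquirePermission()) {
            chain.doFilter(req, res);
        } else {
            ((HttpServletResponse) res).sendError(429, "Too Many Requests");
        }
    }
}

Register the filter for the routes that require an API key; calls beyond the configured rate are rejected with 429 before they reach your resources.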
One problem with resilience4j is that its rate limiter state lives in the memory of a single JVM, so it doesn't work across shards or multiple instances; have a look at https://github.blog/2021-04-05-how-we-scaled-github-api-sharded-replicated-rate-limiter-redis/
I am currently working on creating an Event Hub-triggered Java function app that listens to the default endpoint of the IoT Hub. Following the tutorials, I do not see any sample code for an async implementation for Java function apps, while async/await is recommended for C# function apps.
Should I consider, and is it possible, to add an async implementation for function apps in Java? Is there any sample code I can take as a reference? Should I consider adding parallel programming/multithreading logic in the function app?
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-hubs-trigger?tabs=java#example
https://learn.microsoft.com/en-us/java/api/com.microsoft.azure.functions.annotation.eventhubtrigger?view=azure-java-stable
Java does not have async/await, but it has reactive/WebFlux.
When you create a default Azure Functions project it should be packaged with the reactive libraries, so you just need to make your calls in a reactive way.
So let's say you want to call some external source; your code would look like:
public Mono<ResponseEntity<WishlistDto>> getList(String profileId, String listId) {
    return service.getWishList(profileId, listId)
            .map(w -> ResponseEntity.ok().body(DtoMapper.convertToDto(w, true)))
            .defaultIfEmpty(ResponseEntity.notFound().build());
}
But I would recommend using input/output bindings as much as you can:
@FunctionName("DocByIdFromQueryString")
public HttpResponseMessage run(
        @HttpTrigger(name = "req",
                methods = {HttpMethod.GET, HttpMethod.POST},
                authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<String>> request,
        @CosmosDBInput(name = "database",
                databaseName = "ToDoList",
                collectionName = "Items",
                id = "{Query.id}",
                partitionKey = "{Query.partitionKeyValue}",
                connectionStringSetting = "Cosmos_DB_Connection_String")
        Optional<String> item,
        final ExecutionContext context)
In this case you don't need to worry much about reactive, since your function starts as soon as everything is ready and the Java SDK takes care of it.
Another example, using output bindings:
@FunctionName("sbtopicsend")
public HttpResponseMessage run(
        @HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
        @ServiceBusTopicOutput(name = "message", topicName = "mytopicname", subscriptionName = "mysubscription", connection = "ServiceBusConnection") OutputBinding<String> message,
        final ExecutionContext context) {
    String name = request.getBody().orElse("Azure Functions");
    message.setValue(name);
    return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
}
I released a new project, JAsync, which implements async-await style programming in Java, using Reactor as its low-level framework. It is in the alpha stage; I need more suggestions and test cases.
This project makes the developer's asynchronous programming experience as close as possible to ordinary synchronous programming, in both coding and debugging.
Here is an example:
@RestController
@RequestMapping("/employees")
public class MyRestController {

    @Inject
    private EmployeeRepository employeeRepository;
    @Inject
    private SalaryRepository salaryRepository;

    // A standard JAsync async method must be annotated with the Async annotation and return a JPromise object.
    @Async()
    private JPromise<Double> _getEmployeeTotalSalaryByDepartment(String department) {
        double money = 0.0;
        // A Mono object can be transformed into a JPromise object, so we get a Mono object first.
        Mono<List<Employee>> empsMono = employeeRepository.findEmployeeByDepartment(department);
        // Transform the Mono object into a JPromise object.
        JPromise<List<Employee>> empsPromise = Promises.from(empsMono);
        // Use await, just like in ES and C#, to get the value of the JPromise without blocking the current thread.
        for (Employee employee : empsPromise.await()) {
            // The method findSalaryByEmployee also returns a Mono object. We transform it to a JPromise as above, and then await the result.
            Salary salary = Promises.from(salaryRepository.findSalaryByEmployee(employee.id)).await();
            money += salary.total;
        }
        // The async method must return a JPromise object, so we use the just method to wrap the result in a JPromise.
        return JAsync.just(money);
    }

    // This is a normal WebFlux method.
    @GetMapping("/{department}/salary")
    public Mono<Double> getEmployeeTotalSalaryByDepartment(@PathVariable String department) {
        // Use the unwrap method to transform the JPromise object back into a Mono object.
        return _getEmployeeTotalSalaryByDepartment(department).unwrap(Mono.class);
    }
}
In addition to coding, JAsync also greatly improves the debugging experience of async code.
When debugging, you can see all variables in the monitor window just like when debugging normal code.
I am trying to find the answer to a very specific question. I've been going through the documentation, but so far no luck.
Imagine this piece of code:
@Override
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
    Request request = parseRequest(input);
    List<String> validationErrors = validate(request);
    if (validationErrors.size() == 0) {
        ordersManager.getOrderStatusForStore(orderId, storeId);
    } else {
        generateBadRequestResponse(output, "Invalid Request", null);
    }
}

private List<String> validate(Request request) {
    orderId = request.getPathParameters().get(PATH_PARAM_ORDER_ID);
    programId = request.getPathParameters().get(PATH_PARAM_STORE_ID);
    return new ArrayList<>();
}
Here, I am storing orderId and storeId in field variables. Is this okay? I am not sure whether AWS will cache this function (and hence cache the field variables) or instantiate a new Java object for every request. If it's a new object each time, then storing them in field variables is fine, but I'm not sure.
AWS will spin up a JVM and instantiate an instance of your code on the first request. AWS has an undocumented spin-down time: if you do not invoke your Lambda again within that window, it will shut down the JVM. You will notice that these initial requests take significantly longer, but once your function is "warmed up" it is much quicker.
So, to directly answer your question: your instance will be reused if the next request comes in quickly enough. Otherwise, a new instance will be stood up.
A simple Lambda function that can illustrate this point:
/**
 * A Lambda handler to see where this runs and when instances are reused.
 */
public class LambdaStatus {

    private String hostname;
    private AtomicLong counter;

    public LambdaStatus() throws UnknownHostException {
        this.counter = new AtomicLong(0L);
        this.hostname = InetAddress.getLocalHost().getCanonicalHostName();
    }

    public void handle(Context context) {
        counter.getAndIncrement();
        context.getLogger().log("hostname=" + hostname + ",counter=" + counter.get());
    }
}
Logs from invoking the above.
22:49:20 hostname=ip-10-12-169-156.ec2.internal,counter=1
22:49:27 hostname=ip-10-12-169-156.ec2.internal,counter=2
22:49:39 hostname=ip-10-12-169-156.ec2.internal,counter=3
01:19:05 hostname=ip-10-33-101-18.ec2.internal,counter=1
Strongly not recommended.
Multiple invocations may use the same Lambda function instance, and this will break your current functionality.
With Lambda, you need to ensure your instance variables are thread safe and can be accessed by multiple invocations. Limit instance-variable writes to initialization, once only.
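A minimal sketch of that advice applied to the handler from the question, assuming validate is changed to take the extracted values as parameters instead of writing fields:

@Override
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
    Request request = parseRequest(input);
    // Per-request values stay in local variables, so a reused
    // (and potentially shared) handler instance remains thread safe.
    String orderId = request.getPathParameters().get(PATH_PARAM_ORDER_ID);
    String storeId = request.getPathParameters().get(PATH_PARAM_STORE_ID);
    List<String> validationErrors = validate(orderId, storeId);
    if (validationErrors.isEmpty()) {
        ordersManager.getOrderStatusForStore(orderId, storeId);
    } else {
        generateBadRequestResponse(output, "Invalid Request", null);
    }
}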
OK, so I'm trying to implement RxJava2 with Retrofit2. The goal is to make a call only once and broadcast the results to different classes. For example: I have a list of geofences in my backend. I need that list in my MapFragment to display them on the map, but I also need that data to set the PendingIntent service for the actual trigger.
I tried following this answer, but I get all sorts of errors:
Single Observable with Multiple Subscribers
The current situation is as follows:
GeofenceRetrofitEndpoint:
public interface GeofenceEndpoint {
    @GET("geofences")
    Observable<List<Point>> getGeofenceAreas();
}
GeofenceDAO:
public class GeofenceDao {
    @Inject
    Retrofit retrofit;
    private final GeofenceEndpoint geofenceEndpoint;

    public GeofenceDao() {
        InjectHelper.getRootComponent().inject(this);
        geofenceEndpoint = retrofit.create(GeofenceEndpoint.class);
    }

    public Observable<List<Point>> loadGeofences() {
        return geofenceEndpoint.getGeofenceAreas().subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .share();
    }
}
MapFragment / any other class where I need the results
private void getGeofences() {
    new GeofenceDao().loadGeofences().subscribe(this::handleGeoResponse, this::handleGeoError);
}

private void handleGeoResponse(List<Point> points) {
    // handle response
}

private void handleGeoError(Throwable error) {
    // handle error
}
What am I doing wrong? When I call new GeofenceDao().loadGeofences().subscribe(this::handleGeoResponse, this::handleGeoError), it makes a separate network call each time. Thanks.
Each call to new GeofenceDao().loadGeofences() returns a different instance of the Observable. share() only applies to an instance, not to the method. If you want to actually share the Observable, you have to subscribe to the same instance. You could share it via a (static) member, e.g. loadGeofences:
private void getGeofences() {
    if (loadGeofences == null) {
        loadGeofences = new GeofenceDao().loadGeofences();
    }
    loadGeofences.subscribe(this::handleGeoResponse, this::handleGeoError);
}
But be careful not to leak the Observable; one way to handle that is sketched below.
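One way to avoid the leak, assuming this code lives in a Fragment (the field names here are illustrative), is to keep the Disposable returned by subscribe and dispose of it when the view goes away:

private Disposable disposable;

private void getGeofences() {
    if (loadGeofences == null) {
        loadGeofences = new GeofenceDao().loadGeofences();
    }
    // Keep the Disposable so the subscription can be cleaned up later.
    disposable = loadGeofences.subscribe(this::handleGeoResponse, this::handleGeoError);
}

@Override
public void onDestroy() {
    if (disposable != null && !disposable.isDisposed()) {
        disposable.dispose();
    }
    super.onDestroy();
}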
Maybe this isn't answering your question directly; however, I'd like to suggest a slightly different approach:
Create a BehaviorSubject in your GeofenceDao and subscribe your Retrofit request to this subject. The subject will act as a bridge between your clients and the API. By doing this you will achieve (see the sketch after this list):
Response caching - handy for screen rotations
Replaying the response for every interested observer
Subscriptions between clients and the subject that don't rely on the subscription between the subject and the API, so you can break one without breaking the other
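A minimal sketch of that bridge, reusing the GeofenceEndpoint and Point types from the question (constructor injection here is just for illustration):

public class GeofenceDao {

    // The subject caches the latest response and replays it to every subscriber.
    private final BehaviorSubject<List<Point>> subject = BehaviorSubject.create();

    public GeofenceDao(GeofenceEndpoint geofenceEndpoint) {
        // One network call, bridged into the subject.
        geofenceEndpoint.getGeofenceAreas()
                .subscribeOn(Schedulers.io())
                .subscribe(subject::onNext, subject::onError);
    }

    public Observable<List<Point>> loadGeofences() {
        return subject.observeOn(AndroidSchedulers.mainThread());
    }
}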
I am working on measuring my application metrics using the class below, in which I increment and decrement metrics.
public class AppMetrics {

    private final AtomicLongMap<String> metricCounter = AtomicLongMap.create();

    private static class Holder {
        private static final AppMetrics INSTANCE = new AppMetrics();
    }

    public static AppMetrics getInstance() {
        return Holder.INSTANCE;
    }

    private AppMetrics() {}

    public void increment(String name) {
        metricCounter.getAndIncrement(name);
    }

    public AtomicLongMap<String> getMetricCounter() {
        return metricCounter;
    }
}
I am calling the increment method of the AppMetrics class from multithreaded code, passing the metric name to increment.
Problem Statement:
Now I want a metricCounter for each clientId (a String). We may see the same clientId multiple times, and sometimes a new one; I then need to get the metricCounter map for that clientId and increment the metrics on that particular map (which is the part I am not sure how to do).
What is the right way to do that, keeping in mind it has to be thread safe and perform atomic operations? I was thinking of using a map like this instead:
private final Map<String, AtomicLongMap<String>> clientIdMetricCounterHolder = Maps.newConcurrentMap();
Is this the right way? If yes, how can I populate this map, with clientId as its key and the counter map for each metric as its value?
I am on Java 7.
If you use a map then you'll need to synchronize the creation of new AtomicLongMap instances. I would recommend using a Guava LoadingCache instead. You might not end up using any of the actual "caching" features, but the "loading" feature is extremely helpful, as it will synchronize the creation of AtomicLongMap instances for you, e.g.:
LoadingCache<String, AtomicLongMap<String>> clientIdMetricCounterCache =
        CacheBuilder.newBuilder().build(new CacheLoader<String, AtomicLongMap<String>>() {
            @Override
            public AtomicLongMap<String> load(String key) throws Exception {
                return AtomicLongMap.create();
            }
        });
Now you can safely update metric counts for any client without worrying about whether the client is new or not, e.g.:
clientIdMetricCounterCache.get(clientId).incrementAndGet(metricName);
A Map<String, Map<String, T>> is just a Map<Pair<String, String>, T> in disguise. Create a MultiKey class:
class MultiKey {
    public final String clientId;
    public final String name;
    MultiKey(String clientId, String name) { this.clientId = clientId; this.name = name; }
    // equals and hashCode make MultiKey usable as a map key
    @Override public boolean equals(Object o) {
        return o instanceof MultiKey && clientId.equals(((MultiKey) o).clientId) && name.equals(((MultiKey) o).name);
    }
    @Override public int hashCode() { return 31 * clientId.hashCode() + name.hashCode(); }
}
Then just use an AtomicLongMap<MultiKey>.
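For instance, a quick sketch of how the combined key is used (the metrics field and increment method are illustrative):

// One shared, thread-safe counter map keyed by (clientId, metric name).
private final AtomicLongMap<MultiKey> metrics = AtomicLongMap.create();

public void increment(String clientId, String name) {
    metrics.incrementAndGet(new MultiKey(clientId, name));
}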
Edited:
Provided the set of metrics is well defined, it wouldn't be too hard to use this data structure to view the metrics for one client:
Set<String> possibleMetrics = // all the possible values for "name"
Map<String, Long> getMetricsForClient(String client) {
    return Maps.asMap(possibleMetrics, m -> metrics.get(new MultiKey(client, m)));
}
The returned map will be a live unmodifiable view. It might be a bit more verbose if you're using an older Java version, but it's still possible.
How do I set the TTL (time to live) for a specific Couchbase document using spring-data-couchbase?
I know there is a way to set the expiry time using the Document annotation, as follows:
@Document(expiry = 10)
http://docs.spring.io/spring-data/couchbase/docs/1.1.1.RELEASE/reference/html/couchbase.entity.html
That sets the TTL for all documents saved through the entity class.
But it seems there is a way to set the expiration (TTL) for a specific document:
"Get and touch: Fetch a specified document and update the document expiration."
mentioned in
http://docs.couchbase.com/developer/dev-guide-3.0/read-write.html
How can I achieve the above functionality through spring-data-couchbase?
Even if I could only achieve it using the Java SDK, that would be fine.
Any help is appreciated.
Using Spring Data Couchbase, this is a simple way to configure the TTL per document:
public class CouchbaseConfig extends AbstractCouchbaseConfiguration {

    @Override
    protected List<String> bootstrapHosts() {
        return Arrays.asList("localhost");
    }

    @Override
    protected String getBucketName() {
        return "default";
    }

    @Override
    protected String getBucketPassword() {
        return "password1";
    }

    @Bean
    public MappingCouchbaseConverter mappingCouchbaseConverter() throws Exception {
        MappingCouchbaseConverter converter = new ExpiringDocumentCouchbaseConverter(couchbaseMappingContext());
        converter.setCustomConversions(customConversions());
        return converter;
    }

    class ExpiringDocumentCouchbaseConverter extends MappingCouchbaseConverter {

        /**
         * Create a new {@link MappingCouchbaseConverter}.
         *
         * @param mappingContext the mapping context to use.
         */
        public ExpiringDocumentCouchbaseConverter(MappingContext<? extends CouchbasePersistentEntity<?>, CouchbasePersistentProperty> mappingContext) {
            super(mappingContext);
        }

        // Set a custom TTL on documents that carry one.
        @Override
        public void write(final Object source, final CouchbaseDocument target) {
            super.write(source, target);
            if (source instanceof ClassContainingTTL) {
                target.setExpiration(((ClassContainingTTL) source).getTimeToLive());
            }
        }
    }
}
Using Spring-Data-Couchbase, you cannot set a TTL on a particular instance. Inserting (mutating) and setting the TTL in one go would be quite complicated, given the transcoding steps hidden away in the CouchbaseTemplate save method.
However, if what you want is just to update the TTL of an already-persisted document (which is what getAndTouch does), there is a way that doesn't involve any transcoding and so can be applied easily:
From the CouchbaseTemplate, get access to the underlying SDK client via getCouchbaseClient() (note that for now SDC is built on top of the previous generation of the SDK, 1.4.x, but there'll be a preview of SDC 2.0 soon ;) )
Using the SDK, perform a touch operation on the document's ID, giving it the new TTL.
The touch() method returns an OperationFuture (it is async), so make sure to either block on it or consider the touch done only once notified in the callback (see the sketch below).
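Putting those steps together, a minimal sketch against the 1.4.x-generation SDK; template is assumed to be a configured CouchbaseTemplate, and the document ID and TTL are illustrative:

CouchbaseClient client = template.getCouchbaseClient();
// touch() updates only the expiration; no document body is transferred.
OperationFuture<Boolean> touched = client.touch("document-id", 10); // new TTL, in seconds
if (!touched.get()) { // get() blocks until the async operation completes
    // handle failure, e.g. the document does not exist
}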
As of spring-data-couchbase:4.3.0 the code should look like yourRepository.getOperations().getCouchbaseClientFactory().getCollection(null).touch(id, ttl), or alternatively this can be done through the CouchbaseTemplate as couchbaseTemplate.getCollection(null).touch(id, ttl).
findById() has a withExpiry() method that results in getAndTouch() being used and the expiration being set:
User foundUser = couchbaseTemplate.findById(User.class).withExpiry(Duration.ofSeconds(1)).one(id);