How will multi-threading affect the Easy Rules engine? - java

I am looking for a rules engine for my web application and I found Easy Rules. However, the FAQ section states that there is a limitation regarding thread safety.
Is a web container considered a multi-threaded environment? Each HTTP request is presumably processed by a worker thread created by the application server.
How does thread safety come into play here?
How should I deal with thread safety?
If you run Easy Rules in a multi-threaded environment, you should take into account the following considerations:
The Easy Rules engine holds a set of rules; it is not thread safe.
By design, rules in Easy Rules encapsulate the business object model they operate on, so they are not thread safe either.
Do not try to make everything synchronized or locked down!
The Easy Rules engine is a very lightweight object and you can create an instance per thread; this is by far the easiest way to avoid thread safety problems.
http://www.easyrules.org/get-involved/faq.html
http://www.easyrules.org/tutorials/shop-tutorial.html
Based on this example, how will multi-threading affect the rules engine?
public class AgeRule extends BasicRule {

    private static final int ADULT_AGE = 18;

    private Person person;

    public AgeRule(Person person) {
        super("AgeRule",
              "Check if person's age is > 18 and marks the person as adult", 1);
        this.person = person;
    }

    @Override
    public boolean evaluate() {
        return person.getAge() > ADULT_AGE;
    }

    @Override
    public void execute() {
        person.setAdult(true);
        System.out.printf("Person %s has been marked as adult", person.getName());
    }
}
public class AlcoholRule extends BasicRule {

    private Person person;

    public AlcoholRule(Person person) {
        super("AlcoholRule",
              "Children are not allowed to buy alcohol", 2);
        this.person = person;
    }

    @Condition
    public boolean evaluate() {
        return !person.isAdult();
    }

    @Action
    public void execute() {
        System.out.printf("Shop: Sorry %s, you are not allowed to buy alcohol", person.getName());
    }
}
public class Launcher {

    public void someMethod() {
        // create a person instance
        Person tom = new Person("Tom", 14);
        System.out.println("Tom: Hi! can I have some Vodka please?");

        // create a rules engine
        RulesEngine rulesEngine = aNewRulesEngine()
                .named("shop rules engine")
                .build();

        // register rules
        rulesEngine.registerRule(new AgeRule(tom));
        rulesEngine.registerRule(new AlcoholRule(tom));

        // fire rules
        rulesEngine.fireRules();
    }
}

Yes, a web application is multithreaded. As you expect, there is a pool of threads maintained by the server. When the server socket gets an incoming request on the port it is listening to, it delegates the request to a thread from the pool. Typically the request is executed on that thread until the response is completed.
If you try to create a single rules engine and let multiple threads access it, then either
the rules engine data gets corrupted as a result of being manipulated by multiple threads (data structures that aren't meant to be thread-safe perform operations in multiple steps, and other threads accessing and changing the same data can interfere mid-operation), or
you use locking to make sure only one thread at a time can use the rules engine, avoiding having your shared object get corrupted, but in the process creating a bottleneck. All of your requests will need to wait for the rules engine to be available and only one thread at a time can make progress.
It's much better to give each request its own copy of the rules engine, so it doesn't get corrupted and there is no need for locking. The ideal situation for threads is for each to be able to execute independently without needing to contend for shared resources.
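To make that concrete, here is a minimal sketch of the per-request approach using the same Easy Rules calls as the tutorial code above (the servlet class, request parameters, and import path are illustrative assumptions, not part of the question):

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
// adjust the static import to match your Easy Rules version
import static org.easyrules.core.RulesEngineBuilder.aNewRulesEngine;

public class ShopServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        // Everything below is created per request, so each worker thread gets its
        // own rules engine and its own rule instances; nothing is shared or locked.
        Person person = new Person(req.getParameter("name"),
                Integer.parseInt(req.getParameter("age")));

        RulesEngine rulesEngine = aNewRulesEngine()
                .named("shop rules engine")
                .build();

        rulesEngine.registerRule(new AgeRule(person));
        rulesEngine.registerRule(new AlcoholRule(person));

        rulesEngine.fireRules();
    }
}

Building the engine per request costs almost nothing and removes the need for any synchronization around the rules.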

Related

Dependency injection of IHttpContextAccessor vs passing parameter up the method chain

Our application calls many external APIs which take the session token of the current user as input. So what we currently do is, in a controller, get the session token for the user and pass it into a service, which in turn might call another service or some API client. To give an idea, we end up with something like this (the example is .NET, but I think something similar is possible in Java):
public IActionResult DoSomething(string something)
{
    this.someService.DoSomethingForUser(this.HttpContext.SessionToken, something);
    return View();
}
And then we have
public class SomeService
{
    private readonly IApiClient apiClient;

    public SomeService(IApiClient apiClient)
    {
        this.apiClient = apiClient;
    }

    public void DoSomethingForUser(string sessionToken, string something)
    {
        this.apiClient.DoSomethingForUser(sessionToken, something);
    }
}
It can also happen that in SomeService another service is injected which in turn calls the IApiClient instead of SomeService calling IApiClient directly, basically adding another "layer".
We had a discussion within the team about whether, instead of passing the session token around, it would be better to inject it using DI, so you get something like this:
public IActionResult DoSomething(string something)
{
    this.someService.DoSomethingForUser(something);
    return View();
}
And then we have
public class SomeService
{
    private readonly IUserService userService;
    private readonly IApiClient apiClient;

    public SomeService(IUserService userService, IApiClient apiClient)
    {
        this.userService = userService;
        this.apiClient = apiClient;
    }

    public void DoSomethingForUser(string something)
    {
        this.apiClient.DoSomethingForUser(userService.SessionToken, something);
    }
}
The IUserService would have an IHttpContextAccessor injected:
public class UserService : IUserService
{
    private readonly IHttpContextAccessor httpContextAccessor;

    public UserService(IHttpContextAccessor httpContextAccessor)
    {
        this.httpContextAccessor = httpContextAccessor;
    }

    public string SessionToken => httpContextAccessor.HttpContext.SessionToken;
}
The benefits of this pattern are, I think, pretty clear. Especially with many services, it keeps the code "cleaner" and you end up with less boilerplate code to pass a token around.
Still, I don't like it. To me, the downsides of this pattern outweigh its benefits:
I like that passing the token in the methods is explicit. It makes clear that the service needs some sort of authentication token in order to function. I'm not sure you can call it a side effect, but the fact that a session token is magically injected three layers deep is impossible to tell just by reading the code.
Unit testing is a bit more tedious if you have to mock the IUserService.
You run into problems when calling this from another thread, e.g. calling SomeService from another thread. Although these problems can be mitigated by injecting another concrete implementation of IUserService which gets the token from somewhere else, it feels like a chore.
To me it strongly feels like an anti-pattern, but apart from the arguments above it is mostly a feeling. There was a lot of discussion and not everybody was convinced that it was a bad idea. Therefore, my question is: is it an anti-pattern, or is it perfectly valid? What are some strong arguments for and against it, hopefully so that there is not much debate that this pattern is indeed either perfectly valid or something to avoid.
I would say the main point is to enable your desired separation of concerns. I think it is a good question if expressed in those terms. As Kit says, different people may prefer different solutions.
REQUEST SCOPED OBJECTS
These occur quite naturally in APIs. Consider the following example, where a UI calls an Orders API, then the Orders API forwards the JWT to an upstream Billing API. A unique Request ID is also sent, in case the flow experiences a temporary problem. If the flow is retried, the Request ID can be used by APIs to prevent data duplication. Yet business logic should not need to know about either the Request ID or the JWT.
BUSINESS LOGIC CLASS DESIGN
I would start by designing my logic classes with my desired inputs, then work out the DI later. In my example the OrderService class might use claims to get the user identity and also for authorization. But I would not want it to know about HTTP level concerns:
public class OrderService
{
    private readonly IBillingApiClient billingClient;
    private readonly ClaimsPrincipal user;

    public OrderService(IBillingApiClient billingClient, ClaimsPrincipal user)
    {
        this.billingClient = billingClient;
        this.user = user;
    }

    public async Task CreateOrder(OrderInput data)
    {
        this.Authorize();
        var order = this.BuildOrder(data); // domain logic, helper not shown
        await this.billingClient.CreateInvoice(order);
    }
}
DI SETUP
To enable my preferred business logic, I would write a little DI plumbing, so that I could inject request scoped dependencies in my preferred way. First, when the app starts, I would create a small middleware class. This will run early in the HTTP request pipeline:
private void ConfigureApiMiddleware(IApplicationBuilder api)
{
    api.UseMiddleware<ClientContextMiddleware>();
}
In the middleware class I would then create a ClientContext object from runtime data. The OrderService class will run later, after next() is called:
public class ClientContextMiddleware
{
    private readonly RequestDelegate next;

    public ClientContextMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var jwt = readJwt(context.Request);
        var requestId = readRequestId(context.Request);

        var holder = context.RequestServices.GetService<ClientContextHolder>();
        holder.ClientContext = new ClientContext(jwt, requestId);

        await this.next(context);
    }
}
In my DI composition at application startup I would express that the API client should be created when it is first referenced. In the HTTP request pipeline, the OrderService request scoped object will be constructed after the middleware has run. The below lambda will then be invoked:
private void RegisterDependencies(IServiceCollection services)
{
    services.AddScoped<IApiClient>(
        ctx =>
        {
            var holder = ctx.GetService<ClientContextHolder>();
            return new ApiClient(holder.ClientContext);
        });

    services.AddScoped<ClientContextHolder>();
}
The holder object is just due to a technology limitation. The MS stack does not allow you to create new request scoped injectable objects at runtime, so you have to update an existing one. In a previous .NET tech stack, the concept of child container per request was made available to developers, so the holder object was not needed.
ASYNC AWAIT
Request scoped objects are stored against the HTTP request object, which is the correct behaviour when using async await. The current thread ID may switch, e.g. from 4 to 6, after the call to the Billing API.
If the OrderService class has a transient scope, it could get recreated when the flow resumes on thread 6. If this is the case, then resolution will continue to work.
SUMMARY
Designing inputs first, then writing some support code if needed is a good approach I think, and it is also useful to know the DI techniques. Personally I think natural request scoped objects that need to be created at runtime should be usable in DI. Some people may prefer a different approach though.
In .NET, the area where I have expertise, this is not an anti-pattern as such; on the contrary, it is the model that many adopt. It is, however, not a model I would follow, for the following reasons:
it is not clear to whoever reads and uses the code where the token comes from, which works against clean code;
you load important information in a place that is frequently accessed by the framework (in the case of .NET Core);
your classes will reference a large object that carries a lot of unnecessary information, when you could have created a leaner model that costs less memory and allocation time. I say this because IHttpContextAccessor carries all of the information related to your request.
Here is how I would take care of readability (clean code) and improve performance:
I would make a middleware or filter in my MVC flow where I would do the authentication part, and create a class like:
public class TokenAuthenticationValues
{
    public string TokenClient { get; set; }
    public string TokenValue { get; set; }
}
Of course my class is just an example, but in my middleware I would load its token values after calling the necessary APIs (this model needs an interface, and it needs to be registered with .AddScoped() in the case of .NET).
That way I would use it in my methods only by injecting ITokenAuthenticationValues in the constructor, and I would have clear and clean information loaded in memory for the entire request.
If it is necessary to change the token in the middle of the request, any class can access it and change its value.
I would have less unused memory allocated in my classes since, unlike the IHttpContextAccessor contract, ITokenAuthenticationValues only has relevant information.
Hope this helps.

Do local objects queried from the database cause a race condition?

public class A {
    String name = "foo";
}

@Service
public class B {

    private final Repository repo;

    public B(Repository repo) {
        this.repo = repo;
    }

    public void someMethod() {
        A a = repo.findByName("foo");
        a.name = Thread.currentThread().getName();
        repo.save(a);
    }
}
Imagine that two threads execute someMethod at the same time, especially with the JPA/Hibernate implementation.
My opinion is that there is a race condition.
The first and second threads obtain the same object with name "foo". If I am not wrong, without an optimistic lock this scenario goes wrong (one update silently overwrites the other).
Also, an optimistic lock throws an exception, so do I need to use a pessimistic lock to make this work correctly?
And if I enable the second-level cache with a distributed in-memory cache (Redis, Hazelcast), what will happen?
It is an IoT project and millions of devices are calling this API and service.
Do I need to approach this with eventual consistency?
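For reference, a minimal sketch of JPA optimistic locking, assuming you can add a version column to the entity (the field names here are illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class A {

    @Id
    private Long id;

    private String name;

    // With a @Version column, Hibernate appends "AND version = ?" to the UPDATE.
    // If another thread saved the row first, the update matches zero rows and an
    // OptimisticLockException is thrown (Spring wraps it as
    // ObjectOptimisticLockingFailureException) instead of silently losing one write.
    @Version
    private Long version;
}

The caller then catches that exception and retries or gives up; under very high contention (millions of devices hitting the same rows), a pessimistic lock or a redesign towards queueing and eventual consistency may indeed be the better fit, which is essentially what the question is weighing.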

Java Servlets - block all threads using common timer

On Tomcat 6, I have a servlet running which accepts requests and passes these onto an external system.
There is a throttling limitation on the external system - if the number of requests exceed a certain number per second, then the external system responds with a Http 503.
No further requests may hit the external system for at least 2 seconds or else the external system will restart its throttling timer.
Initially, I detected the 503 HTTP response and did a Thread.sleep(2000), but that is wrong as it doesn't prevent the servlet from servicing other requests on other threads; once a 503 response is detected, I need to block all threads for at least 2 seconds.
Ideally, I would prefer the blocked threads not to wake up all at the same time but say a 100ms apart so that requests would be handled in order.
I've looked at the Condition and ReentrantLock but unsure if these are appropriate.
Just create a global (static) date variable in the servlet. When you get a 503, change this variable from null to the local time. The servlet should always check this variable before contacting the external system. If the variable is null, or more than 2 seconds have passed, then you can proceed. Otherwise block the thread (or throw an exception).
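A minimal sketch of that idea, assuming a single shared timestamp is enough (the class and method names are made up for illustration):

import java.util.concurrent.atomic.AtomicLong;

public class ThrottleGate {

    private static final long BACKOFF_MS = 2000;

    // 0 means "no 503 seen recently"; otherwise the time the last 503 was received.
    private static final AtomicLong last503At = new AtomicLong(0);

    /** Check this before contacting the external system. */
    public static boolean mayProceed() {
        long last = last503At.get();
        return last == 0 || System.currentTimeMillis() - last >= BACKOFF_MS;
    }

    /** Call this when the external system answers with HTTP 503. */
    public static void record503() {
        last503At.set(System.currentTimeMillis());
    }
}

Each worker thread calls mayProceed() first; if it returns false, the thread can sleep briefly and re-check, return an error to the client, or queue the work, depending on the behaviour you want.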
This looks like calling Amazon services to me, and it can be managed quite easily.
You need a central, managed module for doing it, and it comes as a single module.
The important thing is that you should not hit the throttling limit at all; if you receive so many requests that you would reach that value, respond to your clients that they should check the result later (treat it as asynchronous work).
If the request is business-critical (such as capturing a payment), you also have to implement failover in the module, simply by persisting the request data to the database, so if anything fails you will still have the data in the database.
If you are familiar with MQ architectures, that would be the best solution, since they are designed for exactly this kind of thing; but if you would rather have your own, you may accept all requests and let this module manage the calls to the external system.
First, you may have an entity class which carries the request info, like:
class entity { public String id, srv, blah_blah; }
Second, a stand-alone module for accepting and processing the requests, which would also be the context for the requests, like the following:
class business {
    private business() {} // fan of OOP? OK, go for a singleton
    private static final ArrayList<entity> ctx = new ArrayList<entity>();

    static public void accept_request(entity e) { _persist(e); ctx.add(e); }
    static private void _persist(entity e) { /* persist it to the db */ }
    static private void _done(entity e) { _remove(e); /* informing 3rd parties, if any */ }
    static private void _remove(entity e) { /* remove it from the db, it's done */ }
    static private int do_work(entity e) { /* do the real business */ return 0; } // 0 = success, 1 = fail, 2 = ...
}
But it's not complete yet; you now need a way to call do_work(), so I suggest a background thread (it could be a daemon thread too).
So clients just push the requests to this context-like class, and here we need the thread, like the following:
class business {
    // ...
    static public void accept_request(entity e) {
        _persist(e);
        synchronized (ctx) { ctx.add(e); ctx.notify(); }
    }
    // ...
    private static final Runnable r = new Runnable() {
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    entity e;
                    synchronized (ctx) {
                        while (ctx.isEmpty()) { ctx.wait(); }
                        e = ctx.remove(0);
                    }
                    if (do_work(e) == 0) {
                        _done(e);
                    } else {
                        synchronized (ctx) { ctx.add(e); } // give it another chance, maybe!
                    }
                    Thread.sleep(100 /* appropriate sleep time */);
                }
            } catch (Throwable wt) { /* catch interrupts and other signals; the thread is shutting down */ }
        }
    };

    static private Thread t;
    static public void start_module() { t = new Thread(r); t.start(); }
    static public void stop_module() { t.interrupt(); } // Thread.stop() is deprecated; interrupting is enough here
    // ...
}
Tip: try not to start the thread (i.e. call start_module()) outside the container initialisation process, or you will have a memory leak. The best solution is to start this module (once) from a servlet's init() method, and to halt the thread on application shutdown (destroy()).

Is it acceptable to introduce volatile state into singleton beans?

I want to create proper stateless services, i.e., beans with singleton scope, but sometimes state creeps in. Current candidates in the application I work on are
caching data
keeping Futures for scheduled tasks around to be able to cancel them on demand later
A simplified example service for the Futures could be
class SchedulingService {

    @Autowired
    TaskScheduler scheduler;

    Map<String, ScheduledFuture> scheduledEvents = new HashMap<>();

    public void scheduleTask(String id, MyTask task) {
        if (scheduledEvents.containsKey(id)) {
            scheduledEvents.remove(id).cancel(false);
        }
        persistTask(task);
        scheduledEvents.put(id, scheduler.schedule(task, task.createTrigger()));
    }

    void persistTask(MyTask task) { /* persist logic here */ }
}
I'm sure requirements like these will pop up all the time. Since the data that should be kept in memory doesn't have to be persisted because it is just derived information from data in the database, is it acceptable to keep state this way? And if not is there a better way of doing this?
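One note independent of the design question: since the bean is a singleton, the plain HashMap in the example above can be corrupted by concurrent calls. A minimal sketch of the same service with a ConcurrentHashMap (same names as above, behaviour otherwise unchanged) might look like this:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ScheduledFuture;

class SchedulingService {

    @Autowired
    TaskScheduler scheduler;

    // ConcurrentHashMap so concurrent calls on the singleton don't corrupt the map.
    private final Map<String, ScheduledFuture<?>> scheduledEvents = new ConcurrentHashMap<>();

    public void scheduleTask(String id, MyTask task) {
        persistTask(task);
        // put() returns the previous future for this id (if any); cancel it after replacing it.
        ScheduledFuture<?> previous =
                scheduledEvents.put(id, scheduler.schedule(task, task.createTrigger()));
        if (previous != null) {
            previous.cancel(false);
        }
    }

    void persistTask(MyTask task) { /* persist logic here */ }
}

Whether keeping such derived state in a singleton is acceptable is a separate question, but if you do, the container at least has to be safe for concurrent access.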

Akka: Cleanup of dynamically created actors necessary when they have finished?

I have implemented an Actor system using Akka and its Java API UntypedActor. In it, one actor (type A) starts other actors (type B) dynamically on demand, using getContext().actorOf(...);. Those B actors will do some computation which A doesn't really care about anymore. But I'm wondering: is it necessary to clean up those actors of type B when they have finished? If so, how?
By having B actors call getContext().stop(getSelf()) when they're done?
By having B actors call getSelf().tell(Actors.poisonPill()); when they're done? [this is what I'm using now].
By doing nothing?
By ...?
The docs are not clear on this, or I have overlooked it. I have some basic knowledge of Scala, but the Akka sources aren't exactly entry-level stuff...
What you are describing are single-purpose actors created per “request” (defined in the context of A), which handle a sequence of events and then are done, right? That is absolutely fine, and you are right to shut those down: if you don’t, they will accumulate over time and you run into a memory leak. The best way to do this is the first of the possibilities you mention (most direct), but the second is also okay.
A bit of background: actors are registered within their parent in order to be identifiable (e.g. needed in remoting, but also in other places), and this registration keeps them from being garbage collected. OTOH, each parent has the right to access the children it created, hence no automatic termination (i.e. by Akka) makes sense; instead, explicit shutdown in user code is required.
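For illustration, a sketch of a single-purpose B actor that stops itself once its work is done, using the first option from the question (the Work message type is made up):

import akka.actor.UntypedActor;

public class WorkerB extends UntypedActor {

    // trivial illustrative message type
    public static class Work { }

    @Override
    public void onReceive(Object message) {
        if (message instanceof Work) {
            doComputation((Work) message);
            // Stop this actor once it is done, so it is deregistered from its
            // parent and can be garbage collected instead of accumulating.
            getContext().stop(getSelf());
        } else {
            unhandled(message);
        }
    }

    private void doComputation(Work work) { /* ... */ }
}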
In addition to Roland Kuhn's answer, rather than create a new actor for every request, you could create a predefined set of actors that share the same dispatcher, or you can use a router that distributes requests to a pool of actors.
The Balancing Pool Router, for example, allows you to have a fixed set of actors of a particular type share the same mailbox:
akka.actor.deployment {
/parent/router9 {
router = balancing-pool
nr-of-instances = 5
}
}
Read the documentation on dispatchers and on routing for further detail.
I was profiling (with VisualVM) one of the sample cluster applications from the Akka documentation, and I see garbage collection cleaning up the per-request actors during every GC, so I am unable to completely understand the recommendation to explicitly kill the actor after use. My ActorSystem and actors are managed by the Spring IoC container, and I use the Spring extension's indirect actor-producer to create actors. The "aggregator" actor is getting garbage collected on every GC; I monitored the number of instances in VisualVM.
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class StatsService extends AbstractActor {

    private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);

    @Autowired
    private ActorSystem actorSystem;

    private ActorRef workerRouter;

    @Override
    public void preStart() throws Exception {
        System.out.println("Creating Router" + this.getClass().getCanonicalName());
        workerRouter = getContext().actorOf(SPRING_PRO.get(actorSystem)
                .props("statsWorker").withRouter(new FromConfig()), "workerRouter");
        super.preStart();
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(StatsJob.class, job -> !job.getText().isEmpty(), job -> {
                    final String[] words = job.getText().split(" ");
                    final ActorRef replyTo = sender();
                    final ActorRef aggregator = getContext().actorOf(SPRING_PRO.get(actorSystem)
                            .props("statsAggregator", words.length, replyTo));
                    for (final String word : words) {
                        workerRouter.tell(new ConsistentHashableEnvelope(word, word),
                                aggregator);
                    }
                })
                .build();
    }
}
Actors by default do not consume much memory. If the application intends to use actor B later on, you can keep it alive. If not, you can shut it down via a PoisonPill. As long as your actors are not holding resources, leaving an actor around should be fine.
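For completeness, the PoisonPill variant on the newer Akka Java API looks roughly like this (the helper and the childOfTypeB reference are illustrative), assuming the parent keeps a reference to the child it created:

import akka.actor.ActorRef;
import akka.actor.PoisonPill;

public class Shutdown {

    // A PoisonPill is queued behind whatever is already in the child's mailbox;
    // once it is processed, the child (type B) is stopped and deregistered from its parent.
    static void stopWhenDone(ActorRef childOfTypeB) {
        childOfTypeB.tell(PoisonPill.getInstance(), ActorRef.noSender());
    }
}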
