Queue server that allows a global consume rate for all workers - Java

I have many tasks that my servers need to handle. These tasks must be processed at a specific rate because of an API call rate limit that the workers need to respect.
To guarantee that these tasks are not executed at a rate higher than the API rate limit, I would like to be able to configure the rate at which the queue releases messages for handling.
Additionally, the queue has to keep the ordering of the pushed messages and release them in FIFO order to provide fairness.
Lastly, it would be great if, code-wise, this were fairly transparent to use: a client makes an API call to send the message to the queue, and the same client receives the message back after it is released by the queue according to the work rate and the relevant order, e.g. using RxJava:
waitForMessageToBeReleased(message, queue)
    .subscribe(message -> ...); // message received by the same client after it was
                                // released by the queue according to the defined work rate
I am currently using Redis to control the execution rate by creating a key with a specific TTL; other calls wait until this key expires. However, this does not handle ordering and can cause clients to starve under high load.
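For reference, a minimal sketch of that TTL gate, assuming the Jedis client and an illustrative key name; SET NX EX makes the create-if-absent and the TTL atomic:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisRateGate {
    private final Jedis jedis = new Jedis("localhost", 6379);

    /**
     * At most one caller per window gets "OK"; everyone else sees null
     * (the key already exists) and must wait for the TTL to expire.
     */
    public boolean tryAcquire(int windowSeconds) {
        String result = jedis.set("rate:gate", "1",
                SetParams.setParams().nx().ex(windowSeconds));
        return "OK".equals(result);
    }
}

As the question notes, callers polling this key race each other when it expires, which is exactly what breaks ordering and starves slow clients.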

Cadence Workflow is capable of supporting your use case with minimal effort.
Here is a strawman design that satisfies your requirements:
Send a signalWithStart request to a user workflow, using the userID as the workflow ID. It either delivers the signal to a running workflow, or first starts the workflow and then delivers the signal to it.
All requests to that workflow are buffered by it. Cadence provides a hard guarantee that only one workflow with a given ID can exist in the open state, so all signals (events) are guaranteed to be buffered in the workflow that belongs to the user.
An internal workflow event loop dispatches these requests one by one.
When the buffer is empty, the workflow can complete.
Here is the workflow code that implements it in Java (a Go client is also supported):
public interface SerializedExecutionWorkflow {

    @WorkflowMethod
    void execute();

    @SignalMethod
    void addTask(Task t);
}

public interface TaskProcessorActivity {

    @ActivityMethod
    void process(Task poll);
}

public class SerializedExecutionWorkflowImpl implements SerializedExecutionWorkflow {

    private final Queue<Task> taskQueue = new ArrayDeque<>();
    private final TaskProcessorActivity processor =
            Workflow.newActivityStub(TaskProcessorActivity.class);

    @Override
    public void execute() {
        while (!taskQueue.isEmpty()) {
            processor.process(taskQueue.poll());
        }
    }

    @Override
    public void addTask(Task t) {
        taskQueue.add(t);
    }
}
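The strawman above serializes tasks but does not by itself enforce a rate. A minimal, hedged way to add one, assuming the Cadence Java client's durable Workflow.sleep and an illustrative one-second interval, is to pause the event loop between activity invocations:

    @Override
    public void execute() {
        // Illustrative fixed delay; because Workflow.sleep is durable, the
        // pacing survives worker restarts, so at most one task runs per interval.
        Duration delayBetweenTasks = Duration.ofSeconds(1);
        while (!taskQueue.isEmpty()) {
            processor.process(taskQueue.poll());
            Workflow.sleep(delayBetweenTasks);
        }
    }

Note that this paces each user's workflow (one workflow per user ID), not all workers globally; a true global rate would need a different design, such as routing every task through a single rate-limiting workflow.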
And here is the code that enqueues a task to the workflow through the signal method:
private void addTask(WorkflowClient cadenceClient, Task task) {
    // Set the workflowId to the userId
    WorkflowOptions options = new WorkflowOptions.Builder()
            .setWorkflowId(task.getUserId())
            .build();
    // Use the workflow interface stub to start/signal the workflow instance
    SerializedExecutionWorkflow workflow =
            cadenceClient.newWorkflowStub(SerializedExecutionWorkflow.class, options);
    BatchRequest request = cadenceClient.newSignalWithStartRequest();
    request.add(workflow::execute);
    request.add(workflow::addTask, task);
    cadenceClient.signalWithStart(request);
}
Cadence offers a lot of other advantages over using queues for task processing:
Built-in exponential retries with an unlimited expiration interval
Failure handling. For example, it allows you to execute a task that notifies another service if both updates could not succeed within a configured interval.
Support for long-running, heartbeating operations
Ability to implement complex task dependencies. For example, chaining of calls, or compensation logic in case of unrecoverable failures (SAGA)
Complete visibility into the current state of the update. For example, with queues all you know is whether there are messages in a queue, and you need an additional DB to track the overall progress. With Cadence every event is recorded.
Ability to cancel an update in flight.
Distributed CRON support
See the presentation that goes over the Cadence programming model.

Related

Concept of promises in Java

Is there a concept of using promises in Java (just like it is used in JavaScript) instead of using nested callbacks?
If so, is there an example of how the callback is implemented in Java and how handlers are chained?
Yep! Java 8 calls it CompletableFuture. It lets you implement stuff like this:
class MyCompletableFuture<T> extends CompletableFuture<T> {

    static final Executor myExecutor = ...;

    public MyCompletableFuture() { }

    public <U> CompletableFuture<U> newIncompleteFuture() {
        return new MyCompletableFuture<U>();
    }

    public Executor defaultExecutor() {
        return myExecutor;
    }

    public void obtrudeValue(T value) {
        throw new UnsupportedOperationException();
    }

    public void obtrudeException(Throwable ex) {
        throw new UnsupportedOperationException();
    }
}
The basic design is a semi-fluent API in which you can arrange (sequential or async) (functions or actions) triggered on completion of i) one other stage ("then"), or ii) two other stages ("andThen" and "orThen"). As in:

MyCompletableFuture<String> f = ...; g = ...
f.then(s -> aStringFunction(s)).thenAsync(s -> ...);

or

f.andThen(g, (s, t) -> combineStrings).or(CompletableFuture.async(() -> ...))...
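Note that then/andThen/orThen appear to come from an early design sketch rather than the shipped API; the methods that actually landed in Java 8 are thenApply, thenCombine, applyToEither, and friends. A minimal sketch of the same chaining with the real API:

import java.util.concurrent.CompletableFuture;

public class ChainingExample {
    public static void main(String[] args) {
        CompletableFuture<String> f = CompletableFuture.supplyAsync(() -> "foo");
        CompletableFuture<String> g = CompletableFuture.supplyAsync(() -> "bar");

        // "then": transform the result when f completes
        CompletableFuture<String> upper = f.thenApply(String::toUpperCase);

        // "andThen": combine the results of f and g once both complete
        CompletableFuture<String> both = f.thenCombine(g, (s, t) -> s + t);

        // "orThen": act on whichever of f or g completes first
        CompletableFuture<String> either = f.applyToEither(g, s -> s + "!");

        System.out.println(upper.join());  // FOO
        System.out.println(both.join());   // foobar
        System.out.println(either.join()); // "foo!" or "bar!", whichever wins
    }
}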
UPDATE 7/20/17
I wanted to add that there is also a library called "ReactFX", which is meant to be JavaFX as a reactive framework. There are many reactive Java libraries from what I've seen, and since Play is based on the reactive principle, I would assume that these reactive libraries follow the same principle of non-blocking I/O and async calls between server and client, with communication initiated by either end.
These libraries seem to be made for the client side of things. There might be a server-side reactive library as well, but I would assume it is wiser to use Play! with one of these client-side reactive libraries.
You can take a look at https://www.playframework.com/
which implements this functionality here:
https://www.playframework.com/documentation/2.2.0/api/java/play/libs/F.Promise.html
Additional reading: https://www.playframework.com/documentation/2.5.x/JavaAsync
Creating non-blocking actions
Because of the way Play works, action code must be as fast as possible, i.e., non-blocking. So what should we return from our action if we are not yet able to compute the result? We should return the promise of a result!
Java 8 provides a generic promise API called CompletionStage. A CompletionStage<Result> will eventually be redeemed with a value of type Result. By using a CompletionStage<Result> instead of a normal Result, we are able to return from our action quickly without blocking anything. Play will then serve the result as soon as the promise is redeemed.
The web client will be blocked while waiting for the response, but nothing will be blocked on the server, and server resources can be used to serve other clients.
How to create a CompletionStage
To create a CompletionStage<Result> we need another promise first: the promise that will give us the actual value we need to compute the result:
CompletionStage<Double> promiseOfPIValue = computePIAsynchronously();
CompletionStage<Result> promiseOfResult = promiseOfPIValue.thenApply(pi ->
    ok("PI value computed: " + pi)
);
Play asynchronous API methods give you a CompletionStage. This is the case when you are calling an external web service using the play.libs.WS API, or if you are using Akka to schedule asynchronous tasks or to communicate with Actors using play.libs.Akka.
A simple way to execute a block of code asynchronously and to get a CompletionStage is to use the CompletableFuture.supplyAsync() helper:
CompletionStage<Integer> promiseOfInt = CompletableFuture.supplyAsync(() -> intensiveComputation());
Note: It’s important to understand which thread code runs on with promises. Here, the intensive computation will just be run on another thread.
You can’t magically turn synchronous IO into asynchronous by wrapping it in a CompletionStage. If you can’t change the application’s architecture to avoid blocking operations, at some point that operation will have to be executed, and that thread is going to block. So in addition to enclosing the operation in a CompletionStage, it’s necessary to configure it to run in a separate execution context that has been configured with enough threads to deal with the expected concurrency. See Understanding Play thread pools for more information.
It can also be helpful to use Actors for blocking operations. Actors provide a clean model for handling timeouts and failures, setting up blocking execution contexts, and managing any state that may be associated with the service. Also Actors provide patterns like ScatterGatherFirstCompletedRouter to address simultaneous cache and database requests and allow remote execution on a cluster of backend servers. But an Actor may be overkill depending on what you need.
Async results
We have been returning Result up until now. To send an asynchronous result our action needs to return a CompletionStage<Result>:
public CompletionStage<Result> index() {
    return CompletableFuture.supplyAsync(() -> intensiveComputation())
            .thenApply(i -> ok("Got result: " + i));
}
Actions are asynchronous by default
Play actions are asynchronous by default. For instance, in the controller code below, the returned Result is internally enclosed in a promise:
public Result index() {
    return ok("Got request " + request() + "!");
}
Note: Whether the action code returns a Result or a CompletionStage<Result>, both kinds of returned object are handled internally in the same way. There is a single kind of Action, which is asynchronous, and not two kinds (a synchronous one and an asynchronous one). Returning a CompletionStage is a technique for writing non-blocking code.
Some info on CompletionStage:
https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html
It is the interface implemented by CompletableFuture, the class mentioned in @Debosmit Ray's answer.
This YouTube video by LinkedIn dev Mr. Brikman explains a bit about promises in Play:
https://youtu.be/8z3h4Uv9YbE?t=15m46s
and
https://www.youtube.com/watch?v=4b1XLka0UIw
I believe the first video gives an example of a promise; the second video might also give some good info. I don't really recall which video had what content. Either way, the information here is very good and worth looking into.
I personally do not use Play, but I have been looking at it for a long, long time, as it does a lot of really good stuff.
If you want to use promises even on Java versions before 8, "java-promise" may be useful (of course, it also works with Java 8).
You can easily control asynchronous operations, like JavaScript's Promise.
https://github.com/riversun/java-promise
Example:
import org.riversun.promise.Promise;

public class Example {
    public static void main(String[] args) {
        Promise.resolve("foo")
                .then(new Promise((action, data) -> {
                    new Thread(() -> {
                        String newData = data + "bar";
                        action.resolve(newData);
                    }).start();
                }))
                .then(new Promise((action, data) -> {
                    System.out.println(data);
                    action.resolve();
                }))
                .start();
        System.out.println("Promise in Java");
    }
}
result:
Promise in Java
foobar

Java Servlets - block all threads using common timer

On Tomcat 6, I have a servlet running which accepts requests and passes these onto an external system.
There is a throttling limitation on the external system - if the number of requests exceed a certain number per second, then the external system responds with a Http 503.
No further requests may hit the external system for at least 2 seconds or else the external system will restart its throttling timer.
Initially, I detected the 503 HttpResponse and did a Thread.sleep(2000), but that is wrong, as it doesn't prevent the servlet from servicing other requests on other threads - once a 503 response is detected, I need to block all threads for at least 2 seconds.
Ideally, I would prefer the blocked threads not to wake up all at the same time, but, say, 100ms apart, so that requests would be handled in order.
I've looked at Condition and ReentrantLock but am unsure whether these are appropriate.
Just create a global (static) date variable in the servlet. When you get a 503, set this variable from null to the current time. The servlet should always check this variable before contacting the external system. If the variable is null, or more than 2 seconds have passed, you can proceed. Otherwise, block the thread (or throw an exception).
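A minimal sketch of that suggestion, using an AtomicLong timestamp (so the check stays lock-free) instead of a date object; all names are illustrative:

import java.util.concurrent.atomic.AtomicLong;

public class ThrottleGate {
    // 0 means "no 503 seen yet"; otherwise the time the last 503 arrived
    private static final AtomicLong lastRejectMillis = new AtomicLong(0);
    private static final long BACKOFF_MILLIS = 2000;

    /** Call this when the external system answers with HTTP 503. */
    public static void recordReject() {
        lastRejectMillis.set(System.currentTimeMillis());
    }

    /** Check this before every call to the external system. */
    public static boolean mayProceed() {
        long last = lastRejectMillis.get();
        return last == 0 || System.currentTimeMillis() - last >= BACKOFF_MILLIS;
    }
}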
This looks like calling Amazon services to me, and it can be managed quite easily.
You need a central, managed module for doing this, and it can be built as a single module.
The important thing is that you should not reach the throttling limit at all; if you get too many requests that would exceed this value, you should respond to your client to check the result later (i.e., handle it as asynchronous work).
If the request is important business (such as capturing a payment), you also have to implement failover in the module, simply by persisting the request data to the database, so that if anything fails, you still have the data.
If you are familiar with MQ architecture, that would be the best solution, since these systems are designed for this kind of thing; but if you want your own, you can accept all requests and have the module manage the calls to the external system.
First, you might have an entity class which carries the request info, like the following:
class Entity {
    public String id, srv, blahBlah;
}
Second, a stand-alone module for accepting and processing the requests, which would also be the context for the requests, like the following:
class Business { // fan of OOP? OK, go for a singleton
    private Business() { }

    private static final ArrayList<Entity> ctx = new ArrayList<Entity>();

    public static void acceptRequest(Entity e) { persist(e); ctx.add(e); }

    private static void persist(Entity e) { /* persist it to the DB */ }

    private static void done(Entity e) { remove(e); /* inform 3rd parties, if any */ }

    private static void remove(Entity e) { /* remove it from the DB; it's done */ }

    private static int doWork(Entity e) { /* do the real business */ return 0; } // 0 = success, 1 = fail, 2 = ...
}
But it's not complete yet: now you need a way to call doWork(), so I suggest a background thread (it could be a daemon, too!).
Clients just push requests to this context-like class, and the thread processes them, like the following:
class Business {
    // ...
    public static void acceptRequest(Entity e) {
        persist(e);
        synchronized (ctx) { ctx.add(e); ctx.notify(); } // wake the worker thread
    }
    // ...
    private static final Runnable r = new Runnable() {
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Entity e;
                    synchronized (ctx) {
                        while (ctx.isEmpty()) { ctx.wait(); } // park until a request arrives
                        e = ctx.remove(0);
                    }
                    if (doWork(e) == 0) { done(e); }
                    else { synchronized (ctx) { ctx.add(e); } } // give it another chance, maybe!
                    Thread.sleep(100 /* appropriate sleep time */);
                }
            } catch (InterruptedException ie) {
                // interrupted while waiting or sleeping: let the thread exit
            }
        }
    };
    private static Thread t;

    public static void startModule() { t = new Thread(r); t.start(); }
    public static void stopModule() { t.interrupt(); } // Thread.stop() is deprecated and unsafe
}
Tip: try not to start the thread (by calling startModule()) outside the container's initialization process, or you will have a memory leak! The best solution is to start the thread (once) from the init() method of a servlet, and of course to halt it on application shutdown (destroy()).
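A hedged sketch of that lifecycle advice using a ServletContextListener, which ties the module to the webapp's own startup and shutdown (@WebListener requires Servlet 3.0; on Tomcat 6 the listener would be declared in web.xml instead):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class ModuleLifecycle implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Business.startModule(); // start the worker exactly once, with the webapp
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        Business.stopModule(); // interrupt the worker so the container can unload cleanly
    }
}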

TomEE chokes on too many @Asynchronous operations

I am using Apache TomEE 1.5.2 JAX-RS, pretty much out of the box, with the predefined HSQLDB.
The following is simplified code. I have a REST-style interface for receiving signals:
@Stateless
@Path("signal")
public class SignalEndpoint {

    @Inject
    private SignalStore store;

    @POST
    public void post() {
        store.createSignal();
    }
}
Receiving a signal triggers a lot of stuff. The store will create an entity, then fire an asynchronous event.
public class SignalStore {

    @PersistenceContext
    private EntityManager em;

    @EJB
    private EventDispatcher dispatcher;

    @Inject
    private Event<SignalEntity> created;

    public void createSignal() {
        SignalEntity entity = new SignalEntity();
        em.persist(entity);
        dispatcher.fire(created, entity);
    }
}
The dispatcher is very simple, and merely exists to make the event handling asynchronous.
@Stateless
public class EventDispatcher {

    @Asynchronous
    public <T> void fire(Event<T> event, T parameter) {
        event.fire(parameter);
    }
}
Receiving the event is something else, which derives data from the signal, stores it, and fires another asynchronous event:
@Stateless
public class DerivedDataCreator {

    @PersistenceContext
    private EntityManager em;

    @EJB
    private EventDispatcher dispatcher;

    @Inject
    private Event<DerivedDataEntity> created;

    @Asynchronous
    public void onSignalEntityCreated(@Observes SignalEntity signalEntity) {
        DerivedDataEntity entity = new DerivedDataEntity(signalEntity);
        em.persist(entity);
        dispatcher.fire(created, entity);
    }
}
Reacting to that is even a third layer of entity creation.
To summarize, I have a REST call, which synchronously creates a SignalEntity, which asynchronously triggers the creation of a DerivedDataEntity, which asynchronously triggers the creation of a third type of entity. It all works perfectly, and the storage processes are beautifully decoupled.
Except for when I programmatically trigger a lot (e.g. 1000) of signals in a for-loop. Depending on my AsynchronousPool size, after processing signals (quite fast) amounting to about half of that size, the application completely freezes for up to several minutes. Then it resumes, processes about the same number of signals quite fast, and freezes again.
I have been playing around with the AsynchronousPool settings for the last half hour. Setting it to 2000, for instance, easily makes all my signals be processed at once, without any freezes. But the system isn't sane either after that: triggering another 1000 signals resulted in them being created all right, but the creation of derived data never happened.
Now I am completely at a loss as to what to do. I can of course get rid of all those asynchronous events and implement some sort of queue myself, but I always thought the point of an EE container was to relieve me of such tedium. Asynchronous EJB events should already bring their own queue mechanism. One which should not freeze as soon as the queue is too full.
Any ideas?
UPDATE:
I have now tried it with 1.6.0-SNAPSHOT. It behaves a little differently: it still doesn't work, but I do get an exception:
Aug 01, 2013 3:12:31 PM org.apache.openejb.core.transaction.EjbTransactionUtil handleSystemException
SEVERE: EjbTransactionUtil.handleSystemException: fail to allocate internal resource to execute the target task
javax.ejb.EJBException: fail to allocate internal resource to execute the target task
at org.apache.openejb.async.AsynchronousPool.invoke(AsynchronousPool.java:81)
at org.apache.openejb.core.ivm.EjbObjectProxyHandler.businessMethod(EjbObjectProxyHandler.java:240)
at org.apache.openejb.core.ivm.EjbObjectProxyHandler._invoke(EjbObjectProxyHandler.java:86)
at org.apache.openejb.core.ivm.BaseEjbProxyHandler.invoke(BaseEjbProxyHandler.java:303)
at <<... my code ...>>
...
Caused by: java.util.concurrent.RejectedExecutionException: Timeout waiting for executor slot: waited 30 seconds
at org.apache.openejb.util.executor.OfferRejectedExecutionHandler.rejectedExecution(OfferRejectedExecutionHandler.java:55)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
at org.apache.openejb.async.AsynchronousPool.invoke(AsynchronousPool.java:75)
... 38 more
It is as though TomEE does not do ANY queueing of operations. If no thread is free to process at the moment of the call, tough luck. Surely this cannot be intended..?
UPDATE 2:
Okay, I seem to have stumbled upon a semi-solution: setting the AsynchronousPool.QueueSize property to max int solves the freeze. But questions remain: why is the QueueSize so limited in the first place, and, more worryingly, why would this block the entire application? If the queue is full, it blocks, but as soon as a task is taken from it another should pop in, right? The queue appears to be blocked until it is completely empty again.
UPDATE 3:
For anyone who wants to have a go: http://github.com/JanDoerrenhaus/tomeefreezetestcase
UPDATE 4:
As it turns out, increasing the queue size does NOT solve the problem; it merely delays it. The problem remains the same: too many asynchronous operations at once, and TomEE chokes so badly that it cannot even undeploy the application on termination anymore.
So far, my diagnosis is that the task cleanup does not work properly. My tasks are all very small and fast (see the test case on github). I was already afraid that it would be OpenJPA or HSQLDB slowing down on too many concurrent calls, but I commented out all em.persist calls, and the problem remained the same. So if my tasks are quite small and fast, but still manage to block out TomEE so bad that it could not get any further task in after 30 seconds (javax.ejb.EJBException: fail to allocate internal resource to execute the target task), I would imagine that completed tasks linger, clogging up the pipe, so to speak.
How could I resolve this issue?
Basically, BlockingQueues use locks to ensure the consistency of data and avoid data loss, so in a too highly concurrent environment a lot of tasks will be rejected (your case).
You can play on trunk with the RejectedExecutionHandler implementation to retry offering the task. One implementation can be:
new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(final Runnable r, final ThreadPoolExecutor executor) {
        for (int i = 0; i < 10; i++) {
            if (executor.getQueue().offer(r)) {
                return;
            }
            try {
                Thread.sleep(50);
            } catch (final InterruptedException e) {
                // no-op
            }
        }
        throw new RejectedExecutionException();
    }
}
It works even better with a random sleep (between a min and a max).
The idea is basically: if the queue is full, wait some short time to reduce the concurrency.
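A hedged sketch of that randomized variant, with illustrative min/max bounds and ThreadLocalRandom providing the jitter:

import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.ThreadPoolExecutor;

public class RetryWithJitterHandler implements RejectedExecutionHandler {
    private static final long MIN_SLEEP_MS = 20, MAX_SLEEP_MS = 100; // illustrative bounds

    @Override
    public void rejectedExecution(final Runnable r, final ThreadPoolExecutor executor) {
        for (int i = 0; i < 10; i++) {
            if (executor.getQueue().offer(r)) {
                return; // a slot freed up, task queued
            }
            try {
                // Random jitter spreads the retries out so rejected callers
                // don't all hammer the queue at the same instant.
                Thread.sleep(ThreadLocalRandom.current().nextLong(MIN_SLEEP_MS, MAX_SLEEP_MS));
            } catch (final InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        throw new RejectedExecutionException("queue still full after retries");
    }
}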
This is configurable through WEB-INF/application.properties: https://issues.apache.org/jira/browse/TOMEE-1012
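For reference, a sketch of what that configuration might look like; the QueueSize key is the one named in UPDATE 2 above, but exact property names should be verified against the TOMEE-1012 ticket:

# WEB-INF/application.properties (sketch; verify key names against TOMEE-1012)
AsynchronousPool.QueueSize = 100000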

Asynchronous Multithreaded

I have a centralized socket class which is responsible for sending and retrieving data. I have 2 classes:
one which listens to the input stream, and
the other which takes care of writing to it.
Listening runs in an infinite loop and then processes the messages. For synchronous calls, I block the read and reset these values once I receive the response from the server.
Now I am stuck with asynchronous calls. I have 3 methods in my service:
getSomething
readSomething
saySomething
In my getSomething, I want to implement async functionality based on the boolean flag provided. When my app starts, I also start both of my threads and then send concurrent requests.
For example, readSomething first and then getSomething: I get the return value for readSomething in getSomething, which is not what I desire, and I can see in the logs that the output for getSomething comes after a while.
It looks like the Future object requires submitting a new task which will run in its own thread, but the way I have designed this app, I just can't create a new thread. Can anyone give me insights on how I should handle this asynchronous flow, e.g. with a flow chart?
If you're doing work asynchronously, that means the other parts of the application do not care when the async work is done.
What you'll normally want to do is notify the other part when the async work is done. For this, you'll want to use the "Observer Pattern" (the article includes flow charts).
The basic idea is that your app starts the async work and is notified when the work is done. That way, you can loosely couple the two parts of the application. A quick example:
/**
 * The observer.
 */
public interface AsyncWorkDoneListener {
    /**
     * This method will be called when the async-thread is done.
     */
    public void done(Object unit);
}

/**
 * The worker (which does the async work on another thread).
 */
public class AsyncWorker {

    private AsyncWorkDoneListener listener;

    /**
     * Set (you might want to maintain a list here) the current
     * listener for this "AsyncWorker".
     */
    public void setListener(AsyncWorkDoneListener listener) {
        this.listener = listener;
    }

    /**
     * Will do the async work.
     */
    public void doWork() {
        // Do the work in another thread...
        // When done, notify the registered listener with the
        // result of the async work:
        this.listener.done(the_object_containing_the_result);
    }
}

/**
 * The application.
 */
public class App implements AsyncWorkDoneListener {

    public void someMethod() {
        // Work on something asynchronously:
        mAsyncWorker.setListener(this);
        mAsyncWorker.doWork();
    }

    @Override
    public void done(Object unit) {
        // The async work has finished; do something with
        // the result in "unit".
    }
}
A couple of insights:
you need dataflow, not flow charts
if you cannot create a new thread for each task, you can use a fixed-size thread pool created by java.util.concurrent.Executors.newFixedThreadPool() (see the sketch after this list)
you cannot use Future.get() from within a task running in a thread pool, or a thread-starvation deadlock may occur
your description of the problem is unclear: too many undeclared notions. "reset these values" - what values? "3 methods in my service" - is this server-side or client-side? "boolean flag provided" - do we need to understand who provided that flag and what it means?
Please provide a dataflow representation of the program you need to implement so that we can help you.
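A minimal sketch of the fixed-size pool suggestion from the list above (the task bodies are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolExample {
    public static void main(String[] args) {
        // A bounded pool: at most 4 tasks run concurrently, the rest wait in
        // the pool's queue, and no per-task thread creation is needed.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.submit(() -> System.out.println("getSomething"));
        pool.submit(() -> System.out.println("readSomething"));
        pool.shutdown(); // stop accepting new tasks; queued ones still run
    }
}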

Akka: Cleanup of dynamically created actors necessary when they have finished?

I have implemented an Actor system using Akka and its Java API UntypedActor. In it, one actor (type A) starts other actors (type B) dynamically on demand, using getContext().actorOf(...);. Those B actors will do some computation which A doesn't really care about anymore. But I'm wondering: is it necessary to clean up those actors of type B when they have finished? If so, how?
By having B actors call getContext().stop(getSelf()) when they're done?
By having B actors call getSelf().tell(Actors.poisonPill()); when they're done? [this is what I'm using now].
By doing nothing?
By ...?
The docs are not clear on this, or I have overlooked it. I have some basic knowledge of Scala, but the Akka sources aren't exactly entry-level stuff...
What you are describing are single-purpose actors created per “request” (defined in the context of A), which handle a sequence of events and then are done, right? That is absolutely fine, and you are right to shut those down: if you don’t, they will accumulate over time and you run into a memory leak. The best way to do this is the first of the possibilities you mention (most direct), but the second is also okay.
A bit of background: actors are registered within their parent in order to be identifiable (e.g. needed in remoting, but also in other places), and this registration keeps them from being garbage collected. OTOH, each parent has the right to access the children it created, hence no automatic termination (i.e. by Akka) makes sense; instead, explicit shutdown in user code is required.
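A minimal sketch of the first option with the classic UntypedActor Java API (the message type is illustrative):

import akka.actor.UntypedActor;

public class WorkerB extends UntypedActor {
    @Override
    public void onReceive(Object message) {
        if (message instanceof ComputeRequest) { // illustrative request type
            // ... perform the one-off computation ...
            // Done: deregister from the parent so this actor can be collected.
            getContext().stop(getSelf());
        } else {
            unhandled(message);
        }
    }
}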
In addition to Roland Kuhn's answer, rather than create a new actor for every request, you could create a predefined set of actors that share the same dispatcher, or you can use a router that distributes requests to a pool of actors.
The Balancing Pool Router, for example, allows you to have a fixed set of actors of a particular type share the same mailbox:
akka.actor.deployment {
  /parent/router9 {
    router = balancing-pool
    nr-of-instances = 5
  }
}
Read the documentation on dispatchers and on routing for further detail.
I was profiling (VisualVM) one of the sample cluster applications from the Akka documentation, and I see garbage collection cleaning up the per-request actors during every GC. I am unable to completely understand the recommendation to explicitly kill the actor after use. My actor system and actors are managed by the Spring IoC container, and I use the Spring extension's indirect actor-producer to create actors. The "aggregator" actor is getting garbage collected on every GC; I monitored the number of instances in VisualVM.
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class StatsService extends AbstractActor {

    private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);

    @Autowired
    private ActorSystem actorSystem;

    private ActorRef workerRouter;

    @Override
    public void preStart() throws Exception {
        System.out.println("Creating Router " + this.getClass().getCanonicalName());
        workerRouter = getContext().actorOf(SPRING_PRO.get(actorSystem)
                .props("statsWorker").withRouter(new FromConfig()), "workerRouter");
        super.preStart();
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(StatsJob.class, job -> !job.getText().isEmpty(), job -> {
                    final String[] words = job.getText().split(" ");
                    final ActorRef replyTo = sender();
                    final ActorRef aggregator = getContext().actorOf(SPRING_PRO.get(actorSystem)
                            .props("statsAggregator", words.length, replyTo));
                    for (final String word : words) {
                        workerRouter.tell(new ConsistentHashableEnvelope(word, word),
                                aggregator);
                    }
                })
                .build();
    }
}
Actors by default do not consume much memory. If the application intends to use actor B later on, you can keep it alive. If not, you can shut it down via PoisonPill. As long as your actors are not holding resources, leaving an actor running should be fine.
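For the PoisonPill route, a one-method sketch with the standard Akka Java API:

import akka.actor.ActorRef;
import akka.actor.PoisonPill;

public class Shutdown {
    /** Asks actor B to stop after it has drained the messages already in its mailbox. */
    public static void stop(ActorRef actorB) {
        actorB.tell(PoisonPill.getInstance(), ActorRef.noSender());
    }
}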
