Spray scala building non blocking servlet - java

I've built a Scala application using Spray with Akka actors.
My problem is that requests are handled synchronously, so the server can't manage many requests at once.
Is that normal behaviour? What can I do to avoid this?
This is my boot code:
object Boot extends App with Configuration {
  // create an actor system for application
  implicit val system = ActorSystem("my-service")

  //context.actorOf(RoundRobinPool(5).props(Props[TestActor]), "router")

  // create and start property service actor
  val RESTService = system.actorOf(Props[RESTServiceActor], "my-endpoint")

  // start HTTP server with property service actor as a handler
  IO(Http) ! Http.Bind(RESTService, serviceHost, servicePort)
}
actor code:
class RESTServiceActor extends Actor with RESTService {
  implicit def actorRefFactory = context
  def receive = runRoute(rest)
}
trait RESTService extends HttpService with SLF4JLogging {
  val myDAO = new MyDAO

  val AccessControlAllowAll = HttpHeaders.RawHeader(
    "Access-Control-Allow-Origin", "*"
  )
  val AccessControlAllowHeadersAll = HttpHeaders.RawHeader(
    "Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept"
  )

  val rest = respondWithHeaders(AccessControlAllowAll, AccessControlAllowHeadersAll) {
    respondWithMediaType(MediaTypes.`application/json`) {
      options {
        complete {
          ""
        }
      } ~
      path("some" / "path") {
        get {
          parameter('parameter) { parameter =>
            ctx: RequestContext =>
              handleRequest(ctx) {
                myDAO.getResult(parameter)
              }
          }
        }
      }
    }
  }
  /**
   * Handles an incoming request and creates a valid response for it.
   *
   * @param ctx request context
   * @param successCode HTTP status code for success
   * @param action action to perform
   */
  protected def handleRequest(ctx: RequestContext, successCode: StatusCode = StatusCodes.OK)(action: => Either[Failure, _]) {
    action match {
      case Right(result: Object) =>
        println(result)
        ctx.complete(successCode, result.toString())
      case Left(error: Failure) =>
        // complete with the failure instead of silently dropping the request
        ctx.complete(StatusCodes.InternalServerError, error.toString)
      case _ =>
        ctx.complete(StatusCodes.InternalServerError)
    }
  }
}
I saw that:
Akka Mist provides an excellent basis for building RESTful web
services in Scala since it combines good scalability (enabled by its
asynchronous, non-blocking nature) with general lightweight-ness
Is that what I'm missing? Does Spray use it by default, or do I need to add it, and how?
I'm a bit confused about it. Any help is appreciated.

If you are starting from scratch, I suggest using Akka HTTP, documented at http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M4/scala/http/. It is a port of Spray, but using Akka Streams, which will be important moving forward.
As far as making your code completely asynchronous, the key pattern is to return a Future of your result, not the result data itself. In other words, RESTServiceActor should return a Future that yields the data, not the actual data. This will allow Spray/Akka HTTP to accept additional connections, and the asynchronous completion of the service actor will return the results when they are finished.
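For example, Spray's onComplete directive lets the route hand back a Future and release the route thread immediately. A minimal sketch, assuming the route from the question and that myDAO.getResult stays synchronous and is simply wrapped in a Future:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Success, Failure => TryFailure} // renamed to avoid the question's own Failure type

path("some" / "path") {
  get {
    parameter('parameter) { parameter =>
      // run the (potentially blocking) DAO call off the route-handling thread
      onComplete(Future(myDAO.getResult(parameter))) {
        case Success(result)   => complete(result.toString)
        case TryFailure(error) => complete(StatusCodes.InternalServerError, error.getMessage)
      }
    }
  }
}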

Instead of sending the result directly to the complete method:
ctx.complete(successCode,result.toString())
I used a Future instead:
import concurrent.Future
import concurrent.ExecutionContext.Implicits.global
ctx.complete(successCode,Future(Option(result.toString())))
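Note that Future(...) here only moves the blocking DAO call onto another thread pool: the route thread is freed immediately, so Spray can keep accepting requests, but each in-flight call still occupies a pool thread until it finishes. For a fully non-blocking pipeline, the data access layer itself would need an asynchronous API.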

Use Webclient with custom HttpMessageReader to synchronously read responses

Problem
I have defined a CustomHttpMessageReader (which implements HttpMessageReader<CustomClass>), which is able to read a multipart response from a server and converts the received parts into an object of a specific class. The CustomHttpMessageReader uses internally the DefaultPartHttpMessageReader to actually read/parse the multipart responses.
The CustomHttpMessageReader accumulates the parts read by the DefaultReader and converts them into the desired class CustomClass.
I've created a CustomHttpMessageConverter that does the same thing for a RestTemplate, but I struggle to do the same for a WebClient.
I always get the following Exception:
block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-nio-2
java.lang.IllegalStateException: block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-nio-2
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:83)
at reactor.core.publisher.Flux.blockFirst(Flux.java:2600)
at com.company.project.deserializer.multipart.CustomHttpMessageReader.readMultipartData(CustomHttpMessageReader.java:116)
at com.company.project.deserializer.multipart.CustomHttpMessageReader.readMono(CustomHttpMessageReader.java:101)
at org.springframework.web.reactive.function.BodyExtractors.lambda$readToMono$14(BodyExtractors.java:211)
at java.base/java.util.Optional.orElseGet(Optional.java:369)
...
Mind you, I'm not interested in running WebClient asynchronously. I'm only future-proofing my application, because RestTemplate is apparently only in maintenance mode and the folks at Pivotal/Spring suggest using WebClient instead.
What I Tried
As I understand it, there are threads that are not allowed to be blocked, namely the reactor-http-nio one in the exception. I tried removing Netty from my dependencies so that I could rely solely on Tomcat. That however doesn't seem to help, as I get another exception explaining that no suitable ClientHttpConnector exists (thrown by the WebClient.Builder):
No suitable default ClientHttpConnector found
java.lang.IllegalStateException: No suitable default ClientHttpConnector found
at org.springframework.web.reactive.function.client.DefaultWebClientBuilder.initConnector(DefaultWebClientBuilder.java:297)
at org.springframework.web.reactive.function.client.DefaultWebClientBuilder.build(DefaultWebClientBuilder.java:266)
at com.company.project.RestClientUsingWebClient.getWebclient(RestClientUsingWebClient.java:160)
I've tried executing my code in a unit test as well as starting the whole Spring context. The result is unfortunately the same.
Setup
To provide a bit more detail, the following are snippets from the classes mentioned earlier. The classes are not shown in full, to make it easier to see what is going on. All necessary methods are implemented (e.g. canRead() in the reader).
CustomHttpMessageReader
I also included the usage of CustomPart (in addition to CustomClass) in the class, just to show that the content of each Part is also read, i.e. blocked on.
public class CustomHttpMessageReader implements HttpMessageReader<CustomClass> {

    private final DefaultPartHttpMessageReader defaultPartHttpMessageReader = new DefaultPartHttpMessageReader();

    @Override
    public Flux<CustomClass> read(final ResolvableType elementType, final ReactiveHttpInputMessage message,
                                  final Map<String, Object> hints) {
        return Flux.merge(readMono(elementType, message, hints));
    }

    @Override
    public Mono<CustomClass> readMono(final ResolvableType elementType, final ReactiveHttpInputMessage message,
                                      final Map<String, Object> hints) {
        final List<CustomPart> customParts = readMultipartData(message);
        return convertToCustomClass(customParts);
    }

    private List<CustomPart> readMultipartData(final ReactiveHttpInputMessage message) {
        final ResolvableType resolvableType = ResolvableType.forClass(byte[].class);
        return Optional.ofNullable(
                defaultPartHttpMessageReader.read(resolvableType, message, Map.of())
                        .buffer()
                        .blockFirst()) // <- EXCEPTION IS THROWN HERE!
                .orElse(new ArrayList<>())
                .stream()
                .map(part -> {
                    final byte[] content = Optional.ofNullable(part.content().blockFirst()) // <- HERE IS ANOTHER BLOCK
                            .map(DataBuffer::asByteBuffer)
                            .map(ByteBuffer::array)
                            .orElse(new byte[]{});
                    // Here we cherry-pick some header fields
                    return new CustomPart(content, someHeaderFields);
                }).collect(Collectors.toList());
    }
}
Usage of WebClient
class RestClientUsingWebClient {

    /**
     * The "main" method for our purposes
     */
    public Optional<CustomClass> getResource(final String baseUrl, final String id) {
        final WebClient webclient = getWebclient(baseUrl);
        // curl -X GET "http://BASE_URL/id" -H "accept: multipart/form-data"
        return webclient.get()
                .uri(uriBuilder -> uriBuilder.path(id).build()).retrieve()
                .toEntity(CustomClass.class)
                .onErrorResume(NotFound.class, e -> Mono.empty())
                .blockOptional() // <- HERE IS ANOTHER BLOCK
                .map(ResponseEntity::getBody);
    }

    // This exists also as a Bean definition
    private WebClient getWebclient(final String baseUrl) {
        final ExchangeStrategies exchangeStrategies = ExchangeStrategies.builder()
                .codecs(codecs -> {
                    codecs.defaultCodecs().maxInMemorySize(16 * 1024 * 1024);
                    codecs.customCodecs().register(new CustomHttpMessageReader()); // <- our custom reader
                })
                .build();
        return WebClient.builder()
                .baseUrl(baseUrl)
                .exchangeStrategies(exchangeStrategies)
                .build();
    }
}
build.gradle
For the sake of completeness, here is what I think is the relevant part of my build.gradle:
plugins {
    id 'org.springframework.boot' version '2.7.2'
    id 'io.spring.dependency-management' version '1.0.13.RELEASE'
    ...
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-web' // <- This
    implementation 'org.springframework.boot:spring-boot-starter-webflux'

    // What I tried:
    // implementation ('org.springframework.boot:spring-boot-starter-webflux') {
    //     exclude group: 'org.springframework.boot', module: 'spring-boot-starter-reactor-netty'
    // }
    ...
}
If we look at the stack trace that you provided, we see these three lines:
at reactor.core.publisher.Flux.blockFirst(Flux.java:2600)
at com.company.project.deserializer.multipart.CustomHttpMessageReader.readMultipartData(CustomHttpMessageReader.java:116)
at com.company.project.deserializer.multipart.CustomHttpMessageReader.readMono(CustomHttpMessageReader.java:101)
They should be read from bottom to top. So what do they tell us?
The bottom line tells us that the function readMono on line 101 of the class CustomHttpMessageReader.java was called first.
That function then called the function readMultipartData on line 116 of the same class.
Then the function blockFirst was called on line 2600 of the class Flux.
That's your blocking call.
So we can tell that there is a blocking call in the function readMultipartData.
So why can't we block in that function? If we look at the API of the interface it overrides, HttpMessageReader, we can see that the function returns a Mono<T>, which means the function is asynchronous.
If it is async and we block in it, we can get very, very bad performance.
This interface is used within the Spring WebClient, which is a fully async client.
You can use WebClient in a non-async application, but that means you block outside of the WebClient; internally, it needs to operate completely async if you want it to be as efficient as possible.
So the bottom line is that you should not block in any function that returns a Mono or a Flux.
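For illustration, here is a rough non-blocking sketch of readMono (written in Scala against the same Spring/Reactor APIs; toCustomPart is an assumed helper doing the byte/header extraction from the question, while convertToCustomClass is the question's own Mono-returning converter):
import org.springframework.core.ResolvableType
import org.springframework.core.io.buffer.{DataBuffer, DataBufferUtils}
import org.springframework.http.ReactiveHttpInputMessage
import org.springframework.http.codec.multipart.Part
import reactor.core.publisher.Mono

def readMono(elementType: ResolvableType,
             message: ReactiveHttpInputMessage,
             hints: java.util.Map[String, AnyRef]): Mono[CustomClass] = {
  val resolvableType = ResolvableType.forClass(classOf[Array[Byte]])
  defaultPartHttpMessageReader
    .read(resolvableType, message, java.util.Collections.emptyMap[String, AnyRef]())
    .flatMap((part: Part) =>
      // join this part's content buffers without blocking
      DataBufferUtils.join(part.content())
        .map((buffer: DataBuffer) => toCustomPart(part, buffer))) // assumed helper
    .collectList()                                 // Mono[java.util.List[CustomPart]]
    .flatMap(parts => convertToCustomClass(parts)) // the question's converter, now composed
}
The only structural change is that the accumulation happens inside map/flatMap callbacks, so the reader returns a Mono that completes later instead of parking a reactor-http-nio thread; the block() can then stay at the outermost edge of the application, as in getResource.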

JHipster React Show List Of One Entity in Details Screen of Another Related Entity

I am building an application in JHipster 7.0.1 (Azul JDK 11) with ReactJS as the front-end.
I have 2 entities in my JDL, Domain and BadgeCategory, that are related as shown below:
relationship OneToMany {
    Domain{badgeClass required} to BadgeCategory{domain(name) required}
}
I want to be able to display all the BadgeCategories for a particular Domain in the Domain Detail screen.
For this, I created a new method in the repository BadgeCategoryRepository.java
@Repository
public interface BadgeCategoryRepository extends JpaRepository<BadgeCategory, Long> {
    List<BadgeCategory> findByDomainId(Long id);
}
And then added a new endpoint in BadgeCategoryResource.java
@GetMapping("/badge-categories-domain/{id}")
public ResponseEntity<List<BadgeCategory>> getAllBadgeCategoriesForDomain(@PathVariable Long id) {
    log.debug("REST request to get BadgeCategories for Domain : {}", id);
    List<BadgeCategory> badgeCategory = badgeCategoryRepository.findByDomainId(id);
    return ResponseEntity.ok().body(badgeCategory);
}
Now coming to the React part, I added a constant in badge-category.reducer.ts
export const getEntitiesForDomain = createAsyncThunk(
  'badgeCategory/fetch_entity_list_for_domain',
  async (id: string) => {
    const requestUrl = `api/badge-categories-domain/${id}`;
    alert(JSON.stringify(axios.get<IBadgeCategory[]>(requestUrl)));
    return axios.get<IBadgeCategory[]>(requestUrl);
  }
);
Then I am using this reducer in the Domain Detail screen component domain-detail.tsx
import { getEntity as getBadgeCategory, getEntitiesForDomain } from '../badge-category/badge-category.reducer';

const domainEntity = useAppSelector(state => state.domain.entity);
const badgeCategoryList = useAppSelector(state => state.badgeCategory.entities);

useEffect(() => {
  dispatch(getEntity(props.match.params.id));
  dispatch(getEntitiesForDomain(props.match.params.id));
}, []);
I am expecting the constant badgeCategoryList to contain the list of all badge categories for the domain which is being referred to in the domain-detail screen. But I get nothing in return.
On checking the flow, I see that the endpoint is getting hit and the response is being produced by the Java code, but the UI code is not able to consume it.
What am I missing here that is causing this issue?
The Swagger docs show the expected response from the Java code.
So the issue was that the new API call was not registered with the slice section in the reducer. I had to make the following addition to the slice, and it works like a charm:
.addMatcher(isFulfilled(getEntitiesForDomain), (state, action) => {
  return {
    ...state,
    loading: false,
    entities: action.payload.data,
  };
})

GRPC Async + Blocking Stub Java

I am running into a bit of a chicken and an egg problem.
Case: A file is generated on a remote client. The client should transmit the file to the server over an async stub. The client must also transmit metadata via a blocking stub, to be stored in a database.
Problems:
If I do the asynchronous operation first, then the file data is sent prior to the metadata, and therefore the server has no context as to what to name the file or where to put it. I originally intended to return this information from the server (bidirectionally); however, stream observers do not lend themselves to setting variables outside their anonymous definition.
If I do the synchronous operation first, I can get file naming information back from the server; however, I would need to package this into the "chunks" of data. This would also require constantly opening and closing the save file while gRPC iterates over its stream data, as iterators are not easily reset (so I can't just peel off the first request).
As a last option, I could package all of this into the asynchronous request and dispense with the synchronous call. I believe this would provide a working solution, but I am concerned about the amount of data being sent on already large requests, as well as the inefficiency mentioned before.
So my question is:
Is there a way to set a global variable to 'value.Message' from the response observer?
Alternatively, is there a way to pass information from the synchronous call to the asynchronous call on the server side?
Async response observer:
StreamObserver<GrpcServerComm.UploadStatus> responseObserver = new StreamObserver<GrpcServerComm.UploadStatus>() {
    @Override
    public void onNext(GrpcServerComm.UploadStatus value) {
        if (value.getCode() != 1) {
            Log.d("Error", "Upload Procedure Failure");
            finishLatch.countDown();
        }
    }

    @Override
    public void onError(Throwable t) {
        Log.d("Error", "Upload Response");
        finishLatch.countDown();
    }

    @Override
    public void onCompleted() {
        finishLatch.countDown();
    }
};
Relevant protobufs
message UploadStatus {
    string filename = 1;
    int32 code = 2;
}

message DataChunk {
    string filename = 1;
    bytes chunk = 2;
}

message VideoMetadata {
    string publisher = 1;
    string description = 2;
    string tags = 3;
    double videolat = 4;
    double videolong = 5;
}

service DataUpload {
    rpc UploadData (stream DataChunk) returns (UploadStatus);
}

service ContentMetaData {
    rpc UploadMetaData (VideoMetadata) returns (UploadStatus);
}
Python Server-side functions
class DataUploadServicer(proto_test_pb2_grpc.DataUploadServicer):
    def UploadData(self, request_it, context):
        filename = str(random.getrandbits(32))  # server decides filename
        response = filestream.writefile(filename, request_it)
        return response


def writefile(filename, chunks):
    response = proto_test_pb2.UploadStatus()
    filename = 'tmp/' + filename
    app_file = open(filename, "ab")
    for chunk in chunks:
        app_file.write(chunk.chunk)
    app_file.close()
    print('File Written')
    response.code = 1  # proto fields are lowercase: 'code', 'filename'
    response.filename = filename
    return response
For Java users, there is a detailed article on this here.
I think it is not a good idea to have these as 2 separate requests. Instead, Metadata and DataChunk should be combined into one single type, as shown here:
message FileUploadRequest {
    VideoMetadata metaData = 1;
    DataChunk dataChunk = 2;
}
Now you might ask why we have to send the metadata for every request! This is where the gRPC oneof type helps.
message FileUploadRequest {
    oneof upload_data {
        VideoMetadata metaData = 1;
        DataChunk dataChunk = 2;
    }
}
Your service would look like this:
service FileuploadService {
    rpc UploadData (stream FileUploadRequest) returns (UploadStatus);
}
When you use oneof, the generated code gives oneof fields the same getters and setters as regular fields, and you also get a special method for checking which value (if any) in the oneof is set. First you send the metadata, and then you send the chunks; based on which oneof field is set, the server can act accordingly, as sketched below.
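A rough sketch of both sides (written in Scala against the grpc-java classes generated from the proto above; asyncStub, responseObserver, meta, chunks, and the handler methods are assumed names):
// Client side: send the metadata message first, then stream the file chunks.
val requestObserver = asyncStub.uploadData(responseObserver)
requestObserver.onNext(
  FileUploadRequest.newBuilder().setMetaData(meta).build())
for (chunk <- chunks)
  requestObserver.onNext(
    FileUploadRequest.newBuilder().setDataChunk(chunk).build())
requestObserver.onCompleted()

// Server side: inside the StreamObserver returned by uploadData,
// dispatch on whichever oneof field is set.
override def onNext(request: FileUploadRequest): Unit =
  if (request.hasMetaData) prepareFile(request.getMetaData) // e.g. decide the filename
  else appendChunk(request.getDataChunk)
Because the metadata arrives as the first message of the same stream, the server has its naming context before any file bytes show up, and no separate blocking call is needed.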

Implementing Content-Based Router Pattern in Akka

I am trying to implement a content-based router in my Akka actor system and according to this document the ConsistentHashingRouter is the way to go. After reading through its official docs, I still find myself confused as to how to use this built-in hashing router. I think that’s because the router itself is hash/key-based, and the example the Akka doc author chose to use was a scenario involving key-value based caches…so I can’t tell which keys are used by the cache and which ones are used by the router!
Let’s take a simple example. Say we have the following messages:
interface Notification {
    // Doesn't matter what's here.
}

// Will eventually be emailed to someone.
class EmailNotification implements Notification {
    // Doesn't matter what's here.
}

// Will eventually be sent to some XMPP client and on to a chatroom somewhere.
class ChatOpsNotification implements Notification {
    // Doesn't matter what's here.
}
etc. In theory we might have 20 Notification impls. I'd like to be able to send a Notification to an actor/router at runtime and have that router route it to the correct NotificationPublisher:
interface NotificationPublisher<NOTIFICATION extends Notification> {
    void send(NOTIFICATION notification)
}
class EmailNotificationPublisher extends UntypedActor implements NotificationPublisher<EmailNotification> {
    @Override
    void onReceive(Object message) {
        if (message instanceof EmailNotification) {
            send(message as EmailNotification)
        }
    }

    @Override
    void send(EmailNotification notification) {
        // Use Java Mail, etc.
    }
}

class ChatOpsNotificationPublisher extends UntypedActor implements NotificationPublisher<ChatOpsNotification> {
    @Override
    void onReceive(Object message) {
        if (message instanceof ChatOpsNotification) {
            send(message as ChatOpsNotification)
        }
    }

    @Override
    void send(ChatOpsNotification notification) {
        // Use XMPP/Jabber client, etc.
    }
}
Now I could do this routing myself, manually:
class ReinventingTheWheelRouter extends UntypedActor {
    // Inject these via constructor
    ActorRef emailNotificationPublisher
    ActorRef chatOpsNotificationPublisher
    // ...20 more publishers, etc.

    @Override
    void onReceive(Object message) {
        ActorRef publisher
        if (message instanceof EmailNotification) {
            publisher = emailNotificationPublisher
        } else if (message instanceof ChatOpsNotification) {
            publisher = chatOpsNotificationPublisher
        } else if (...) { ... } // 20 more publishers, etc.
        publisher.tell(message, self)
    }
}
Or I could use the Akka-Camel module to define a Camel-based router and send Notifications off to the Camel router, but it seems that Akka already has this built-in solution, so why not use it? I just can't figure out how to translate the Cache example from those Akka docs to my Notification example here. What's the purpose of the "key" in the ConsistentHashingRouter? What would the code look like to make this work?
Of course I would appreciate any answer that helps me solve this, but would greatly prefer Java-based code snippets if at all possible. Scala looks like hieroglyphics to me.
I agree that a Custom Router is more appropriate than ConsistentHashingRouter. After reading the docs on custom routers, it seems I would:
Create a GroupBase impl and send messages to it directly (notificationGroup.tell(notification, self)); then
The GroupBase impl, say, NotificationGroup would provide a Router instance that was injected with my custom RoutingLogic impl
When NotificationGroup receives a message, it executes my custom RoutingLogic#select method which determines which Routee (I presume some kind of an actor?) to send the message to
If this is correct (and please correct me if I’m wrong), then the routing selection magic happens here:
class MessageBasedRoutingLogic implements RoutingLogic {
    @Override
    Routee select(Object message, IndexedSeq<Routee> candidates) {
        // How can I query the Routee interface and determine whether the message at hand is in fact
        // appropriate to be routed to the candidate?
        //
        // For instance I'd like to say "If message is an instance of
        // an EmailNotification, send it to EmailNotificationPublisher."
        //
        // How do I do this here?!?
        if (message instanceof EmailNotification) {
            // Need to find the candidate/Routee that is
            // the EmailNotificationPublisher, but how?!?
        }
    }
}
But as you can see I have a few mental implementation hurdles to cross. The Routee interface doesn’t really give me anything I can intelligently use to decide whether a particular Routee (candidate) is correct for the message at hand.
So I ask: (1) How can I map messages to Routees (effectively performing the route selection/logic)? (2) How do I add my publishers as routees in the first place? And (3) Do my NotificationPublisher impls still need to extend UntypedActor or should they now implement Routee?
Here is a simple little A/B router in Scala. I hope this helps even though you wanted a Java-based answer. First, the routing logic:
class ABRoutingLogic(a: ActorRef, b: ActorRef) extends RoutingLogic {
  val aRoutee = ActorRefRoutee(a)
  val bRoutee = ActorRefRoutee(b)

  def select(msg: Any, routees: immutable.IndexedSeq[Routee]): Routee = {
    msg match {
      case "A" => aRoutee
      case _   => bRoutee
    }
  }
}
The key here is that I am passing in my a and b actor refs in the constructor and then those are the ones I am routing to in the select method. Then, a Group for this logic:
case class ABRoutingGroup(a: ActorRef, b: ActorRef) extends Group {
  val paths = List(a.path.toString, b.path.toString)

  override def createRouter(system: ActorSystem): Router =
    new Router(new ABRoutingLogic(a, b))

  val routerDispatcher: String = Dispatchers.DefaultDispatcherId
}
Same thing here: I am making the actors I want to route to available via the constructor. Now, a simple actor class to act as a and b:
class PrintingActor(letter: String) extends Actor {
  def receive = {
    case msg => println(s"I am $letter and I received letter $msg")
  }
}
I will create two instances of this, each with a different letter assignment so we can verify that the right ones are getting the right messages per the routing logic. Lastly, some test code:
object RoutingTest extends App {
  val system = ActorSystem()
  val a = system.actorOf(Props(classOf[PrintingActor], "A"))
  val b = system.actorOf(Props(classOf[PrintingActor], "B"))
  val router = system.actorOf(Props.empty.withRouter(ABRoutingGroup(a, b)))

  router ! "A"
  router ! "B"
}
If you ran this, you would see:
I am A and I received letter A
I am B and I received letter B
It's a very simple example, but one that shows one way to do what you want to do. I hope you can bridge this code into Java and use it to solve your problem.

How to determine Akka actor/supervisor hierarchy?

I am brand new to Akka (Java lib, v2.3.9). I am trying to follow the supervisor hierarchy best practices, but since this is my first Akka app, I'm hitting a mental barrier somewhere.
In my first ever Akka app (really a library intended for reuse across multiple apps), input from the outside world manifests itself as a Process message that is passed to an actor. Developers using my app will provide a text-based config file that, ultimately, configures which actors get sent Process instances, and which do not. In other words, say these are my actor classes:
// Groovy pseudo-code
class Process {
    private final Input input

    Process(Input input) {
        super()
        this.input = deepClone(input)
    }

    Input getInput() {
        deepClone(this.input)
    }
}
class StormTrooper extends UntypedActor {
    @Override
    void onReceive(Object message) {
        if (message instanceof Process) {
            // Process the message like a Storm Trooper would.
        }
    }
}

class DarthVader extends UntypedActor {
    @Override
    void onReceive(Object message) {
        if (message instanceof Process) {
            // Process the message like Darth Vader would.
        }
    }
}

class Emperor extends UntypedActor {
    @Override
    void onReceive(Object message) {
        if (message instanceof Process) {
            // Process the message like the Emperor would.
        }
    }
}
// myapp-config.json -> where the actors are configured, along with other
// app-specific configs
{
    "fizzbuzz": "true",
    "isYosemite": "false",
    "borderColor": "red",
    "processors": [
        "StormTrooper",
        "Emperor"
    ]
}
As you can see in the config file, only StormTrooper and Emperor were selected to receive Process messages. This ultimately results in zero (0) DarthVader actors being created. It is also my intention that this would result in a Set<ActorRef> being made available to the application, populated with StormTrooper and Emperor like so:
class SomeApp {
    SomeAppConfig config

    static void main(String[] args) {
        String configFileUrl = args[0] // Nevermind this horrible code
        // Pretend here that configFileUrl is a valid path to
        // myapp-config.json.
        SomeApp app = new SomeApp(configFileUrl)
        app.run()
    }

    SomeApp(String url) {
        super()
        config = new SomeAppConfig(url)
    }

    void run() {
        // Since the config file only specifies StormTrooper and
        // Emperor as viable processors, the set only contains instances of
        // these ActorRef types.
        Set<ActorRef> processors = config.loadProcessors()
        ActorSystem actorSystem = config.getActorSystem()
        while (true) {
            Input input = scanForInput()
            Process process = new Process(input)
            // Notify each config-driven processor about the
            // new input we've received that they need to process.
            processors.each {
                it.tell(process, Props.self()) // This isn't correct btw
            }
        }
    }
}
So, as you can (hopefully) see, we have all these actors (in reality, many dozens of UntypedActor impls) that handle Process messages (which, in turn, capture Input from some source). Which actors are even alive/online to handle these Process messages is entirely configuration-driven. Finally, every time the app receives an Input, it is injected into a Process message, and that Process message is sent to all configured/living actors.
With this as the given backstory/setup, I am unable to identify what the "actor/supervisor hierarchy" needs to be. It seems like in my use case, all actors are truly equals, with no supervisory structure between them. StormTrooper simply receives a Process message if that type of actor was configured to exist. Same for the other actor subclasses.
Am I completely missing something here? How do I define a supervisory hierarchy (for fault tolerance purposes) if all actors are equal and the hierarchy is intrinsically "flat"/horizontal?
If you want to instantiate no more than one instance of each of your actors, you may want to have a SenatorPalpatine supervise those three. If you may have, let's say, more than one StormTrooper, you may want a JangoFett actor responsible for creating (and maybe killing) them; a router is also a good option (it will supervise them automatically). This also gives you the ability to restart all troopers if one fails (AllForOneStrategy), to broadcast, to hold some common statistics, etc.
Example (pseudo-Scala) with routers:
// application.conf
akka.actor.deployment {
  /palpatine/vader {
    router = broadcast-pool
    nr-of-instances = 1
  }
  /palpatine/troopers {
    router = broadcast-pool
    nr-of-instances = 10
  }
}
class Palpatine extends Actor {
  import context._
  import scala.concurrent.duration._

  val troopers = actorOf(FromConfig.props(Props[Trooper])
    .withSupervisorStrategy(strategy), "troopers") // `strategy` is the strategy for troopers
  val vader = actorOf(FromConfig.props(Props[Vader]), "vader")

  // strategy for Palpatine's children (the routers themselves)
  override val supervisorStrategy = OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute)
  // strategy for the troopers
  val strategy = OneForOneStrategy(maxNrOfRetries = 100, withinTimeRange = 1.minute)

  def receive = {
    case p: Process    => troopers ! p; vader ! p
    case t: Terminated => println(t)
  }
}
This creates broadcast pools based on the standard Akka config. It also shows that you can customize the supervision strategies for them separately.
If you want some of the actors to ignore a message for some reason, just implement this logic inside the actor, like:
class Vader extends Actor {
  def receive = {
    case p: Process => ...
    case Ignore     => context.become(ignore) // changes the message handler to `ignore`
  }

  def ignore: Receive = {
    case UnIgnore => context.become(receive) // changes the message handler back
    case x        => println("Ignored message " + x)
  }
}
This lets you configure ignoring/unignoring dynamically (otherwise it's just a simple if). You may send the Ignore message to actors based on some config:
val listOfIgnoredPaths = readFromSomeConfig()
context.actorSelection(listOfIgnoredPaths) ! Ignore
You can also create a broadcaster for Palpatine in the same way as the troopers' router (just use groups instead of pools), if you want to control heterogeneous broadcast from config:
akka.actor.deployment {
  ... // vader, troopers configuration
  /palpatine/broadcaster {
    router = broadcast-group
    routees.paths = ["/palpatine/vader", "/palpatine/troopers"]
  }
}
class Palpatine extends Actor {
  ... // vader, troopers definitions
  val broadcaster = actorOf(FromConfig.props(), "broadcaster")

  def receive = {
    case p: Process => broadcaster ! p
  }
}
Just exclude vader from routees.paths to make him not receive Process messages.
P.S. Actors are never alone - there is always the Guardian Actor (see The Top-Level Supervisors), which will shut down the whole system in case of an exception. So either way, SenatorPalpatine may really come to your rescue.
P.S.2 context.actorSelection("palpatine/*") actually allows you to send a message to all children (as an alternative to broadcast pools and groups), so you don't need to keep a set of them inside.
Based on your comment, you would still want a Master actor to duplicate and distribute Processes. Conceptually, you wouldn't have the user (or whatever is generating your input) provide the same input once per actor. They would provide the message only once, and you (or the Master actor) would then duplicate the message as necessary and send it to each of its appropriate child actors.
As discussed in dk14's answer, this approach has the added benefit of increased fault tolerance.
