When testing an Akka actor with the TestKit, https://doc.akka.io/docs/akka/2.5/testing.html shows how to verify that a given message was logged.
Is there a way to check for the lack of a message?
I have my actors set up to call a method that logs something like "Unexpected message received" when an unhandled message is received. In my test, I would like to verify that this message is never logged, even if the test otherwise seems to succeed. Is there a way to do that?
I am using Akka 2.5 and Java 10.
It depends on your implementation. You could do one of two things:
1) Create a TestKit probe and make it subscribe to your system's eventStream
yourActorSystemInTheTest.eventStream().subscribe(yourProbe.getRef(), UnhandledMessage.class);
And then at the end check to see how many messages the probe received, in your case 0. Use one of the many "expect..." methods at your disposal.
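A minimal sketch of that approach, assuming a JUnit test with an ActorSystem called system and an actor under test called yourActor (both names are illustrative), and noting that an UnhandledMessage is only published when an actor passes a message to unhandled():
import java.time.Duration;
import akka.actor.ActorRef;
import akka.actor.UnhandledMessage;
import akka.testkit.javadsl.TestKit;
...
@Test
public void noUnhandledMessagesTest() {
    TestKit probe = new TestKit(system);
    // unhandled messages are published to the event stream as UnhandledMessage
    system.eventStream().subscribe(probe.getRef(), UnhandledMessage.class);

    yourActor.tell("some message", ActorRef.noSender()); // exercise the actor under test

    // fails if any UnhandledMessage arrived within the timeout
    // (expectNoMessage exists in recent 2.5.x releases; older ones only have expectNoMsg)
    probe.expectNoMessage(Duration.ofSeconds(1));
}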
2) The docs tell you how to check for log messages, so just assert that the number of times you get the "Unexpected message received" is 0.
Again, depending on your actors' implementation, the above might not work.
Good Luck!
To provide some details, here is what I needed to do:
import akka.event.Logging;
import akka.testkit.javadsl.EventFilter;
import akka.testkit.javadsl.TestKit;
...
@Test
public void noUnhandledTest() {
    new TestKit(system) {{
        new EventFilter(Logging.Warning.class, system)
            .occurrences(0)
            .matches("unhandled event")
            .intercept(() -> {
                try {
                    <actual test code>
                    // Intercept needs a supplier
                    return 0;
                } catch (Exception e) {
                    // Suppliers can't throw
                    throw new RuntimeException(e);
                }
            });
    }};
}
In src/test/resources/application.conf:
akka.loggers = ["akka.testkit.TestEventListener"]
Related
I am fairly new to developing distributed applications with messaging, and to Spring Cloud Stream in particular. I am currently wondering about best practices on how to deal with errors on the broker side.
In our application, we need to both consume and produce messages from/to multiple sources/destinations like this:
Consumer side
For consuming, we have defined multiple @Beans of type java.util.function.Consumer. The configuration for those looks like this:
spring.cloud.stream.bindings.consumeA-in-0.destination=inputA
spring.cloud.stream.bindings.consumeA-in-0.group=$Default
spring.cloud.stream.bindings.consumeB-in-0.destination=inputB
spring.cloud.stream.bindings.consumeB-in-0.group=$Default
This part works quite well - when starting the application, the exchanges "inputA" and "inputB" as well as the queues "inputA.$Default" and "inputB.$Default" with the corresponding bindings are automatically created in RabbitMQ.
Also, in case of an error (e.g. a queue is suddenly not available), the application gets notified immediately with a QueuesNotAvailableException and continuously tries to re-establish the connection.
My only question here is: is there some way to handle this exception in code? Or what are best practices to deal with failures like this on the broker side?
Producer side
This one is more problematic. Producing messages is triggered by some internal logic, so we cannot use function @Beans here. Instead, we currently rely on StreamBridge to send messages. The problem is that this approach does not trigger the creation of exchanges and queues on startup. So when our code calls streamBridge.send("outputA", message), the message is sent (the result is true), but it just disappears into the void, since RabbitMQ automatically drops unroutable messages.
I found that with this configuration, I can at least get RabbitMQ to create exchanges and queues as soon as the first message is sent:
spring.cloud.stream.source=produceA;produceB
spring.cloud.stream.default.producer.requiredGroups=$Default
spring.cloud.stream.bindings.produceA-out-0.destination=outputA
spring.cloud.stream.bindings.produceB-out-0.destination=outputB
I need to use streamBridge.send("produceA-out-0", message) in code to make it work, which is not too great since it means hardcoding explicit configuration, but at least it works.
I also tried to implement the producer in a Reactor style as described in this answer, but in this case the exchange/queue is also not created on application startup, and the sent message just disappears even though the return status of the sending method is "OK".
Failures on the broker side are not registered at all with this approach - when I simulate one, e.g. by deleting the queue or the exchange, it is not registered by the application. Only when another message is sent do I get this in the logs:
ERROR 21804 --- [127.0.0.1:32404] o.s.a.r.c.CachingConnectionFactory : Shutdown Signal: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'produceA-out-0' in vhost '/', class-id=60, method-id=40)
But still, the result of StreamBridge#send was true in this case, even though we need to know at this point that sending actually failed (we persist the state of the sent object using this boolean return value). Is there any way to accomplish that?
Any other suggestions on how to make this producer scenario more robust? Best practices?
EDIT
I found an interesting solution to the producer problem using correlations:
...
CorrelationData correlation = new CorrelationData(UUID.randomUUID().toString());
messageHeaderAccessor.setHeader(AmqpHeaders.PUBLISH_CONFIRM_CORRELATION, correlation);
Message<String> message = MessageBuilder.createMessage(payload, messageHeaderAccessor.getMessageHeaders());
boolean sent = streamBridge.send(channel, message);
try {
    final CorrelationData.Confirm confirm = correlation.getFuture().get(30, TimeUnit.SECONDS);
    if (correlation.getReturned() == null && confirm.isAck()) {
        // success logic
    } else {
        // failed logic
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    // failed logic
} catch (ExecutionException | TimeoutException e) {
    // failed logic
}
using these additional configurations:
spring.cloud.stream.rabbit.default.producer.useConfirmHeader=true
spring.rabbitmq.publisher-confirm-type=correlated
spring.rabbitmq.publisher-returns=true
This seems to work quite well, although I'm still clueless about the return value of StreamBridge#send: it is always true, and I cannot find information on the cases in which it would be false. But the rest is fine - I can get information on issues with the exchange or the queue from the correlation or the confirm.
But this solution is very much focused on RabbitMQ, which causes two problems:
our application should be able to connect to different brokers (e.g. Azure Service Bus)
in tests we use Kafka binder and I don't know how to configure the application context to make it work in this case, too
Any help would be appreciated.
On the consumer side, you can listen for an event such as the ListenerContainerConsumerFailedEvent.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#consumer-events
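For example, a rough sketch of such an event listener (the package and the getReason()/isFatal() accessors are taken from Spring AMQP as I understand it; double-check them against the linked docs for your version):
import org.springframework.amqp.rabbit.listener.ListenerContainerConsumerFailedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class ConsumerFailureListener {

    @EventListener
    public void onConsumerFailed(ListenerContainerConsumerFailedEvent event) {
        // react in code: alert, flip a health indicator, trigger your own recovery logic, ...
        System.err.println("Consumer failed: " + event.getReason() + ", fatal=" + event.isFatal());
    }
}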
On the producer side, producers only know about exchanges, not any queues bound to them; hence the requiredGroups property which causes the queue to be bound.
You only need spring.cloud.stream.default.producer.requiredGroups=$Default - you can send to arbitrary destinations using the StreamBridge and the infrastructure will be created.
@SpringBootApplication
public class So70769305Application {

    public static void main(String[] args) {
        SpringApplication.run(So70769305Application.class, args);
    }

    @Bean
    ApplicationRunner runner(StreamBridge bridge) {
        return args -> bridge.send("foo", "test");
    }

}
with just this in application.properties:
spring.cloud.stream.default.producer.requiredGroups=$Default
I am trying to achieve the following scenario in my application:
1) When my application is up, messages from the incoming exchange should be consumed via the Incoming Queue.
2) If any exception/error occurs, the messages are directed to the DeadLetter Queue.
3) While downtime is going on for my application (I don't want to consume messages during that time), I redirect the messages to the ParkingLot Queue.
4) When the downtime is over, I want to first consume the messages from the ParkingLot Queue, and then start consuming messages normally via the Incoming Queue.
My question is: can these scenarios be implemented? Here I am mainly talking about step 4. If yes, can someone please point me in the right direction?
My second question is: is this the correct way to achieve this scenario, or is there a better way?
Code added:
@RabbitListener(queues = "${com.rabbitmq.queueName}", id = "msgId")
@RabbitListener(queues = "${com.rabbitmq.parkingQueueName}", id = "parkingId")
public void consumeMessage(Message message) {
    try {
        log.info("Received message: {}", new String(message.getBody()));
        // check if the application is down
        if (val) {
            registry.getListenerContainer("msgId").stop();
            rabbitTemplate.send(rabbitMQConfig.getExchange(), rabbitMQConfig.getParkingRoutingKey(), message);
        }
    } catch (Exception e) {
        rabbitTemplate.send(rabbitMQConfig.getDeadLetterExchange(), rabbitMQConfig.getDeadLetterRoutingKey(), message);
    }
}
Give each @RabbitListener an id attribute.
Then use the RabbitListenerEndpointRegistry bean to control the containers' lifecycles.
registry.getListenerContainer(id).stop();
and
registry.getListenerContainer(id).start();
You can put both @RabbitListener annotations on the same method.
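Putting that together, a rough sketch of the downtime handling (the listener ids "msgId" and "parkingId" follow the question's code; the method names and the @Component wrapper are just illustrative):
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class DowntimeController {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    public void enterDowntime() {
        // stop consuming from the main queue during downtime
        registry.getListenerContainer("msgId").stop();
    }

    public void exitDowntime() {
        // drain the ParkingLot queue first...
        registry.getListenerContainer("parkingId").start();
        // ...then resume normal consumption (e.g. once the parking queue is empty)
        registry.getListenerContainer("msgId").start();
    }
}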
What is the best approach, in terms of reactive programming, when there is a need to interrupt a reactive pipeline?
The logic is very straightforward.
The web service (a web application) accepts requests.
Step 1: from the request, make a first HTTP request to a third-party API. This first HTTP service will either answer with what we need (in our example, a string starting with "good") or with something we do not need.
Step 2: only if step 1 responded with what is needed, make a second HTTP request to a second HTTP service (also outside our control) to get the ultimate and greatest response.
Note that this is sequential: we cannot call step 2 unless we have the correct value from step 1.
Obviously, making an entire HTTP call to step 2 when step 1 did not give us what we need does not make sense at all.
Therefore, I am thinking of doing:
@PostMapping(path = "/question")
public Mono<ResponseEntity<String>> createDummyMono(String theImportantKey) {
    return WebClient.create("http://first-service.com/get" + theImportantKey).get()
            .exchangeToMono(clientResponse -> clientResponse.bodyToMono(String.class))
            .flatMap(extractGoodValueFromStepOne -> {
                if (extractGoodValueFromStepOne.startsWith("good")) {
                    System.out.println("Great! Step1 responded with something starting with good! Only with this we should invoke the second API");
                    return WebClient.create("http://second-service.com/get" + extractGoodValueFromStepOne.substring(4)).get()
                            .exchangeToMono(clientResponse -> clientResponse.bodyToMono(String.class));
                } else {
                    System.out.println("This is bad, Step 1 did not return something starting with good, no need to make the second API call then. Let's just propagate an error message all the way to response with a dummy Mono");
                    return Mono.just("Step 1 did not answer with something good, the ultimate answer is an error");
                }
            })
            .map(ResponseEntity::ok);
}
In this logic, the second step, represented by the flatMap, checks whether step 1 responded with something we need. Only in that case is a second HTTP request made to step 2. If it is not the case, I build a dummy Mono to propagate and carry down the reactive pipeline.
A second solution is to throw an exception and catch it with an @ExceptionHandler, for instance:
@PostMapping(path = "/question")
public Mono<ResponseEntity<String>> throwRuntimeException(String theImportantKey) {
    return WebClient.create("http://first-service.com/get" + theImportantKey).get()
            .exchangeToMono(clientResponse -> clientResponse.bodyToMono(String.class))
            .flatMap(extractGoodValueFromStepOne -> {
                if (extractGoodValueFromStepOne.startsWith("good")) {
                    System.out.println("Great! Step1 responded with something starting with good! Only with this we should invoke the second API");
                    return WebClient.create("http://second-service.com/get" + extractGoodValueFromStepOne.substring(4)).get()
                            .exchangeToMono(clientResponse -> clientResponse.bodyToMono(String.class));
                } else {
                    System.out.println("This is bad, Step 1 did not return something starting with good, no need to make the second API call then. Let's just propagate an error message all the way to response with a dummy Mono");
                    throw new RuntimeException("Step 1 did not answer with something good, the ultimate answer is an error");
                }
            })
            .map(ResponseEntity::ok);
}

@ExceptionHandler
public Mono<ResponseEntity<String>> exception(final RuntimeException runtimeException) {
    return Mono.just(ResponseEntity.ok("Step 1 did not answer with something good, the ultimate answer is an error"));
}
Here, the logic is the same; if step 1 did not answer with what we need, I interrupt the pipeline by throwing a RuntimeException.
I have the feeling that neither the first solution (passing down a dummy Mono) nor throwing an unchecked RuntimeException is the correct way to do this in a reactive world.
Which is the correct solution to this problem, and why?
Your dummy Mono solution only works because there is nothing after it in the chain that needs to do any additional processing. What if, after your flatMap, you need to do an additional flatMap on the successful value? Then you are in a pickle when a strange dummy Mono comes flying down the chain.
.flatMap(value -> {
if (value.startsWith("good")) {
System.out.println("good");
return WebClient.create("http://second-service.com/get" + value.substring(4))
.get()
.exchangeToMono(clientResponse -> clientResponse.bodyToMono(String.class));
} else {
System.out.println("Boo");
return Mono.just("some value");
}
}).flatMap(value2 -> {
// what now?
})
When an exception is thrown in an operator, it will be propagated through the stream as an onError event, just like when we return a Mono#error.
Some exceptions, for instance an OutOfMemoryError, will not be treated this way; they are considered fatal events and will terminate the flow immediately.
Otherwise, most commonly, the exception is transferred through the chain, and "regular" operators will see that it is an error event, so they just skip it and pass it down the chain, either out to the calling client, or until one of the specialized error-handling operators sees it, or, as in your case, until it is snatched up by the exception handler you have defined.
The correct way in your case would be to return a Mono#error (so you are explicit in your return), saying that if this happens we return an error, and then either recover, drop the value, or do whatever else you want, or, as you have done, handle the exception using an exception handler.
Your first solution behaves more like returning empty; there is a switchIfEmpty operator for switching to another publisher (Mono) if the previous operator returned empty. Or you could use onErrorResume, which will return a fallback Publisher if a specific error comes along.
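As an illustration, here is a rough sketch of the explicit Mono.error variant combined with a local onErrorResume recovery (StepOneException is a hypothetical class introduced only for this example, not part of the original code):
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class StepController {

    static class StepOneException extends RuntimeException {
        StepOneException(String message) { super(message); }
    }

    @PostMapping(path = "/question")
    public Mono<ResponseEntity<String>> explicitError(String theImportantKey) {
        return WebClient.create("http://first-service.com/get" + theImportantKey).get()
                .exchangeToMono(r -> r.bodyToMono(String.class))
                .flatMap(stepOne -> {
                    if (stepOne.startsWith("good")) {
                        // only call step 2 when step 1 answered with something good
                        return WebClient.create("http://second-service.com/get" + stepOne.substring(4)).get()
                                .exchangeToMono(r -> r.bodyToMono(String.class));
                    }
                    // be explicit: signal an error event instead of passing a dummy value downstream
                    return Mono.error(new StepOneException("Step 1 did not answer with something good"));
                })
                .map(ResponseEntity::ok)
                // recover close to where the error happens instead of relying on a global @ExceptionHandler
                .onErrorResume(StepOneException.class,
                        e -> Mono.just(ResponseEntity.ok(e.getMessage())));
    }
}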
There are very, very many ways of handling errors in Reactor, and I suggest you read up on them and try them all out:
4.6.2. Handling Exceptions in Operators or Functions
I'm writing a Spring Boot based project where I have some synchronous (e.g. REST API calls) and asynchronous (JMS) pieces of code (the broker I use is a dockerized instance of ActiveMQ, in case there's some kind of trick/workaround).
One of the problems I'm currently struggling with: my application receives a REST API call (I'll call it "a sync call"), does some processing, and then sends a JMS message to a queue (async); that message is then handled and processed (let's say I have a heavy load to perform, which is why I want it to be async).
Everything works fine when running the application: async messages are enqueued and dequeued as expected.
When I'm writing tests (and I'm testing the whole service, which includes the sync and async calls in rapid succession), the test code is too fast and the message is still waiting to be dequeued (we are talking about milliseconds, but that's the problem).
Basically, as soon as I receive the response from the API call, the message is still in the queue, so if, for example, I make a query to check for its existence - ka-boom - the test fails because (obviously) it doesn't find the object (which is probably still being processed and created in the meantime).
Is there any way, or any pattern, I can use to make my test wait for that async message to be dequeued? I can attach code from my implementation if needed; it's a bachelor's degree thesis project.
One obvious solution I'm temporarily using is adding a hundred-millisecond sleep between the method call and the assert section (hoping everything is done and persisted by then), but honestly I dislike this solution because it seems so non-deterministic to me. Also, creating a latch shared between production code and test code doesn't sound really good to me.
Here's the code I use as an entry point to all the mess I explained before:
public TransferResponseDTO transfer(Long userId, TransferRequestDTO transferRequestDTO) {
    //Preconditions.checkArgument(transferRequestDTO.amount.compareTo(BigDecimal.ZERO) < 0);
    Preconditions.checkArgument(userHelper.existsById(userId));
    Preconditions.checkArgument(walletHelper.existsByUserIdAndSymbol(userId, transferRequestDTO.symbol));

    TransferMessage message = new TransferMessage();
    message.userId = userId;
    message.symbol = transferRequestDTO.symbol;
    message.destination = transferRequestDTO.destination;
    message.amount = transferRequestDTO.amount;
    messageService.send(message);

    TransferResponseDTO response = new TransferResponseDTO();
    response.status = PENDING;
    return response;
}
And here's the code that receives the message (although you wouldn't need it):
public void handle(TransferMessage transferMessage) {
    Wallet source = walletHelper.findByUserIdAndSymbol(transferMessage.userId, transferMessage.symbol);
    Wallet destination = walletHelper.findById(transferMessage.destination);
    try {
        walletHelper.withdraw(source, transferMessage.amount);
    } catch (InsufficientBalanceException ex) {
        String u = userHelper.findEmailByUserId(transferMessage.userId);
        EmailMessage email = new EmailMessage();
        email.subject = "Insufficient Balance in your account";
        email.to = u;
        email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been DECLINED due to insufficient balance.";
        messageService.send(email);
        return; // do not deposit or send the ACCEPTED mail when the withdrawal was declined
    }
    walletHelper.deposit(destination, transferMessage.amount);
    String u = userHelper.findEmailByUserId(transferMessage.userId);
    EmailMessage email = new EmailMessage();
    email.subject = "Transfer executed";
    email.to = u;
    email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been ACCEPTED.";
    messageService.send(email);
}
I'm sorry if the code seems a little sketchy or wrong; it's a primordial implementation.
I'm willing to write a utility to share with you all if that's the case, but, as you've probably noticed, I'm low on ideas right now.
I'm an ActiveMQ developer working mainly on ActiveMQ Artemis (the next-gen broker from ActiveMQ). We run into this kind of problem all the time in our test-suite given the asynchronous nature of the broker, and we developed a little utility class that automates & simplifies basic polling operations.
For example, starting a broker is asynchronous so it's common for our tests to include an assertion to ensure the broker is started before proceeding. Using old-school Java 6 syntax it would look something like this:
Wait.assertTrue(new Condition() {
    @Override
    public boolean isSatisfied() throws Exception {
        return server.isActive();
    }
});
Using a Java 8 lambda would look like this:
Wait.assertTrue(() -> server.isActive());
Or using a Java 8 method reference:
Wait.assertTrue(server::isActive);
The utility is quite flexible, as the Condition you use can test anything you want as long as it ultimately returns a boolean. Furthermore, it is deterministic, unlike Thread.sleep() (as you noted), and it keeps testing code separate from the "product" code.
In your case you can check to see if the "object" being created by your JMS process can be found. If it's not found then it can keep checking until either the object is found or the timeout elapses.
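Since that Wait class lives in the ActiveMQ Artemis test-suite rather than in a published API, here is a hand-rolled sketch in the same spirit that you could drop into your own test utilities (the usage line at the end references a hypothetical walletRepository just to illustrate the idea):
import java.util.function.BooleanSupplier;

public final class Wait {

    // poll the condition until it is true or the timeout elapses
    public static void assertTrue(BooleanSupplier condition) throws InterruptedException {
        assertTrue(condition, 5_000, 100); // default: 5 s timeout, poll every 100 ms
    }

    public static void assertTrue(BooleanSupplier condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return; // condition satisfied, the test can proceed deterministically
            }
            Thread.sleep(pollMillis);
        }
        throw new AssertionError("Condition not satisfied within " + timeoutMillis + " ms");
    }
}

// usage in a test, e.g. right after calling transfer(...):
// Wait.assertTrue(() -> walletRepository.findById(destinationId).isPresent());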
I'm pretty new to RxJava and have some questions on patterns etc.
I'm creating an observable using the code below:
public Observable<Volume> getVolumeObservable(Epic epic) {
    return Observable.create(event -> {
        try {
            listeners.add(streamingAPI.subscribeForChartCandles(epic.getName(), MINUTE, new HandyTableListenerAdapter() {
                @Override
                public void onUpdate(int i, String s, UpdateInfo updateInfo) {
                    if (updateInfo.getNewValue(CONS_END).equals(ONE)) {
                        event.onNext(new Volume(Integer.parseInt(updateInfo.getNewValue(LAST_TRADED_VOLUME))));
                    }
                }
            }));
        } catch (Exception e) {
            LOG.error("Error from volume observable", e);
        }
    });
}
Everything is working as expected, but I have some questions on error handling.
If I understand correctly, this is to be viewed as a "hot observable", i.e. events will happen regardless of whether there is a subscription or not (onUpdate is a callback invoked by a remote server over which I have no control).
I've chosen not to call onError here since I don't want the observable to stop emitting events in case of a single exception. Is there a better pattern to be used? .retry() comes to mind, but I'm not sure that it makes sense for a hot observable?
Also, how is the observable represented once the subscription is created but before the first onNext is called? Is it just an Observable.empty()?
1) Your observable is not hot. The distinguishing factor is whether multiple subscribers share the same subscription. Observable.create() invokes the subscribe function for every subscriber, i.e. it is cold.
It is easy to make it hot, though: just add the share() operator. It will subscribe with the first subscriber and unsubscribe with the last one. Do not forget to implement unsubscribe functionality with something like this:
event.setCancellable(() -> listeners.remove(...));
2) Errors can be recoverable or unrecoverable.
If you consider an error to be self-recoverable (no action required from your side), you should not call onError, as that would kill your observable (no further events would be emitted). You can instead notify your subscribers by emitting a special Volume message with error details attached.
If an error is fatal, e.g. you failed to add the listener so there can be no further messages, you should not silently ignore it. Emit onError, as your observable is not functional anyway.
If an error requires action from you, typically a retry or a retry with a timeout, you can add one of the retryXxx() operators. Do this after create() but before share().
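Putting the three points together, a rough sketch of what the suggested shape could look like (the handle type returned by subscribeForChartCandles is not shown in the question, so var is used as a placeholder, and retry(3) is just an example policy):
public Observable<Volume> getVolumeObservable(Epic epic) {
    return Observable.<Volume>create(emitter -> {
        try {
            var subscription = streamingAPI.subscribeForChartCandles(epic.getName(), MINUTE,
                    new HandyTableListenerAdapter() {
                        @Override
                        public void onUpdate(int i, String s, UpdateInfo updateInfo) {
                            if (updateInfo.getNewValue(CONS_END).equals(ONE)) {
                                emitter.onNext(new Volume(Integer.parseInt(updateInfo.getNewValue(LAST_TRADED_VOLUME))));
                            }
                        }
                    });
            listeners.add(subscription);
            // clean up the streaming subscription when the last subscriber is gone
            emitter.setCancellable(() -> listeners.remove(subscription));
        } catch (Exception e) {
            // fatal: the listener was never registered, so no further events can arrive
            emitter.onError(e);
        }
    })
    .retry(3)   // or a retryWhen(...) with a delay; applied after create() but before share()
    .share();   // hot: a single upstream subscription shared by all subscribers
}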
3) An Observable is an object with a subscribe() method. How exactly it is represented depends on the method you created it with. See the source code of create() for an example.