Suppose I have a simple RSocket server built with Spring Boot. The server broadcasts every incoming client message to all connected clients (including the sender). Client and server look like this:
Server:
public RSocketController() {
    this.processor = DirectProcessor.<String>create().serialize();
    this.sink = this.processor.sink();
}

@MessageMapping("channel")
Flux<String> channel(final Flux<String> messages) {
    this.registerProducer(messages);
    // breakpoint here
    return processor
            .doOnSubscribe(subscription -> logger.info("sub"))
            .doOnNext(message -> logger.info("[Sent] " + message));
}

private Disposable registerProducer(Flux<String> flux) {
    return flux
            .doOnNext(message -> logger.info("[Received] " + message))
            .map(String::toUpperCase)
            // .delayElements(Duration.ofSeconds(1))
            .subscribe(this.sink::next);
}
Client:
#ShellMethod("Connect to the server")
public void connect(String name) {
this.name = name;
this.rsocketRequester = rsocketRequesterBuilder
.rsocketStrategies(rsocketStrategies)
.connectTcp("localhost", 7000)
.block();
}
#ShellMethod("Establish a channel")
public void channel() {
this.rsocketRequester
.route("channel")
.data(this.fluxProcessor.doOnNext(message -> logger.info("[Sent] {}", message)))
.retrieveFlux(String.class)
.subscribe(message -> logger.info("[Received] {}", message));
}
#ShellMethod("Send a lower case message")
public void send(String message) {
this.fluxSink.next(message.toLowerCase());
}
The problem is: the first message a client sends is processed by the server, but does not reach the sender again. All subsequent messages are delivered without any problems. All other clients already connected will receive all messages.
What I noticed so far while debugging:
When I call channel() in the client, retrieveFlux() and subscribe() are called, but the breakpoint in the corresponding server method is not triggered.
Only when the client sends its first message with send() is the breakpoint on the server triggered.
Using .delayElements() on the server seems to "solve" the problem.
What am I doing wrong here?
And why is the first send() needed to trigger the server's breakpoint?
Thanks in advance!
A DirectProcessor does not have a buffer. If it has no subscriber, the message is dropped.
(Citing its Javadoc: If there are no Subscribers, upstream items are dropped.)
I think that when RSocketController.registerProducer() calls flux.[...].subscribe(), it immediately starts processing the incoming messages from flux and passing them to the sink of the processor, but subscription to the processor has not happened yet. Thus the first messages are dropped.
I suspect that the subscription to the processor is done by the framework after RSocketController.channel(...) returns. You should be able to set a breakpoint in the processor.doOnSubscribe(..) callback to see where it actually happens.
So moving the registerProducer() call into a processor.doOnSubscribe() callback may solve your issue, like this:
@MessageMapping("channel")
Flux<String> channel(final Flux<String> messages) {
    return processor
            .doOnSubscribe(subscription -> this.registerProducer(messages))
            .doOnSubscribe(subscription -> logger.info("sub"))
            .doOnNext(message -> logger.info("[Sent] " + message));
}
Personally, though, I would prefer to replace the DirectProcessor with UnicastProcessor.create().onBackpressureBuffer().publish(). That moves broadcasting to multiple subscribers into a separate operation, puts a buffer between the sink and the subscribers, and lets late subscribers and backpressure be handled in a better way.
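For illustration, here is a minimal sketch of that variant. The field set-up mirrors the original constructor; autoConnect() is my assumption for how the ConnectableFlux would get connected, it is not part of the suggestion above:

private FluxSink<String> sink;
private Flux<String> broadcast;

public RSocketController() {
    UnicastProcessor<String> processor = UnicastProcessor.create();
    this.sink = processor.sink();
    this.broadcast = processor
            .onBackpressureBuffer() // buffer between the sink and the subscribers
            .publish()              // broadcasting becomes a separate operation
            .autoConnect();         // assumption: connect on the first subscriber
}

channel(...) would then return this.broadcast instead of the processor.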
During a contract test I run the main FLOW, which produces 2 events into different Kafka topics (TOPIC_1 and TOPIC_2). I have two different tests to check the sending of these events (TEST_1 for TOPIC_1 and TEST_2 for TOPIC_2). Both TEST_1 and TEST_2 run the same FLOW, so TEST_1 has the side effect of sending an event to TOPIC_2, and TEST_2 to TOPIC_1. Consider the example where I run TEST_1 and then TEST_2: during TEST_2 there will be 2 events in TOPIC_2, one produced by TEST_1 and the second produced by TEST_2. And of course TEST_2 will fail, because during verification it expects to receive only the message it produced itself, nothing else.
So that's why I need to skip all old messages in all topics before each test. How can this be done using org.springframework.cloud.contract.verifier.messaging.internal.ContractVerifierMessaging?
I found a solution in which I poll all pending messages from each endpoint inside the CamelContext:
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.PollingConsumer;

@Autowired
private org.apache.camel.CamelContext camelContext;

@org.junit.Before
public void cleanUpCamelEndpoints() {
    for (Endpoint endpoint : camelContext.getEndpoints()) {
        try {
            PollingConsumer pollingConsumer = endpoint.createPollingConsumer();
            Exchange exchangeToSkip;
            // drain whatever previous tests left behind on this endpoint
            while ((exchangeToSkip = pollingConsumer.receiveNoWait()) != null) {
                log.debug("Skipped side effect exchange: {}", exchangeToSkip);
            }
        } catch (Exception exception) {
            log.debug("Exception while receiving exchange to skip: {}", exception);
        }
    }
}
I'm sending messages to Kafka in a non-blocking (async) way using this:
ListenableFuture<SendResult<Integer, String>> future = template.send(record);
future.addCallback(new ListenableFutureCallback<SendResult<Integer, String>>() {

    @Override
    public void onSuccess(SendResult<Integer, String> result) {
        handleSuccess(data);
    }

    @Override
    public void onFailure(Throwable ex) {
        handleFailure(data, record, ex);
    }
});
This works perfectly when the send succeeds.
But when there is a connection problem (for example, the server is down), the call is no longer asynchronous: the method blocks until max.block.ms expires.
This is natural with the async Kafka producer. You have a few options:
Reduce max.block.ms, but don't reduce it too much; it bounds how long send() may block when the broker is unreachable (see the sketch below).
Wait for acks.
Create a callback function for onCompletion().
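For example, max.block.ms can be lowered when building the template, so that send() fails fast when the broker is unreachable instead of blocking for the default 60 seconds. A sketch; the broker address, serializers and the 5-second value are illustrative:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000); // fail after 5s instead of the 60s default
KafkaTemplate<Integer, String> template =
        new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));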
I want to be able to send a notification to the private channels of all users in my guild.
This is my code:
public static void main(String[] args) throws LoginException {
    final JDA bot =
            new JDABuilder(AccountType.BOT)
                    .setToken("secret")
                    .addEventListener(new DemoApplication())
                    .build();
}

@Override
public void onPrivateMessageReceived(final PrivateMessageReceivedEvent event) {
    if (event.getAuthor().isBot()) {
        return;
    }
    // open a private channel with every guild member...
    event.getJDA().getGuilds().get(0).getMembers()
            .forEach(member -> member.getUser().openPrivateChannel().queue());
    // ...then send to every private channel that is currently open
    event.getJDA().getPrivateChannels()
            .forEach(privateChannel -> privateChannel.sendMessage("ZDAROVA").queue());
}
But only the sender of this private message receives a message. What did I miss?
I use JDA version 3.8.3_462.
Your code makes use of async operations. An async task is one that is started on another thread and possibly happens at a later time.
Discord has rate-limits which have to be respected by the operating client. For this reason and the reason that HTTP requests take some time, the requests happen in the background. The method you're using which is called queue() simply puts the request on a queue that is drained by a worker thread.
openPrivateChannel() returns a RestAction<PrivateChannel>, which means the response is a PrivateChannel instance. This response can be consumed through the callback parameter of queue(Consumer<PrivateChannel> callback):
static void sendMessage(User user, String content) {
    user.openPrivateChannel().queue(channel -> { // this is a lambda expression
        // the channel is the successful response
        channel.sendMessage(content).queue();
    });
}

Applied to every member of the guild:

guild.getMembers().stream()
        .map(Member::getUser)
        .forEach(user -> sendMessage(user, "ZDAROVA"));
More information on RestAction is available in the JDA Wiki and Documentation.
I have this in my WebSocketHandler implementation:
@Override
public Mono<Void> handle(WebSocketSession session) {
    return session.send(
            session.receive()
                    .flatMap(webSocketMessage -> {
                        int id = Integer.parseInt(webSocketMessage.getPayloadAsText());
                        Flux<EfficiencyData> flux = service.subscribeToEfficiencyData(id);
                        var publisher = flux
                                .<String>handle((o, sink) -> {
                                    try {
                                        sink.next(objectMapper.writeValueAsString(o));
                                    } catch (JsonProcessingException e) {
                                        e.printStackTrace();
                                    }
                                })
                                .map(session::textMessage);
                        return publisher;
                    })
    );
}
The Flux<EfficiencyData> is currently generated for testing in the service as follows:
public Flux<EfficiencyData> subscribeToEfficiencyData(long weavingLoomId) {
    return Flux.interval(Duration.ofSeconds(1))
            .map(aLong -> {
                longAdder.increment();
                return new EfficiencyData(new MachineSpeed(
                        RotationSpeed.ofRpm(longAdder.intValue()),
                        RotationSpeed.ofRpm(0),
                        RotationSpeed.ofRpm(400)));
            }).publish().autoConnect();
}
I am using publish().autoConnect() to make it a hot stream. I created a unit test that starts 2 threads that do this on the returned Flux:
flux.log().handle((s, sink) -> {
    LOGGER.info("{}", s.getMachineSpeed().getCurrent());
}).subscribe();
In this case, I see both threads printing out the same value every second.
However, when I open 2 browser tabs, I don't see the same values on both web pages. The more websocket clients connect, the further apart the values are (so each value from the original Flux seems to be sent to a different client instead of to all of them).
Managed to fix this thanks to Brian Clozel on Twitter.
The issue is that for each connecting websocket client, I call the service.subscribeToEfficiencyData(id) method, which returns a new Flux every time it is called. So of course those independent Fluxes are not shared between the different websocket clients.
To fix the issue, I create the Flux instance in the constructor or a @PostConstruct method of my service, so that subscribeToEfficiencyData returns the same Flux instance every time.
Note that .publish().autoConnect() on the Flux remains important, because without it websocket clients will again see different values!
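For illustration, a sketch of the fixed service. Names follow the question; the @PostConstruct variant is shown, the id parameter is ignored exactly as in the original test feed, and the counter is driven by the interval tick instead of the LongAdder just to keep the sketch short:

private Flux<EfficiencyData> efficiencyData;

@PostConstruct
void init() {
    this.efficiencyData = Flux.interval(Duration.ofSeconds(1))
            .map(tick -> new EfficiencyData(new MachineSpeed(
                    RotationSpeed.ofRpm(tick.intValue()),
                    RotationSpeed.ofRpm(0),
                    RotationSpeed.ofRpm(400))))
            .publish()      // share a single upstream among all subscribers
            .autoConnect(); // subscribe to that upstream on the first client
}

public Flux<EfficiencyData> subscribeToEfficiencyData(long weavingLoomId) {
    return this.efficiencyData; // every websocket client now gets the same hot Flux
}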
I have a Jersey application in which I use the Spring AMQP library to publish messages to RabbitMQ exchanges. I am using a CachingConnectionFactory in my RabbitTemplate, and initially channel-transacted was set to false. I noticed that some messages were not actually published to the exchange, so I changed the channel-transacted value to true.
On doing this, my publishing function started taking 500ms (it was 5ms while channel-transacted was false). Is there something I am missing here? 500ms is way too much.
As an alternative, I tried setting publisherConfirms to true and added a ConfirmCallback. I haven't benchmarked this yet, but would like to know whether it will perform better than channel-transacted, given that the sole purpose of this application is to publish messages to an exchange in RabbitMQ.
Also, if I go with publisherConfirms, I would like to implement retries in case of failures, or at least be able to throw exceptions. With channel-transacted I get an exception in case of failure, but the latency is high. I am not sure how to implement retries with publisherConfirms.
I tried retries with publisher confirms but my code just hangs.
Here's my code:
CompleteMessageCorrelationData.java
public class CompleteMessageCorrelationData extends CorrelationData {

    private final Message message;
    private final int retryCount;

    public CompleteMessageCorrelationData(String id, Message message, int retryCount) {
        super(id);
        this.message = message;
        this.retryCount = retryCount;
    }

    public Message getMessage() {
        return this.message;
    }

    public int getRetryCount() {
        return this.retryCount;
    }

    @Override
    public String toString() {
        return "CompleteMessageCorrelationData [id=" + getId() + ", message=" + this.message + "]";
    }
}
Setting up the CachingConnectionFactory:
private static CachingConnectionFactory factory = new CachingConnectionFactory("host");
static {
    factory.setUsername("rmq-user");
    factory.setPassword("rmq-password");
    factory.setChannelCacheSize(50);
    factory.setPublisherConfirms(true);
}

private final RabbitTemplate rabbitTemplate = new RabbitTemplate(factory);

rabbitTemplate.setConfirmCallback((correlation, ack, reason) -> {
    if (correlation != null && !ack) {
        CompleteMessageCorrelationData data = (CompleteMessageCorrelationData) correlation;
        log.info("Received nack for message: " + data.getMessage() + " for reason: " + reason);
        int counter = data.getRetryCount();
        if (counter < Integer.parseInt(max_retries)) {
            this.rabbitTemplate.convertAndSend(data.getMessage().getMessageProperties().getReceivedExchange(),
                    data.getMessage().getMessageProperties().getReceivedRoutingKey(),
                    data.getMessage(),
                    // counter + 1, not counter++, so the retry count actually increases
                    new CompleteMessageCorrelationData(id, data.getMessage(), counter + 1));
        } else {
            log.error("Max retries exceeded for message: " + data.getMessage());
        }
    }
});
Publishing the message:
rabbitTemplate.convertAndSend(exchangeName, routingKey, message, new CompleteMessageCorrelationData(id, message, 0));
So, in short:
Am I doing something wrong with channel-transacted that makes the latency so high?
If I were to implement publisher confirms with retries instead, what's wrong with my approach, and will it perform better than channel-transacted, considering that this application's only job is to publish messages to RabbitMQ?
As you have found, transactions are expensive and significantly degrade performance; 500ms seems high, though.
I don't believe publisher confirms will help much. You still have to wait for the round trip to the broker before releasing the servlet thread. Publisher confirms are useful when you send a bunch of messages and then wait for all the confirms to come back; when you send one message and wait for its confirm, it likely won't be much faster than using a transaction.
You could try it, though, but the code is a bit complex, especially if you want to handle exceptions, which you get for "free" with transactions.
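If you still want to try per-message confirms, note that since Spring AMQP 2.1 each CorrelationData exposes a future that completes with the ack or nack, so a single publish can be verified in-line without the callback dance. A sketch; the timeout is illustrative, and publisher confirms must be enabled on the connection factory as in the question:

import java.util.concurrent.TimeUnit;
import org.springframework.amqp.rabbit.connection.CorrelationData;

CorrelationData correlation = new CorrelationData("some-id");
rabbitTemplate.convertAndSend(exchangeName, routingKey, message, correlation);
// block (with a timeout) until the broker confirms this publish
CorrelationData.Confirm confirm = correlation.getFuture().get(10, TimeUnit.SECONDS);
if (!confirm.isAck()) {
    // nacked by the broker: log, retry, or throw here
}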