I am using:
spring-boot 2.2.10
spring-cloud-gcp-pubsub 1.2.5
google-cloud-pubsub 1.108.0
google-cloud-core 1.93.7
gax 1.57.1
grpc-core 1.30.2
I am consuming messages of different sizes from a GCP subscription. When a "big" message is sent to my client library:
1) It never reaches my listener code (I added a dumb logger to check).
2) I can see "Received data on closed stream" in the logs.
3) The message is never acked, never dequeued, never sent to the DLQ.
4) The message is sent to my service over and over (the sent count metric keeps growing).
I know the gRPC max size problem was solved a while ago, as was gRPC keepAlive ... so I am out of leads to investigate.
Related
I have a Spring Boot application (v2.2.10.RELEASE) that subscribes to multiple Pub/Sub topics, asynchronously pulls the data, and sends it somewhere else. I am not using Spring Cloud GCP, just the native Google libraries.
This is my subscriber setup:
// Instantiate an asynchronous message receiver.
MessageReceiver receiver =
    (PubsubMessage message, AckReplyConsumer consumer) -> {
        messages.add(message);
        consumer.ack();
    };

Subscriber subscriber = Subscriber.newBuilder(subscriptionName, receiver)
    .setParallelPullCount(2)
    .setFlowControlSettings(flowControlSettings)
    .setCredentialsProvider(credentialsProvider)
    .setExecutorProvider(executorProvider)
    //.setChannelProvider()
    .build();
Under high traffic with big messages (2-4 KB) I encounter this INFO message:
[grpc-default-worker-ELG-1-1] INFO i.grpc.internal.AbstractClientStream - Received data on closed stream
First of all, I don't fully understand what that means. All I noticed is that when this happens, the number of duplicate deliveries increases. So I assumed it means that Pub/Sub tried to reach the subscriber with some messages, but the subscriber for some reason was not ready, so Pub/Sub tries to deliver those messages again, hence more duplicates. Is that right?
Would this problem be solved by using the TransportChannelProvider in the subscribers? My understanding of the (poorly written) documentation is that it creates a new channel for delivery when the currently used channel is closed, which should get rid of the previous log message.
If yes, how do I define the channel target string? And where can I find a NameResolver-compliant URI for the ManagedChannel? The snippet I mean is this:
private TransportChannelProvider getChannelProvider() {
    ManagedChannel channel = ManagedChannelBuilder.forTarget(target).usePlaintext().build();
    return FixedTransportChannelProvider.create(GrpcTransportChannel.create(channel));
}
I am pretty new to GCP, so sorry if my question is not coherent enough.
Using a custom TransportChannelProvider won't solve this type of issue. This is more likely an issue deeper down in the stack, e.g., at the gRPC level. There have been some open issues for this type of error [1, 2].
With regard to why this causes duplicates: it is possible that the messages delivered on the already closed stream (which aligns with the error message) had been trapped in a lower-level buffer at the gRPC layer, and therefore ended up being duplicates of messages that were subsequently delivered and processed via another stream. This could be a variant of the issue discussed in the documentation around large backlogs of small messages. There was a fix for this issue in v1.109.0 of the Java client library, so if you are using an older version, it is worth updating.
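If updating the library does not make the duplicates go away, tightening the subscriber's flow control limits can at least reduce how many messages sit buffered on the client while a stream is being torn down. A minimal sketch follows; the limit values are illustrative assumptions, not recommendations:

import com.google.api.gax.batching.FlowControlSettings;
import com.google.cloud.pubsub.v1.Subscriber;

// Illustrative limits only; tune them to your message sizes and throughput.
FlowControlSettings flowControlSettings =
    FlowControlSettings.newBuilder()
        .setMaxOutstandingElementCount(1_000L)               // max messages buffered client-side
        .setMaxOutstandingRequestBytes(50L * 1024L * 1024L)  // max bytes buffered client-side (50 MB)
        .build();

Subscriber subscriber =
    Subscriber.newBuilder(subscriptionName, receiver)        // subscriptionName/receiver as in the question
        .setFlowControlSettings(flowControlSettings)
        .build();
subscriber.startAsync().awaitRunning();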
If duplicates continue to be an issue, it would be best to reach out to support with the name of your subscription and the message IDs of some of the duplicate messages so that they can look at the delivery patterns for those messages and further diagnose if these redeliveries are unexpected.
I'm new to MQTT. There is a simple range of numbers that I want to print. I have created two files, where the first file sends data to the second one. The scripts look like this:
sender.py
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("192.168.1.169", 1883, 60)

for i in range(1, 100):
    client.publish("TestTopic", i)
    print(i)

client.disconnect()
receiver.py:
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    # subscribe to the same topic the sender publishes to
    client.subscribe("TestTopic")

def on_message(client, userdata, msg):
    # print(msg.topic + " " + str(msg.payload))
    print("message received ", str(msg.payload.decode("utf-8")))
    print("message topic=", msg.topic)
    print("message qos=", msg.qos)
    print("message retain flag=", msg.retain)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.1.169", 1883, 60)
client.loop_forever()
I'm able to print the data if the receiver file is already running, but I have a problem printing it if I start the sender file first and then start the receiver file. My main question is: does MQTT follow a queueing mechanism or not? If yes, then when I run the sender file all its data should be queued, and when I later run the receiver file it should get printed. But it is not working that way. I have been through a lot of documentation but couldn't find any relevant info. Recently I came across clean_session; if someone knows about this, please tell me. If you have any questions about my code or anything else, please let me know.
thanks
MQTT is a pub/sub protocol, not a message queuing system.
This means under normal circumstances if there is no subscriber running when a message is published then it will not be delivered.
It is possible to get the broker to queue messages for a specific subscriber, but this requires the subscriber to have been connected and subscribed with a QoS greater than 0 before the message is published. Then, as long as it reconnects after the publish with the clean session flag set to false and the same client id, the broker will deliver the missed messages.
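As a hedged illustration of that persistent-session setup, here is a minimal sketch using the Eclipse Paho Java client (the broker address, client id and topic are placeholders; the same idea applies to paho-mqtt in Python via clean_session=False and a QoS above 0). Run the subscriber once first so the subscription exists, and make sure the sender also publishes at QoS > 0:

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class PersistentReceiver {
    public static void main(String[] args) throws MqttException {
        // A stable client id plus cleanSession=false is what makes the broker keep the
        // session (and queue QoS>0 messages) while this subscriber is disconnected.
        MqttClient client = new MqttClient("tcp://192.168.1.169:1883", "receiver-1", new MemoryPersistence());

        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setCleanSession(false);

        client.setCallback(new MqttCallback() {
            public void connectionLost(Throwable cause) { }
            public void messageArrived(String topic, MqttMessage message) {
                System.out.println("message received " + new String(message.getPayload()));
            }
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });

        client.connect(opts);
        client.subscribe("TestTopic", 1); // the subscription itself must use QoS > 0
        // The sender must also publish at QoS > 0, e.g. client.publish("TestTopic", payload, 1, false),
        // otherwise the broker has nothing to queue for the offline session.
    }
}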
Retained messages are something different. If a message is published with the retained flag set to true then the broker will deliver this single message to every subscriber when they subscribe to the matching topic. There can only ever be 1 retained message for a given topic.
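And a retained message is just an ordinary publish with the retained flag set, for example with the same Paho client as above (topic and payload are placeholders):

// qos=1, retained=true: the broker keeps only this latest payload for the topic and
// hands it to every future subscriber of "TestTopic" as soon as it subscribes.
client.publish("TestTopic", "42".getBytes(), 1, true);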
I've got the following problem. I'm using the RabbitTemplate class from spring-rabbit-2.0.5.RELEASE and send messages to different exchanges with it. By default everything works fine. But when one of the exchanges is deleted and there are a lot of messages to process, there is a problem with sending messages to the existing exchange: no error is thrown, the messages are just silently dropped.
The code can be simplified to this. In the given scenario, after deleting exchange EX2, only part of the messages would be sent to EX1. A simple fix would be to add a Thread.sleep(50) after each send, but that is obviously unacceptable.
RabbitTemplate rabbitTemplate = new RabbitTemplate();
for (int i = 0; i < 1000; i++) {
    rabbitTemplate.send("EX1", "RK1", someMessage);
    rabbitTemplate.send("EX2", "RK2", someMessage);
}
After doing some investigation I came to the following conclusions:
1) I'm reusing an existing channel, which is expected.
2) After sending a message to a non-existing exchange, the channel is closed; unfortunately it seems it is closed by RabbitMQ itself, and the shutdown message is sent to the driver asynchronously.
3) After getting the message about the closed channel, the driver recreates it, but messages sent in the meantime are lost.
One possible solution would be to have a different channel for each exchange (it would work in my case, as I'm sending messages to only a few exchanges, fewer than 10).
But in general it seems that this is just the expected behaviour of RabbitTemplate (when you are not using confirms).
I think you need to study what Publisher Confirms and Returns are: https://docs.spring.io/spring-amqp/docs/2.1.3.RELEASE/reference/html/_reference.html#cf-pub-conf-ret
Also follow the link about Scoped Operations.
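To make that concrete, here is a minimal sketch of correlated confirms and returns with the spring-rabbit 2.0.x API (the connection details and callback bodies are placeholders, not a drop-in fix):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

CachingConnectionFactory cf = new CachingConnectionFactory("localhost"); // placeholder host
cf.setPublisherConfirms(true);  // enable publisher confirms (2.0.x API)
cf.setPublisherReturns(true);   // enable returns for unroutable messages

RabbitTemplate template = new RabbitTemplate(cf);
template.setMandatory(true);    // required so unroutable messages come back via the return callback

template.setConfirmCallback((correlation, ack, cause) -> {
    if (!ack) {
        // the broker refused the publish, e.g. because the exchange no longer exists
    }
});
template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
    // the message reached the broker but could not be routed to any queue
});

With something like this in place, sends to the deleted exchange should surface through the confirm callback instead of being silently dropped.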
I have a route reading from a queue with from("jms:xx"). On the route I call an external webservice. I also configure a CircuitBreaker in case the webservice is temporarily unavailable.
When the CircuitBreaker opens, all new incoming requests are blocked (status OPEN).
Something that puzzles me: in case of an error, the message needs to be put back on the queue, preferably with a delay (thanks to redelivery policies).
I assume the CircuitBreaker is not in touch with the message listener, so even when the CircuitBreaker is OPEN, it will still regularly receive all messages from the input queue.
To reduce this overhead I would like to:
move all incoming messages to the DLQ when a RejectedExecutionException is thrown. Once the connection problem is resolved (status becomes CLOSED again), all these messages can be moved back to the processing queue (selected via a header filter).
I understand I have a risk here: It will only retry when new messages arrive.
My question: is it possible to get informed when the status changes from HALF_OPEN to CLOSED (e.g. an onClosed callback)?
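One possible direction, assuming the route's circuit breaker is backed by Resilience4j (an assumption, not confirmed here): the underlying CircuitBreaker publishes state-transition events you can subscribe to, so reacting to HALF_OPEN to CLOSED could look roughly like this sketch (the breaker name and the reaction are placeholders):

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreaker.StateTransition;

CircuitBreaker breaker = CircuitBreaker.ofDefaults("webserviceCall"); // placeholder name

breaker.getEventPublisher().onStateTransition(event -> {
    if (event.getStateTransition() == StateTransition.HALF_OPEN_TO_CLOSED) {
        // e.g. kick off the job that moves the parked DLQ messages back to the processing queue
        System.out.println("Circuit closed again; replaying DLQ messages");
    }
});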
I'm using Jetty 9.3.5 and I would like to know the proper way to handle unreliable connections when sending WebSocket messages. Specifically, I noticed cases where a WebSocket connection does not close normally, so even though the client side is down, it takes a long time until onClose() is triggered on the server (for example, a user closes the laptop lid and puts it in standby; it can take 1-2 hours until the close event is received on the server side).
Thus, because the client is still registered, the server keeps sending messages that begin to build up. This becomes an issue when sending a large number of messages.
I've tested sending byte messages with:
Session.getRemote().sendBytes(ByteBuffer, WriteCallback)
Session.getRemote().sendBytesByFuture(ByteBuffer);
To simulate the connection going down on one side (i.e. the user puts the laptop in standby), on Linux I assigned an IP address to the eth0 interface, started sending the messages, and then brought the interface down:
ifconfig eth0 192.168.1.1
ifconfig eth0 up
--- start sending messages (simple incremented numbers) and connect using Chrome browser and print them ---
ifconfig eth0 down
This way, the messages were still being sent by Jetty, the Chrome client did not receive them, and neither onClose nor onError was triggered on the server side.
My questions regarding Jetty are:
Is there a way to clear queued messages that were not delivered?
I've tried, but with no luck:
Session.getRemote().flush();
Can a max number of queued messages be set?
I've tried:
WebSocketServletFactory.getPolicy().setMaxBinaryMessageBufferSize(1)
Can I detect if the client does not receive the message? (or if the connection is in abnormal state let's say)
I've tried:
session.getRemote().sendBytes(bb, new WriteCallback() {
    @Override
    public void writeSuccess() {
        // print success
    }

    @Override
    public void writeFailed(Throwable arg0) {
        // print fail
    }
});
But this prints success even though the messages are not received.
I also tried the following, but couldn't find a solution:
factory.getPolicy().setIdleTimeout(...);
factory.getPolicy().setAsyncWriteTimeout(3000);
sendPing()
Thanks in advance!
Unfortunately, the WebSocket protocol, being a message passing protocol, isn't really designed for this level of nuance between messages.
The first message MUST complete before you can even think of sending the next message. So if you have a message in process, then there is no way to safely cancel that message.
At best, an API could exist to truncate that message with a CONTINUATION / empty payload / fin=true.
But even then the remote endpoint wouldn't know that you canceled the message, it would just see a partial message.
Detecting connectivity issues is best handled with either OS-level events (like Android's Connectivity intents) or via periodic websocket PING (which inserts itself at the front of the line for outgoing websocket frames).
However, even with PING, if your outgoing websocket frame is in-progress, even the PING cannot be sent until that websocket frame is done sending.
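If it helps, here is a rough sketch of that periodic-PING approach against the Jetty 9 API (the interval and timeout values are arbitrary illustrations): send a PING on a schedule and rely on the idle timeout to close connections whose peer stops responding.

import java.nio.ByteBuffer;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.eclipse.jetty.websocket.api.Session;

// Illustrative values: ping every 30s, treat 60s of silence as a dead connection.
void keepAlive(Session session, ScheduledExecutorService pinger) {
    session.setIdleTimeout(60_000);
    pinger.scheduleAtFixedRate(() -> {
        try {
            if (session.isOpen()) {
                session.getRemote().sendPing(ByteBuffer.wrap(new byte[]{1}));
            }
        } catch (Exception e) {
            // the send failed; the connection is most likely already gone
        }
    }, 30, 30, TimeUnit.SECONDS);
}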
RemoteEndpoint.flush() will attempt to flush any pending messages (and frames), not clear out pending messages (or frames).
As for detecting if client got the message, you'll need to implement some sort of message ACK into your own layer to verify that, the protocol has no such concept. (Some libs/apis built on top of websocket have implemented message ACK in that layer. The cometd message ack extension comes to mind as a real world example)
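A rough sketch of what such an application-level ACK layer could look like on top of the Jetty API (the class name and the "ACK:<id>" convention are hypothetical, invented for illustration):

import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;
import org.eclipse.jetty.websocket.api.Session;

// Every outgoing payload gets an id; the client is expected to echo "ACK:<id>" once it
// has actually processed the message. Anything left in 'unacked' after a timeout was
// never confirmed and can be re-sent or dropped.
public class AckTracker {
    private final ConcurrentMap<Long, String> unacked = new ConcurrentHashMap<>();
    private final AtomicLong nextId = new AtomicLong();

    public void send(Session session, String payload) throws IOException {
        long id = nextId.incrementAndGet();
        unacked.put(id, payload);
        session.getRemote().sendString(id + ":" + payload);
    }

    public void onClientText(String text) {
        if (text.startsWith("ACK:")) {
            unacked.remove(Long.parseLong(text.substring(4)));
        }
    }
}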
What sort of situation are you attempting to solve for?
Perhaps using the RemoteEndpoint.sendPartialString(String, boolean) or RemoteEndpoint.sendPartialBytes(ByteBuffer, boolean) to send smaller frames of the whole message could be useful to you. However, the other side might not have an API that can read those partial frames (eg: Javascript in a browser).
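For example, splitting a large payload into partial frames with the Jetty 9 RemoteEndpoint might look like this sketch (the 4 KB chunk size is an arbitrary assumption):

import java.io.IOException;
import java.nio.ByteBuffer;
import org.eclipse.jetty.websocket.api.RemoteEndpoint;

// One logical message sent as several websocket frames; only the last frame has fin=true.
void sendInChunks(RemoteEndpoint remote, byte[] data) throws IOException {
    int chunk = 4 * 1024; // arbitrary chunk size
    for (int offset = 0; offset < data.length; offset += chunk) {
        int len = Math.min(chunk, data.length - offset);
        boolean isLast = offset + len >= data.length;
        remote.sendPartialBytes(ByteBuffer.wrap(data, offset, len), isLast);
    }
}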