Netty + ProtoBuffer: A few communication messages for one connection

While reading the Netty tutorial, I found a simple description of how to integrate Netty with Google Protocol Buffers. I started investigating its example (because there is no more information in the documentation) and wrote a simple application modeled on the example local time application. But this example uses static initialization in the pipeline factory class, e.g.:
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.handler.codec.protobuf.ProtobufDecoder;
import org.jboss.netty.handler.codec.protobuf.ProtobufEncoder;
import org.jboss.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import org.jboss.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

import static org.jboss.netty.channel.Channels.pipeline;

/**
 * @author sergiizagriichuk
 */
class ProtoCommunicationClientPipeFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline p = pipeline();
        p.addLast("frameDecoder", new ProtobufVarint32FrameDecoder());
        p.addLast("protobufDecoder", new ProtobufDecoder(Communication.DataMessage.getDefaultInstance()));
        p.addLast("frameEncoder", new ProtobufVarint32LengthFieldPrepender());
        p.addLast("protobufEncoder", new ProtobufEncoder());
        p.addLast("handler", new ProtoCommunicationClientHandler());
        return p;
    }
}
(Please take a look at the line p.addLast("protobufDecoder", new ProtobufDecoder(Communication.DataMessage.getDefaultInstance()));)
As I understand it, only one pipeline factory can be set for the ClientBootstrap class, via the bootstrap.setPipelineFactory() method. So in this situation I can use ONE message type to send to the server and ONE message type to receive from the server, and that is bad for me, and I think not just for me :( How can I use different messages to and from the server over just one connection?
Perhaps I can add a few ProtobufDecoders like this:
p.addLast("protobufDecoder", new ProtobufDecoder(Communication.DataMessage.getDefaultInstance()));
p.addLast("protobufDecoder", new ProtobufDecoder(Communication.TestMessage.getDefaultInstance()));
p.addLast("protobufDecoder", new ProtobufDecoder(Communication.SrcMessage.getDefaultInstance()));
or is there some other technique?
Thanks a lot.

I've found a thread by the author of Netty on Google Groups and understood that I have to either change my architecture or write my own decoder, as I wrote above. So now I'm starting to think about which way will be easier and better.

If you are going to write your own codecs anyway, you might want to look at implementing the Externalizable interface for custom data objects.
Serializable is low effort, but has the worst performance (it serializes everything).
Protobuf is a good trade-off between effort and performance (it requires .proto maintenance).
Externalizable is high effort, but gives the best performance (custom minimal codecs).
If you already know your project will have to scale like a mountain goat, you may have to go the hard road. Protobuf is not a silver bullet.
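For illustration, a minimal sketch of the Externalizable approach; the class name and fields here are hypothetical, not from the original post:

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

// Hypothetical data object: only the fields we choose are written, in a fixed order.
public class DataMessage implements Externalizable {

    private long id;
    private String payload;

    // A public no-arg constructor is required for deserialization.
    public DataMessage() {
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        // Custom minimal encoding: no per-field reflection metadata is written.
        out.writeLong(id);
        out.writeUTF(payload);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        // Fields must be read in exactly the order they were written.
        id = in.readLong();
        payload = in.readUTF();
    }
}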

Theoretically this can be done by modifying the pipeline for each incoming message to suit that message. Take a look at the port unification example in Netty.
The sequence would be (a sketch follows below):
1) In the frame decoder, or in a separate "DecoderMappingDecoder", check the message type of the incoming message.
2) Modify the pipeline dynamically, as shown in the example.
But why not use different connections and follow this sequence instead:
1) Add the other decoders to the pipeline, based on the incoming message, only once.
2) Add the same instance of a channel upstream handler as the last handler in the pipeline; this way all messages get routed to the same instance, which is almost like having a single connection.
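Here is a rough Netty 3.x sketch of the first approach, modeled on the port unification example. It is only a sketch under assumptions: the one-byte type header is invented for illustration, the Communication message types are carried over from the question, and frame-length handling (e.g. the varint frame decoder) is assumed to sit earlier in the pipeline:

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.handler.codec.frame.FrameDecoder;
import org.jboss.netty.handler.codec.protobuf.ProtobufDecoder;

public class DecoderMappingDecoder extends FrameDecoder {

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) throws Exception {
        if (buffer.readableBytes() < 1) {
            return null; // not enough data yet
        }
        // Consume the (assumed) one-byte type header.
        int type = buffer.readUnsignedByte();
        ChannelPipeline p = ctx.getPipeline();
        // Install the decoder that matches the announced message type...
        switch (type) {
            case 1:
                p.addAfter(ctx.getName(), "protobufDecoder",
                        new ProtobufDecoder(Communication.DataMessage.getDefaultInstance()));
                break;
            case 2:
                p.addAfter(ctx.getName(), "protobufDecoder",
                        new ProtobufDecoder(Communication.TestMessage.getDefaultInstance()));
                break;
            default:
                throw new IllegalStateException("unknown message type: " + type);
        }
        // ...then remove ourselves: like the port unification example, this switches
        // the pipeline once, based on the first message seen on the connection.
        p.remove(this);
        // Forward the remaining bytes to the newly installed decoder.
        return buffer.readBytes(buffer.readableBytes());
    }
}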

The problem is that there is no way to distinguish two different protobuf messages from each other in binary format. But there is a way to solve it within the .proto file:
message AnyMessage {
    message DataMessage { [...] }
    optional DataMessage dataMessage = 1;
    message TestMessage { [...] }
    optional TestMessage testMessage = 2;
    message SrcMessage { [...] }
    optional SrcMessage srcMessage = 3;
}
Optional fields that are not set produce no overhead. Additionally, you can add an enum, but it is just a bonus.
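With such a wrapper, the business handler can dispatch on whichever optional field is set. A minimal Netty 3.x sketch, assuming the generated classes are nested in the Communication wrapper class as in the question, and that the pipeline's ProtobufDecoder is configured with Communication.AnyMessage.getDefaultInstance():

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class AnyMessageHandler extends SimpleChannelUpstreamHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        Communication.AnyMessage msg = (Communication.AnyMessage) e.getMessage();
        // Exactly one of the optional sub-messages is expected to be set.
        if (msg.hasDataMessage()) {
            handleData(msg.getDataMessage());
        } else if (msg.hasTestMessage()) {
            handleTest(msg.getTestMessage());
        } else if (msg.hasSrcMessage()) {
            handleSrc(msg.getSrcMessage());
        }
    }

    // Application callbacks; bodies omitted.
    private void handleData(Communication.AnyMessage.DataMessage m) { }
    private void handleTest(Communication.AnyMessage.TestMessage m) { }
    private void handleSrc(Communication.AnyMessage.SrcMessage m) { }
}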

The issue is not really a Netty limitation or an encoder/decoder limitation. The problem is that Google Protocol Buffers offer just a way to serialize/deserialize objects, but do not provide a protocol. They have some kind of RPC implementation as part of the standard distribution, but if you try to implement their RPC protocol you'll end up with 3 layers of indirection.
What I did in one project was to define a message that is basically a union of messages. This message contains one field that is the Type and another field that is the actual message. You'll still end up with 2 indirection layers, but not 3. This way the example from Netty will work for you, but, as was mentioned in a previous post, you have to put more logic into the business logic handler.

You can use message tunneling to send various types of messages as payload in a single message.
Hope that helps.

After long research and suffering...
I came up with the idea of composing messages into one wrapper message. Inside that message, I use the oneof keyword to limit the number of allowed objects to only one. Check out the example:
message OneMessage {
    MessageType messageType = 1;
    oneof messageBody {
        Event event = 2;
        Request request = 3;
        Response response = 4;
    }
    string messageCode = 5; // unique message code
    int64 timestamp = 6;    // server time
}
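On the receiving side, the Java code generated for a oneof includes a case enum, so a handler can switch on it. A brief sketch, assuming the generated class is named OneMessage and onEvent/onRequest/onResponse are hypothetical application callbacks:

// getMessageBodyCase() is generated for "oneof messageBody".
void dispatch(OneMessage msg) {
    switch (msg.getMessageBodyCase()) {
        case EVENT:
            onEvent(msg.getEvent());
            break;
        case REQUEST:
            onRequest(msg.getRequest());
            break;
        case RESPONSE:
            onResponse(msg.getResponse());
            break;
        case MESSAGEBODY_NOT_SET:
            // No body was set; ignore or log.
            break;
    }
}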

Related

Is there any way to wait for a JMS message to be dequeued in a unit test?

I'm writing a Spring Boot based project where I have some synchronous (e.g. REST API calls) and asynchronous (JMS) pieces of code (the broker I use is a dockerized instance of ActiveMQ, in case there's some kind of trick/workaround).
One of the problems I'm currently struggling with: my application receives a REST API call (I'll call it "a sync call"), it does some processing and then sends a JMS message to a queue (async); the message is then handled and processed (let's say I have a heavy load to perform, which is why I want it to be async).
Everything works fine when running the application; async messages are enqueued and dequeued as expected.
When I'm writing tests (and I'm testing the whole service, which includes the sync and async calls in rapid succession), it happens that the test code is too fast and the message is still waiting to be dequeued (we are talking about milliseconds, but that's the problem).
Basically, as soon as I receive the response from the API call, the message is still in the queue, so if, for example, I make a query to check for its existence -> ka-boom, the test fails because (obviously) it doesn't find the object (which probably, meanwhile, is being processed and created).
Is there any way, or any pattern, I can use to make my test wait for that async message to be dequeued? I can attach code to my implementation if needed; it's a bachelor's degree thesis project.
One obvious solution I'm temporarily using is adding a hundred-millisecond sleep between the method call and the assert section (hoping everything is done and persisted), but honestly I kinda dislike this solution because it seems so non-deterministic to me. Also, creating a latch shared between development code and testing doesn't sound really good to me.
Here's the code I use as an entry point to all the mess I explained before:
public TransferResponseDTO transfer(Long userId, TransferRequestDTO transferRequestDTO) {
    //Preconditions.checkArgument(transferRequestDTO.amount.compareTo(BigDecimal.ZERO) < 0);
    Preconditions.checkArgument(userHelper.existsById(userId));
    Preconditions.checkArgument(walletHelper.existsByUserIdAndSymbol(userId, transferRequestDTO.symbol));
    TransferMessage message = new TransferMessage();
    message.userId = userId;
    message.symbol = transferRequestDTO.symbol;
    message.destination = transferRequestDTO.destination;
    message.amount = transferRequestDTO.amount;
    messageService.send(message);
    TransferResponseDTO response = new TransferResponseDTO();
    response.status = PENDING;
    return response;
}
And here's the code that receives the message (although you wouldn't need it):
public void handle(TransferMessage transferMessage) {
    Wallet source = walletHelper.findByUserIdAndSymbol(transferMessage.userId, transferMessage.symbol);
    Wallet destination = walletHelper.findById(transferMessage.destination);
    try {
        walletHelper.withdraw(source, transferMessage.amount);
    } catch (InsufficientBalanceException ex) {
        String u = userHelper.findEmailByUserId(transferMessage.userId);
        EmailMessage email = new EmailMessage();
        email.subject = "Insufficient Balance in your account";
        email.to = u;
        email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been DECLINED due to insufficient balance.";
        messageService.send(email);
        return; // without this, a declined transfer would still be deposited below
    }
    walletHelper.deposit(destination, transferMessage.amount);
    String u = userHelper.findEmailByUserId(transferMessage.userId);
    EmailMessage email = new EmailMessage();
    email.subject = "Transfer executed";
    email.to = u;
    email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been ACCEPTED.";
    messageService.send(email);
}
I'm sorry if the code sounds "a lil sketchy or wrong"; it's an early implementation.
I'm willing to write a utility to share with you all if that's the case, but, as you've probably noticed, I'm low on ideas right now.
I'm an ActiveMQ developer working mainly on ActiveMQ Artemis (the next-gen broker from ActiveMQ). We run into this kind of problem all the time in our test-suite given the asynchronous nature of the broker, and we developed a little utility class that automates & simplifies basic polling operations.
For example, starting a broker is asynchronous so it's common for our tests to include an assertion to ensure the broker is started before proceeding. Using old-school Java 6 syntax it would look something like this:
Wait.assertTrue(new Condition() {
    @Override
    public boolean isSatisfied() throws Exception {
        return server.isActive();
    }
});
Using a Java 8 lambda would look like this:
Wait.assertTrue(() -> server.isActive());
Or using a Java 8 method reference:
Wait.assertTrue(server::isActive);
The utility is quite flexible as the Condition you use can test anything you want as long as it ultimately returns a boolean. Furthermore, it is deterministic unlike using Thread.sleep() (as you noted) and it keeps testing code separate from the "product" code.
In your case you can check to see if the "object" being created by your JMS process can be found. If it's not found then it can keep checking until either the object is found or the timeout elapses.
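For reference, a minimal sketch of such a polling utility. This is an illustrative reimplementation of the idea, not the actual ActiveMQ Artemis Wait class:

import java.util.concurrent.TimeUnit;

public final class Wait {

    // A condition that is re-evaluated until it holds or the timeout elapses.
    public interface Condition {
        boolean isSatisfied() throws Exception;
    }

    private static final long DEFAULT_TIMEOUT_MS = 30_000;
    private static final long SLEEP_MS = 100;

    public static void assertTrue(Condition condition) throws Exception {
        assertTrue(condition, DEFAULT_TIMEOUT_MS);
    }

    public static void assertTrue(Condition condition, long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.isSatisfied()) {
                return; // condition met, the test can proceed deterministically
            }
            TimeUnit.MILLISECONDS.sleep(SLEEP_MS);
        }
        throw new AssertionError("Condition not satisfied within " + timeoutMs + " ms");
    }

    private Wait() {
    }
}

In the poster's test this could be used as, e.g., Wait.assertTrue(() -> transferRepository.existsById(id)) (with a hypothetical repository lookup) instead of a fixed hundred-millisecond sleep.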

Http Websocket as Akka Stream Source

I'd like to listen on a websocket using akka streams. That is, I'd like to treat it as nothing but a Source.
However, all official examples treat the websocket connection as a Flow.
My current approach is using the websocketClientFlow in combination with a Source.maybe. This eventually results in the upstream failing due to a TcpIdleTimeoutException when there are no new messages being sent down the stream.
Therefore, my question is twofold:
Is there a way – which I obviously missed – to treat a websocket as just a Source?
If using the Flow is the only option, how does one handle the TcpIdleTimeoutException properly? The exception cannot be handled by providing a stream supervision strategy. Restarting the source using a RestartSource doesn't help either, because the source is not the problem.
Update
So I tried two different approaches, setting the idle timeout to 1 second for convenience
application.conf
akka.http.client.idle-timeout = 1s
Using keepAlive (as suggested by Stefano)
Source.<Message>maybe()
    .keepAlive(Duration.apply(1, "second"), () -> (Message) TextMessage.create("keepalive"))
    .viaMat(Http.get(system).webSocketClientFlow(WebSocketRequest.create(websocketUri)), Keep.right())
{ ... }
When doing this, the upstream still fails with a TcpIdleTimeoutException.
Using RestartFlow
However, I found out about this approach, using a RestartFlow:
final Flow<Message, Message, NotUsed> restartWebsocketFlow = RestartFlow.withBackoff(
    Duration.apply(3, TimeUnit.SECONDS),
    Duration.apply(30, TimeUnit.SECONDS),
    0.2,
    () -> createWebsocketFlow(system, websocketUri)
);

Source.<Message>maybe()
    .viaMat(restartWebsocketFlow, Keep.right()) // One can treat this part of the resulting graph as a `Source<Message, NotUsed>`
{ ... }

(...)

private Flow<Message, Message, CompletionStage<WebSocketUpgradeResponse>> createWebsocketFlow(final ActorSystem system, final String websocketUri) {
    return Http.get(system).webSocketClientFlow(WebSocketRequest.create(websocketUri));
}
This works in that I can treat the websocket as a Source (although artificially, as explained by Stefano) and keep the TCP connection alive by restarting the websocketClientFlow whenever an exception occurs.
This doesn't feel like the optimal solution, though.
No. WebSocket is a bidirectional channel, and Akka-HTTP therefore models it as a Flow. If in your specific case you care only about one side of the channel, it's up to you to form a Flow with a "muted" side, by using either Flow.fromSinkAndSource(Sink.ignore, mySource) or Flow.fromSinkAndSource(mySink, Source.maybe), depending on the case.
As per the documentation:
"Inactive WebSocket connections will be dropped according to the idle-timeout settings. In case you need to keep inactive connections alive, you can either tweak your idle-timeout or inject ‘keep-alive' messages regularly."
There is an ad-hoc combinator to inject keep-alive messages, see the example below and this Akka cookbook recipe. NB: this should happen on the client side.
src.keepAlive(1.second, () => TextMessage.Strict("ping"))
I hope I understand your question correctly. Are you looking for asSourceOf?
path("measurements") {
entity(asSourceOf[Measurement]) { measurements =>
// measurement has type Source[Measurement, NotUsed]
...
}
}

Zipkin, using existing libraries to handle tracing in microservices connected with Apache Kafka

I would like to implement tracing in my microservices architecture. I am using Apache Kafka as message broker, and I am not using the Spring Framework. Tracing is a new concept for me. At first I wanted to create my own implementation, but now I would like to use existing libraries. Brave looks like the one I will want to use. I would like to know if there are some guides, examples or docs on how to do this. The documentation on the GitHub page is minimal, and I find it hard to start using Brave. Or maybe there is a better library with proper documentation that is easier to use. I will also be looking at Apache HTrace because it looks promising. Some getting-started guides would be nice.
There are a bunch of ways to answer this, but I'll answer it from the "one-way" perspective. The short answer though, is I think you have to roll your own right now!
While Kafka can be used in many ways, it can be used as a transport for unidirectional, single-producer, single-consumer messages. This is similar to normal one-way RPC, where you have a request but no response.
In Zipkin, an RPC span is usually request-response. For example, you see timing of the client sending to the server, and also the way back to the client. One-way is where you leave out the other side. The span starts with a "cs" (client send) and ends with a "sr" (server received).
Mapping this to Kafka, you would mark client sent when you produce the message and server received when the consumer receives it.
The trick with Kafka is that there is no nice place to stuff the trace context. That's because, unlike a lot of messaging systems, there are no headers in a Kafka message. Without a trace context, you don't know which trace (or span, for that matter) you are completing!
The "hack" approach is to stuff trace identifiers as the message key. A less hacky way would be to coordinate a body wrapper which you can nest the trace context into.
Here's an example of the former:
https://gist.github.com/adriancole/76d94054b77e3be338bd75424ca8ba30
I met the same problem too. Here is my solution, a less hacky way as said above.
ServerSpan serverSpan = brave.serverSpanThreadBinder().getCurrentServerSpan();
TraceHeader traceHeader = convert(serverSpan);

// In the Kafka producer, use KafkaTemplate to send
String wrapMsg = "wrap traceHeader with originMsg";
kafkaTemplate.send(topic, wrapMsg).get(10, TimeUnit.SECONDS); // use synchronization

// Then in the Kafka consumer
ConsumerRecords<String, String> records = consumer.poll(5000);
for (ConsumerRecord<String, String> record : records) {
    String topic = record.topic();
    int partition = record.partition();
    long offset = record.offset();
    String val = record.value();
    // Parse val to JSON
    Object wrapObj = JSON.parseObject(val);
    TraceHeader traceHeader = wrapObj.getTraceHeader();
    // Then you can do something like this
    MyRequest myRequest = new MyRequest(traceHeader, "/esb/consumer", "POST");
    brave.serverRequestInterceptor().handle(new HttpServerRequestAdapter(new MyHttpServerRequest(myRequest), new DefaultSpanNameProvider()));
    // Then make some HTTP request with the brave-apache-http interceptors
    // http.post(url, content)
}
You must implement MyHttpServerRequest and MyRequest. It is easy; you just return what a span needs, such as the URI, headers, and method.
This is a rough and ugly code example, just to offer the idea.

Design query on defining new events on top of an existing set of events for a socket protocol

I wrote a Java server and client program using JBoss Netty. In order to send some data to the remote client and receive data back, I have defined events and handlers for each event. On the wire, each event is just a single-byte (opcode) header followed by the message bytes. Initially I had only supported TCP and had defined events like LOG_IN, LOG_OUT, DATA_IN, DATA_OUT etc. in my program.
For example:
public static final int LOG_IN = 0x08;
public static final int LOG_OUT = 0x0a;
Then I decided to support UDP as well, and ended up with events like LOGIN_UDP, LOGIN_TCP, DATA_OUT_TCP or DATA_OUT_UDP etc., so that based on the event generated, the correct event handler would get the event and write it to the appropriate socket and remote port.
As you can see, the first issue I am facing is that I have almost doubled the number of defined events and event handlers by adding UDP. Is there a better way to approach this scenario?
The second (minor) issue I am facing is that events like DATA_OUT make sense when you are writing from server to client, but when receiving the same event on the client side, "DATA_OUT" does not make as much sense, since it is actually incoming data for the client. For the moment, I have a decoder which translates DATA_OUT to DATA_IN. Is this the best approach?
You can use the factory pattern to create the connection based on the channel type, i.e. TCP or UDP; the other details then only have to be defined once. A sketch follows below.
Instead of calling it DATA_OUT you can call it SERVER_OUT, and in the same way SERVER_IN.
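A minimal sketch of that factory idea. The TcpConnection/UdpConnection wrappers and the SERVER_IN/SERVER_OUT opcode values are illustrative assumptions, not from the original answer:

// Transport-neutral event opcodes, named relative to the server, so the same
// constants read sensibly on both client and server ("SERVER_OUT" is always
// data leaving the server, regardless of which side is reading the byte).
final class Events {
    static final int LOG_IN     = 0x08;
    static final int LOG_OUT    = 0x0a;
    static final int SERVER_IN  = 0x10; // hypothetical opcode values
    static final int SERVER_OUT = 0x11;
}

interface Connection {
    void write(int opcode, byte[] payload);
}

// Stub transports; a real implementation would open a Netty TCP channel or UDP socket.
final class TcpConnection implements Connection {
    TcpConnection(String host, int port) { /* connect via TCP */ }
    public void write(int opcode, byte[] payload) { /* prepend opcode byte, write to channel */ }
}

final class UdpConnection implements Connection {
    UdpConnection(String host, int port) { /* bind a datagram channel */ }
    public void write(int opcode, byte[] payload) { /* prepend opcode byte, send datagram */ }
}

// The factory chooses the transport once; the event definitions and handlers
// above this point stay transport-agnostic, so adding UDP does not double them.
final class ConnectionFactory {
    enum Transport { TCP, UDP }

    static Connection create(Transport transport, String host, int port) {
        switch (transport) {
            case TCP: return new TcpConnection(host, port);
            case UDP: return new UdpConnection(host, port);
            default:  throw new IllegalArgumentException("unknown transport: " + transport);
        }
    }
}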

Send ProtocolBuffer Message.Builder to another machine via RMI

I have the following pseudocode:
public void sendPB(ObjectId userId, Message.Builder mb) {
    if (userId is logged in to server) {
        set mb.ackId to random chars
        lookup socket and send mb.build()
    } else {
        forward message to user's server via RMI
    }
}
The problem is Message.Builders do not implement Serializable, so you cannot send it directly via RMI.
Is there an easy way to do this?
I've tried building a partial PB from the builder and sending that over, but in order to reconstruct it you need to know the type or the Descriptor. Descriptor doesn't implement Serializable either.
Thanks
Any reason you can't call build(), get a Message, and send it across in whatever the correct format is (e.g., toString())? At the other end, you can inflate it back into a Message, and turn it back into a builder with toBuilder() if that's required.
You may also just convert the message to binary format and send that.
Maybe I'm misunderstanding -- the whole point of ProtocolBuffers is to get Messages into a wire representation, so there are a number of ways to do that (most of which are either Serializable or trivially wrapped to be.)
I got it working... I had to include a typeID field in the RMI message. Then I could take the typeID, resolve it to a Message.Builder, and mergeFrom the bytes of the partially built message.
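A minimal sketch of that approach, assuming a hand-maintained typeID-to-prototype registry; the envelope and registry class names are illustrative, not from the original answer:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Message;

// Serializable envelope carried over RMI: a type id plus the raw protobuf bytes.
class PbEnvelope implements Serializable {
    final int typeId;
    final byte[] bytes;

    PbEnvelope(int typeId, Message.Builder mb) {
        this.typeId = typeId;
        // buildPartial() tolerates unset required fields, so a half-built message survives the trip.
        this.bytes = mb.buildPartial().toByteArray();
    }
}

class PbRegistry {
    // Map each typeId to a default instance of the corresponding message type.
    private static final Map<Integer, Message> PROTOTYPES = new HashMap<>();
    static {
        // e.g. PROTOTYPES.put(1, Communication.DataMessage.getDefaultInstance());
    }

    static Message.Builder rebuild(PbEnvelope env) throws InvalidProtocolBufferException {
        Message prototype = PROTOTYPES.get(env.typeId);
        if (prototype == null) {
            throw new IllegalArgumentException("unknown typeId: " + env.typeId);
        }
        // Recreate the builder on the receiving side and merge the partial bytes into it.
        return prototype.newBuilderForType().mergeFrom(env.bytes);
    }
}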
