Netty UDP compatible decoders - java

Which decoders are safe to extend for use with a non-blocking datagram channel?
Essentially, I need to go from ByteBuf to String (I already have code that turns that string into an object), and this would need to be accomplished with a decoder. The encoder would do the reverse: from object to String and finally back to a ByteBuf.
I have tried extending ByteToMessageDecoder, but it seems that Netty never invokes the decode method. So I am not sure if this is mainly a problem with the datagram channel or a problem with my basic understanding of decoders...
Just in case, here is some of my code.
Initializer:
public class Initializer extends ChannelInitializer<NioDatagramChannel> {
private SimpleChannelInboundHandler<Packet> sipHandler;
public Initializer(SimpleChannelInboundHandler<Packet> handler) {
sipHandler = handler;
}
@Override
protected void initChannel(NioDatagramChannel chan) throws Exception {
ChannelPipeline pipe = chan.pipeline();
pipe.addLast("decoder", new SipDecoder());
pipe.addLast("handler", sipHandler);
pipe.addLast("encoder", new SipEncoder());
}
}
Beginning of my Decoder:
public class SipDecoder extends ByteToMessageDecoder {
private Packet sip;
@Override
protected void decode(ChannelHandlerContext context, ByteBuf byteBuf, List<Object> objects) throws Exception {
System.out.println("got hit...");
String data = new String(byteBuf.array());
sip = new Packet();
// [...]
}
}

To handle DatagramPackets you need to use a MessageToMessageDecoder, as ByteToMessageDecoder only works for ByteBuf messages.
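For reference, here is a minimal sketch of such a decoder (the class name and the UTF-8 charset are assumptions, not something the question specifies):
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.socket.DatagramPacket;
import io.netty.handler.codec.MessageToMessageDecoder;
import io.netty.util.CharsetUtil;
import java.util.List;

public class SipDatagramDecoder extends MessageToMessageDecoder<DatagramPacket> {
    @Override
    protected void decode(ChannelHandlerContext ctx, DatagramPacket packet, List<Object> out) throws Exception {
        // The UDP payload is carried in the packet's content() ByteBuf.
        String data = packet.content().toString(CharsetUtil.UTF_8);
        // Hand the String (or a Packet built from it) to the next handler in the pipeline.
        out.add(data);
    }
}
The outbound side can be handled analogously with a MessageToMessageEncoder that wraps the String back into a DatagramPacket together with the recipient's address.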

Related

When many Akka Actors send messages to one actor, how to cleanly handle inheritance of inner Command classes

In akka-typed, the convention is to create Behavior classes with static inner classes that represent the messages they receive. Here's a simple example:
public class HTTPCaller extends AbstractBehavior<HTTPCaller.Command> {
public interface Command {}
// this is the message the HTTPCaller receives
public static final class MakeRequest implements Command {
public final String query;
public final ActorRef<Response> replyTo;
public MakeRequest(String query, ActorRef<Response> replyTo) {
this.query = query;
this.replyTo = replyTo;
}
}
// this is the response message
public static final class Response implements Command {
public final String result;
public Response(String result) {
this.result = result;
}
}
public static Behavior<Command> create() {
return Behaviors.setup(HTTPCaller::new);
}
private HTTPCaller(ActorContext<Command> context) {
super(context);
}
@Override
public Receive<Command> createReceive() {
return newReceiveBuilder()
.onMessage(MakeRequest.class, this::onMakeRequest).build();
}
private Behavior<Command> onMakeRequest(MakeRequest message) {
String result = // make HTTP request here using message.query
message.replyTo.tell(new Response(result));
return Behaviors.same();
}
}
Let's say that 20 other actors send MakeRequest messages to the single HTTPCaller actor. Now, each of these other actors has inner classes that implement its own Command. Since MakeRequest is used by all 20 classes, it would have to be a subtype of each of those 20 actors' Command interfaces.
This is not ideal. I'm wondering what the Akka way of getting around this is.
There's no requirement that a message (e.g. a command) which an actor sends (except for messages to itself...) has to conform to that actor's incoming message type. The commands sent to the HTTPCaller actor only have to (and in this case only do) extend HTTPCaller.Command.
So imagine that we have something like
public class SomeOtherActor extends AbstractBehavior<SomeOtherActor.Command> {
public interface Command {}
// yada yada yada
ActorRef<HTTPCaller.Command> httpCallerActor = ...
httpCallerActor.tell(new HTTPCaller.MakeRequest("someQuery", getContext().getSystem().ignoreRef()));
}
In general, when defining messages which are sent in reply, those are not going to extend the message type of the sending actor. In HTTPCaller, for instance, Response probably shouldn't implement Command: it can be a standalone class (alternatively, if it is something that might be received by the HTTPCaller actor, it should be handled in the receive builder).
My code above does bring up one question: if Response is to be received by SomeOtherActor, how can it extend SomeOtherActor.Command?
The solution there is message adaptation: a function to convert a Response to a SomeOtherActor.Command. For example:
// in SomeOtherActor
// the simplest possible adaptation:
public static final class ResponseFromHTTPCaller implements Command {
public final String result;
public ResponseFromHTTPCaller(HTTPCaller.Response response) {
result = response.result;
}
}
// at some point before telling the httpCallerActor...
// apologies if the Java lambda syntax is messed up...
ActorRef<HTTPCaller.Response> httpCallerResponseRef =
getContext().messageAdapter(
HTTPCaller.Response.class,
(response) -> new ResponseFromHTTPCaller(response)
);
httpCallerActor.tell(new HTTPCaller.MakeRequest("someQuery", httpCallerResponseRef));
There is also the ask pattern, which is more useful for one-shot interactions between actors where there's a timeout.
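For completeness, a rough sketch of what the ask pattern could look like from inside SomeOtherActor (the three-second timeout is an arbitrary choice, and the names reuse those from above):
// Requires: java.time.Duration, java.util.concurrent.CompletionStage,
// and akka.actor.typed.javadsl.AskPattern.
CompletionStage<HTTPCaller.Response> futureResponse =
    AskPattern.ask(
        httpCallerActor,
        replyTo -> new HTTPCaller.MakeRequest("someQuery", replyTo),
        Duration.ofSeconds(3),
        getContext().getSystem().scheduler());

// Feed the eventual reply back to this actor as one of its own messages.
getContext().pipeToSelf(
    futureResponse,
    (response, failure) -> {
        // A real implementation would also map `failure` to its own Command.
        return new ResponseFromHTTPCaller(response);
    });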

How to use non-keyed state with Kafka Consumer in Flink?

I'm trying to implement (I'm just starting to work with Java and Flink) non-keyed state in a KafkaConsumer object, since at this stage no keyBy() is called. This object is the front end and the first module to handle messages from Kafka.
SourceOutput is a proto file representing the message.
I have the KafkaConsumer object:
public class KafkaSourceFunction extends ProcessFunction<byte[], SourceOutput> implements Serializable
{
@Override
public void processElement(byte[] bytes, ProcessFunction<byte[], SourceOutput>.Context
context, Collector<SourceOutput> collector) throws Exception
{
// Here, I want to call the sorting method
collector.collect(output);
}
}
I have an object (KafkaSourceSort) that does all the sorting; it should keep the unordered messages in a PriorityQueue in state and is also responsible for delivering a message through the collector once it arrives in the right order.
class SessionInfo
{
public PriorityQueue<SourceOutput> orderedMessages = null;
public void putMessage(SourceOutput Msg)
{
if(orderedMessages == null)
orderedMessages = new PriorityQueue<SourceOutput>(new SequenceComparator());
orderedMessages.add(Msg);
}
}
public class KafkaSourceState implements Serializable
{
public TreeMap<String, SessionInfo> Sessions = new TreeMap<>();
}
I read that I need to use non-keyed state (ListState), which should contain a map of sessions, where each session holds a PriorityQueue of all messages related to that session.
I found an example, so I implemented this:
public class KafkaSourceSort implements SinkFunction<KafkaSourceSort>,
CheckpointedFunction
{
private transient ListState<KafkaSourceState> checkpointedState;
private KafkaSourceState state;
@Override
public void snapshotState(FunctionSnapshotContext functionSnapshotContext) throws Exception
{
checkpointedState.clear();
checkpointedState.add(state);
}
@Override
public void initializeState(FunctionInitializationContext context) throws Exception
{
ListStateDescriptor<KafkaSourceState> descriptor =
new ListStateDescriptor<KafkaSourceState>(
"KafkaSourceState",
TypeInformation.of(new TypeHint<KafkaSourceState>() {}));
checkpointedState = context.getOperatorStateStore().getListState(descriptor);
if (context.isRestored())
{
state = (KafkaSourceState) checkpointedState.get();
}
}
@Override
public void invoke(KafkaSourceState value, SinkFunction.Context context) throws Exception
{
state = value;
// ...
}
}
I see that I need to implement an invoke() method, which presumably would be called from processElement(), but the signature of invoke() doesn't contain the collector, so I don't understand how to do this, or even whether what I've done so far is OK.
Any help would be appreciated.
Thanks.
A SinkFunction is a terminal node in the DAG that is your job graph. It doesn't have a Collector in its interface because it cannot emit anything downstream. It is expected to connect to an external service or data store and send data there.
If you share more about what you are trying to accomplish, perhaps we can offer more assistance. There may be an easier way to go about this.
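If the goal is simply to buffer and reorder messages in non-keyed (operator) state, one possibility, sketched below as an assumption about your intent rather than a definitive answer, is to keep the sorting inside the ProcessFunction itself and have that function implement CheckpointedFunction instead of adding a sink:
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

public class KafkaSourceFunction extends ProcessFunction<byte[], SourceOutput>
        implements CheckpointedFunction {

    private transient ListState<KafkaSourceState> checkpointedState;
    private KafkaSourceState state = new KafkaSourceState();

    @Override
    public void processElement(byte[] bytes, Context context, Collector<SourceOutput> collector) throws Exception {
        // Parse bytes into a SourceOutput, buffer it in `state`,
        // and emit whatever is now in order via collector.collect(...).
    }

    @Override
    public void snapshotState(FunctionSnapshotContext ctx) throws Exception {
        checkpointedState.clear();
        checkpointedState.add(state);
    }

    @Override
    public void initializeState(FunctionInitializationContext ctx) throws Exception {
        ListStateDescriptor<KafkaSourceState> descriptor = new ListStateDescriptor<>(
                "KafkaSourceState",
                TypeInformation.of(new TypeHint<KafkaSourceState>() {}));
        checkpointedState = ctx.getOperatorStateStore().getListState(descriptor);
        if (ctx.isRestored()) {
            // ListState.get() returns an Iterable; with a single parallel instance there is one entry.
            for (KafkaSourceState restored : checkpointedState.get()) {
                state = restored;
            }
        }
    }
}
Whether operator state is really the right tool here depends on the parallelism and ordering guarantees you need; with more than one parallel instance the restored list can contain several entries.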

Using protobuf with flink

I'm using Flink to read data from Kafka and convert it to protobuf. The problem I'm facing is that when I run the Java application I get the error below. If I modify the unknownFields variable name to something else it works, but it's hard to make this change in all the protobuf classes.
I also tried to deserialize directly when reading from Kafka, but I'm not sure what TypeInformation should be returned from the getProducedType() method.
public static class ProtoDeserializer implements DeserializationSchema {
@Override
public TypeInformation getProducedType() {
// TODO Auto-generated method stub
return PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO;
}
Appreciate all the help. Thanks.
java.lang.RuntimeException: The field protected com.google.protobuf.UnknownFieldSet com.google.protobuf.GeneratedMessage.unknownFields is already contained in the hierarchy of the class com.google.protobuf.GeneratedMessage.Please use unique field names through your classes hierarchy
at org.apache.flink.api.java.typeutils.TypeExtractor.getAllDeclaredFields(TypeExtractor.java:1594)
at org.apache.flink.api.java.typeutils.TypeExtractor.analyzePojo(TypeExtractor.java:1515)
at org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1412)
at org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1319)
at org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:609)
at org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:437)
at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:306)
at org.apache.flink.api.java.typeutils.TypeExtractor.getFlatMapReturnTypes(TypeExtractor.java:133)
at org.apache.flink.streaming.api.datastream.DataStream.flatMap(DataStream.java:529)
Code:
FlinkKafkaConsumer09<byte[]> kafkaConsumer = new FlinkKafkaConsumer09<>("testArr",new ByteDes(),p);
DataStream<byte[]> input = env.addSource(kafkaConsumer);
DataStream<PBAddress> protoData = input.map(new RichMapFunction<byte[], PBAddress>() {
@Override
public PBAddress map(byte[] value) throws Exception {
PBAddress addr = PBAddress.parseFrom(value);
return addr;
}
});
Maybe you should try the following:
env.getConfig().registerTypeWithKryoSerializer(PBAddress.class, ProtobufSerializer.class);
or
env.getConfig().registerTypeWithKryoSerializer(PBAddress.class, PBAddressSerializer.class);
public class PBAddressSerializer extends Serializer<Message> {
final private Map<Class,Method> hashMap = new HashMap<Class, Method>();
protected Method getParse(Class cls) throws NoSuchMethodException {
Method method = hashMap.get(cls);
if (method == null) {
method = cls.getMethod("parseFrom",new Class[]{byte[].class});
hashMap.put(cls,method);
}
return method;
}
@Override
public void write(Kryo kryo, Output output, Message message) {
byte[] ser = message.toByteArray();
output.writeInt(ser.length,true);
output.writeBytes(ser);
}
@Override
public Message read(Kryo kryo, Input input, Class<Message> pbClass) {
try {
int size = input.readInt(true);
byte[] barr = new byte[size];
input.read(barr);
return (Message) getParse(pbClass).invoke(null,barr);
} catch (Exception e) {
throw new RuntimeException("Could not create " + pbClass, e);
}
}
}
try this:
public class ProtoDeserializer implements DeserializationSchema<PBAddress> {
@Override
public TypeInformation<PBAddress> getProducedType() {
return TypeInformation.of(PBAddress.class);
}
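DeserializationSchema has two more methods besides getProducedType(); a fuller sketch, assuming PBAddress is your generated protobuf class, might look like this (the exact import of DeserializationSchema depends on the Flink version):
import java.io.IOException;
import org.apache.flink.api.common.typeinfo.TypeInformation;
// older releases: org.apache.flink.streaming.util.serialization.DeserializationSchema
// newer releases: org.apache.flink.api.common.serialization.DeserializationSchema

public class ProtoDeserializer implements DeserializationSchema<PBAddress> {

    @Override
    public PBAddress deserialize(byte[] message) throws IOException {
        // Parse the raw Kafka record bytes straight into the protobuf message.
        return PBAddress.parseFrom(message);
    }

    @Override
    public boolean isEndOfStream(PBAddress nextElement) {
        return false; // a Kafka stream never ends
    }

    @Override
    public TypeInformation<PBAddress> getProducedType() {
        return TypeInformation.of(PBAddress.class);
    }
}
With that in place the consumer can be created as new FlinkKafkaConsumer09<>("testArr", new ProtoDeserializer(), p), and the separate map step becomes unnecessary.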
https://issues.apache.org/jira/browse/FLINK-11333 is the JIRA ticket tracking the issue of implementing first-class support for Protobuf types with evolvable schema. You'll see that there was a pull request quite some time ago, which hasn't been merged. I believe the problem was that there is no support there for handling state migration in cases where Protobuf was previously being used by registering it with Kryo.
Meanwhile, the Stateful Functions project (statefun is a new API that runs on top of Flink) is based entirely on Protobuf, and it includes support for using Protobuf with Flink: https://github.com/apache/flink-statefun/tree/master/statefun-flink/statefun-flink-common/src/main/java/org/apache/flink/statefun/flink/common/protobuf. (The entry point to that package is ProtobufTypeInformation.java.) I suggest exploring this package (which includes nothing statefun specific); however, it doesn't concern itself with migrations from Kryo either.

Netty convert HttpRequest to ByteBuf

In my application I need to receive a byte array on a socket, parse it as an HttpRequest to perform some checks and, if the checks pass, get back to the byte array and do some more work.
The application is based on Netty (this is a requirement).
My first idea was to create a pipeline like this:
HttpRequestDecoder (decode from ByteBuf to HttpRequest)
MyHttpRequestHandler (do my own checks on the HttpRequest)
HttpRequestEncoder (encode the HttpRequest to a ByteBuf)
MyButeBufHandler (do my works with the ByteBuf)
However, HttpRequestEncoder extends ChannelOutboundHandlerAdapter, so it doesn't get called for inbound data.
How can I accomplish this task?
It would be nice to avoid decoding and re-encoding the request.
Regards,
Massimiliano
Use an EmbeddedChannel in MyHttpRequestHandler.
EmbeddedChannel ch = new EmbeddedChannel(new HttpRequestEncoder());
ch.writeOutbound(msg);
ByteBuf encoded = ch.readOutbound();
You'll have to keep the EmbeddedChannel as a member variable of MyHttpRequestHandler because HttpRequestEncoder is stateful. Also, please close the EmbeddedChannel when you have finished using it (probably in your channelInactive() method).
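A minimal sketch of that arrangement (the wiring is an assumption based on the question's pipeline, not code from the original answer):
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpRequestEncoder;

public class MyHttpRequestHandler extends ChannelInboundHandlerAdapter {

    // One EmbeddedChannel per connection, because HttpRequestEncoder keeps state.
    private final EmbeddedChannel encoderChannel = new EmbeddedChannel(new HttpRequestEncoder());

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) msg;
            // ... perform the checks on the HttpRequest here ...

            // Re-encode the request and pass the raw bytes on to the next handler.
            encoderChannel.writeOutbound(request);
            ByteBuf encoded = encoderChannel.readOutbound();
            ctx.fireChannelRead(encoded);
        } else {
            ctx.fireChannelRead(msg);
        }
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        encoderChannel.close();
        super.channelInactive(ctx);
    }
}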
I just had to encode and decode some HttpObjects and struggled a bit with it.
The hint that the decoder/encoder are stateful is very valuable.
That's why I thought I'd add my findings here. Maybe it's helpful to someone else.
I declared a RequestEncoder and a ResponseDecoder as class members, but it still didn't work correctly, until I remembered that the specific handler I was using the en/decoders in was shared...
That's how I got it to work in the end. My sequenceNr is there to distinguish between the different requests. I create one encoder and one decoder per request and save them in a HashMap. With my sequenceNr, I'm able to always get the same decoder/encoder for the same request. Don't forget to close and remove the de/encoder channels from the Map after processing the LastContent object.
@ChannelHandler.Sharable
public class HttpTunnelingServerHandler extends ChannelDuplexHandler {
private final Map<Integer, EmbeddedChannel> decoders = Collections.synchronizedMap(new HashMap<Integer, EmbeddedChannel>());
private final Map<Integer, EmbeddedChannel> encoders = Collections.synchronizedMap(new HashMap<Integer, EmbeddedChannel>());
.
.
//Encoding
if (!encoders.containsKey(currentResponse.getSequenceNr())) {
encoders.put(currentResponse.getSequenceNr(), new EmbeddedChannel(new HttpResponseEncoder()));
}
EmbeddedChannel encoderChannel = encoders.get(currentResponse.getSequenceNr());
encoderChannel.writeOutbound(recievedHttpObject);
ByteBuf encoded = (ByteBuf) encoderChannel.readOutbound();
.
.
//Decoding
if (!decoders.containsKey(sequenceNr)) {
decoders.put(sequenceNr, new EmbeddedChannel(new HttpRequestDecoder()));
}
EmbeddedChannel decoderChannel = decoders.get(sequenceNr);
decoderChannel.writeInbound(bb);
HttpObject httpObject = (HttpObject) decoderChannel.readInbound();
}
How about putting the EmbeddedChannel in the handler channel's attributes instead of a HashMap? Isn't that the same as what you propose for dealing with the stateful encoder/decoder?
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
ctx.channel().attr(EMBEDED_CH).set( new EmbeddedChannel(new HttpRequestDecoder()));
super.channelActive(ctx);
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
EmbeddedChannel embedCh = ctx.channel().attr(EMBEDED_CH).get();
if (embedCh != null) {
embedCh.close();
}
super.channelInactive(ctx);
}

How to properly test with mocks Akka actors in Java?

I'm very new to Akka and I'm trying to write some unit tests in Java. Consider the following actor:
public class Worker extends UntypedActor {
@Override
public void onReceive(Object message) throws Exception {
if (message instanceof Work) {
Work work = (Work) message;
Result result = new Helper().processWork(work);
getSender().tell(result, getSelf());
} else {
unhandled(message);
}
}
}
What is the proper way to intercept the call new Helper().processWork(work)? On a side note, is there any recommended way to achieve dependency injection within Akka actors with Java?
Thanks in advance.
Your code is already properly testable:
you can test your business logic separately, since you can just instantiate your Helper outside of the actor
once you are sure that the Helper does what it is supposed to do, just send some inputs to the actor and observe that the right replies come back
Now if you need to have a “mocked” Worker to test some other component, just don’t use a Worker at all, use a TestProbe instead. Where you would normally get the ActorRef of the Worker, just inject probe.getRef().
So, how to inject that?
I’ll assume that your other component is an Actor (because otherwise you won’t have trouble applying whatever injection technique you normally use). Then there are three basic choices:
pass it in as constructor argument
send it within a message
if the actor creates the ref as its child, pass in the Props, possibly in an alternative constructor
The third case is probably what you are looking at (I’m guessing based on the actor class’ name):
public class MyParent extends UntypedActor {
final Props workerProps;
public MyParent() {
workerProps = new Props(...);
}
public MyParent(Props p) {
workerProps = p;
}
...
getContext().actorOf(workerProps, "worker");
}
And then you can inject a TestProbe like this:
final TestProbe probe = new TestProbe(system);
final Props workerMock = new Props(new UntypedActorFactory() {
public UntypedActor create() {
return new UntypedActor() {
@Override
public void onReceive(Object msg) {
probe.getRef().tell(msg, getSender());
}
};
}
});
final ActorRef parent = system.actorOf(new Props(new UntypedActorFactory() {
public UntypedActor create() {
return new MyParent(workerMock);
}
}), "parent");
