Java Multithreading in grpc-java

I have a question regarding Java multithreading.
I have a Jetty webapp with a gRPC streaming client. Everything works fine, but how can I build a model for consuming the streamed data?
The webapp is built with JSF. In it I have a controller which invokes a handler class to start the stream:
public void startStream() {
    if (streamHandler != null && !activatedStream) {
        streamHandler.startStreamClient();
        activatedStream = true;
    } else {
        FacesMessage message = new FacesMessage(FacesMessage.SEVERITY_ERROR,
                "Could not initialize the StreamHandler and Client class, or a stream is still running. Please check the logs.",
                "Stream is running: " + String.valueOf(activatedStream));
        FacesContext.getCurrentInstance().addMessage(null, message);
    }
}
This method simply starts the client and the stream.
public void startStreamClient() {
    log.info("Calling startMethod of Handler............");
    CountDownLatch finishLatch;
    // The null check must come first, otherwise isChannelShutdown() can throw a NullPointerException.
    if (this.client == null || this.client.isChannelShutdown()) {
        this.client = new StreamClient(this.serverHost, this.serverPort);
        try {
            finishLatch = this.client.imageStream(this.startRequest);
        } catch (Exception e) {
            log.warn("Error while starting the imageStream: " + e.getLocalizedMessage(), e);
        }
    } else {
        finishLatch = client.imageStream(this.startRequest);
    }
}
The implementation that checks the CountDownLatch is still missing, but that does not matter in this case.
The responses arrive here; the onNext() method delivers the streamed data:
public CountDownLatch imageStream(StreamRequest request) {
    log.info("Calling imageStream asyncStub...............");
    CountDownLatch finish = new CountDownLatch(1);
    /**
     * The asyncStub calls the rpc function with a new StreamObserver for the responses from the server.
     */
    StreamObserver<StreamRequest> requestObserver = asyncStub.streamImagaData(new StreamObserver<StreamResponse>() {
        /**
         * The onNext method receives the image data whenever the server sends some.
         */
        @Override
        public void onNext(StreamResponse response) {
            System.out.println("Data-Input: " + response.getImageData().length());
        }

        /**
         * The onError method receives a Throwable if one is raised during the stream.
         */
        @Override
        public void onError(Throwable t) {
            log.warn("Bidirectional stream between server and client: " + t.getLocalizedMessage(), t);
        }

        /**
         * onCompleted ends the stream and counts the CountDownLatch down by one.
         */
        @Override
        public void onCompleted() {
            log.info("Bidirectional streaming has finished....");
            finish.countDown();
        }
    });
    /**
     * This block sends a StreamRequest to the server.
     */
    try {
        log.info("Sending a streaming request to the server with state: " + request.getStreamState().name());
        requestObserver.onNext(request);
    } catch (RuntimeException ex) {
        log.warn("Error sending request to server: " + ex.getLocalizedMessage(), ex);
        requestObserver.onError(ex);
    }
    requestObserver.onCompleted();
    return finish;
}
The image data is simply printed on screen. I tried to build a producer-consumer model, but failed because the response arrives inside an anonymous inner StreamObserver.
How can I get this data in real time? Do I have to create a named implementation of StreamObserver, or where do I have to place the additional threads? Are threads the only choice, or do I need some Callables?
Thanks in advance.

I have solved the problem by implementing the observer pattern.
The ManagedBean became the observer of the StreamClient.
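For anyone with the same problem, below is a minimal sketch of the idea; the listener interface and its name are illustrative, not the exact code from my app. The StreamClient keeps a list of registered observers and notifies them from inside the anonymous StreamObserver's onNext(), so each response reaches the managed bean the moment it arrives.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical callback interface; the real one can carry whatever the bean needs.
interface ImageDataListener {
    void onImageData(StreamResponse response);
}

public class StreamClient {
    // CopyOnWriteArrayList is safe to iterate while listeners are added or removed.
    private final List<ImageDataListener> listeners = new CopyOnWriteArrayList<>();

    public void addListener(ImageDataListener listener) {
        listeners.add(listener);
    }

    // Inside imageStream(...), the System.out.println in onNext becomes:
    //
    // @Override
    // public void onNext(StreamResponse response) {
    //     for (ImageDataListener listener : listeners) {
    //         listener.onImageData(response); // runs on a gRPC executor thread
    //     }
    // }
}
The ManagedBean implements ImageDataListener and registers itself right after constructing the client. Keep in mind that the callback runs on a gRPC thread, so the bean must handle it in a thread-safe way (or hand the data off to a BlockingQueue that the view layer drains).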

Related

Netty HTTP2 Frame Forwarding/Proxying - pipeline config question

I'm trying to create a Netty (4.1) POC which can forward h2c (HTTP2 without TLS) frames onto a h2c server - i.e. essentially creating a Netty h2c proxy service. Wireshark shows Netty sending the frames out, and the h2c server replying (for example with the response header and data), although I'm then having a few issues receiving/processing the response HTTP frames within Netty itself.
As a starting point, I've adapted the multiplex.server example (io.netty.example.http2.helloworld.multiplex.server) so that in HelloWorldHttp2Handler, instead of responding with dummy messages, I connect to a remote node:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    Channel remoteChannel = null;
    // create or retrieve the remote channel (one-to-one mapping) associated with this incoming (client) channel
    synchronized (lock) {
        if (!ctx.channel().hasAttr(remoteChannelKey)) {
            remoteChannel = this.connectToRemoteBlocking(ctx.channel());
            ctx.channel().attr(remoteChannelKey).set(remoteChannel);
        } else {
            remoteChannel = ctx.channel().attr(remoteChannelKey).get();
        }
    }
    if (msg instanceof Http2HeadersFrame) {
        onHeadersRead(remoteChannel, (Http2HeadersFrame) msg);
    } else if (msg instanceof Http2DataFrame) {
        final Http2DataFrame data = (Http2DataFrame) msg;
        onDataRead(remoteChannel, data);
        send(ctx.channel(), new DefaultHttp2WindowUpdateFrame(data.initialFlowControlledBytes()).stream(data.stream()));
    } else {
        super.channelRead(ctx, msg);
    }
}
private void send(Channel remoteChannel, Http2Frame frame) {
    remoteChannel.writeAndFlush(frame).addListener(new GenericFutureListener() {
        @Override
        public void operationComplete(Future future) throws Exception {
            if (!future.isSuccess()) {
                future.cause().printStackTrace();
            }
        }
    });
}
/**
 * If a data frame with end-of-stream set is received, forward it to the remote peer.
 */
private void onDataRead(Channel remoteChannel, Http2DataFrame data) throws Exception {
    if (data.isEndStream()) {
        send(remoteChannel, data);
    } else {
        // We do not forward this frame to the remote peer, so we need to release it.
        data.release();
    }
}

/**
 * If a headers frame with end-of-stream set is received, forward it to the remote peer.
 */
private void onHeadersRead(Channel remoteChannel, Http2HeadersFrame headers) throws Exception {
    if (headers.isEndStream()) {
        send(remoteChannel, headers);
    }
}
private Channel connectToRemoteBlocking(Channel clientChannel) {
    try {
        Bootstrap b = new Bootstrap();
        b.group(new NioEventLoopGroup());
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.remoteAddress("localhost", H2C_SERVER_PORT);
        b.handler(new Http2ClientInitializer());
        final Channel channel = b.connect().syncUninterruptibly().channel();
        channel.config().setAutoRead(true);
        channel.attr(clientChannelKey).set(clientChannel);
        return channel;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
When initializing the channel pipeline (in Http2ClientInitializer), if I do something like:
@Override
public void initChannel(SocketChannel ch) throws Exception {
    ch.pipeline().addLast(Http2MultiplexCodecBuilder.forClient(new Http2OutboundClientHandler()).frameLogger(TESTLOGGER).build());
    ch.pipeline().addLast(new UserEventLogger());
}
Then I can see the frames being forwarded correctly in Wireshark and the h2c server replies with the header and frame data, but Netty replies with a GOAWAY [INTERNAL_ERROR] due to:
14:23:09.324 [nioEventLoopGroup-3-1] WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.IllegalStateException: Stream object required for identifier: 1
    at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.requireStream(Http2FrameCodec.java:587)
    at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onHeadersRead(Http2FrameCodec.java:550)
    at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onHeadersRead(Http2FrameCodec.java:543)...
If I instead give it the pipeline configuration from the http2 client example, e.g.:
@Override
public void initChannel(SocketChannel ch) throws Exception {
    final Http2Connection connection = new DefaultHttp2Connection(false);
    ch.pipeline().addLast(
            new Http2ConnectionHandlerBuilder()
                    .connection(connection)
                    .frameLogger(TESTLOGGER)
                    .frameListener(new DelegatingDecompressorFrameListener(connection,
                            new InboundHttp2ToHttpAdapterBuilder(connection)
                                    .maxContentLength(maxContentLength)
                                    .propagateSettings(true)
                                    .build()))
                    .build());
}
Then I instead get:
java.lang.UnsupportedOperationException: unsupported message type: DefaultHttp2HeadersFrame (expected: ByteBuf, FileRegion)
    at io.netty.channel.nio.AbstractNioByteChannel.filterOutboundMessage(AbstractNioByteChannel.java:283)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:882)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1365)
If I then add in an HTTP2 frame codec (Http2MultiplexCodec or Http2FrameCodec):
@Override
public void initChannel(SocketChannel ch) throws Exception {
    final Http2Connection connection = new DefaultHttp2Connection(false);
    ch.pipeline().addLast(
            new Http2ConnectionHandlerBuilder()
                    .connection(connection)
                    .frameLogger(TESTLOGGER)
                    .frameListener(new DelegatingDecompressorFrameListener(connection,
                            new InboundHttp2ToHttpAdapterBuilder(connection)
                                    .maxContentLength(maxContentLength)
                                    .propagateSettings(true)
                                    .build()))
                    .build());
    ch.pipeline().addLast(Http2MultiplexCodecBuilder.forClient(new Http2OutboundClientHandler()).frameLogger(TESTLOGGER).build());
}
Then Netty sends two connection preface frames, and the h2c server rejects the connection with GOAWAY [PROTOCOL_ERROR].
So that is where I am having issues - i.e. configuring the remote channel pipeline such that it will send the Http2Frame objects without error, but also then receive/process them back within Netty when the response is received.
Does anyone have any ideas/suggestions please?
I ended up getting this working; the following Github issues contain some useful code/info:
Generating a Http2StreamChannel, from a Channel
A Http2Client with Http2MultiplexCode
I need to investigate a few caveats further, although the gist of the approach is that you need to wrap your channel in a Http2StreamChannel, meaning that my connectToRemoteBlocking() method ends up as:
private Http2StreamChannel connectToRemoteBlocking(Channel clientChannel) {
    try {
        Bootstrap b = new Bootstrap();
        b.group(new NioEventLoopGroup()); // TODO reuse existing event loop
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.remoteAddress("localhost", H2C_SERVER_PORT);
        b.handler(new Http2ClientInitializer());
        final Channel channel = b.connect().syncUninterruptibly().channel();
        channel.config().setAutoRead(true);
        channel.attr(clientChannelKey).set(clientChannel);
        // TODO make more robust, see example at https://github.com/netty/netty/issues/8692
        final Http2StreamChannelBootstrap bs = new Http2StreamChannelBootstrap(channel);
        final Http2StreamChannel http2Stream = bs.open().syncUninterruptibly().get();
        http2Stream.attr(clientChannelKey).set(clientChannel);
        http2Stream.pipeline().addLast(new Http2OutboundClientHandler()); // will read: DefaultHttp2HeadersFrame, DefaultHttp2DataFrame
        return http2Stream;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
Then to prevent the "Stream object required for identifier: 1" error (which is essentially saying: 'This (client) HTTP2 request is new, so why do we have this specific stream?' - since we were implicitly reusing the stream object from the originally received 'server' request), we need to change to use the remote channel's stream when forwarding our data on:
private void onHeadersRead(Http2StreamChannel remoteChannel, Http2HeadersFrame headers) throws Exception {
    if (headers.isEndStream()) {
        headers.stream(remoteChannel.stream());
        send(remoteChannel, headers);
    }
}
Then the configured channel inbound handler (which I've called Http2OutboundClientHandler due to its usage) will receive the incoming HTTP2 frames in the normal way:
@Sharable
public class Http2OutboundClientHandler extends SimpleChannelInboundHandler<Http2Frame> {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        super.exceptionCaught(ctx, cause);
        cause.printStackTrace();
        ctx.close();
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, Http2Frame msg) throws Exception {
        System.out.println("Http2OutboundClientHandler Http2Frame Type: " + msg.getClass().toString());
    }
}

Netty 4 - The pool returns a channel which is not yet ready to send the actual message

I have created an inbound handler of type SimpleChannelInboundHandler and added it to the pipeline. My intention is that every time a connection is established, I want to send an application message called a session open message, making the connection ready to send the actual messages. To achieve this, the above inbound handler overrides channelActive(), where the session open message is sent; in response I get a session open confirmation message. Only after that should I be able to send any number of actual business messages. I am using FixedChannelPool, initialised as follows. This works well on startup. But if the remote host closes the connection, and a message is then sent by calling sendMessage() below, the message goes out even before the session open message from channelActive() and its confirmation have been exchanged, so the server ignores the message because the session is not yet open when the business message arrives.
What I am looking for is that the pool should return only those channels whose channelActive() event has already sent the session open message and received the session open confirmation from the server. How do I deal with this situation?
public class SessionHandler extends SimpleChannelInboundHandler<byte[]> {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        if (ctx.channel().isWritable()) {
            ctx.channel().writeAndFlush("open session message".getBytes());
        }
    }
}
// At the time of loading the application
public void init() {
    final Bootstrap bootStrap = new Bootstrap();
    bootStrap.group(group).channel(NioSocketChannel.class).remoteAddress(hostname, port);
    fixedPool = new FixedChannelPool(bootStrap, getChannelHandler(), 5);
    // This initialises the connections; channelActive() from the handler above is invoked to open the sessions on startup
    for (int i = 0; i < config.getMaxConnections(); i++) {
        fixedPool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) throws Exception {
                if (future.isSuccess()) {
                } else {
                    LOGGER.error("Channel initialization failed...>>", future.cause());
                }
            }
        });
    }
}
// To actually send a message, the following method is invoked by the application.
public void sendMessage(final String businessMessage) {
    fixedPool.acquire().addListener(new FutureListener<Channel>() {
        @Override
        public void operationComplete(Future<Channel> future) throws Exception {
            if (future.isSuccess()) {
                Channel channel = future.get();
                if (channel.isOpen() && channel.isActive() && channel.isWritable()) {
                    channel.writeAndFlush(businessMessage).addListener(new GenericFutureListener<ChannelFuture>() {
                        @Override
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // success msg
                            } else {
                                // failure msg
                            }
                        }
                    });
                    fixedPool.release(channel);
                }
            } else {
                // Failure
            }
        }
    });
}
If there is no specific reason that you need to use a FixedChannelPool then you can use another data structure (List/Map) to store the Channels. You can add a channel to the data structure after sending open session message and remove it in the channelInactive method.
If you need to perform bulk operations on channels you can use a ChannelGroup for the purpose.
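For illustration, a minimal sketch of the ChannelGroup variant, assuming a byte[] pipeline like in the question; isSessionOpenConfirmation() is a stand-in for whatever protocol-specific check identifies the confirmation message:
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.DefaultChannelGroup;
import io.netty.util.concurrent.GlobalEventExecutor;

public class SessionTrackingHandler extends SimpleChannelInboundHandler<byte[]> {
    // Channels in this group have completed the session handshake and may carry business messages.
    public static final ChannelGroup READY_CHANNELS =
            new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, byte[] msg) throws Exception {
        if (isSessionOpenConfirmation(msg)) {
            READY_CHANNELS.add(ctx.channel());
        }
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        READY_CHANNELS.remove(ctx.channel()); // closed channels are also removed automatically
        super.channelInactive(ctx);
    }

    private boolean isSessionOpenConfirmation(byte[] msg) {
        return new String(msg).contains("session open confirmation"); // assumption: protocol-specific check
    }
}
sendMessage() would then pick a channel from READY_CHANNELS instead of the pool, so only fully opened sessions ever carry business messages.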
If you still want to use the FixedChannelPool you may set an attribute in the channel on whether the open message was sent:
ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
you can get the attribute as follows in your sendMessage function:
Boolean sent = channel.attr(OPEN_MESSAGE_SENT).get(); // may be null if the attribute was never set
and in the channelInactive you may set the same to false or remove it.
Note OPEN_MESSAGE_SENT is an AttributeKey:
public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT = AttributeKey.valueOf("OPEN_MESSAGE_SENT");
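Putting the attribute to work inside the acquire listener of sendMessage could then look like this (a sketch; releasing the channel and retrying when the session is not open yet is one possible policy, not the only one):
fixedPool.acquire().addListener(new FutureListener<Channel>() {
    @Override
    public void operationComplete(Future<Channel> future) throws Exception {
        if (future.isSuccess()) {
            Channel channel = future.get();
            Boolean sessionOpen = channel.attr(OPEN_MESSAGE_SENT).get();
            if (Boolean.TRUE.equals(sessionOpen)) {
                channel.writeAndFlush(businessMessage);
                fixedPool.release(channel);
            } else {
                // Session not confirmed yet: return the channel to the pool and retry later.
                fixedPool.release(channel);
            }
        }
    }
});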
I know this is a rather old question, but I stumbled across a similar issue; not quite the same, but my issue was that the ChannelInitializer in the Bootstrap.handler was never called.
The solution was to add the pipeline handlers to the pool handler's channelCreated method.
Here is my pool definition code that works now:
pool = new FixedChannelPool(httpBootstrap, new ChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(HTTP_CODEC, new HttpClientCodec());
        pipeline.addLast(HTTP_HANDLER, new NettyHttpClientHandler());
    }

    @Override
    public void channelAcquired(Channel ch) {
        // NOOP
    }

    @Override
    public void channelReleased(Channel ch) {
        // NOOP
    }
}, 10);
So in the getChannelHandler() method I assume you're creating a ChannelPoolHandler. In its channelCreated method you could send your session message (ch.writeAndFlush("open session message".getBytes());), assuming you only need to send the session message once when a connection is created; if you need to send the session message every time, you could add it to the channelAcquired method instead.
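Sketched onto the pool definition above, that would look roughly like this (the byte[] write assumes the pipeline can encode raw bytes, as in the question's own code):
pool = new FixedChannelPool(httpBootstrap, new ChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(new SessionHandler());
        // Send the session open message exactly once, when the connection is created.
        ch.writeAndFlush("open session message".getBytes());
    }

    @Override
    public void channelAcquired(Channel ch) {
        // If the session had to be re-opened on every use,
        // the writeAndFlush above would move here instead.
    }

    @Override
    public void channelReleased(Channel ch) {
        // NOOP
    }
}, 10);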

Akka ActorSystem never terminates in Java

I'm using Akka 2.5.6 with Java 8 and I want to know the right way to shut down the ActorSystem. Part of the functionality of my code is to process some XML files and validate them; to achieve this I have created 3 actors:
Controller, Processor and Validator.
The Controller is responsible for initiating the process and sending the files, one by one, together with other information to the Processor. The Processor then creates a digital signature of the file and sends the response to the Validator, which finally validates the status and sends an OK message to the Controller. The Controller counts the number of validated files and compares it with the total number of files; once they are equal, I call terminate() to finish the ActorSystem.
The method to finish is as follows:
private void endActors() {
    ActorSystem actorSystem = getContext().system();
    Future<Terminated> terminated = actorSystem.terminate();
    do {
        log.info("Waiting to finish ...");
        try {
            Thread.sleep(30000L);
        } catch (InterruptedException ex) {
            log.error("Error in Thread.");
        }
    } while (!terminated.isCompleted());
    log.info("Actors finished processing.");
}
The loop never ends because the future never completes. I don't know if this is the right way; I hope you have understood me and can help me or give me some advice.
Try the following (the key here is the onComplete). I wrote a class along these lines to use in setup and teardown for JUnit, to avoid issues from the actor system not fully terminating in the teardown of one test before being created in another test (that caused port-already-in-use issues).
private static ActorSystem system = null;
private static Future<Terminated> terminatedFuture;

public static ActorSystem getFreshActorSystem() {
    tearDownActorSystem();
    while (system != null) {
        try {
            Thread.sleep(500L);
        } catch (InterruptedException e) {
        }
    }
    system = ActorSystem.create();
    return system;
}

public static void tearDownActorSystem() {
    if (system != null && !isInMiddleOfTerminating()) {
        terminatedFuture = system.terminate();
        terminatedFuture.onComplete(new OnComplete<Terminated>() {
            @Override
            public void onComplete(Throwable failure, Terminated success) throws Throwable {
                system = null;
                terminatedFuture = null;
            }
        }, system.dispatcher());
    }
}

private static boolean isInMiddleOfTerminating() {
    return terminatedFuture != null;
}
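For context, this is roughly how the helper is meant to be used from a test class (JUnit 4 annotations; ActorSystemTestHelper is a placeholder name for wherever the static methods above live):
import org.junit.After;
import org.junit.Before;
import akka.actor.ActorSystem;

public class MyActorTest {
    private ActorSystem system;

    @Before
    public void setUp() {
        // Blocks until any previous system has fully terminated, avoiding port-already-in-use errors.
        system = ActorSystemTestHelper.getFreshActorSystem();
    }

    @After
    public void tearDown() {
        ActorSystemTestHelper.tearDownActorSystem();
    }
}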

How to manage concurrent API Calls throughout the lifetime of an application

At the moment I am trying to find the best way to manage concurrent API calls within my application. So far I have been using HttpURLConnection to make my HTTP method calls, and although it works fine, I would eventually come across a 'Socket exception: connection reset' while calls were being made; note that I am using multithreading, as I have many different API calls running concurrently.
I have looked into using AsyncRestTemplate, and although it works, I find that the console shows the pool and thread of each call, e.g. [pool-6-thread-1]; by the time it reaches [pool-2018-thread-1], it stops making any more API calls.
This is the code that I am using:
// This method is inside another class in my actual application, but is shown here for simplicity
public static ListenableFuture<ResponseEntity<String>> getLastPrice(AsyncRestTemplate asyncRestTemplate) {
    String url = "https://bittrex.com/api/v1.1/public/getmarketsummary?market=btc-dar";
    asyncRestTemplate = new AsyncRestTemplate(new ConcurrentTaskExecutor(Executors.newCachedThreadPool()));
    return asyncRestTemplate.exchange(url, HttpMethod.GET, new HttpEntity<>("result"), String.class);
}

public PriceThread(JTextField lastPriceJT) {
    this.lastPriceJT = lastPriceJT;
}

@Override
public void run() {
    AsyncRestTemplate asyncRestTemplate = new AsyncRestTemplate(new ThreadPoolTaskExecutor());
    while (true) {
        try {
            getLastPrice(asyncRestTemplate)
                    .addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
                        @Override
                        public void onSuccess(ResponseEntity<String> response) {
                            // TODO: Add real response handling
                            try {
                                JSONObject result = new JSONObject(response.getBody());
                                String status = LOGGER.printResponseToLogger(result);
                                BigDecimal last = result.getJSONArray("result").getJSONObject(0).getBigDecimal("Last");
                                lastPriceJT.setText(last.toPlainString());
                            } catch (Exception e) {
                                LOGGER.printResponseToLogger(e.getMessage());
                            }
                        }

                        @Override
                        public void onFailure(Throwable ex) {
                            // TODO: Add real logging solution
                            LOGGER.printResponseToLogger(ex.getMessage());
                        }
                    });
        } catch (Exception e) {
            LOGGER.printResponseToLogger(e.getMessage());
        }
    }
}
Currently I'm thinking the solution to this issue would be to reuse the pools so that the count doesn't climb to 2018, if that is possible, but I have not found a way to do so.
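One way to express that reuse (a sketch of the idea, with PriceService as a hypothetical holder class, not a verified fix for this question): create the executor and the AsyncRestTemplate once and share the instance, instead of constructing a new one inside getLastPrice(), so the pool number in the thread name stays constant.
import java.util.concurrent.Executors;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.concurrent.ConcurrentTaskExecutor;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.web.client.AsyncRestTemplate;

public class PriceService {
    // Created once and shared by every caller, so all requests run on a single pool.
    private static final AsyncRestTemplate SHARED_TEMPLATE =
            new AsyncRestTemplate(new ConcurrentTaskExecutor(Executors.newFixedThreadPool(10)));

    public static ListenableFuture<ResponseEntity<String>> getLastPrice() {
        String url = "https://bittrex.com/api/v1.1/public/getmarketsummary?market=btc-dar";
        // No "new AsyncRestTemplate(...)" here, unlike the original getLastPrice().
        return SHARED_TEMPLATE.exchange(url, HttpMethod.GET, new HttpEntity<>("result"), String.class);
    }
}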

Java Kryonet servers, client not receiving server response

I am trying to teach myself some networking in Java using the Kryonet library. The following code is almost identical to the code in the Kryonet tutorial. https://code.google.com/p/kryonet/#Running_a_server
The client successfully sends the message "Here is the request!" to the server (the server prints it out), but the client does not receive any response from the server, even though the server is sending one.
I've tried unsuccessfully to fix it; can anyone see or suggest a possible problem/solution with the code?
(The code follows)
Client
public class Client_test {
    Client client = new Client();

    public Client_test() {
        Kryo kryo = client.getKryo();
        kryo.register(SomeRequest.class);
        kryo.register(SomeResponse.class);
        client.start();
        try {
            client.connect(50000, "127.0.0.1", 54555, 54777);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        client.addListener(new Listener() {
            public void received(Connection connection, Object object) {
                if (object instanceof SomeResponse) {
                    SomeResponse response = (SomeResponse) object;
                    System.out.println(response.text);
                }
            }
        });
        SomeRequest request = new SomeRequest();
        request.text = "Here is the request!";
        client.sendTCP(request);
    }
}
Server
public class ServerGame {
    Server server = new Server();

    public ServerGame() {
        Kryo kryo = server.getKryo();
        kryo.register(SomeRequest.class);
        kryo.register(SomeResponse.class);
        server.start();
        try {
            server.bind(54555, 54777);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        server.addListener(new Listener() {
            public void received(Connection connection, Object object) {
                if (object instanceof SomeRequest) {
                    SomeRequest request = (SomeRequest) object;
                    System.out.println(request.text);
                    SomeResponse response = new SomeResponse();
                    response.text = "Thanks!";
                    connection.sendTCP(response);
                }
            }
        });
    }
}
Response & Request classes
public class SomeRequest {
    public String text;
    public SomeRequest() {}
}

public class SomeResponse {
    public String text;
    public SomeResponse() {}
}
After many hours of watching YouTube videos and sifting through the web I found the answer, which I will post here, as it seems quite a few people have had this problem, so I would like to spread the word.
Basically, the client would shut down immediately, before it could receive and output the message packet. This is because "Starting with r122, client update threads were made into daemon threads, causing the child processes to close as soon as they finish initializing." The suggested solution is: "Maybe you could use this? new Thread(client).start();"
So basically instead of using
client.start();
to start the client thread you must use
new Thread(client).start();
This, I believe, stops the thread from being made into a daemon thread, which resolves the problem.
Source: https://groups.google.com/forum/?fromgroups#!topic/kryonet-users/QTHiVmqljgE
Yes, inject a tool like Fiddler in between the two so you can see the traffic going back and forth. It's always easier to debug with greater transparency and more information.
