How to manage a gRPC channel within a Spring Boot application - java

I have a setup with a Spring Boot application and a gRPC server. The gRPC server is written in NodeJS and deployed on a separate server. In my Spring Boot app, there is an endpoint which accepts a request object and delegates it to the gRPC server.
Question:
1) In this scenario, do I need to create a gRPC channel for each incoming HTTP request? (which sounds inefficient)
2) Or should I have one channel created at the initialization of the Spring Boot application?
With option 2), how do I handle the case where the gRPC server goes down and I need to fetch a new URI from the Eureka server?
Here is the gRPC channel creation in the Spring Boot app.

2) is the way to go. To address the server-down case, you can check the channel state (io.grpc.ManagedChannel.getState(boolean)) before forwarding the incoming HTTP request to the gRPC server, and if it's not READY, call your initCommunicationChannel() after refactoring it so that it can safely be called multiple times.
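For illustration only (the poster's actual initialization code isn't shown here), a rough sketch of that pattern, assuming a hypothetical GrpcChannelManager wrapper whose initCommunicationChannel() receives the address its caller resolved from Eureka:

    import io.grpc.ConnectivityState;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class GrpcChannelManager {

        private volatile ManagedChannel channel;

        // Rebuild the channel; host/port would come from Eureka
        // (e.g. eurekaClient.getApplication("logger-app"), lookup done by the caller).
        public synchronized void initCommunicationChannel(String host, int port) {
            if (channel != null && !channel.isShutdown()) {
                channel.shutdown();
            }
            channel = ManagedChannelBuilder.forAddress(host, port)
                    .usePlaintext()
                    .build();
        }

        // Call this before forwarding each incoming HTTP request.
        public ManagedChannel healthyChannel(String host, int port) {
            // getState(true) also asks an IDLE channel to start connecting.
            ConnectivityState state = channel.getState(true);
            if (state == ConnectivityState.TRANSIENT_FAILURE
                    || state == ConnectivityState.SHUTDOWN) {
                initCommunicationChannel(host, port);
            }
            return channel;
        }
    }

Note that getState(true) already asks an IDLE channel to start connecting, so this sketch only rebuilds the channel when it has actually failed or been shut down; rebuilding on anything other than READY, as described above, is the more aggressive variant.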
Alternatively, you can implement a name resolver plugin (https://github.com/grpc/grpc/blob/master/doc/naming.md) which simply calls eurekaClient.getApplication("logger-app").getInstances() to return the resolved addresses, which are then used by a client-side load balancing policy (https://github.com/grpc/grpc/blob/master/doc/load-balancing.md).
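A condensed sketch of what such a plugin could look like with recent grpc-java versions, assuming a Netflix EurekaClient is injected and that the port registered in Eureka is the gRPC port (the class name EurekaNameResolverProvider and the eureka:/// scheme are made up for the example):

    import com.netflix.appinfo.InstanceInfo;
    import com.netflix.discovery.EurekaClient;
    import io.grpc.EquivalentAddressGroup;
    import io.grpc.NameResolver;
    import io.grpc.NameResolverProvider;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.util.List;
    import java.util.stream.Collectors;

    public class EurekaNameResolverProvider extends NameResolverProvider {

        private final EurekaClient eurekaClient;

        public EurekaNameResolverProvider(EurekaClient eurekaClient) {
            this.eurekaClient = eurekaClient;
        }

        @Override protected boolean isAvailable() { return true; }
        @Override protected int priority() { return 5; }
        @Override public String getDefaultScheme() { return "eureka"; }

        @Override
        public NameResolver newNameResolver(URI targetUri, NameResolver.Args args) {
            // For a target such as "eureka:///logger-app" the app name is the path.
            String appName = targetUri.getPath().substring(1);

            return new NameResolver() {
                private Listener2 listener;

                @Override public String getServiceAuthority() { return appName; }
                @Override public void shutdown() { }
                @Override public void refresh() { resolve(); }

                @Override
                public void start(Listener2 listener) {
                    this.listener = listener;
                    resolve();
                }

                private void resolve() {
                    // Null check for unknown applications omitted for brevity.
                    List<InstanceInfo> instances =
                            eurekaClient.getApplication(appName).getInstances();
                    List<EquivalentAddressGroup> addresses = instances.stream()
                            .map(i -> new EquivalentAddressGroup(
                                    new InetSocketAddress(i.getIPAddr(), i.getPort())))
                            .collect(Collectors.toList());
                    listener.onResult(ResolutionResult.newBuilder()
                            .setAddresses(addresses)
                            .build());
                }
            };
        }
    }

The provider would then be registered once at startup and the channel created against the logical name; the addresses get re-resolved via refresh() when connections fail, and round_robin spreads calls across the returned instances:

    // e.g. in a @PostConstruct method:
    io.grpc.NameResolverRegistry.getDefaultRegistry()
            .register(new EurekaNameResolverProvider(eurekaClient));
    ManagedChannel channel = ManagedChannelBuilder
            .forTarget("eureka:///logger-app")
            .defaultLoadBalancingPolicy("round_robin")
            .usePlaintext()
            .build();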

Related

Connecting and sending message between Spring WebSocket instances

I have multiple instances using Spring Boot WebSocket (created following the first half of Spring's guide). I need them to connect to other instances at specific hostnames and ports and to be able to send messages over the websocket connection using the STOMP protocol.
How can I connect to my other services over websocket?
How can I send messages using the STOMP protocol (preferably using the same marshalling/unmarshalling magic I get with received messages)?
Things that don't answer my question:
I have read Spring: send message to websocket clients and Sending message to specific user on Spring Websocket, but these and other questions all seem to assume that a client has already initiated a connection and that there are users and topics established. This is not my use case, as my services are both server AND client.
I am not using a cluster and I am not sharing sessions across instances as in Spring Websocket in a tomcat cluster
I have found some resources that cast some light on how to accomplish this:
http://www.baeldung.com/websockets-api-java-spring-client
https://www.sitepoint.com/implementing-spring-websocket-server-and-client/#javaspringchatclient
http://useof.org/java-open-source/org.springframework.messaging.simp.stomp.StompSessionHandler
Number 3 is at least a complete implementation, but it is unfortunately devoid of comments explaining what's going on.
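For reference, a minimal client-side sketch of the connect-and-send part with Spring's WebSocketStompClient, assuming the /gs-guide-websocket endpoint and /app/hello destination from the guide mentioned above (class and host names are placeholders):

    import java.util.concurrent.TimeUnit;
    import org.springframework.messaging.converter.MappingJackson2MessageConverter;
    import org.springframework.messaging.simp.stomp.StompSession;
    import org.springframework.messaging.simp.stomp.StompSessionHandlerAdapter;
    import org.springframework.web.socket.client.standard.StandardWebSocketClient;
    import org.springframework.web.socket.messaging.WebSocketStompClient;

    public class PeerStompClient {

        // Connects to another instance's STOMP endpoint and returns the session.
        public StompSession connect(String host, int port) throws Exception {
            WebSocketStompClient stompClient =
                    new WebSocketStompClient(new StandardWebSocketClient());
            // Same JSON (un)marshalling you get on @MessageMapping methods.
            stompClient.setMessageConverter(new MappingJackson2MessageConverter());

            // If the server endpoint is registered withSockJS(), either wrap the
            // transport in a SockJsClient or append "/websocket" to the URL.
            String url = "ws://" + host + ":" + port + "/gs-guide-websocket";
            return stompClient
                    .connect(url, new StompSessionHandlerAdapter() { })
                    .get(5, TimeUnit.SECONDS);
        }
    }

Sending then goes through the same Jackson conversion used for incoming @MessageMapping payloads, and subscribing works the same way via session.subscribe(destination, frameHandler):

    StompSession session = new PeerStompClient().connect("other-host", 8080);
    session.send("/app/hello", java.util.Collections.singletonMap("name", "instance-A"));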

Soap API call through a load balancer

I have a Java app/client which calls a SOAP web service using Java sockets. Now I need to add a load balancer between the client and the server which redirects requests to multiple Tomcat instances where the API is running. How can I establish a socket connection from the client without specifying the server port, since the routing happens through the load balancer (F5)? Can this be accomplished through F5 configuration somehow?
For instance, the SOAP URL looks like this:
http://abc:1234/myapp/mysoap?wsdl
It needs to be converted to
http://abc/myapp/mysoap?wsdl
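As a sketch of the client side, assuming the F5 virtual server is configured to listen on the default HTTP port 80 and forward to the Tomcat pool: when the port is omitted from the URL, the client simply falls back to port 80, so no port needs to be hard-coded in the client.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SoapThroughF5 {
        public static void main(String[] args) throws Exception {
            // No explicit port: the connection goes to the HTTP default, port 80,
            // where the F5 virtual server listens and forwards to Tomcat (e.g. :1234).
            URL wsdl = new URL("http://abc/myapp/mysoap?wsdl");
            System.out.println(wsdl.getPort());        // -1, no port given
            System.out.println(wsdl.getDefaultPort()); // 80

            HttpURLConnection conn = (HttpURLConnection) wsdl.openConnection();
            conn.setRequestMethod("GET");
            System.out.println("HTTP " + conn.getResponseCode());
            conn.disconnect();
        }
    }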

Netty add httprequest to server handling

I use a microservice architecture in my project, and for inter-service communication I use the NATS message queue. I wrote a gateway that handles all HTTP requests and puts them on the queue. All endpoint services are subscribed to this queue.
On the endpoint services I use Xitrum, which is based on Netty IO. When I get a request from the queue, I deserialize it into a FullHttpRequest. But I don't know how to send it to my Netty server so it can be handled according to the business logic (without using an external HTTP client that sends it to localhost, for example).
Is there any way to send a FullHttpRequest instance to the Netty server (listening on localhost:8000) using the Netty API? Or maybe there is another solution. What is the common approach?
Please see the Netty examples, which have everything you need:
https://github.com/netty/netty/tree/4.1/example/src/main/java/io/netty/example/http/snoop
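Following the linked snoop client, a rough sketch of pushing the deserialised FullHttpRequest to the server on localhost:8000 over a Netty client channel (class and method names are made up; a response handler like HttpSnoopClientHandler should be added to read the FullHttpResponse and close the channel):

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.Channel;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioSocketChannel;
    import io.netty.handler.codec.http.FullHttpRequest;
    import io.netty.handler.codec.http.HttpClientCodec;
    import io.netty.handler.codec.http.HttpObjectAggregator;

    public class LocalHttpForwarder {

        // In real code the event loop group and bootstrap should be created once
        // and reused for every request taken from the NATS queue.
        public void forward(FullHttpRequest request) throws InterruptedException {
            EventLoopGroup group = new NioEventLoopGroup();
            try {
                Bootstrap b = new Bootstrap()
                        .group(group)
                        .channel(NioSocketChannel.class)
                        .handler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            protected void initChannel(SocketChannel ch) {
                                ch.pipeline().addLast(
                                        new HttpClientCodec(),
                                        new HttpObjectAggregator(1048576));
                                // Add a handler here that consumes the FullHttpResponse
                                // and closes the channel (see HttpSnoopClientHandler).
                            }
                        });

                Channel ch = b.connect("localhost", 8000).sync().channel();
                ch.writeAndFlush(request);
                ch.closeFuture().sync(); // returns once the response handler closes the channel
            } finally {
                group.shutdownGracefully();
            }
        }
    }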

How to use Netty clients within Netty server

I'm going to create an authentication server which itself interacts with a set of different OAuth 2.0 servers.
Netty seems to be a good candidate for implementing the network part here.
But before starting I need to clear up some details about Netty, as I'm new to it.
The routine will be as follows:
The server accepts an HTTPS connection from a client.
Then, without closing this first connection, it makes another connection via HTTPS to a remote OAuth 2.0 server and gets the data.
Finally, the server sends the result back to the client, which is supposed to keep the connection alive.
How do I implement this scenario with Netty?
Do I have to create a new Netty client and/or reconnect it each time I need to connect to a remote OAuth 2.0 server? If so, I'll have to create a separate thread for every outgoing connection, which would drastically reduce performance.
Another option is to create a sufficient number of Netty clients within the server at startup and keep them constantly connected to the OAuth 2.0 servers via HTTPS.
That's easily done with Netty. First you set up your Netty server using the ServerBootstrap, and then in a ChannelHandler that handles the connection from the client you can use e.g. the client Bootstrap to connect to the OAuth server and fetch the data. You don't need to worry about creating threads or anything similar; you can do it all in a non-blocking fashion. Take a look at this example and try to understand how it works:
https://github.com/netty/netty/blob/master/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L44
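A stripped-down sketch of that idea applied here (host name and handlers are placeholders, and a real pipeline would also carry an SslHandler and HTTP codecs for HTTPS): the outbound connection to the OAuth server is bootstrapped from inside the server-side handler and reuses the inbound channel's event loop, so no extra threads are created.

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.Channel;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelFutureListener;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.util.ReferenceCountUtil;

    public class OAuthFrontendHandler extends ChannelInboundHandlerAdapter {

        private Channel outboundChannel;

        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            final Channel inboundChannel = ctx.channel();

            // Connect to the remote OAuth 2.0 server on the *same* event loop as the
            // inbound channel, as the HexDumpProxy example does.
            Bootstrap b = new Bootstrap()
                    .group(inboundChannel.eventLoop())
                    .channel(ctx.channel().getClass())
                    .handler(new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelRead(ChannelHandlerContext backendCtx, Object msg) {
                            // Relay the OAuth server's answer back to the waiting client.
                            inboundChannel.writeAndFlush(msg);
                        }
                    });

            ChannelFuture f = b.connect("oauth.example.com", 443); // placeholder host
            outboundChannel = f.channel();
            f.addListener((ChannelFutureListener) future -> {
                if (!future.isSuccess()) {
                    inboundChannel.close(); // could not reach the OAuth server
                }
            });
        }

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            // Forward whatever arrives from the client to the OAuth server.
            // (The real example additionally turns AUTO_READ off and uses read()
            // calls for back-pressure; omitted here for brevity.)
            if (outboundChannel != null && outboundChannel.isActive()) {
                outboundChannel.writeAndFlush(msg);
            } else {
                ReferenceCountUtil.release(msg); // backend not connected yet
            }
        }
    }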

Jetty - proxy server with dynamic registration

We have a number of Jetty http(s) servers, all behind different firewalls. The http servers are at customer sites (not under our control). Opening ports in the firewalls at these sites is not an option. Right now, these servers only serve JSON documents in response to REST requests.
We have web clients that need to interact with a given http server based on URL parameter or header value.
This seems like a straightforward proxy server situation - except for the firewall.
The approach that I'm currently trying is this:
Have a centralized proxy server (also Jetty based) that listens for inbound registration requests from the remote http servers. The registration request will take the form of a Websocket connection, which will be kept alive as long as the remote HTTP server is available. On registration, the Proxy Server will capture the websocket connection and map it to a resource identifier.
The web client will connect to the proxy server, and include the resource identifier in the URL or header.
The proxy server will determine the appropriate Websocket to use, then pass the request on to the HTTP server. So the request and response will travel over the Websocket. Once the response is received, it will be returned to the web client.
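For the registration step, a minimal sketch of capturing the websocket and mapping it to a resource identifier with Jetty's WebSocket API (the query-parameter name and the registry class are invented for the example):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.eclipse.jetty.websocket.api.Session;
    import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
    import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
    import org.eclipse.jetty.websocket.api.annotations.WebSocket;

    @WebSocket
    public class RegistrationSocket {

        // resource identifier -> live websocket session to the remote HTTP server
        private static final Map<String, Session> REGISTRY = new ConcurrentHashMap<>();

        @OnWebSocketConnect
        public void onConnect(Session session) {
            // e.g. ws://proxy/register?resource=customer-42
            String resourceId = session.getUpgradeRequest()
                    .getParameterMap().get("resource").get(0);
            REGISTRY.put(resourceId, session);
        }

        @OnWebSocketClose
        public void onClose(Session session, int statusCode, String reason) {
            REGISTRY.values().remove(session);
        }

        // Used by the proxy side to find the tunnel for an incoming web-client request.
        public static Session lookup(String resourceId) {
            return REGISTRY.get(resourceId);
        }
    }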
So this is all well and good in theory - what I'm trying to figure out is:
a) is there a better way to achieve this?
b) What's the best way to set up Jetty to do the proxying on the HTTP Server end of the pipe?
I suppose that I could use Jetty's HttpClient, but what I really want to do is just pull the HTTP bytes from the websocket and pipe them directly into the Jetty connector. It doesn't seem to make sense to parse everything out. I suppose that I could open a regular socket connection on localhost, grab the bytes from the websocket, and do it that way - but it seems silly to route through the OS like that (I'm already operating inside the HTTP Server's Jetty environment).
It sure seems like this is the sort of problem that may have already been solved... Maybe by using a custom jetty Connection that works on WebSockets instead of TCP/IP sockets?
Update: as I've been playing with this, it seems like another tricky problem is how to handle request/response behavior (and ideally support muxing over the websocket channel). One potential resource that I've found is the WAMP sub-protocol for websockets: http://wamp.ws/
In case anyone else is looking for an answer to this one - RESTEasy has a mocking framework that can be used to invoke the REST functionality without running through a full servlet container: http://docs.jboss.org/resteasy/docs/2.0.0.GA/userguide/html_single/index.html#RESTEasy_Server-side_Mock_Framework
This, combined with WAMP, appears to do what I'm looking for.
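For anyone following the same route, a small sketch of invoking RESTEasy's mock dispatcher for a request that arrived over the websocket tunnel (the wrapper class is invented; the RESTEasy calls are the ones from the linked docs):

    import org.jboss.resteasy.mock.MockDispatcherFactory;
    import org.jboss.resteasy.mock.MockHttpRequest;
    import org.jboss.resteasy.mock.MockHttpResponse;
    import org.jboss.resteasy.spi.Dispatcher;

    public class InProcessRestInvoker {

        private final Dispatcher dispatcher = MockDispatcherFactory.createDispatcher();

        public InProcessRestInvoker(Object jaxRsResource) {
            // Register the same JAX-RS resource instances the server normally exposes.
            dispatcher.getRegistry().addSingletonResource(jaxRsResource);
        }

        // Invoke a GET that arrived over the websocket, without a servlet container.
        public String get(String uri) throws Exception {
            MockHttpRequest request = MockHttpRequest.get(uri);
            MockHttpResponse response = new MockHttpResponse();
            dispatcher.invoke(request, response);
            return response.getContentAsString();
        }
    }

The returned body (plus response.getStatus()) can then be framed as a WAMP result and sent back over the websocket to the proxy.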
