Quick disclaimer: I am very new to gRPC and RPC in general, so please have patience.
I have two gRPC servers running in the same Java application, Service A and Service B.
Service A creates multiple clients of Service B and then makes synchronous calls to the various instances of Service B.
The server
Service A has an rpc call defined in the .proto file as
rpc notifyPeers(NotifyPeersRequest) returns (NotifyPeersResponse);
the server side implementation,
@Override
public void notifyPeers(NotifyPeersRequest request, StreamObserver<NotifyPeersResponse> responseObserver) {
    logger.debug("gRPC 'notifyPeers' request received");
    String host = request.getHost();
    for (PeerClient c : clients.values()) {
        c.addPeer(host); // <---- this call
    }
    NotifyPeersResponse response = NotifyPeersResponse.newBuilder()
            .setResult(result)
            .build();
    responseObserver.onNext(response);
    responseObserver.onCompleted();
}
The list of peers, clients, is built up in previous RPC calls:
ManagedChannel channel = ManagedChannelBuilder.forTarget(peer).usePlaintext().build();
ClientB client = new ClientB(channel);
clients.put(peer, client);
The client
rpc addPeer(AddPeerRequest) returns (AddPeerResponse);
the server side implementation,
@Override
public void addPeer(AddPeerRequest addPeerRequest, StreamObserver<AddPeerResponse> responseObserver) {
    logger.info("gRPC 'addPeer' request received");
    String host = addPeerRequest.getHost();
    boolean result = peer.addPeer(host);
    AddPeerResponse response = AddPeerResponse.newBuilder()
            .setResponse(result)
            .build();
    responseObserver.onNext(response);
    responseObserver.onCompleted();
}
the client side implementation,
public boolean addPeer(String host) {
    AddPeerRequest request = AddPeerRequest.newBuilder().setHost(host).build();
    logger.info("Sending 'addPeer' request");
    AddPeerResponse response = blockingStub.addPeer(request);
    return response.getResponse();
}
When I run this application and an RPC is made to Service A (so the client connection is created and addPeer is called), an ambiguous exception is thrown, io.grpc.StatusRuntimeException: UNKNOWN, which then causes the JVM to shut down. I have no idea how to fix this, or whether it is even possible to create a gRPC client connection within a gRPC server.
For all of my gRPC implementations I'm using blocking stubs.
<grpc.version>1.16.1</grpc.version>
<java.version>1.8</java.version>
I've pretty much hit a brick wall, so any information will be appreciated.
The UNKNOWN status indicates an exception was thrown on the server side but its details were not passed to the client.
You probably need to increase the log level on the server to try to find the root cause.
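For example (a hedged sketch, not part of the original answer): catching the failure inside the handler both logs the real root cause on the server and replaces the bare UNKNOWN with a more descriptive status. Status here is io.grpc.Status, and the logger is assumed to be the SLF4J-style logger already used in the question.

@Override
public void notifyPeers(NotifyPeersRequest request, StreamObserver<NotifyPeersResponse> responseObserver) {
    try {
        logger.debug("gRPC 'notifyPeers' request received");
        String host = request.getHost();
        for (PeerClient c : clients.values()) {
            c.addPeer(host);
        }
        NotifyPeersResponse response = NotifyPeersResponse.newBuilder()
                .setResult(result) // 'result' as computed in the original code
                .build();
        responseObserver.onNext(response);
        responseObserver.onCompleted();
    } catch (RuntimeException e) {
        logger.error("'notifyPeers' failed", e); // the root cause now shows up in the server log
        responseObserver.onError(Status.INTERNAL
                .withDescription(String.valueOf(e.getMessage()))
                .withCause(e) // the cause is kept for local debugging; it is not transmitted to the client
                .asRuntimeException());
    }
}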
In this post here, creating the channel as shown below made it possible to see a more meaningful error message:
ManagedChannel channel = NettyChannelBuilder.forAddress(host, port)
        .protocolNegotiator(ProtocolNegotiators.serverPlaintext())
        .build();
If A and B are in the same application have you considered making direct function calls or at least using the InProcessChannelBuilder and InProcessServerBuilder?
As mentioned elsewhere, in the current setup you can try increasing the log level on the server side (in B) to see the source of the exception.
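For reference, a minimal sketch of the in-process transport, assuming both services run in the same JVM; the channel/server name "peer-services" and ServiceBImpl are placeholders for your own names:

import io.grpc.ManagedChannel;
import io.grpc.Server;
import io.grpc.inprocess.InProcessChannelBuilder;
import io.grpc.inprocess.InProcessServerBuilder;

// Start Service B on the in-process transport instead of a TCP port.
Server serverB = InProcessServerBuilder.forName("peer-services")
        .addService(new ServiceBImpl()) // your existing Service B implementation
        .build()
        .start();

// Service A talks to it through an in-process channel; no sockets are involved.
ManagedChannel channelToB = InProcessChannelBuilder.forName("peer-services")
        .build();
ClientB client = new ClientB(channelToB);

This avoids the loopback TCP connection entirely, which also makes the setup easier to debug.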
I have a controller in Spring which receives a POST request that is handled asynchronously (using a DeferredResult object as the return value).
The response for this request is written as bytes directly to the HTTP stream (HttpServletResponse.getWriter().print()), and when it's done writing it sets the result on the DeferredResult object to close the connection.
I'm writing my response to the stream in chunks.
I have an issue with this request handling because the client closes the connection if I don't write to it for 1 minute. (I may write some chunks and then stop writing for 1 minute, so the connection gets closed in the middle of my procedure.)
I want to control the connection-closing procedure: I want to send a keep-alive when I'm not writing any data to the stream, so that the connection won't be closed until I decide to close it from the server side.
I couldn't find out how to get control of the connection from the controller on the server.
Please assist.
Thanks.
There is no such thing as a "keep alive" during an ongoing request or response in HTTP which can help with idle timeouts when receiving a request or response.
HTTP keep alive is only about keeping the TCP connection open after a response in order to process more requests on the same connection. TCP keep alive is instead used to detect connection loss without TCP shutdown and can also be used to prevent idle timeouts in stateful packet filters (as used in firewalls or NAT routers) in between client and server. It does not prevent idle timeouts at the application level though since it does not transport any data visible to the application level.
Note that the way you want to use HTTP is contrary to how HTTP was designed originally. It was designed for a client sending a full request and the server sending a full response immediately and not for the server sending some parts of the response, idling some time and then send some more. The proper way to implement such behavior would be by using WebSockets. With WebSockets both client and server can send new messages at any time (i.e. no request-response schema) and it also supports keep-alive messages. If WebSockets are not an option you can instead implement a polling client which regularly polls for new data from the server with a new request.
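If polling is acceptable, a hedged sketch of such a client could look like the following; the URL, the 5-second interval and the "DONE" marker are made-up placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        boolean done = false;
        while (!done) {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8080/my/endpoint/status").openConnection();
            conn.setRequestMethod("GET");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line = in.readLine();
                done = "DONE".equals(line); // the server reports progress until it is finished
            }
            if (!done) {
                Thread.sleep(5_000); // poll every 5 seconds instead of holding one connection open
            }
        }
    }
}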
I ran into a similar need just recently. The server code executes a long-running operation that can take as long as 30 minutes to return, and the client times out long before that. The solution was to have the long-running operation send periodic "keep alive" packets of data to the client via a "callback" argument provided by the request handler method. The callback is nothing more than a function (think of a lambda in Java) that takes as parameter the "keep alive" data packet to send to the client, and then writes that data packet to the client via the java.io.PrintWriter reference that you can get off of javax.servlet.http.HttpServletResponse. The code below is the handler method that does this. I had to refactor the code in the call hierarchy to accept this new "callback" parameter until the "callback" could reach the method that is performing the long-running operation, and inside that code I invoke the "callback" every so often, for example every time 10 records are processed. Note that the code below is Groovy (scripting code on top of Java that runs on the JVM) and the server-side framework is Spring:
...
@Autowired
DataImporter dataImporter

@PostMapping("/my/endpoint")
void importData(@RequestBody MyDto myDto, HttpServletResponse response) {
    // Callback to allow servant code deep in the call hierarchy to report back to client any arbitrary message
    Closure<Void> callback = { String str ->
        response.writer.print str
        response.writer.flush()
    }

    // This leads to the code that is performing a long running operation. Using
    // this "hook" that code has a direct connection to the client whereby
    // it can send packets of data to keep the connection from timing out.
    dataImporter.importData(myDto, callback)
}
}
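To make the shape of that refactoring concrete, here is a hedged Java sketch of the DataImporter side; the original answer only describes this part in prose, the per-record work is a placeholder, and the Groovy Closure is represented as a java.util.function.Consumer<String>:

import java.util.List;
import java.util.function.Consumer;

public class DataImporter {

    public void importData(List<String> records, Consumer<String> callback) {
        int processed = 0;
        for (String record : records) {
            process(record); // the actual long-running work for one record
            processed++;
            if (processed % 10 == 0) {
                // Every 10 records, push a small chunk to the client so the
                // connection never sits idle long enough to be timed out.
                callback.accept("processed " + processed + " records\n");
            }
        }
        callback.accept("done: " + processed + " records\n");
    }

    private void process(String record) {
        // placeholder for the real per-record processing
    }
}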
I have a standalone zookeeper server running.
client = CuratorFrameworkFactory.newClient(zkHostPorts, retryPolicy);
client.start();
assertThat(client.checkExists().forPath("/")).isNotNull(); // working
listener = new LeaderSelectorListenerAdapter() {
    @Override
    public void takeLeadership(CuratorFramework client) throws Exception {
        System.out.println("This method is never called! :( ");
        Thread.sleep(5000);
    }
};

String path = "/somepath";
leaderSelector = new LeaderSelector(client, path, listener);
leaderSelector.autoRequeue();
leaderSelector.start();
I am connecting to the server successfully, defining a listener and starting leader election.
Note: There is only 1 client.
But my client app never takes leadership. I am not able to figure out what I am doing wrong. Also, this is a trivial single-client scenario; shouldn't the client already be the leader?
EDIT:
It works if I use TestingServer from the curator-test library instead of starting my own ZooKeeper server, like below:
TestingServer server = new TestingServer();
client = CuratorFrameworkFactory.newClient(server.getConnectString(), retryPolicy);
...
Does this mean there is something wrong with my ZooKeeper server?
This is my zoo.cfg -
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper/ex1
clientPort=2181
Also, the server appears to be working fine, as I am able to connect to it using the CLI and can create/delete znodes.
I am trying to debug an HTTP client request. I need to get the local port number that was used when the connection was made. I can use the HttpClientContext to get the connection and retrieve the local port when the connection is successful. However, in cases where IOExceptions are thrown, I get a ConnectionShutdownException when retrieving the local port number. Any clue on how I can obtain the local port number for all HTTP requests, even in case of error?
This is for HTTPClient 4.0.1 (last version I have with me).
I did not find any simple one liner...
The part of HTTPClient that actually binds sockets to a local host/port and connects to the remote host/port is, unsurprisingly, the SocketFactory. In HTTPClient, the socket factory is associated with the SchemeRegistry, which in turn belongs to the connection manager. Note that HTTPClient's SocketFactory is NOT a javax.net.SocketFactory, but a wrapper around such an object. You define a scheme like so:
SchemeRegistry schRgstr = new SchemeRegistry();
Scheme httpScheme = new Scheme("http", PlainSocketFactory.getSocketFactory(), 80);
schRgstr.register(httpScheme);
Of course, org.apache.http.conn.scheme.SocketFactory is an interface, so you can decorate it to do anything you want; in particular, this will come in handy.
The part of HTTPClient that calls the socket factory is called the ClientConnectionOperator (which is an interface too). This object is actually also tied to the connection manager, and not the client per se. So if you want to customize the connection operator, you may override the connection manager too, for example like so (anonymous class):
ThreadSafeClientConnManager connMngr = new ThreadSafeClientConnManager(httpParams, schRgstr) {
    @Override
    protected ClientConnectionOperator createConnectionOperator(SchemeRegistry schreg) {
        return new YourConnectionOperator(schreg);
    }
};
The lifecycle model of the socket goes like so:
When the connection manager needs a new connection, it calls createConnection on the operator (which basically does nothing but create an internal object that will eventually hold the actual socket).
Further along the way it calls openConnection
openConnection goes to the SocketFactory and asks for a new Socket, then tries to connect it like so (here, sf is the "httpclient socket factory")
sock = sf.connectSocket(sock, target.getHostName(),
schm.resolvePort(target.getPort()),
local, 0, params);
If this call fails, an exception is thrown and more information will not be accessible. We'll get back to that.
If connectSocket works, though, the prepareSocket method is called on the connection operator. So you can override that method and put the port information into the context (or anything else you fancy):
@Override
protected void prepareSocket(Socket sock, HttpContext context, HttpParams params) throws IOException {
    super.prepareSocket(sock, context, params);
    context.setAttribute("LOCAL PORT INTERCEPTOR", sock.getLocalPort());
}
The HttpContext instance used here is the one you pass when you invoke HTTPClient, so you can access it even if the call fails later because of some other exception. When you place your call, do it like so:
HttpContext ctx = new BasicHttpContext();
HttpGet get = new HttpGet(targetUrl.getUrl());
HttpResponse resp = client.execute(get, ctx);
In this code, if the client could go as far as step 5, you have your port info accessible, even if an exception occurs later on (connection drop out, timeout, invalid HTTP, ...).
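For instance, a small sketch using the same attribute key as above shows how the port can be read back even when execute() throws:

try {
    HttpResponse resp = client.execute(get, ctx);
} catch (IOException e) {
    // Even though the call failed, the context still carries the attribute set in prepareSocket.
    Integer localPort = (Integer) ctx.getAttribute("LOCAL PORT INTERCEPTOR");
    // localPort is non-null if the connection got at least as far as prepareSocket.
    throw e;
}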
Going further
There is still a dark zone in all this: if the call fails at step 4 (the actual opening of the socket, e.g. if you have a DNS error while resolving the destination host name)... I'm not sure this case is actually interesting to you (I can not see why it would be). Seeing as dealing with it starts to get "messy", you should really consider whether or not you need this.
For that we need to start overriding really deep, and that involves lots of work - some of it I would not consider very good design.
The complexity arises because the authors of HTTPClient did not provide the necessary methods one could override to get to the information you need. Inside the socket factory, the interesting point is:
sock = createSocket();
InetSocketAddress isa = new InetSocketAddress(localAddress, localPort);
sock.bind(isa);
// go on and connect to the server
// like socket.connect...
There is no overridable method that separates the local (bind) side from the server (connect) side of opening the socket, so if any socket exception is thrown on the server side, your access to the socket instance is lost, and the local port info is gone with it.
But all is not lost, because we do have one entry point we can play with: the createSocket method! The default implementation is
public Socket createSocket() {
return new Socket();
}
But as Socket is not a final class, you can... play with it!
public Socket createSocket() {
    return new Socket() {
        @Override
        public void bind(SocketAddress bindpoint) throws IOException {
            super.bind(bindpoint);
            // get the local port and give the info back to whomever you like
        }
    };
}
Problem is: this works with plain sockets (because we can create an anonymous subclass), but it does not work with HTTPS, because you can not simply instantiate a Socket in this case; you have to do:
return (SSLSocket) this.javaxNetSSLSocketFactory.createSocket();
And you can not create an anonymous subclass in that case. And as Socket is not an interface either, you can not even proxy it to decorate it. So you could (ugliest code ever) create a Socket subclass that wraps and delegates to the SSLSocket, but that would be desperate.
So, recap time.
If we only care about sockets that were at some point connected to the server, the solution is fairly simple.
A scheme registry we build is used in a custom connection manager that overrides the ConnectionOperator. The operator's overridden method is prepareSocket, which allows us to simply update the HttpContext of any request we send with the port information. This is an easy way to find the information when everything goes well.
If we want to care about local ports attributed to a socket that never, ever got connected (which I would argue is of limited use), one needs to go deeper.
Our own SchemeRegistry should be designed with custom SocketFactories. These should probably be decorators of the default ones... The overridden createSocket method allows us to "intercept" the binding on the local port (and store it into, say, a ConcurrentMap<Socket, Integer> or a ThreadLocal), and by overriding connectSocket we trap any exception that may happen and rethrow it, but not before wrapping it in our own exception type that can hold the local port information. That way, if the exception passes through when you call the client, by checking the cause chain you will find your port data. If no exception occurs, we clean our map instance / thread local.
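Below is a hedged sketch of that deeper approach, for plain HTTP only; the class and exception names are made up, and the SocketFactory methods are the HttpClient 4.0.x ones discussed above:

import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.net.SocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.http.conn.scheme.PlainSocketFactory;
import org.apache.http.conn.scheme.SocketFactory;
import org.apache.http.params.HttpParams;

public class LocalPortRecordingSocketFactory implements SocketFactory {

    private final SocketFactory delegate = PlainSocketFactory.getSocketFactory();
    // Remembers the local port each socket was bound to, even if connect() later fails.
    private final Map<Socket, Integer> boundPorts = new ConcurrentHashMap<Socket, Integer>();

    public Socket createSocket() {
        return new Socket() {
            @Override
            public void bind(SocketAddress bindpoint) throws IOException {
                super.bind(bindpoint);
                boundPorts.put(this, getLocalPort());
            }
        };
    }

    public Socket connectSocket(Socket sock, String host, int port,
            InetAddress localAddress, int localPort, HttpParams params) throws IOException {
        try {
            Socket connected = delegate.connectSocket(sock, host, port, localAddress, localPort, params);
            boundPorts.remove(sock); // success: clean up; the port is reachable via prepareSocket anyway
            return connected;
        } catch (IOException e) {
            // Failure: rethrow, wrapped together with the local port (null if bind never happened).
            throw new LocalPortIOException(boundPorts.remove(sock), e);
        }
    }

    public boolean isSecure(Socket sock) {
        return delegate.isSecure(sock);
    }

    /** Hypothetical exception type carrying the local port up the cause chain. */
    public static class LocalPortIOException extends IOException {
        private final Integer localPort;

        LocalPortIOException(Integer localPort, IOException cause) {
            super("connect failed, local port was " + localPort, cause);
            this.localPort = localPort;
        }

        public Integer getLocalPort() {
            return localPort;
        }
    }
}

You would then register this factory instead of the plain one, e.g. new Scheme("http", new LocalPortRecordingSocketFactory(), 80), and check the cause chain for LocalPortIOException when a request fails.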
I am a novice in SOAP and JAX-WS.
After reading a lot of information I learned that Eclipse can capture SOAP messages, but I have a problem with it.
my publisher
public static void main(String[] args) {
    Endpoint.publish("http://localhost:8081/WS/Greeting",
            new GreetingImpl());
}
my client
public static void main(String[] args) {
    GreetingImplService service = new GreetingImplService();
    Greeting greeting = service.getGreetingImplPort();
    System.out.println("------->> Call Started");
    System.out.println(greeting.sayHello("friend !!!"));
    System.out.println("------->> Call Ended");
}
When I invoke the client, I see in the console:
------->> Call Started
Hello, Welcom to jax-ws friend !!!
------->> Call Ended
Therefore the service is working.
But in the TCP/IP Monitor I see an empty list.
my configuration of the TCP/IP Monitor
What am I doing wrong?
Please help.
I think that the problem is that your client is pointing directly to port 8081 (the port of the web service), so the TCP/IP Monitor does not come into play. Since the monitor is listening on port 8080, your client should use this endpoint:
http://localhost:8080/WS/Greeting
The TCP/IP monitor will receive the http request and then it will forward the message to
http://localhost:8081/WS/Greeting
To alter the endpoint used by the client you have 2 possibilities:
If the client uses a local wsdl document (for example you have saved a copy of the wsdl on your file system and used it to call wsimport), you can modify the endpoint in it (look at the element service at the end of the wsdl). The stub returned by service.getGreetingImplPort() reads the endpoint from the wsdl.
You can use this instruction in the main method of the client
((BindingProvider) greeting).getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY,"http://localhost:8080/WS/Greeting");
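For example, adapting the client's main method from the question (BindingProvider is javax.xml.ws.BindingProvider; the rest is unchanged):

public static void main(String[] args) {
    GreetingImplService service = new GreetingImplService();
    Greeting greeting = service.getGreetingImplPort();

    // Route the call through the TCP/IP Monitor on port 8080,
    // which then forwards it to the real endpoint on port 8081.
    ((BindingProvider) greeting).getRequestContext().put(
            BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
            "http://localhost:8080/WS/Greeting");

    System.out.println("------->> Call Started");
    System.out.println(greeting.sayHello("friend !!!"));
    System.out.println("------->> Call Ended");
}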