I have a Java application (say A) which communicates with another application (say B) via TCP sockets. My Java application is multithreaded and can run up to 100 threads. For the A --> B communication we have 10 sockets.
Challenges -
Connection pooling - we need a pooling mechanism so that n (say 100) threads of application A can communicate with application B over x (say 10) TCP sockets.
Multithreading - how can two threads share the same socket, sending their requests one by one and getting each response routed back to the appropriate thread?
Multiple requests - is it possible for two threads to send requests on a single socket simultaneously?
Can we overcome these challenges with a framework? Is it possible?
I have heard that Spring Integration, Apache Camel, or a local MQ can solve this. Any examples?
With Spring Integration:
CachingClientConnectionFactory
TcpOutboundGateway (with CachingClientConnectionFactory) - see the sketch below.
Collaborating Outbound and Inbound Channel Adapters.
But you have to do your own request/reply collaboration (usually based on something in the message); the replies may not come back in the order they were sent. Since there is no standard way to perform that collaboration, the framework doesn't support it itself.
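For the gateway option, here is a minimal Java-config sketch; this is an illustration only, assuming spring-integration-ip is on the classpath, with the channel name "toB", host, port, and pool size as placeholders rather than values from the question:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.annotation.ServiceActivator;
    import org.springframework.integration.config.EnableIntegration;
    import org.springframework.integration.ip.tcp.TcpOutboundGateway;
    import org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory;
    import org.springframework.integration.ip.tcp.connection.CachingClientConnectionFactory;
    import org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory;

    @Configuration
    @EnableIntegration
    public class TcpClientConfig {

        @Bean
        public AbstractClientConnectionFactory clientFactory() {
            TcpNetClientConnectionFactory target =
                    new TcpNetClientConnectionFactory("hostB", 9999); // placeholders
            // The delegate must be single-use so the cache can dedicate each
            // connection to one request/reply exchange at a time.
            target.setSingleUse(true);
            return new CachingClientConnectionFactory(target, 10); // 10 pooled sockets
        }

        @Bean
        @ServiceActivator(inputChannel = "toB")
        public TcpOutboundGateway tcpOutboundGateway(AbstractClientConnectionFactory factory) {
            TcpOutboundGateway gateway = new TcpOutboundGateway();
            gateway.setConnectionFactory(factory);
            return gateway;
        }
    }

The idea is that each of the 100 threads sends to "toB" and blocks until its own reply arrives; because a cached connection serves a single in-flight request at a time, requests and replies stay paired without extra correlation logic on your side.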
I was able to resolve the problem stated in the question via jPOS.
jPOS can do multiplexing. It uses ISO message fields 11 (STAN) and 41 (terminal id) to match each response to its request.
jPOS also provides a pooling mechanism.
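A minimal sketch of what the multiplexing looks like, assuming a QMUX deployed in Q2 under the (hypothetical) name "b"; the MTI, STAN, and terminal id values are placeholders:

    import org.jpos.iso.ISOMsg;
    import org.jpos.iso.MUX;
    import org.jpos.q2.iso.QMUX;

    public class JposCall {
        public ISOMsg call() throws Exception {
            MUX mux = QMUX.getMUX("b");     // hypothetical MUX name
            ISOMsg req = new ISOMsg();
            req.setMTI("0200");
            req.set(11, "000001");          // field 11 (STAN): part of the match key
            req.set(41, "TERM0001");        // field 41 (terminal id): also matched
            // Many threads can call request() concurrently on the same MUX;
            // jPOS pairs each response with its request via the key fields.
            return mux.request(req, 30000); // returns null on timeout
        }
    }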
Related
We have a Spring Boot (with Zuul) app using the default embedded Tomcat (I think). It has many clients implemented with different technologies and languages, and we have a problem with too many ports in TIME_WAIT: too many socket connections are opened and closed, given that the expected request behavior should keep connections alive most of the time.
By retrieving the HttpRequest object in the deployed API, I can get information on the request headers. This way I can track the HTTP protocol used (HTTP/1.1) and header parameters such as keep-alive (which, if present, is redundant with the use of HTTP/1.1).
=> I would like to track opened and closed socket connections, but I don't see how.
Intermediate information would be better than nothing.
Note: I found some tutorials on a similar topic for spring-websocket, but we don't use it.
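For the header-level tracking described above, a sketch of how it could look as a servlet filter (the class name and log format are illustrative); note that this observes requests, not socket open/close events, which would need container-level metrics:

    import java.io.IOException;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.stereotype.Component;
    import org.springframework.web.filter.OncePerRequestFilter;

    @Component
    public class ConnectionHeaderFilter extends OncePerRequestFilter {

        private static final Logger log = LoggerFactory.getLogger(ConnectionHeaderFilter.class);

        @Override
        protected void doFilterInternal(HttpServletRequest request,
                                        HttpServletResponse response,
                                        FilterChain chain) throws ServletException, IOException {
            // Log the protocol and Connection header for every request.
            log.info("{} {} Connection={}",
                    request.getProtocol(),            // e.g. HTTP/1.1
                    request.getRequestURI(),
                    request.getHeader("Connection")); // e.g. keep-alive or close
            chain.doFilter(request, response);
        }
    }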
I'm developing a Java API in Spring for an Android app. Right now my API is 100% REST and stateless: for the client to receive data, it must send a request first.
However, what I need is for the server to send data to the client (not the client to the server first) whenever it is ready with its task.
I think that some kind of session must be created between the two parties.
My question is: how can I achieve this functionality of the SERVER sending data to the CLIENT when it is ready with its task? (It is unknown how long the task will take.)
What kind of API should I develop for this purpose?
One idiotic workaround is sending a request to the server every n seconds, but I'm looking for a more intelligent approach.
There are multiple options available; you can choose whatever suits you best.
HTTP Long Polling - the server holds the request open until it is ready with its task (in your case). This way you don't have to make repeated requests every few seconds (which would be plain HTTP polling).
Server Sent Events - the server pushes updates to the client without long polling; see the sketch after this list. It is a standardized part of HTML5 - https://www.w3.org/TR/eventsource/
WebSockets - WebSockets work in full-duplex mode over a persistent TCP connection; once the connection is established, both server and client can send data at any time. They are supported by most modern browsers; for Android you can look at WebSocket libraries such as Autobahn and Java-WebSocket.
SockJS - I would recommend going with this option instead of plain WebSocket, as it falls back to other transports where WebSocket is unavailable. http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#websocket-fallback-sockjs-enable
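As a concrete illustration of the Server Sent Events option, a minimal sketch with Spring MVC's SseEmitter (assuming a recent Spring 4+ setup; the endpoint path and payload are placeholders):

    import java.io.IOException;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

    @RestController
    public class PushController {

        private final Set<SseEmitter> clients = ConcurrentHashMap.newKeySet();

        // The Android client opens this stream once and keeps it open.
        @GetMapping("/events")
        public SseEmitter subscribe() {
            SseEmitter emitter = new SseEmitter(); // default timeout
            clients.add(emitter);
            emitter.onCompletion(() -> clients.remove(emitter));
            emitter.onTimeout(() -> clients.remove(emitter));
            return emitter;
        }

        // Call this from wherever the long-running task completes.
        public void publish(String payload) {
            for (SseEmitter emitter : clients) {
                try {
                    emitter.send(payload);
                } catch (IOException e) {
                    clients.remove(emitter); // client went away
                }
            }
        }
    }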
I'm going to create an authentication server which itself interacts with a set of different OAuth 2.0 servers.
Netty seems to be a good candidate for the network part here, but before starting I need to clear up some details about Netty, as I'm new to it.
The routine will be as follows:
The server accepts an HTTPS connection from a client.
Then, without closing this first connection, it makes another HTTPS connection to a remote OAuth 2.0 server and gets data.
After all that, the server sends the result back to the client, which is supposed to keep the connection alive.
How do I implement this scenario with Netty?
Do I have to create a new Netty client and/or reconnect it each time I need to connect to a remote OAuth 2.0 server? If so, I'll have to create a separate thread for every outgoing connection, which will drastically reduce performance.
Another scenario is to create a sufficient number of Netty clients within the server at startup and keep them constantly connected to the OAuth 2.0 servers via HTTPS.
That's easily done with Netty. First you set up your Netty server using the ServerBootstrap, and then, in a ChannelHandler that handles the connection from the client, you can use e.g. the client Bootstrap to connect to the OAuth server and fetch the data. You don't need to worry about creating threads or similar; you can do it all in a non-blocking fashion. Take a look at this example and try to understand how it works:
https://github.com/netty/netty/blob/master/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L44.
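In outline, the frontend handler in that example boils down to something like this sketch (host and port are placeholders, the SSL/HTTP codec handlers needed for real HTTPS are elided, and the inline backend handler simply relays the reply):

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.Channel;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    public class AuthFrontendHandler extends ChannelInboundHandlerAdapter {

        private Channel outbound;

        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            final Channel inbound = ctx.channel();
            Bootstrap b = new Bootstrap()
                    // Reuse the inbound channel's event loop: no extra threads
                    // are created for the outgoing connection.
                    .group(inbound.eventLoop())
                    .channel(inbound.getClass())
                    .handler(new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelRead(ChannelHandlerContext backendCtx, Object msg) {
                            inbound.writeAndFlush(msg); // relay the OAuth reply to the client
                        }
                    });
            ChannelFuture f = b.connect("oauth.example.com", 443); // placeholder
            outbound = f.channel();
            f.addListener(future -> {
                if (!future.isSuccess()) {
                    inbound.close(); // could not reach the OAuth server
                }
            });
        }

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            if (outbound != null && outbound.isActive()) {
                outbound.writeAndFlush(msg); // forward the client's request
            }
        }
    }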
I am planning to develop a JavaScript client application that will connect to a Java server using WebSocket. The server should handle many connected clients.
After some reading I found out that WebSocket handling is single-threaded. This is not good if I want to run database queries that can block everything for a while.
What I am thinking about is opening a separate WebSocket for each JavaScript client: one socket listens for new connections and, when a connection is established, creates a unique id. The server then opens a new WebSocket and sends the id to the client over the listener socket. When the client receives the id, it closes the first socket and connects to the new one.
What do you think, is this a good solution? Maybe I am missing something?
Spring 4 gives you the chance to use a thread pool. The documentation is here:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html
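A minimal sketch of the relevant configuration, assuming Spring 4's STOMP-over-WebSocket support; the endpoint path, broker destination, and pool sizes are placeholders. Blocking work such as database queries then runs on the inbound channel's pool rather than on the I/O thread:

    import org.springframework.context.annotation.Configuration;
    import org.springframework.messaging.simp.config.ChannelRegistration;
    import org.springframework.messaging.simp.config.MessageBrokerRegistry;
    import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
    import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
    import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

    @Configuration
    @EnableWebSocketMessageBroker
    public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

        @Override
        public void registerStompEndpoints(StompEndpointRegistry registry) {
            registry.addEndpoint("/ws"); // placeholder endpoint
        }

        @Override
        public void configureMessageBroker(MessageBrokerRegistry registry) {
            registry.enableSimpleBroker("/topic");
        }

        @Override
        public void configureClientInboundChannel(ChannelRegistration registration) {
            // Messages from clients are handled by this pool, so one slow
            // database query only ties up one of these threads.
            registration.taskExecutor().corePoolSize(8).maxPoolSize(16);
        }
    }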
You could use Akka to handle all the concurrency and thread management for you. Or you could use the Play Framework, which builds on Akka and supports WebSocket quite nicely. With Play you can choose between Java and Scala on the server side.
You should use NodeJS on the server to handle the socket I/O. Your JavaScript client apps can connect to it and then make calls to your Java-based API. NodeJS is non-blocking (async), and you should be able to leverage your existing JavaScript skills to quickly build a Node app. You could even use a full MEAN stack to build the client/server app; http://meanjs.org/ and http://mean.io/#!/ are two popular places to start.
I have a TCP endpoint which sends messages to a Java component that calls a stored procedure in the DB, does some processing on the result, and returns it to the same TCP endpoint.
What I understood is that every TCP request runs in its own thread, but if the messages come from the same connection, does that mean I'll have only one thread? Do I need to configure Mule to make the Java component multithreaded?
The only thing I found is this:
http://www.mulesoft.org/documentation/display/MULE3USER/Tuning+Performance#TuningPerformance-pooling
and I can't understand it :D
In Mule 3, whose documentation you've linked in your question, message receivers (i.e. inbound endpoints) typically have a dedicated work manager with a pool of threads assigned to process requests in parallel (the exception is the JMS connector, which acts a little differently).
So in your case, the TCP inbound endpoint will have, by default, 16 threads assigned to deal with incoming requests that hit the single open TCP socket.
No need to use pooled components.
EDIT: The question is about Mule 1.3, which is super old and has a very different threading model. In that case, each endpoint has a different thread pool.