I am looking for the best way to connect a desktop-based trading client to a trading server. Latency is the most important factor to consider.
We have two options:
1. REST calls: I can call a REST service from the trading client, but I don't think it's a good approach because each call would establish a new TCP session.
2. AMQP (e.g., RabbitMQ): We can publish messages to a RabbitMQ server, and the server can consume messages from there.
Please suggest which approach is best, or whether there is another possible approach as well.
The client is in .NET and the server is a Java service.
A REST call is probably faster than a message queue call in most cases, since the message queue will likely involve disk access.
For minimum latency, establish a direct TCP connection and implement your own protocol.
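For illustration, here is a minimal sketch of such a protocol in Java: one long-lived connection per client, 4-byte length-prefixed messages, and Nagle's algorithm disabled for latency. The port and framing are assumptions, not a prescribed design.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class TradingServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9090)) { // port is illustrative
                while (true) {
                    Socket client = server.accept();
                    client.setTcpNoDelay(true); // disable Nagle's algorithm for lower latency
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            try (DataInputStream in = new DataInputStream(client.getInputStream());
                 DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
                while (true) {
                    int len = in.readInt();        // 4-byte length prefix
                    byte[] payload = new byte[len];
                    in.readFully(payload);         // read one framed request
                    out.writeInt(payload.length);  // echo back as a placeholder response
                    out.write(payload);
                    out.flush();
                }
            } catch (Exception e) {
                // client disconnected; drop the session
            }
        }
    }

A .NET client can speak the same framing over one persistent TcpClient connection (mind byte order: Java's DataOutputStream is big-endian).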
I'm working on a monolithic application. We have some features on our roadmap that I think would fit a microservices architecture, and I am toying around with building them as such.
My problem: the application processes ~150 requests per second during peak times. These requests come in on raw TCP/IP connections which are kept alive at all times. We have very strict latency requirements (the majority of our requests are responded to within 25-50 milliseconds). Each request would need to consume one or more microservices. My concern is that consuming multiple RESTful web services (specifically creating/destroying the connection each time a service is consumed, as well as the TLS handshakes) is going to add too much latency to processing these requests.
My question: Is it possible (and is there a best practice) to maintain the state of a connection to a RESTful web service while multiple threads consume that web service? Each request to the web service would be self-contained, but we would simply keep the physical connection alive.
The JVM naturally pools HTTP connections for HttpURLConnection (see http://docs.oracle.com/javase/8/docs/technotes/guides/net/http-keepalive.html), so it should happen for JAX-WS and JAX-RS out of the box. Other, non-HttpURLConnection-based frameworks (like Netty) usually support HTTP connection pooling as well, so it's very likely you don't need to handle this yourself in your code. You do need to work out how many connections to pool, but that's a configuration matter.
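As an illustration, here is a hedged sketch of relying on that pooling with HttpURLConnection (Java 11+; the endpoint URL is hypothetical). The key points are raising http.maxConnections before the first request and draining each response body so the socket can be reused:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class PooledClient {
        public static void main(String[] args) throws Exception {
            // Keep-alive cache size per destination (default is 5);
            // must be set before the first HTTP request is made.
            System.setProperty("http.maxConnections", "20");

            for (int i = 0; i < 3; i++) {
                URL url = new URL("http://service.internal/api/resource"); // hypothetical endpoint
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (InputStream in = conn.getInputStream()) {
                    // Drain the body fully so the connection is returned to the pool;
                    // calling disconnect() instead would close the underlying socket.
                    in.transferTo(OutputStream.nullOutputStream());
                }
            }
        }
    }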
You can verify that TCP connections are not closed after an HTTP response by sniffing your application's traffic with tcpdump or Wireshark and checking that no TCP FIN appears after you receive the result.
I need to implement an analytics system with a server and terminals that works in real time.
I use the ZeroMQ library (pub/sub mode) to send messages to clients (~40 bytes each).
If I connect 1 client, messages arrive with a delay (sometimes more than 250 ms).
If I connect 100 clients, many of them lose uniformity of delivery (no messages at all for more than 750 ms, then a huge burst of data). This is a critical issue for me.
I have to publish to more than 6000 terminals...
I publish every 30 ms, which is about 1700 bytes to each client in the worst case (TCP).
Maybe I should use another technology to deliver messages in real time?
As I said in the comment, multicast is the way. The primary overriding concern is whether your terminals can join the group you are publishing on, irrespective of how far away they are.
You haven't indicated how the terminals connect to your network (for example, VPN over the internet, a private line, whatever). You asked for a better technology: it's multicast.
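To make the multicast suggestion concrete, here is a minimal Java publisher sketch; the group address and port are illustrative assumptions. Each terminal joins the same group, so one send reaches all of them and publish cost no longer grows with the client count:

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class MulticastPublisher {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("239.1.2.3"); // administratively scoped group
            try (MulticastSocket socket = new MulticastSocket()) {
                byte[] payload = "tick-update".getBytes();
                DatagramPacket packet = new DatagramPacket(payload, payload.length, group, 4446);
                socket.send(packet); // one datagram reaches every joined terminal
            }
        }
    }

Each terminal would create a MulticastSocket on port 4446, call joinGroup(group), and loop on receive(). Note that multicast is UDP, so if you need guaranteed delivery you must layer recovery on top (which is what protocols like PGM do).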
Now there are some options if you are going to go down the TCP route:
Build a load-balancing infrastructure which sits in front of your service, meaning that your terminals don't connect to your service but to a set of load balancers which then connect to your service. If you have 10 of these, for example, each only has to deal with 600 clients; your problem is much smaller and you can scale this way. Don't forget to use asynchronous IO.
Buy better hardware: for example, Solace or Tervela make hardware message brokers which can scale to very large numbers of concurrent TCP connections, but this is not cheap.
I am not very familiar with web services, and I'm having a hard time finding information about the way clients interact with a RESTful web service.
I've implemented a REST interface to interact with clients using Java and the Jersey JAX-RS library; however, I need to limit the number of connected clients to 6. If I have 6 clients connected and a 7th tries to connect, I need to know whether one of the other 6 has disconnected at some point so I can give the new client a connection. Is there a simple way to tell on the server side if a client is still connected? Do clients maintain a connection in a REST web service after they complete a request to the server? Normally the clients I'm dealing with make HTTP POST and GET requests to the server at least once a second.
The only thing I can think of would be to ping each connected client and wait for a response when another client tries to connect. If a ping times out, I could replace that client with the new one. But I'm not really sure how that would impact server performance. If anyone has any input on a way to accomplish this, I would greatly appreciate it. Thanks!
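One hedged sketch of that idea, exploiting the fact that your clients make a request at least once a second: track a last-seen timestamp per client and evict silent ones only when a new client asks for a slot. All names here are hypothetical; touch() would be called from something like a JAX-RS ContainerRequestFilter on every request:

    import java.util.concurrent.ConcurrentHashMap;

    public class ClientRegistry {
        private static final int MAX_CLIENTS = 6;
        private static final long TIMEOUT_MS = 5_000; // generous vs. the 1-second request rate

        private final ConcurrentHashMap<String, Long> lastSeen = new ConcurrentHashMap<>();

        /** Record activity for an already-admitted client. */
        public void touch(String clientId) {
            lastSeen.computeIfPresent(clientId, (id, t) -> System.currentTimeMillis());
        }

        /** Admit a new client if a slot is free or a stale client can be evicted. */
        public synchronized boolean tryAdmit(String clientId) {
            long now = System.currentTimeMillis();
            lastSeen.values().removeIf(t -> now - t > TIMEOUT_MS); // evict silent clients
            if (lastSeen.size() >= MAX_CLIENTS) {
                return false; // all 6 slots are still active
            }
            lastSeen.put(clientId, now);
            return true;
        }
    }

This avoids active pings entirely, so it adds no load beyond a map update per request.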
I would like the clients to query each other through the server without delay (= no polling interval).
Example: Server S, clients A and B
Client A wants to send a request to Client B.
Client A will make a request to the server S, no problem there.
Then server S needs to be able to send a request to Client B, but how can it do that without polling?
All the Node.js/APE (for PHP) technologies are designed for the web, but I don't use a web server here. Does Java have something close to a push technology/framework that is not web-based?
I would really prefer a solution that doesn't require each client to use its own reserved port (I don't want to end up with one web service per client, for example).
Note: all the clients are on the same machine.
A couple of options...
Plain socket communication: java.net.Socket, java.net.ServerSocket. Maximum flexibility, but requires knowledge of low-level TCP/IP APIs and concepts.
Good old RMI: a Java-based RPC layer on top of TCP/IP. Works well when client and server are both in Java and generally in the same subnet. May give problems when the client and/or server are NATed (a callback sketch follows this list).
Spring Remoting; it's actually pretty decent.
Bi-directional web services, i.e. clients host their own web services which the server calls when it needs to do a callback.
JMS, as someone already mentioned.
Distributed data structures: check out http://www.hazelcast.com/
Lots of options to choose from; no need for a web server.
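As an illustration of the RMI callback option mentioned above, a hedged sketch (all names are hypothetical): each client exports a callback object and registers it with the server, which can then push to any client with no polling. Since all your clients are on one machine, RMI's NAT caveat doesn't apply:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;
    import java.util.concurrent.ConcurrentHashMap;

    interface ClientCallback extends Remote {
        void deliver(String message) throws RemoteException;
    }

    interface Broker extends Remote {
        void register(String clientId, ClientCallback cb) throws RemoteException;
        void sendTo(String clientId, String message) throws RemoteException;
    }

    class BrokerImpl extends UnicastRemoteObject implements Broker {
        private final ConcurrentHashMap<String, ClientCallback> clients = new ConcurrentHashMap<>();

        BrokerImpl() throws RemoteException {}

        public void register(String clientId, ClientCallback cb) {
            clients.put(clientId, cb);
        }

        public void sendTo(String clientId, String message) throws RemoteException {
            ClientCallback cb = clients.get(clientId);
            if (cb != null) cb.deliver(message); // push straight to the target client
        }
    }

    public class BrokerServer {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.createRegistry(1099); // default RMI registry port
            registry.rebind("broker", new BrokerImpl());
        }
    }

A client exports its callback with UnicastRemoteObject.exportObject(callback, 0) and passes the stub to register(). The client still listens on an ephemeral port for callbacks, but RMI picks and manages it automatically; you don't reserve one per client.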
If you really don't want to use a web server, then I would check out JMS. That being said, all the cool kids are using web servers these days since the protocols are so ubiquitous.
Your use case requires a messaging protocol. We don't really know the scope of your problem, but you already said you want a server to exchange requests between clients, so I would go with an existing solution rather than a roll-your-own approach.
JMS has been mentioned and is certainly a viable Java-based solution; another would be XMPP, a real-time communication protocol commonly used for instant messaging.
It is an open standard that has both server and client support in every major language and platform. This would allow you to have standalone apps, web-based apps, and mobile apps all able to communicate with each other. The only potential gotcha for your use case is that it is text-based. Since you haven't said what requests you want to pass back and forth, I don't know whether it fits the bill or not.
You can use Smack for client-side development in Java and any open source XMPP server you want.
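For a flavor of the client side, a minimal Smack 4 sketch (domain, host, and accounts are assumptions; the XMPP server plays the role of server S):

    import org.jivesoftware.smack.AbstractXMPPConnection;
    import org.jivesoftware.smack.chat2.Chat;
    import org.jivesoftware.smack.chat2.ChatManager;
    import org.jivesoftware.smack.tcp.XMPPTCPConnection;
    import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
    import org.jxmpp.jid.EntityBareJid;
    import org.jxmpp.jid.impl.JidCreate;

    public class ClientA {
        public static void main(String[] args) throws Exception {
            // Connect and log in as clientA; credentials are illustrative.
            AbstractXMPPConnection connection = new XMPPTCPConnection(
                    XMPPTCPConnectionConfiguration.builder()
                            .setXmppDomain("example.org")
                            .setHost("localhost")
                            .setUsernameAndPassword("clientA", "secret")
                            .build());
            connection.connect().login();

            // Send to clientB through the server; the server pushes it to B
            // immediately, so neither side polls.
            ChatManager chatManager = ChatManager.getInstanceFor(connection);
            EntityBareJid jid = JidCreate.entityBareFrom("clientB@example.org");
            Chat chat = chatManager.chatWith(jid);
            chat.send("ping from A");

            connection.disconnect();
        }
    }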
In another question I was worried about using a web service that takes five minutes to complete. I was thinking about using RMI instead of a web service for this use case...
But at the end of the day, do both a web service and RMI use a TCP socket for the underlying connection? Is there any reason why a web service call taking 5 minutes is less stable than an RMI request taking the same time?
Note that in our case we are talking about internal apps communicating.
Update: This question stems from me worrying that we'd run into dropped connections or other issues with web services that take 3-5 minutes to complete. The worry may be totally irrational; responders to my other question indicated you should be fine if you control both the client and the server. But I just wanted to understand in more detail why a dropped connection for a 5-minute call is no more likely with a web service implementation than with an RMI implementation. If they both rely on socket connections, then that might explain why there is no difference...
If a single remote call is taking 5 minutes to complete, then it's probably because the operation implementing that call is slow, not because the web service layer itself is slow. If you were to re-wrap the operation with RMI, it'll likely be just as slow.
The performance benefit of RMI over SOAP is only really going to be apparent when you have a large number of operations being called, rather than for the speed of any one operation, simply because RMI is more efficient than SOAP. But it won't magically make a slow operation go faster.
As for your question regarding sockets, yes, RMI and SOAP both use socket-level protocols when you go down far enough (IIOP or JRMP in the case of RMI, HTTP in the case of SOAP). That isn't really relevant to your problem, though.
RMI is mostly used over JRMP (in a pure-Java context) or IIOP (in a non-JVM context), while SOAP messages are usually (but not exclusively) sent over HTTP. All three of these wire protocols use TCP/IP, so in this regard there is no advantage in choosing RMI over a web service.
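One practical note on the dropped-connection worry: over HTTP, the usual failure mode for a long call is a client-side read timeout rather than the socket itself, so make sure the timeout comfortably exceeds the operation. A hedged sketch (the endpoint is hypothetical; Java 11+):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class LongCallClient {
        public static void main(String[] args) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://internal-app/report").openConnection(); // hypothetical endpoint
            conn.setConnectTimeout(10_000);     // establishing the connection should still be fast
            conn.setReadTimeout(6 * 60 * 1000); // headroom over the 5-minute operation
            try (InputStream in = conn.getInputStream()) {
                in.transferTo(OutputStream.nullOutputStream()); // consume the (eventual) response
            }
        }
    }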