In another question I was worried about using a web service call that takes five minutes to complete. I was thinking about using RMI instead of web services for this use case.
But at the end of the day, do both a web service and RMI use a TCP socket for the underlying connection? Is there any reason why a web service call taking 5 minutes is less stable than an RMI request taking the same time?
Note that in our case we are talking about internal apps communicating.
Update: This question stems from me worrying that we'd run into dropped connections or other issues with web services that take 3-5 minutes to complete. The worry may be totally irrational - responders to my other question indicated you should be fine if you control both the client and the server. But I just wanted to understand in more detail why a dropped connection for a 5 minute call is no more likely with a web service implementation than with an RMI implementation. If they both rely on socket connections, then that might explain why there is no difference...
If a single remote call is taking 5 minutes to complete, then it's probably because the operation implementing that call is slow, not because the web service layer itself is slow. If you were to re-wrap the operation with RMI, it'll likely be just as slow.
The performance benefit of RMI over SOAP is only really going to be apparent when you have a large number of operations being called, rather than for the speed of any one operation, simply because RMI is more efficient than SOAP. But it won't magically make a slow operation go faster.
As for your question regarding sockets, yes, RMI and SOAP both use socket-level protocols when you go down far enough (IIOP or JRMP in the case of RMI, HTTP in the case of SOAP). That isn't really relevant to your problem, though.
RMI is mostly used over JRMP (in pure Java context) or IIOP (in non-JVM context), while SOAP messages are usually (but not exclusively) sent over HTTP. All of these three wire protocols use TCP/IP, so in this regard there is no advantage of choosing RMI over a web service.
I'm working on an application which is a monolith. We have some features in our roadmap that I think would fit into a microservices architecture and am toying around with building them as such.
My problem: the application processes ~150 requests per second during peak times. These requests come in on raw TCP/IP connections which are kept alive at all times. We have very strict latency requirements (the majority of our requests are responded to within 25-50 milliseconds). Each request would need to consume 1 to many microservices. My concern is that consuming multiple restful web services (specifically creating/destroying the connection each time the service is consumed as well as TLS handshakes) is going to cause too much latency for processing these requests.
My question: Is it possible (and is there a best practice) to maintain the state of a connection to a RESTful web service while multiple threads consume that web service? Each request to the web service would be self-contained, but we would simply keep the physical connection alive.
The JVM naturally pools HTTP connections for HttpURLConnection (see http://docs.oracle.com/javase/8/docs/technotes/guides/net/http-keepalive.html), so it should happen for JAX-WS and JAX-RS clients out of the box. Other, non-HttpURLConnection-based frameworks (like Netty) usually support HTTP connection pooling as well, so it's very likely you don't need to handle this yourself in your code. You do need to work out how many connections you want to pool, but that's a configuration matter.
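For the stock HttpURLConnection transport, that keep-alive behaviour is controlled by JDK networking properties. A minimal sketch (the values are illustrative, not recommendations; in practice they are usually passed as -D flags at JVM startup):

    // Keep-alive settings honoured by HttpURLConnection (and therefore by JAX-WS/JAX-RS
    // clients that use it as their transport). Usually set via -Dhttp.keepAlive=... etc.
    System.setProperty("http.keepAlive", "true");     // reuse idle TCP connections (JDK default: true)
    System.setProperty("http.maxConnections", "20");  // idle connections kept per destination (JDK default: 5)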
You can verify that TCP connections are not closed after an HTTP response by sniffing traffic from your application with tcpdump or Wireshark and confirming that no TCP FIN is sent after you get the result.
I am new to the RESTful Webservices world and I have a question regarding how WS works.
Context:
I am developing a RESTful WS that will have a high load; at any given time I could have, let's say, up to 10 clients sending multiple requests. All the requests will be sent to port 80.
I am developing the WS with Jersey (Java) and deploying on a Tomcat Webserver.
Question:
Let's say we have 5 clients that send requests at the same time; each one sends 2 requests to port 80; will they be treated in FIFO order? Can we have some sort of multi-threading if let's say we don't care about the order?
It all depends on which server you use and how it is configured. The standard configuration (you have to work hard to make it non-standard) is to have multiple threads. In other words, the server automatically creates or reuses a thread for each new request, and it is almost certain that requests will be processed in parallel.
You can actually see this inside your running code by using java.lang.Thread.currentThread(): print the name of the current thread for each REST request and you will see.
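For example, a minimal (hypothetical) Jersey resource that echoes the worker thread name; hitting it from several clients at once should show different thread names:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;

    // Hypothetical resource: each request reports the worker thread that serves it.
    @Path("/whoami")
    public class WhoAmIResource {

        @GET
        @Produces("text/plain")
        public String whoAmI() {
            String thread = Thread.currentThread().getName();
            System.out.println("Serving request on thread: " + thread);
            return thread;
        }
    }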
To answer your question, a thread will be fetched from the thread pool to serve every request you send. The server does not care about the order; the request that arrives first will be served first.
More about the servers:
I suggest you put Nginx or Apache in front as a reverse proxy for better performance; a thread will still be fetched from the thread pool to serve each request. To improve throughput, you can increase the thread pool size. However, too many threads will, on the other hand, reduce performance, because the cost of switching between threads increases. You don't want a very large thread pool.
If you are using Apache + Tomcat, you basically have the same situation as using Tomcat alone, but Apache is better suited than Tomcat to act as the front-end web server. In practice, companies use Apache as a reverse proxy that dispatches requests to Tomcat.
Apache and Tomcat are thread-based servers; their performance degrades when there are too many requests. If you have to handle a very large number of requests, you can use Nginx.
Nginx is an event-based server; it uses a queue to store requests and dispatches them in FIFO order. It can handle many requests with far fewer threads, so its performance stays more stable even under a larger request load. However, with an extremely large number of requests, Nginx will also be overwhelmed, as its event loop has no room for extra requests.
Companies deal with that situation by using distributed-system concepts, for example a load balancer, but for your question that's a little too much. Check this article and this article to get a better idea of how Nginx and Apache compare.
I am incredibly new to the topic of websockets and am trying to figure out the correct way to handle the communication between a device and a server.
Here is my scenario: I have a thermostat (very similar to the Nest) that needs to communicate with a web server. Every time you change the temperature on the thermostat, I need to send data to the web server to update its "current stats" in the database. Easy, I can do that.
The part that I am confused about, and think websockets might be a use-case is when the user changes the temperature from the web interface. The thermostat has to pull in that information from the server to see "Oh, okay you want it to be 66 degrees."
I've thought of having the thermostat poll the server every 2-5 seconds to see what the "current stats" are in the database and change the temperature accordingly, but that seems like overkill.
Is there a way to open a connection between the thermostat and the server to listen for messages?
I began down the road of reading about WebSockets; however, I believe they are unfortunately browser-based only.
As I'm fairly new to the game with regards to these types of connections, if anyone could point me in the right direction regarding protocols, communication, etc. I would greatly appreciate it!
Tech Specs
Server is written in Ruby on Rails
Thermostat is written in Java
WebSockets can be used between any two programs that need to communicate; they are certainly not restricted to the browser. That said, whether you should be using WebSockets is a different question. One thing to think about is that WebSockets involve a persistent connection. This may not scale (if you have lots of devices) and it may also be overkill: if you expect the temperature to change once a day, holding a persistent connection open for the entire day is an enormous waste of resources. WebSockets are typically used when communication needs to be "fast" and relatively frequent. Unless you really need instantaneous updates on the thermostat, I would just have it ping the server every few minutes for updates.
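To make the polling suggestion concrete, here is a minimal sketch; the endpoint URL and the 5-minute interval are assumptions for illustration, not part of the original answer:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Periodically asks the server for the current target temperature.
    public class ThermostatPoller {

        private static final String TARGET_URL = "http://example.com/thermostat/target"; // assumption
        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        public void start() {
            scheduler.scheduleAtFixedRate(this::poll, 0, 5, TimeUnit.MINUTES);
        }

        private void poll() {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(TARGET_URL).openConnection();
                conn.setRequestMethod("GET");
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                    String body = in.readLine();                     // e.g. "66"
                    System.out.println("Target temperature: " + body);
                    // apply the new set point on the device here
                }
            } catch (Exception e) {
                // network errors are expected occasionally; log and retry on the next tick
                e.printStackTrace();
            }
        }
    }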
Side note: WebSockets are fairly new, so any libraries you end up using may be a bit on the immature side.
We prototyped some Java-to-Java WebSockets not too long ago. We used the Ning async HTTP client library on the client side and the Atmosphere library (built on Netty) on the server side.
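The prototype mentioned above used Ning and Atmosphere; as a more portable illustration, here is a minimal Java client sketch using the standard JSR-356 (javax.websocket) API instead, assuming an implementation such as Tyrus is on the classpath and a hypothetical ws://example.com/thermostat endpoint exists on the server:

    import java.net.URI;
    import javax.websocket.ClientEndpoint;
    import javax.websocket.ContainerProvider;
    import javax.websocket.OnMessage;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.WebSocketContainer;

    // Client endpoint: the server can push messages to the device at any time.
    @ClientEndpoint
    public class ThermostatSocket {

        @OnOpen
        public void onOpen(Session session) {
            System.out.println("Connected to server");
        }

        @OnMessage
        public void onMessage(String message) {
            // e.g. the server pushes "66" when the user changes the set point in the web UI
            System.out.println("New target temperature: " + message);
        }

        public static void main(String[] args) throws Exception {
            WebSocketContainer container = ContainerProvider.getWebSocketContainer();
            container.connectToServer(ThermostatSocket.class, URI.create("ws://example.com/thermostat"));
            Thread.sleep(Long.MAX_VALUE); // keep the client alive to receive pushed messages
        }
    }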
WebSockets is just a specification for tunneling something similar to TCP sockets over HTTP; it's not confined to the browser, and client libraries are available for most common languages.
This sounds like a reasonable use case for a long-running connection, but I would generally prefer a raw TCP connection to a WebSockets connection unless you have a specific restriction in mind (e.g., most home Internet connections have no problem with connecting to a server at an arbitrary port).
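A minimal sketch of that raw-TCP alternative from the device's side; the host, port and line-per-message framing are assumptions for illustration:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // The device keeps one connection open and blocks on readLine() until the
    // server pushes a line.
    public class ThermostatTcpClient {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("server.example.com", 9000)) {
                socket.setKeepAlive(true); // let TCP keep-alive detect dead peers
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("Server pushed: " + line); // e.g. "TARGET 66"
                }
            }
        }
    }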
I would like to have the clients query each other through the server without delay (= no polling interval).
Example: Server S, clients A and B
Client A wants to request Client B.
Client A will make a request to the server S, no problem there.
Then Server S needs to be able to request Client B but how to do that without polling?
All the Node.js/APE (for PHP) technologies are designed for the web; however, I don't use a web server for this. Does Java have something close to a push technology/framework that is not web-based?
I would really prefer a solution that doesn't require each client to use its own reserved port (I don't want to end up with one web service per client, for example).
Note: all the clients are on the same machine.
A couple of options...
Plain socket communication. java.net.Socket, java.net.ServerSocket. Maximum flexibility, but requires knowledge of low-level TCP/IP APIs and concepts (a minimal sketch follows this list).
The good old RMI. A Java-based RPC layer on top of TCP/IP. Works well when client and server are both in Java and generally in the same subnet. May give problems when the client and/or server are behind NAT.
Spring Remoting. It's actually pretty decent.
Bi-Directional Web Services, i.e. clients host their own web services, which the server calls when it needs to do a callback.
JMS as someone already mentioned.
Distributed Data Structures. Check out http://www.hazelcast.com/
Lots of options to choose from; no need for a web server.
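For the plain-socket option mentioned first, here is a minimal thread-per-connection sketch; the port number and the line-based echo protocol are purely illustrative:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // One dedicated thread per connected client; responses are pushed back on
    // the same long-lived connection.
    public class SimpleSocketServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    Socket client = server.accept();          // blocks until a client connects
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            try (Socket c = client;
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(c.getInputStream(), StandardCharsets.UTF_8));
                 PrintWriter out = new PrintWriter(
                         new OutputStreamWriter(c.getOutputStream(), StandardCharsets.UTF_8), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("echo: " + line);             // reply on the same connection
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }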
If you really don't want to use a web server then I would check out JMS. That being said, all the cool kids are using web servers these days since the protocols are so ubiquitous.
Your use case requires a messaging protocol. We don't really know the scope of your problem but you already said you want a server to exchange requests between clients, so I would go with an existing solution rather than a roll your own approach.
JMS has been mentioned and is certainly a viable Java based solution, another would be XMPP which is a real time communication protocol commonly used for instant messaging.
It is an open standard that has both server and client support in every major language and platform. This would allow you to have standalone apps, web based ones and apps for mobile devices all being able to communicate with each other. The only potential gotcha for your use case is that it is text based. Since you haven't said what requests you want to pass back and forth, I don't know if this will fit your bill or not.
You can use Smack for client-side development in Java, together with any open source XMPP server you want.
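A rough client-side outline with Smack (the calls shown follow the Smack 4.x chat API and may differ between versions; the domain, user names and message text are placeholders):

    import org.jivesoftware.smack.AbstractXMPPConnection;
    import org.jivesoftware.smack.chat2.ChatManager;
    import org.jivesoftware.smack.tcp.XMPPTCPConnection;
    import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
    import org.jxmpp.jid.EntityBareJid;
    import org.jxmpp.jid.impl.JidCreate;

    // Each client keeps one XMPP connection to the server; the server routes
    // messages between clients, so no polling and no per-client listening port.
    public class XmppClientSketch {
        public static void main(String[] args) throws Exception {
            XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
                    .setXmppDomain("example.com")                   // placeholder domain
                    .setUsernameAndPassword("clientA", "secret")    // placeholder credentials
                    .build();

            AbstractXMPPConnection connection = new XMPPTCPConnection(config);
            connection.connect().login();

            ChatManager chatManager = ChatManager.getInstanceFor(connection);

            // Messages pushed to this client arrive here, no polling needed.
            chatManager.addIncomingListener((from, message, chat) ->
                    System.out.println("From " + from + ": " + message.getBody()));

            // Send a request to another client through the server.
            EntityBareJid peer = JidCreate.entityBareFrom("clientB@example.com");
            chatManager.chatWith(peer).send("hello from clientA");
        }
    }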
I'm writing a multiplayer/multiroom game (Hearts) in java, using RMI and a centralized server.
But there's a problem: RMI callbacks will not work because the clients are NATed and firewalled. I basically need the server to push data updates to the clients, preferably without using polling and without using sockets directly (I would rather code at a higher level).
In your opinion, what's the best solution for realizing this kind of architecture? Is an ajax application the only solution?
You say that you don't want polling, but AJAX is exactly that. You can look at Comet but it's hard to escape polling anyway (e.g. Comet itself uses polling underneath).
You could use a peer to peer framework such as JXTA.
I can suggest two main techniques.
The server has a method getUpdates, callable by clients. The method returns control to the client only when there is an update to show (i.e. long polling).
When clients register, they give the server a remote callback object.
Since this object is not registered in any RMI registry, there should not be any issue with NATed clients.
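A minimal sketch of the callback technique; the interface and method names are illustrative, not from the original answer:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.server.UnicastRemoteObject;

    // The client exports a callback object and hands the stub to the server at
    // registration time; the server can later invoke pushUpdate() to push data.
    interface GameClient extends Remote {
        void pushUpdate(String update) throws RemoteException;   // invoked by the server
    }

    interface GameServer extends Remote {
        void register(String playerName, GameClient callback) throws RemoteException;
    }

    class GameClientImpl implements GameClient {
        @Override
        public void pushUpdate(String update) {
            System.out.println("Server pushed: " + update);
        }

        // Client-side registration: export the callback (no registry binding needed)
        // and pass the resulting stub to the server stub obtained from the lookup.
        static void register(GameServer server) throws RemoteException {
            GameClientImpl impl = new GameClientImpl();
            GameClient stub = (GameClient) UnicastRemoteObject.exportObject(impl, 0);
            server.register("alice", stub);
        }
    }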
I'm not sure how (or if) AJAX works for a non-browser-based app. You could just maintain your own pool of socket connections, kept open for the duration of the application, with a thread per connection.
If you need to scale to a lot of concurrent connections, look to a non-blocking I/O framework like Apache Mina or Netty (related SO post: Netty vs Apache MINA).