Using AJP to perform requests from Python to Java webserver - java

I have to implement a Python web application which operates on data available through web-services (API with GET requests and JSON responses).
The API server is implemented in Java. Preliminary tests show significant overhead if API calls are made through urllib2 (the connection is opened and closed for each request).
If I enable AJP in the API server, which library should I use to perform requests over the AJP protocol from Python? I found Plup via Google, but I can't see a clear way to request and consume data in Python rather than just proxying it elsewhere.
Is using AJP a good solution? Obviously I have to maintain a connection pool to perform AJP requests, but I can't find anything related to that in Plup.
Thank you.

I have no idea what AJP is. Also, you did not explain what counts as "significant overhead", so I might be a poor person to answer this question.
But if I were you I would first try a few tricks:
Enable HTTP 1.1 keep-alive on urllib2
(here is an example using another library: Python urllib2 with keep alive)
HTTP 1.1 keep-alive connections do not close the TCP/IP connection between subsequent requests.
Use the Spawning / eventlet web server, which applies a non-blocking I/O patch to urllib / Python sockets.
http://pypi.python.org/pypi/Spawning/
This makes parallelization in Python much more robust when the bottleneck in the application is input/output rather than CPU; JSON decoding is rarely CPU bound.
With these two tricks we were able to consume 1,000 requests/sec in our Python web application from a Microsoft IIS backed API server (farm).

Related

Can java.net.http.HttpClient talk to a unix socket?

I'd like to use java.net.http.HttpClient instead of curl to perform the HTTP examples listed here:
https://docs.docker.com/engine/api/sdk/examples/
Is there a way to do this?
The JDK does not support Unix domain socket connections yet (JEP 380 will add this feature). But regardless of that, java.net.http.HttpClient appears to support only URIs (not a SocketAddress) as the destination, so it would not work anyway.
There are however libraries which offer this functionality:
junixsocket (relevant issue)
Reactor Netty
Netty: How do I connect to a UNIX domain socket running an HTTP server using Netty?
unix-socket-factory (for Apache HttpClient)
However, since your goal is to connect to Docker, it would be easiest to use one of the available Java Docker clients.
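For example, with the docker-java client the container listing from the Docker docs becomes a few lines. This is a hedged sketch: the DockerClientBuilder entry point shown here exists in older docker-java releases, but the setup API has changed across versions, so verify it against the version you actually use.

```java
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.model.Container;
import com.github.dockerjava.core.DockerClientBuilder;

import java.util.List;

public class DockerContainers {
    public static void main(String[] args) {
        // Connects to the default Docker endpoint (unix:///var/run/docker.sock on Linux),
        // the same endpoint the curl examples in the Docker docs talk to.
        DockerClient docker = DockerClientBuilder.getInstance().build();

        // Rough equivalent of `GET /containers/json?all=1`.
        List<Container> containers = docker.listContainersCmd().withShowAll(true).exec();
        containers.forEach(c -> System.out.println(c.getId() + " " + c.getImage()));
    }
}
```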

Java TCP server for communicating with an IoT device

I have developed an embedded system which sends data to a server as TCP requests. I can't use HTTP requests because of their data overhead; a smaller packet length results in lower energy consumption and lower communication costs.
The server has to listen to a special port, get the data from device and store in a table.
From what I have explored, Java servlets + Apache Tomcat are a popular solution, but they should not be used in this case, because Java servlets are better suited to HTTP-based connections.
Is there a better solution for this type of communication?
Please take a look at Sockets. They sit at the application layer of the TCP/IP model and provide reliable, bidirectional communication with no extra data overhead. However, you will need to design a tiny protocol for the communication to match your needs.
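A minimal sketch of such a listener in plain Java SE is below; the port number, the length-prefixed frame layout, and the store step are placeholders standing in for whatever tiny protocol and persistence you design.

```java
import java.io.DataInputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DeviceListener {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {   // example port
            while (true) {
                Socket device = server.accept();
                new Thread(() -> handle(device)).start();      // one thread per device, for simplicity
            }
        }
    }

    private static void handle(Socket device) {
        try (DataInputStream in = new DataInputStream(device.getInputStream())) {
            // Example frame: 2-byte length prefix followed by that many payload bytes.
            int length = in.readUnsignedShort();
            byte[] payload = new byte[length];
            in.readFully(payload);
            store(payload);                                     // replace with your DB insert
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static void store(byte[] payload) {
        System.out.println("Received " + payload.length + " bytes");
    }
}
```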
Most probably this will suffice, but if you decide to go with an HTTP-based solution, keep in mind WebSockets, which are an interesting option that diminishes the overhead of the HTTP protocol (they won't eliminate it; the per-frame overhead remains around 2-10 bytes). Unfortunately, Java SE doesn't provide built-in support for WebSockets, so you will need an external library.
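If you go the WebSocket route, an external implementation of the Java WebSocket API (JSR 356), such as Tyrus, Jetty, or Tomcat, lets you write an annotated endpoint like the hedged sketch below; the endpoint path and payload handling are placeholders, and how you deploy the class depends on the library you choose.

```java
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Deployed by a JSR 356 implementation (Tyrus, Jetty, Tomcat, ...).
@ServerEndpoint("/device")
public class DeviceEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Device connected: " + session.getId());
    }

    @OnMessage
    public void onMessage(byte[] payload, Session session) {
        // Binary frames keep the per-message overhead small (roughly 2-10 bytes).
        System.out.println("Received " + payload.length + " bytes");
    }
}
```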
PS: Both options support encryption over TLS, but I didn't mention it because it adds noticeable overhead (at least during connection initialization).

Is there a way to maintain a connection to a restful web service?

I'm working on an application which is a monolith. We have some features in our roadmap that I think would fit into a microservices architecture and am toying around with building them as such.
My problem: the application processes ~150 requests per second during peak times. These requests come in on raw TCP/IP connections which are kept alive at all times. We have very strict latency requirements (the majority of our requests are responded to within 25-50 milliseconds). Each request would need to consume 1 to many microservices. My concern is that consuming multiple restful web services (specifically creating/destroying the connection each time the service is consumed as well as TLS handshakes) is going to cause too much latency for processing these requests.
My question: Is it possible (and is there a best practice) to maintain the state of a connection to a restful web service while multiple threads consume that web service? Each request to consume the web service would be self-contained, but we would simply keep the physical connection alive.
The JVM naturally pools HTTP connections for HttpURLConnection (see http://docs.oracle.com/javase/8/docs/technotes/guides/net/http-keepalive.html), so this should happen for JAX-WS and JAX-RS out of the box. Other, non-HttpURLConnection-based frameworks (like Netty) usually support HTTP connection pooling as well, so it's very likely you don't need to handle this yourself in your code. You do need to work out how many connections to pool, but that's a configuration sort of thing.
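As a hedged illustration with plain HttpURLConnection: as long as you fully read and close the response stream (and don't call disconnect()), the JDK returns the underlying TCP connection to its keep-alive cache and reuses it for subsequent requests to the same host. The URL below is a placeholder, and http.maxConnections is the standard JDK property controlling the per-host cache size.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PooledCalls {
    public static void main(String[] args) throws Exception {
        // Size of the JDK's per-host keep-alive cache (the default is 5).
        System.setProperty("http.maxConnections", "20");

        URL url = new URL("http://api.example.com/resource");   // placeholder URL
        for (int i = 0; i < 3; i++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (InputStream in = conn.getInputStream()) {
                // Drain the body completely so the socket can be returned to the cache.
                while (in.read() != -1) { /* discard */ }
            }
            // Note: do NOT call conn.disconnect() here; that would close the socket.
        }
    }
}
```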
You can verify that TCP connections are not closed after an HTTP response by sniffing your application's traffic with tcpdump or Wireshark and checking that no TCP FIN occurs after you receive the result.

Is it possible to initiate an HTTP/2 session or stream from Jetty, assuming an HTTP/2 connection already exists?

It is possible to do server push. But if the client is the low-level Jetty client, is it possible to initiate a new session or stream from the server? The assumption is that the client is a low-level Jetty-based client and the connection is already established.
After the initial connection is established, and the prefaces exchanged, HTTP/2 is a symmetric protocol.
HTTP semantics require the client to initiate requests, but at the lower level - at the HTTP/2 framing level - this is not necessary, and it is possible for a server to initiate a stream towards a client.
While the HTTP/2 framing is symmetric after the preface, it is still tied to HTTP semantics, in that you need to send a HEADERS frame (even an empty one) before a DATA frame. However, this is not much of a hindrance if you want to build your own protocol on top of the HTTP/2 framing; you will just have a few additional bytes to send over the network.
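To make the framing point concrete, here is a hedged sketch using Jetty's low-level HTTP/2 client: the stream opens with a HEADERS frame, and only then do the application bytes travel in DATA frames. The class names and signatures follow my reading of the 9.3.x API and the host, port, and path are placeholders, so verify them against the Jetty version you actually use; a server-initiated stream would be the mirror image of this exchange.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.http.HttpFields;
import org.eclipse.jetty.http.HttpURI;
import org.eclipse.jetty.http.HttpVersion;
import org.eclipse.jetty.http.MetaData;
import org.eclipse.jetty.http2.api.Session;
import org.eclipse.jetty.http2.api.Stream;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.frames.DataFrame;
import org.eclipse.jetty.http2.frames.HeadersFrame;
import org.eclipse.jetty.util.Callback;
import org.eclipse.jetty.util.FuturePromise;

public class RawHttp2Stream {
    public static void main(String[] args) throws Exception {
        HTTP2Client client = new HTTP2Client();
        client.start();

        // Establish the HTTP/2 connection (clear-text h2c in this sketch).
        FuturePromise<Session> sessionPromise = new FuturePromise<>();
        client.connect(new InetSocketAddress("localhost", 8080),
                new Session.Listener.Adapter(), sessionPromise);
        Session session = sessionPromise.get(5, TimeUnit.SECONDS);

        // HEADERS frame first (endStream=false), as required by the HTTP semantics
        // layered on the framing ...
        MetaData.Request request = new MetaData.Request("POST",
                new HttpURI("http://localhost:8080/custom"), HttpVersion.HTTP_2, new HttpFields());
        HeadersFrame headers = new HeadersFrame(request, null, false);

        FuturePromise<Stream> streamPromise = new FuturePromise<>();
        session.newStream(headers, streamPromise, new Stream.Listener.Adapter());
        Stream stream = streamPromise.get(5, TimeUnit.SECONDS);

        // ... then the application's own bytes travel in DATA frames.
        ByteBuffer payload = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));
        stream.data(new DataFrame(stream.getId(), payload, true), Callback.NOOP);

        client.stop();
    }
}
```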
As an aside, there are proposals that use the HTTP/2 framing to transport WebSocket (a pure bidirectional protocol) frames inside HTTP/2 DATA frames, in what is essentially an infinite request with an infinite response. But I digress.
As for the Jetty-specific implementation of HTTP/2: is it possible to initiate a stream from the server towards a client on Android?
The answer is two-fold.
The first is that the current implementation (Jetty 9.3.8) makes some assumptions that the protocol being transported by the HTTP/2 framing is HTTP. As such, a server-initiated stream is currently dropped by the client.
It would be fairly easy, though, to override this behavior and allow the client to properly handle the server-initiated streams, in the same way the server handles client-initiated streams.
The second is that Jetty's HTTP/2 support in general requires JDK 8, which at this time is not available on Android.
If HTTP/2 Android clients capable of handling server-initiated streams already exist, please comment on this answer with which ones, as I am really interested.
The idea of server-initiated streams is intriguing though, and I filed this issue to keep track of it.
If this is really important to you, you can contact Webtide (the company behind Jetty) to sponsor the implementation.

WebSockets versus Long-Polling versus TCP Scalability/Ease of Use

I'm writing a backend for a mobile web app in Java, and I was wondering, as far as scalability and ease of use go, what the pros and cons are of using WebSockets versus long-polling solutions like Comet. Another option would be implementing my own solution using TCP. From what I've read, it seems that you need to run long-polling solutions on dedicated servers, as they don't run well in Tomcat/Jetty once you start dealing with large numbers of users. WebSockets sound like they scale better. Are there any disadvantages to going with WebSockets over Comet, or should I just resort to my own solution using TCP connections? I'm looking for the option that uses the least traffic.
I guess it depends on your use case and tolerance for learning new things, but for sure, going down the path of using WebSocket APIs for communication, or even SSE, would be better than a traditional long-polling/Comet solution for many reasons - the one you mentioned, scalability, but also bandwidth utilization and latency. It is also important to understand that WebSocket is to the Web what TCP is to the desktop, i.e. a socket. In a desktop solution you don't necessarily code against TCP directly; you use a client library supporting a transport protocol such as STOMP or XMPP over TCP. You do the same when using WebSocket: pick a server to communicate with, e.g. an XMPP server, and an XMPP client library to communicate with that server over WebSockets.
You can see our example of it here and we have docs you can read here.
The thing to watch out for is browser adoption of HTML5 WebSocket - currently available in Chrome and Safari, and coming soon in Firefox and Opera. We have addressed this, but if you plan to build your own server you will have to create a fallback solution for older browsers.
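As a hedged Java illustration of the "pick a client library and talk over WebSocket" point above, here is a minimal JSR 356 (javax.websocket) client sketch; the endpoint URI and the plain-text echo are placeholders, and in practice you would layer a STOMP or XMPP client library on top rather than send raw messages.

```java
import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

// Minimal JSR 356 client endpoint; a real application would speak STOMP or XMPP over it.
@ClientEndpoint
public class EchoClient {

    @OnMessage
    public void onMessage(String message, Session session) {
        System.out.println("Received: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        try (Session session = container.connectToServer(EchoClient.class,
                URI.create("ws://localhost:8080/echo"))) {    // placeholder endpoint
            session.getBasicRemote().sendText("hello");
            Thread.sleep(1000);                               // crude wait for the echo
        }
    }
}
```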
