Limit connections to different destinations in RxJava/Micronaut project - java

I have a project that reads data from many different providers; some via SOAP, some via HTTP, etc. Some of these providers also have a restriction on the number of concurrent connections to them. For example, provider A may allow unlimited connections, provider B may only allow 2, and provider C may allow 5.
I'm decent with Micronaut, but I'm unaware of anything built into it that would allow me to limit connections to specific URLs as necessary. So, my first thought is to create a per-provider thread limit (perhaps using RxJava's scheduler system? I believe you can create custom ones using Java's Executor class) and let that do the work of queuing for me. I think I could also go the more manual route of creating a ConcurrentMap and storing the number of active connections in that, but that seems messier and more error-prone.
Any advice would be greatly appreciated! Thanks!

Limiting the number of threads only works if the network connections are made by the threads themselves, that is, synchronously. Micronaut can also make asynchronous connections, and then limiting the number of threads won't work; it is better to limit the number of connections directly. Create an intermediate proxy object which has the same interface as the Micronaut client and passes all incoming requests on to it. The proxy also has a parameter, a limit, and each time a request is passed through it decrements the limit. When the limit reaches 0, the proxy stops passing requests and keeps them in an input queue. As soon as a request finishes, it signals the proxy, which then passes one request from the input queue, if any, or simply increments the limit again.
The simplest implementation of the proxy is a thread with a BlockingQueue for input requests and a Semaphore for the limit. But if there are many providers and creating a thread for each provider is expensive, the proxy can be implemented as an asynchronous object.
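For the synchronous case, the Semaphore by itself already gives you the queuing behaviour: callers past the limit simply block until a permit is released. Here is a minimal sketch of that idea (the provider IDs, the limits map, and the `Callable` wrapper are assumptions for illustration, not anything from Micronaut):

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Hypothetical sketch: cap concurrent calls per provider with a fair Semaphore.
// Callers beyond the limit block until another call releases a permit.
public class ProviderLimiter {

    private final Map<String, Semaphore> limits = new ConcurrentHashMap<>();

    // A limit <= 0 means "unlimited" for that provider.
    public void registerProvider(String providerId, int limit) {
        if (limit > 0) {
            limits.put(providerId, new Semaphore(limit, true));
        }
    }

    public <T> T call(String providerId, Callable<T> request) throws Exception {
        Semaphore permits = limits.get(providerId);
        if (permits == null) {
            return request.call();          // unlimited provider
        }
        permits.acquire();                  // queues here once the limit is reached
        try {
            return request.call();          // the actual SOAP/HTTP call
        } finally {
            permits.release();
        }
    }
}
```

If the calls are modelled as RxJava streams instead, the `flatMap` overload that takes a `maxConcurrency` argument bounds in-flight subscriptions per provider in a similar way, without dedicating a thread pool to each provider.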

Related

Single Transaction in multiple java jvms

One Spring service is implemented in one Java deployment unit (JVM), and another Spring service is implemented in a second JVM. The first JVM makes a service call to the second; the service interface could be either REST or SOAP over HTTP. I need to keep a single transaction across the multiple JVMs, meaning that if any service fails, everything must be rolled back. How can I do this? Any code examples?
Use global transactions (i.e., JTA),
Use XA resources (RDBMS and JMS connections), do "Full XA with 2PC".
For further reference on the Spring transaction management, including the JTA/XA scenario, read: http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#transaction
REST faces the exact same problem as SOAP-based web services with regards to atomic transactions. There is no stateful connection, and every operation is immediately committed; performing a series of operations means other clients can see interim states.
Unless, of course, you take care of this by design. First, ask yourself: do I have a standard set of atomic operations? This is commonly the case. For example, for a banking operation, removing a sum from one account and adding the same sum to a different account is often a required atomic operation. But rather than exporting just the primitive building blocks, the REST API should provide a single "transfer" operation, which encapsulates the entire process. This provides the desired atomicity, while also making client code much simpler. This approach is known as low-granularity services, or high-level batch operations.
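As a hypothetical illustration of that coarse-grained style, the sketch below keeps the debit and credit steps behind one `transfer` method, which is the only thing a REST layer would expose (the class, the in-memory balances map, and the method names are all made up for the example):

```java
import java.math.BigDecimal;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical coarse-grained service: the primitive steps (debit, credit)
// are never exposed to clients individually, only the whole transfer.
public class TransferService {

    private final Map<String, BigDecimal> balances = new ConcurrentHashMap<>();

    public void openAccount(String accountId, BigDecimal initialBalance) {
        balances.put(accountId, initialBalance);
    }

    // A REST layer would expose only this method, e.g. as POST /transfers,
    // so clients never observe the intermediate "debited but not credited" state.
    public synchronized void transfer(String from, String to, BigDecimal amount) {
        BigDecimal fromBalance = balances.get(from);
        BigDecimal toBalance = balances.get(to);
        if (fromBalance == null || toBalance == null || fromBalance.compareTo(amount) < 0) {
            throw new IllegalStateException("Unknown account or insufficient funds");
        }
        balances.put(from, fromBalance.subtract(amount));
        balances.put(to, toBalance.add(amount));
    }
}
```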
If there is no simple, pre-defined set of desired atomic operation sequences, the problem is more severe. A common solution is the batch command pattern. Define one REST method to demarcate the beginning of a transaction, and another to demarcate its end (a 'commit' request). Anything sent between these two calls is queued by the server but not committed until the commit request is sent.
This pattern complicates the server significantly -- it must maintain a state per client. Normally, the first operation ('begin transaction') returns a transaction ID (TID), and all subsequent operations, up to and including the commit, must include this TID as a parameter.
It is a good idea to enforce a timeout on transactions: if too much time has passed since the initial 'begin transaction' request, or since the last step, the server has the right to abort the transaction. This prevents a potential DoS attack that causes the server to waste resources by keeping too many transactions open. The client design must keep in mind that each operation must be checked for a timeout response.
It is also a good idea to allow the client to abort a transaction, by providing a 'rollback' API.
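To make the begin/commit/rollback idea concrete, here is a self-contained sketch of the server-side bookkeeping (the class and method names are invented for illustration; a real implementation would persist the pending work and apply it against a datastore rather than keeping `Runnable`s in memory):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of server-side transaction demarcation: operations are queued
// per TID and only applied on commit; stale transactions can be expired.
public class RestTransactionRegistry {

    private static final long TIMEOUT_MILLIS = 60_000;

    private static class PendingTx {
        final List<Runnable> operations = new ArrayList<>();
        volatile long lastTouched = System.currentTimeMillis();
    }

    private final Map<String, PendingTx> pending = new ConcurrentHashMap<>();

    // the "begin transaction" endpoint would call this and return the TID to the client
    public String begin() {
        String tid = UUID.randomUUID().toString();
        pending.put(tid, new PendingTx());
        return tid;
    }

    // each intermediate request carries the TID; the work is queued, not executed
    public void enqueue(String tid, Runnable operation) {
        PendingTx tx = required(tid);
        synchronized (tx) {
            tx.operations.add(operation);
            tx.lastTouched = System.currentTimeMillis();
        }
    }

    // the "commit" endpoint: apply everything, then forget the TID
    public void commit(String tid) {
        PendingTx tx = required(tid);
        synchronized (tx) {
            for (Runnable op : tx.operations) {
                op.run();
            }
        }
        pending.remove(tid);
    }

    // the "rollback" endpoint: just discard the queued work
    public void rollback(String tid) {
        pending.remove(tid);
    }

    // called periodically to enforce the timeout described above
    public void expireStale() {
        long now = System.currentTimeMillis();
        for (Map.Entry<String, PendingTx> e : pending.entrySet()) {
            if (now - e.getValue().lastTouched > TIMEOUT_MILLIS) {
                pending.remove(e.getKey());
            }
        }
    }

    private PendingTx required(String tid) {
        PendingTx tx = pending.get(tid);
        if (tx == null) {
            throw new IllegalStateException("Unknown or expired transaction: " + tid);
        }
        return tx;
    }
}
```

A 'begin transaction' endpoint would call `begin()` and return the TID, each subsequent request would call `enqueue(tid, work)`, the commit and rollback endpoints map to `commit(tid)` and `rollback(tid)`, and a scheduled task calling `expireStale()` enforces the timeout.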
The usual care required when designing code that uses multiple concurrent transactions applies in this complex scenario as well. If at all possible, try to limit the use of transactions, and support high-level batch operations instead.
I take no credit for this information; I'm just pointing you to it. Credit goes to this article.
Also please read Transactions in REST?
You can get some handy code samples here http://www.it-soa.eu/en/resp/atomicrest/userguide/index.html

Does AsyncHttpClient know how many threads to allocate for all the HTTP requests

I'm evaluating AsyncHttpClient for big loads (~1M HTTP requests).
For each request I would like to invoke a callback using the AsyncCompletionHandler which will just insert the result into a blocking queue.
My question is: if I'm sending asynchronous requests in a tight loop, how many threads will the AsyncHttpClient use? (I know you can set the max, but apparently you then risk losing requests; I've seen it here.)
I'm currently using the Netty implementation with these versions:
async-http-client v1.9.33
netty v3.10.5.Final
I don't mind using other versions if there are any optimizations in later versions.
EDIT:
I read that Netty uses the reactor pattern for reacting to HTTP responses, which means it allocates a very small number of threads to act as selectors. This also means that the number of allocated threads doesn't increase with high request volume. However, that seems to contradict the need to set a max number of connections.
Can anyone explain what I'm missing?
Thanks in advance
The AsyncHttpClient client (and other non-blocking IO clients, for that matter) doesn't need to allocate a thread per request, and the client need not resize its thread pool even if you bombard it with requests. You do initiate many connections if you don't use HTTP keep-alive or call multiple hosts, but it can all be handled by a single-threaded client (there may be more than one IO thread, depending on the implementation).
However, it's always a good idea to limit the max requests per host, and max requests per domain, to avoid overloading a service on a specific host, or a site, and avoid getting blocked. This is why HTTP clients add a maxConnectionsPerXxx setting.
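As a rough sketch of how those caps are typically set with AHC 1.9.x (the builder method names may differ slightly between versions, so treat this as an approximation rather than a definitive configuration):

```java
import com.ning.http.client.AsyncCompletionHandler;
import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.AsyncHttpClientConfig;
import com.ning.http.client.Response;

// Sketch for AHC 1.9.x: bound total and per-host connections while callbacks
// run on the client's small internal IO pool rather than one thread per request.
public class AhcLimitsSketch {
    public static void main(String[] args) throws Exception {
        AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
                .setMaxConnections(200)        // total open connections
                .setMaxConnectionsPerHost(20)  // per-host cap, protects each target service
                .build();

        AsyncHttpClient client = new AsyncHttpClient(config);
        client.prepareGet("http://example.com/")
              .execute(new AsyncCompletionHandler<Response>() {
                  @Override
                  public Response onCompleted(Response response) {
                      // e.g. hand the result off to a blocking queue here
                      return response;
                  }
              });
        // close the client once all requests have completed
    }
}
```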
AHC has two types of threads:
For I/O operations: the AsyncHttpClient-x-x threads (as seen in a thread dump). AHC creates 2 * number of cores of those.
For timeouts: the AsyncHttpClient-timer-1-1 thread. There should be only one.
And as you mentioned:
maxConnections just means the number of open connections, which does not directly affect the number of threads
Source: issue on GitHub: https://github.com/AsyncHttpClient/async-http-client/issues/1658

Designing Orchestrator In Java using disruptor and Executors

We are designing an orchestrator system in Java that will host a web service; each request to it will invoke a flow written in XML. A flow is nothing but steps that are executed one after another, but the XML tells the user what the flow is, and he can also change the flow by editing the XML. He can add a new service and add it to the XML. But while designing this, I am now torn between options like the following.
Should I make each service a Runnable with a blocking queue and keep it alive all the time by scheduling it on the executor, so that when a new request arrives I push it into the blocking queue and the service processes it? I would also create a Mailbox to carry the message passing between the different services.
Instead of making each service a Runnable, I could use Spring IoC, which makes the class a singleton so only one instance exists, and send requests directly to the service methods. Then there is no thread handling to worry about and no mailbox to implement.
I read about how event-driven systems such as Node.js and nginx are faster, so I was thinking of using the Disruptor framework: push all incoming requests onto the ring buffer, then write a handler that emits an event to the orchestrator, which starts processing the request. The handler would also pass a callback along with the request so that, when the orchestrator is done, it sends the response back to the caller through the callback. But since the requests are not all of the same type, I would not be able to take advantage of Disruptor's object preallocation.
The system needs to provide maximum throughput with low latency. New services/flows will be added to the XML in the future, so the system should adopt new flows without hurting performance. I am only allowed to use Java 7 (no Scala), so I am constrained there.
Answer #1 is a terrible idea. You will effectively tie up a thread per service. If the number of services exceeds the number of threads backing the executor service, you have an instant, automatic DoS. Plus, if the services are interdependent on each other... all the ways in which you can deadlock. Plus, the inefficiency of using N threads if only M (< N) are actually required.
Answer #2: if the proposed flow is Request Parsing -> Dispatch -> Service Processing -> Callbacks you rely on the actual 'services' not to foul up because that will prevent callbacks from being called and/or DOS the application. Essentially: what happens if an exception occurs in a service? Will that also impact future requests to the same service and/or other services?
Also the opportunities for parallelism are limited to the framework's way of handling incoming requests. Meaning if you have X requests and the framework inherently processes them serially, you get a backlog of X requests. Your latency requirements may be hard to meet in such a scenario.
Answer #3: an event driven system is indeed the better approach: have a scheduler farm out jobs to an executor service to allow the system to distribute the total load of all services dynamically and have a mechanism to generate and react on events to handle the control flow. This scales better if the number of services become very large and each 'job' is reasonably substantial (so the overhead of scheduling/dispatch is low compared to the actual work being performed).
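A minimal Java 7-style sketch of the shape of answer #3, without Disruptor (all class and method names here are made up for illustration): requests are farmed out to a shared executor and a callback delivers the result, so no service owns a dedicated thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: a dispatcher farms flow executions out to a shared pool
// and reports completion through a callback instead of tying up caller threads.
public class OrchestratorSketch {

    public interface Callback {
        void onSuccess(String result);
        void onFailure(Exception error);
    }

    private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public void dispatch(final String flowName, final String payload, final Callback callback) {
        pool.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    // run the XML-defined steps for this flow, one after another
                    String result = "processed " + flowName + ": " + payload;
                    callback.onSuccess(result);
                } catch (Exception e) {
                    // a failing step must not poison future requests
                    callback.onFailure(e);
                }
            }
        });
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```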

Java pooling connection optimization

What are the common guidelines/advice for configuring, in Java, an HTTP connection pool to support a huge number of concurrent HTTP calls to the same server? I mean:
max total connections
max default connection per route
reuse strategy
keep alive strategy
keep alive duration
connection timeout
....
(I am using Apache HttpComponents 4.3, but I am open to exploring new solutions)
In order to be more clear, this is my situation:
I developed a REST resource that needs to perform about 10 http calls to AWS CloudSearch in order to obtain search results to be collected in a final result (that I really cannot obtain through a single query).
The whole operation must take less than 0.25 seconds. So, I run http calls in parallel in 10 different threads.
During a benchmarking test, I noticed that with a few concurrent requests (5), my objective is reached. But when increasing the concurrency to 30 requests, there is a tremendous degradation of performance due to the connection time, which takes about 1 second. With few concurrent requests, instead, the connection time is about 150 ms (to be more precise, the first connection takes 1 second, and all the following connections take about 150 ms). I can confirm that CloudSearch returns its response in less than 15 ms, so there is a problem somewhere in my connection pool.
Thank you!
The number of threads/connections that is best for your implementation depends on that implementation (which you did not post), but here are some guidelines as requested:
If those threads never block at all, you should have as many threads as cores (Runtime.getRuntime().availableProcessors(), which includes hyper-threaded cores). Simply because more than 100% CPU usage isn't possible.
If your threads rarely block, cores * 2 is a good start for benchmarking.
If your threads frequently block, you absolutely need to benchmark your application with various settings to find the best solution for your implementation, OS and hardware.
Now the most optimal case is obviously the first one, but to get to this one, you need to remove blocking from your code as much as you can. Java can do this for IO operations if you use the NIO package in non-blocking mode (which is not how the Apache package does it).
Then you have one thread that waits on a selector and wakes up as soon as any data is ready to be sent or read. This thread then only copies the data from its source to the destination and returns to the selector. In the case of a read (incoming data), this destination is a blocking queue, on which as many threads as there are cores wait. One of those threads will then pull out the received data and process it, now without any blocking.
You can then use the length of the blocking queue to adjust how many parallel requests are reasonable for your task and hardware.
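A bare-bones sketch of that selector-plus-worker-pool arrangement, using only the JDK's NIO classes (the class name and buffer size are arbitrary; a real client would also handle writes, partial reads, and per-connection state):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the pattern above: one selector thread copies ready bytes
// into a queue; a core-sized worker pool does the (potentially slow) processing.
public class SelectorReaderSketch {

    private final Selector selector;
    private final Queue<SocketChannel> pendingRegistrations = new ConcurrentLinkedQueue<>();
    private final BlockingQueue<byte[]> received = new LinkedBlockingQueue<>();
    private final ExecutorService workers =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public SelectorReaderSketch() throws IOException {
        this.selector = Selector.open();
    }

    // Called from any thread: queue the channel and wake the selector so the IO loop registers it.
    public void register(SocketChannel channel) throws IOException {
        channel.configureBlocking(false);
        pendingRegistrations.add(channel);
        selector.wakeup();
    }

    // The single IO thread: register pending channels, copy ready bytes to the queue, repeat.
    public void ioLoop() throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(8192);
        while (!Thread.currentThread().isInterrupted()) {
            selector.select();
            SocketChannel pending;
            while ((pending = pendingRegistrations.poll()) != null) {
                pending.register(selector, SelectionKey.OP_READ);
            }
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isValid() && key.isReadable()) {
                    SocketChannel channel = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = channel.read(buffer);
                    if (n > 0) {
                        buffer.flip();
                        byte[] data = new byte[buffer.remaining()];
                        buffer.get(data);
                        received.add(data);
                    } else if (n < 0) {
                        key.cancel();
                        channel.close();
                    }
                }
            }
            selector.selectedKeys().clear();
        }
    }

    // Worker threads block on the queue, so processing never stalls the selector thread.
    public void startWorkers() {
        for (int i = 0; i < Runtime.getRuntime().availableProcessors(); i++) {
            workers.submit(new Runnable() {
                @Override
                public void run() {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            byte[] chunk = received.take();
                            // parse / handle `chunk` here, without any blocking IO
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }
}
```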
The first connection takes more than 1 second because it actually has to look up the address via DNS. All other connections are put on hold for the moment, as there is no sense in doing this twice. You can circumvent that either by calling the IP directly (probably not good if you talk to a load balancer) or by "warming up" the connections with an initial request. Any new connection afterwards will use the cached DNS result, but it still needs to perform other initialization, so reusing connections as much as you can will reduce latency a lot. With NIO this is a very easy task.
In addition there is HTTP pipelining: you make one connection but send several requests on it and receive the responses over "the same line". This massively reduces connection overhead, but it needs to be supported by the server.
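Tying this back to the Apache HttpComponents 4.3 pool mentioned in the question, a starting-point configuration might look like the sketch below; the numbers are placeholders to be tuned by benchmarking, not recommendations:

```java
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

// Sketch of a shared, pre-sized pool for HttpComponents 4.3; tune all numbers by benchmarking.
public class PooledClientFactory {

    public static CloseableHttpClient create() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(50);               // max total connections in the pool
        cm.setDefaultMaxPerRoute(30);     // max connections per route, e.g. the CloudSearch endpoint

        RequestConfig requestConfig = RequestConfig.custom()
                .setConnectTimeout(250)            // ms to establish the TCP connection
                .setConnectionRequestTimeout(100)  // ms to lease a connection from the pool
                .setSocketTimeout(250)             // ms of socket inactivity before giving up
                .build();

        return HttpClients.custom()
                .setConnectionManager(cm)
                .setDefaultRequestConfig(requestConfig)
                .build();
    }
}
```

Reusing one such client (and therefore one pool) across all ten parallel calls, plus a warm-up request at startup, is what lets later calls skip the ~1 second connection setup observed in the benchmark.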

Logic behind camel ServicePool when used with Netty

I have a camel instance with a Netty endpoint that consolidates many incoming requests to send to a single receiver. More specifically, this is a web service whereby each incoming SOAP request results in a Producer.sendBody() into the camel subsystem. The processing of each request involves different routes, but they will all end up in the single Netty endpoint to send on to the next-level server. All is fine, as long as I only have a handful of incoming requests at any one time. If I start having more than 100 simultaneous requests, though, I get this exception:
java.lang.IllegalStateException: Queue full
at java.util.AbstractQueue.add(AbstractQueue.java:71) ~[na:1.6.0_24]
at java.util.concurrent.ArrayBlockingQueue.add(ArrayBlockingQueue.java:209) [na:1.6.0_24]
at org.apache.camel.impl.DefaultServicePool.release(DefaultServicePool.java:95) [camel-core-2.9.2.jar:2.9.2]
at org.apache.camel.impl.ProducerCache$1.done(ProducerCache.java:297) ~[camel-core-2.9.2.jar:2.9.2]
at org.apache.camel.processor.SendProcessor$2$1.done(SendProcessor.java:120) ~[camel-core-2.9.2.jar:2.9.2]
at org.apache.camel.component.netty.handlers.ClientChannelHandler.messageReceived(ClientChannelHandler.java:162) ~[camel-netty-2.9.2.jar:2.9.2]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.3.1.Final.jar:na]
This is coming from the DefaultServicePool that's used by the Netty component. The DefaultServicePool uses an ArrayBlockingQueue as the backend to the queue and it sets it to a default capacity of 100 Producers. It uses a service pool for performance reasons, to avoid having to keep creating and destroying often-reused producers. Fair enough. Unfortunately, I'm not getting the logic on how it is implemented.
This all starts in ProducerCache::doInAsyncProducer, which starts off by calling doGetProducer. Said method attempts to acquire a Producer from the pool and, if that fails, it creates a new Producer using endpoint.getProducer(). It then makes sure that the service pool exists using pool.addAndAcquire. That done, it returns to the calling function. The doInAsyncProducer does its thing until it's finished, in which case it calls the done processor. At this point, we're completely done processing the exchange, so it releases the Producer back to the pool using pool.release
Here is where the rubber hits the road. The DefaultServicePool::release method inserts the Producer into the ArrayBlockingQueue backend using an add. This is where my java.lang.IllegalStateException is coming from.
Why? Well, let's look through a use case. I have 101 simultaneous incoming requests. Each of them hits the Netty endpoint at roughly the same time. The very first creates the service pool with the capacity of 100 but it's empty to start. In fact, each of the 101 requests will create a new Producer from the endpoint.getProducer; each will verify that they don't exceed the capacity of the service pool (which is empty); and each will continue on to send to the server. After each finishes, it tries to do a pool.release. The first 100 will succeed, since the pool capacity hasn't been reached. The 101st request will attempt to add to the queue and will fail, since the queue is full!
Is that right? If I'm reading that correctly, then this code will always fail whenever there are more than 100 simultaneous requests. My service needs to support upwards of 10,000 simultaneous requests, so that's just not going to fly.
It seems like a more stable solution (sketched after the list below) might be to:
Pre-allocate all 100 Producers on initialization
Block during acquire until a Producer is available
Absolutely do not create your own non-pool Producers if using a ServicePool
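Just to illustrate the idea, not Camel's actual DefaultServicePool: a generic pre-allocate-and-block pool along those lines could look like this (the Callable factory is a stand-in for endpoint.getProducer()):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;

// Generic sketch (not Camel code): pre-allocate every pooled object up front and
// block on acquire, so release() can never find the queue full.
public class BlockingObjectPool<T> {

    private final BlockingQueue<T> available;

    public BlockingObjectPool(int capacity, Callable<T> factory) throws Exception {
        this.available = new ArrayBlockingQueue<T>(capacity);
        for (int i = 0; i < capacity; i++) {
            available.add(factory.call());   // pre-allocate; nothing is created later
        }
    }

    // Blocks until a pooled object is free instead of handing out an un-pooled one.
    public T acquire() throws InterruptedException {
        return available.take();
    }

    // Always succeeds, because only objects taken from this pool are ever returned to it.
    public void release(T pooled) {
        available.add(pooled);
    }
}
```

With at most `capacity` objects in circulation, the 101st concurrent request simply waits in acquire() instead of overflowing the queue on release().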
In the meantime, I'm thinking of throttling incoming requests.
What I'm hoping for with this question is to learn if I'm reading that logic correctly and to see if it can get changed. Or, am I using it wrong? Is there a better way to handle this type of thing?
Yes, the logic should IMHO be improved. I have logged a ticket to improve this.
https://issues.apache.org/jira/browse/CAMEL-5703
