I am trying to use PoolingHttpClientConnectionManager in our module.
Below is my code snippet
import java.io.IOException;
import org.apache.hc.client5.http.ClientProtocolException;
import org.apache.hc.client5.http.HttpRoute;
import org.apache.hc.client5.http.classic.HttpClient;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
import org.apache.hc.client5.http.impl.classic.HttpClientBuilder;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.core5.http.HttpEntity;
import org.apache.hc.core5.http.HttpHost;
import org.apache.hc.core5.http.io.entity.EntityUtils;
public class Testme {
static PoolingHttpClientConnectionManager connectionManager;
static CloseableHttpClient httpClient;
public static void main(String[] args) {
connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setDefaultMaxPerRoute(3);
connectionManager.setMaxPerRoute(new HttpRoute(new HttpHost("127.0.0.1", 8887)), 5);
httpClient = HttpClientBuilder.create().setConnectionManager(connectionManager).build();
System.out.println("Available connections "+connectionManager.getTotalStats().getAvailable());
System.out.println("Max Connections "+connectionManager.getTotalStats().getMax());
System.out.println("Number of routes "+connectionManager.getRoutes().size());
Testme testme = new Testme();
Testme.ThreadMe threads[] = new Testme.ThreadMe[5];
for(int i=0;i<5;i++)
threads[i] = testme.new ThreadMe();
for(Testme.ThreadMe thread:threads) {
System.out.println("Leased connections before assigning "+connectionManager.getTotalStats().getLeased());
thread.start();
}
}
class ThreadMe extends Thread{
@Override
public void run() {
try {
CloseableHttpResponse response= httpClient.execute(new HttpGet("http://127.0.0.1:8887"));
System.out.println("Req for "+Thread.currentThread().getName() + " executed with "+response);
try {
HttpEntity entity = response.getEntity();
EntityUtils.consume(entity);
}catch(IOException e) {
e.printStackTrace();
}
finally {
response.close();
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
The output I received was as follows:
Available connections 0
Max Connections 25
Number of routes 0
Leased connections before assigning 0
Leased connections before assigning 0
Leased connections before assigning 0
Leased connections before assigning 0
Leased connections before assigning 0
Req for Thread-2 executed with 200 OK HTTP/1.1
Req for Thread-4 executed with 200 OK HTTP/1.1
Req for Thread-3 executed with 200 OK HTTP/1.1
Req for Thread-0 executed with 200 OK HTTP/1.1
Req for Thread-1 executed with 200 OK HTTP/1.1
I am unable to understand why my leased connections are always shown as 0 even though there are requests executing. Also, the number of routes is always shown as 0 even though I have registered a route. It seems to me that there is some problem, but I could not identify it. Available connections are also shown as 0 during execution (though that is not printed here). Please help me find out what went wrong. Thanks!
The first print of the "Available connections" message is of course 0: only after you get the response is it decided, based on the default strategies DefaultClientConnectionReuseStrategy and DefaultConnectionKeepAliveStrategy, whether the connection should be reusable and for how long; only then is the connection moved to the available-connections list.
I'm guessing that the number of routes is also populated only after at least one connection has been created.
In your log you can see that the main thread printed all the "Leased connections before assigning" messages before the child threads ran, and so you see 0 leased connections.
A connection is in the leased-connections list only from the time it is acquired until the time it is released, which usually happens when the response entity is fully consumed (e.g. EntityUtils.consume()), on response.close(), or when the connection manager is closed.
So try moving the "Leased connections before assigning" print from the main thread into the child threads.
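The race itself is independent of HttpClient; a plain-JDK sketch (with an AtomicInteger standing in for getTotalStats().getLeased(), and made-up names throughout) shows why a print in the main thread observes 0 while a print inside the worker observes the lease:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LeasedCount {
    // Stands in for connectionManager.getTotalStats().getLeased()
    static final AtomicInteger leased = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        int n = 5;
        CountDownLatch done = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            // A print placed here, in the main thread, runs before the
            // workers have "leased" anything and would almost always show 0.
            new Thread(() -> {
                int now = leased.incrementAndGet();   // "connection leased"
                System.out.println("leased while request in flight: " + now);
                leased.decrementAndGet();             // "connection released"
                done.countDown();
            }).start();
        }
        done.await();
    }
}
```

Printed from inside the worker, the counter is always at least 1, which is the observation the suggestion above is after.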
Currently, I'm reading the book "Reactive Programming with RxJava" by Tomasz Nurkiewicz. In chapter 5 he compares two different approaches to building an HTTP server, one of which is based on the Netty framework.
I can't figure out how using such a framework helps to build a more responsive server compared to the classic approach of a thread per request with blocking IO.
The main concept is to utilize as few threads as possible, but if there is some blocking IO operation, such as DB access, then only a very limited number of concurrent connections can be handled at a time.
I've reproduced an example from that book.
Initializing the server:
public static void main(String[] args) throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
new ServerBootstrap()
.option(ChannelOption.SO_BACKLOG, 50_000)
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new HttpInitializer())
.bind(8080)
.sync()
.channel()
.closeFuture()
.sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
The size of the worker-group thread pool is availableProcessors * 2 = 8 on my machine.
To simulate some IO operation and be able to see what is going on in the log, I've added a latency of 1 sec to the handler (it could just as well be some business-logic invocation):
class HttpInitializer extends ChannelInitializer<SocketChannel> {
private final HttpHandler httpHandler = new HttpHandler();
@Override
public void initChannel(SocketChannel ch) {
ch
.pipeline()
.addLast(new HttpServerCodec())
.addLast(httpHandler);
}
}
And the handler itself:
class HttpHandler extends ChannelInboundHandlerAdapter {
private static final Logger log = LoggerFactory.getLogger(HttpHandler.class);
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
try {
System.out.println(format("Request received on thread '%s' from '%s'", Thread.currentThread().getName(), ((NioSocketChannel)ctx.channel()).remoteAddress()));
} catch (Exception ex) {}
sendResponse(ctx);
}
}
private void sendResponse(ChannelHandlerContext ctx) {
final DefaultFullHttpResponse response = new DefaultFullHttpResponse(
HTTP_1_1,
HttpResponseStatus.OK,
Unpooled.wrappedBuffer("OK".getBytes(UTF_8)));
try {
TimeUnit.SECONDS.sleep(1);
} catch (Exception ex) {
System.out.println("Exception caught " + ex);
}
response.headers().add("Content-length", 2);
ctx.writeAndFlush(response);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
log.error("Error", cause);
ctx.close();
}
}
The client to simulate multiple concurrent connections:
public class NettyClient {
public static void main(String[] args) throws Exception {
NettyClient nettyClient = new NettyClient();
for (int i = 0; i < 100; i++) {
new Thread(() -> {
try {
nettyClient.startClient();
} catch (Exception ex) {
}
}).start();
}
TimeUnit.SECONDS.sleep(5);
}
public void startClient()
throws IOException, InterruptedException {
InetSocketAddress hostAddress = new InetSocketAddress("localhost", 8080);
SocketChannel client = SocketChannel.open(hostAddress);
System.out.println("Client... started");
String threadName = Thread.currentThread().getName();
// Send messages to server
String[] messages = new String[]
{"GET / HTTP/1.1\n" +
"Host: localhost:8080\n" +
"Connection: keep-alive\n" +
"Cache-Control: max-age=0\n" +
"Upgrade-Insecure-Requests: 1\n" +
"User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36\n" +
"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3\n" +
"Accept-Encoding: gzip, deflate, br\n" +
"Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7\n\n"}; // trailing blank line terminates the headers
for (int i = 0; i < messages.length; i++) {
byte[] message = new String(messages[i]).getBytes();
ByteBuffer buffer = ByteBuffer.wrap(message);
client.write(buffer);
System.out.println(messages[i]);
buffer.clear();
}
client.close();
}
}
Expected -
Our case is the blue line, with the only difference that the delay was set to 0.1 sec instead of 1 sec, as I explained above. With 100 concurrent connections I was expecting about 100 RPS, because the graph shows 90k RPS with 100k concurrent connections at a 0.1 sec delay.
Actual - netty handles only 8 concurrent connections at a time, waits while the sleep expires, takes another batch of 8 requests, and so on. As a result, it took about 13 sec to complete all the requests. Obviously, to handle more clients I would need to allocate more threads.
But this is exactly how the classic blocking IO approach works! Here are the logs on the server side; as you can see, the first 8 requests are handled, and one second later another 8 requests:
2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49466'
2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49465'
2019-07-19T12:34:10.792Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49464'
2019-07-19T12:34:10.793Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49463'
2019-07-19T12:34:10.799Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49462'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49467'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49461'
2019-07-19T12:34:10.803Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49460'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49552'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49553'
2019-07-19T12:34:11.799Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49554'
2019-07-19T12:34:11.801Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49470'
2019-07-19T12:34:11.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49475'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49559'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49468'
2019-07-19T12:34:11.806Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49469'
So my question is - how could netty (or something similar) with its non-blocking and event-driven architecture utilize the CPU more effectively? If we only had 1 thread per event loop group, the pipeline would be as follows:
ServerChannel selection key set to ON_ACCEPT
ServerChannel accepts a connection, and the ClientChannel selection key is set to ON_READ
Worker thread reads the content of this ClientChannel and passes it to the chain of handlers.
Even if the ServerChannel thread accepts another client connection and puts it into some sort of queue, the worker thread can't do anything before all the handlers in the chain finish their job. From my point of view, the thread can't just switch to another job, since even waiting for the response from a remote DB requires CPU ticks.
"How could netty (or something similar) with its non-blocking and event-driven architecture utilize the CPU more effectively?"
It cannot.
The goal of asynchronous (non-blocking and event-driven) programming is to save core memory: when tasks are used instead of threads as the units of parallel work, you can have millions of parallel activities instead of thousands.
CPU cycles cannot be saved automatically - that is always an intellectual job.
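The tasks-instead-of-threads idea can be sketched with the JDK alone. The pattern below is what something like Netty's ctx.executor().schedule(...) would let the handler above do: instead of Thread.sleep(1000), register a timer and return, so a single thread can keep 100 delayed "responses" in flight at once (all names here are illustrative, not from the book):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class OneThreadManyDelays {
    public static void main(String[] args) throws InterruptedException {
        // A single-threaded stand-in for an event loop
        ScheduledExecutorService loop = Executors.newSingleThreadScheduledExecutor();
        int requests = 100;
        CountDownLatch done = new CountDownLatch(requests);
        AtomicInteger completed = new AtomicInteger();
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            // Instead of sleeping (which would serialize everything on the
            // one thread), schedule the "response" and return immediately.
            loop.schedule(() -> {
                completed.incrementAndGet();
                done.countDown();
            }, 1, TimeUnit.SECONDS);
        }
        done.await();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(completed.get() + " responses in ~" + elapsedMs + " ms on one thread");
        loop.shutdown();
    }
}
```

All 100 delayed responses complete in roughly one second on one thread, where sleeping in the handler would take 100 seconds. The caveat in the answer stands: the delay here is a timer, not CPU work, and genuinely CPU-bound work gains nothing from this.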
I have set up an HttpsServer in Java. All of my communication works perfectly. I set up multiple contexts, load a self-signed certificate, and even start up based on an external configuration file.
My problem now is getting multiple clients to be able to hit my secure server. To do so, I would like to somehow multi-thread the requests that come in from the HttpsServer but cannot figure out how to do so. Below is my basic HttpsConfiguration.
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 0);
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(secureConnection.getKeyManager().getKeyManagers(), secureConnection.getTrustManager().getTrustManagers(), null);
server.setHttpsConfigurator(new SecureServerConfiguration(sslContext));
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
Where secureConnection is a custom class containing server setup and certificate information.
I attempted to set the executor to Executors.newCachedThreadPool() and a couple of other ones. However, they all produced the same result. Each managed the threads differently, but the first request had to finish before the second could be processed.
I also tried writing my own Executor
public class AsyncExecutor extends ThreadPoolExecutor implements Executor
{
public static Executor create()
{
return new AsyncExecutor();
}
public AsyncExecutor()
{
super(5, 10, 10000, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(12));
}
@Override
public void execute(Runnable process)
{
System.out.println("New Process");
Thread newProcess = new Thread(process);
newProcess.setDaemon(false);
newProcess.start();
System.out.println("Thread created");
}
}
Unfortunately, with the same result as the other Executors.
To test, I am using Postman to hit the /test endpoint, which simulates a long-running task by doing a Thread.sleep(10000). While that is running, I am using my Chrome browser to hit the root endpoint. The root page does not load until the 10-second sleep is over.
Any thoughts on how to handle multiple concurrent requests to the HTTPS server?
For ease of testing, I replicated my scenario using the standard HttpServer and condensed everything into a single java program.
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
public class Example
{
private final static int PORT = 80;
private final static int BACKLOG = 10;
/**
* To test hit:
* <p><b>http://localhost/test</b></p>
* <p>This will hit the endpoint with the thread sleep<br>
* Then hit:</p>
* <p><b>http://localhost</b></p>
* <p>I would expect this to come back right away. However, it does not come back until the
* first request finishes. This can be tested with only a basic browser.</p>
* @param args
* @throws Exception
*/
public static void main(String[] args) throws Exception
{
new Example().start();
}
private void start() throws Exception
{
HttpServer server = HttpServer.create(new InetSocketAddress(PORT), BACKLOG);
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
System.out.println("Server Started on " + PORT);
}
class RootHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
String body = "<html>Hello World</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
class TestHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
try
{
Thread.sleep(10000);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
String body = "<html>Test Handled</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
}
TL;DR: It's OK; just use two different browsers, or a specialized tool, to test it.
Your original implementation is OK and works as expected; no custom Executor is needed. For each request it executes a method of the "shared" handler class instance. It always picks a free thread from the pool, so each method call is executed in a different thread.
The problem seems to be that when you use multiple windows of the same browser to test this behavior, for some reason the requests get executed in a serialized way (only one at a time). Tested with the latest Firefox, Chrome, Edge and Postman: Edge and Postman work as expected. The anonymous modes of Firefox and Chrome also help.
Same local URL opened at the same time from two Chrome windows: in the first, the page loaded after 5 s (I used Thread.sleep(5000), so that's OK). The second window loaded the response in 8.71 s, so there is a 3.71 s delay of unknown origin.
My guess? Probably some browser-internal optimization or fail-safe mechanism.
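One way to verify the concurrency without a browser is a self-contained JDK check (com.sun.net.httpserver plus java.net.http; the /slow and /fast endpoints are made up for the test): a slow handler is requested asynchronously, and a request to a fast handler should come back while the slow one is still sleeping.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;

public class ConcurrencyCheck {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/slow", ex -> {           // simulates the long-running /test handler
            try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
            byte[] b = "slow".getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, b.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(b); }
        });
        server.createContext("/fast", ex -> {           // simulates the root handler
            byte[] b = "fast".getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, b.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(b); }
        });
        server.setExecutor(Executors.newCachedThreadPool()); // same executor as the question
        server.start();
        int port = server.getAddress().getPort();

        HttpClient client = HttpClient.newHttpClient();
        CompletableFuture<HttpResponse<String>> slow = client.sendAsync(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/slow")).build(),
                HttpResponse.BodyHandlers.ofString());
        long t0 = System.nanoTime();
        client.send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/fast")).build(),
                HttpResponse.BodyHandlers.ofString());
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("fast finished while slow pending: " + (ms < 2000));
        slow.join();
        server.stop(0);
    }
}
```

If /fast had to wait for /slow here, the executor would genuinely not be parallelizing; with the cached pool it returns almost immediately, which points the finger at the browser rather than the server.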
Try specifying a non-zero maximum backlog (the second argument to create()):
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 10);
I did some experiments and what works for me is:
public void handle(HttpExchange exchange) {
    // hand the exchange off to the pool; the runnable needs it to write the response
    executor.submit(new SomeOtherHandler(exchange));
}

public class SomeOtherHandler implements Runnable {
    // store the exchange in a field via the constructor and respond in run()
}
where the executor is the one you created as thread pool.
CloseableHttpResponse response = null;
try {
// do some thing ....
HttpPost request = new HttpPost("some url");
response = getHttpClient().execute(request);
// do some other thing ....
} catch(Exception e) {
// deal with exception
} finally {
if(response != null) {
try {
response.close(); // (1)
} catch(Exception e) {}
request.releaseConnection(); // (2)
}
}
I've made an HTTP request like the above.
In order to release the underlying connection, is it correct to call (1) and (2)? And what's the difference between the two invocations?
Short answer:
request.releaseConnection() releases the underlying HTTP connection so it can be reused. response.close() closes a stream (not a connection); this stream is the response content we are streaming from the network socket.
Long answer:
The correct pattern to follow in any recent version (> 4.2, and probably even before that) is not to use releaseConnection.
request.releaseConnection() releases the underlying HTTP connection so the request can be reused; however, the Javadoc says:
A convenience method to simplify migration from HttpClient 3.1 API...
Instead of releasing the connection, we ensure the response content is fully consumed, which in turn ensures the connection is released and ready for reuse. A short example is shown below:
CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet("http://targethost/homepage");
CloseableHttpResponse response1 = httpclient.execute(httpGet);
try {
System.out.println(response1.getStatusLine());
HttpEntity entity1 = response1.getEntity();
// do something useful with the response body
String bodyAsString = EntityUtils.toString(entity1);
System.out.println(bodyAsString);
// and ensure it is fully consumed (this is how the stream is released)
EntityUtils.consume(entity1);
} finally {
response1.close();
}
CloseableHttpResponse.close() closes the tcp socket
HttpPost.releaseConnection() closes the tcp socket
EntityUtils.consume(response.getEntity()) allows you to re-use the tcp socket
Details
CloseableHttpResponse.close() closes the tcp socket, preventing the connection from being re-used. You need to establish a new tcp connection in order to initiate another request.
This is the call chain that led me to the above conclusion:
HttpResponseProxy.close()
-> ConnectionHolder.close()
-> ConnectionHolder.releaseConnection(reusable=false)
-> managedConn.close()
-> BHttpConnectionBase.close()
-> Socket.close()
HttpPost.releaseConnection() also closes the socket. This is the call chain that led me to the above conclusion:
HttpPost.releaseConnection()
HttpRequestBase.releaseConnect()
AbstractExecutionAwareRequest.reset()
ConnectionHolder.cancel()
ConnectionHolder.abortConnection()
HttpConnection.shutdown()
Here is experimental code that also demonstrates the above three facts:
import java.lang.reflect.Constructor;
import java.net.Socket;
import java.net.SocketImpl;
import java.net.SocketImplFactory;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;
public class Main {
private static SocketImpl newSocketImpl() {
try {
Class<?> defaultSocketImpl = Class.forName("java.net.SocksSocketImpl");
Constructor<?> constructor = defaultSocketImpl.getDeclaredConstructor();
constructor.setAccessible(true);
return (SocketImpl) constructor.newInstance();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public static void main(String[] args) throws Exception {
// this is a hack that lets me listen to Tcp socket creation
final List<SocketImpl> allSockets = Collections.synchronizedList(new ArrayList<>());
Socket.setSocketImplFactory(new SocketImplFactory() {
public SocketImpl createSocketImpl() {
SocketImpl socket = newSocketImpl();
allSockets.add(socket);
return socket;
}
});
System.out.println("num of sockets after start: " + allSockets.size());
CloseableHttpClient client = HttpClientBuilder.create().build();
System.out.println("num of sockets after client created: " + allSockets.size());
HttpGet request = new HttpGet("http://www.google.com");
System.out.println("num of sockets after get created: " + allSockets.size());
CloseableHttpResponse response = client.execute(request);
System.out.println("num of sockets after get executed: " + allSockets.size());
response.close();
System.out.println("num of sockets after response closed: " + allSockets.size());
response = client.execute(request);
System.out.println("num of sockets after request executed again: " + allSockets.size());
request.releaseConnection();
System.out.println("num of sockets after release connection: " + allSockets.size());
response = client.execute(request);
System.out.println("num of sockets after request executed again for 3rd time: " + allSockets.size());
EntityUtils.consume(response.getEntity());
System.out.println("num of sockets after entityConsumed: " + allSockets.size());
response = client.execute(request);
System.out.println("num of sockets after request executed again for 4th time: " + allSockets.size());
}
}
pom.xml
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>org.joseph</groupId>
<artifactId>close.vs.release.conn</artifactId>
<version>1.0.0</version>
<properties>
<maven.compiler.target>1.8</maven.compiler.target>
<maven.compiler.source>1.8</maven.compiler.source>
</properties>
<build>
<plugins>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.5.13</version>
</dependency>
</dependencies>
</project>
Output:
num of sockets after start: 0
num of sockets after client created: 0
num of sockets after get created: 0
num of sockets after get executed: 1
num of sockets after response closed: 1
num of sockets after request executed again: 2
num of sockets after release connection: 2
num of sockets after request executed again for 3rd time: 3
num of sockets after entityConsumed: 3
num of sockets after request executed again for 4th time: 3
Notice that both .close() and .releaseConnection() result in a new tcp connection on the next request. Only consuming the entity allows you to re-use the tcp connection.
If you want the connection to be re-usable after each request, then you need to do what @Matt recommended and consume the entity.
This code creates a new connection to the RESTful server for each request rather than reusing the existing connection. How do I change the code so that there is only one connection?
The line "response = oClientCloseable.execute(...)" not only performs the request but also creates a connection.
I checked the server daemon log, and the only activity is generated by the .execute() method.
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpDelete;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.HttpClientUtils;
import org.apache.http.conn.ConnectionPoolTimeoutException;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
...
String pathPost = "http://someurl";
String pathDelete = "http://someurl2";
String xmlPost = "myxml";
HttpResponse response = null;
BufferedReader rd = null;
String line = null;
CloseableHttpClient oClientCloseable = HttpClientBuilder.create().setDefaultRequestConfig(defaultRequestConfig).build();
for (int iLoop = 0; iLoop < 25; iLoop++)
{
HttpPost hPost = new HttpPost(pathPost);
hPost.setHeader("Content-Type", "application/xml");
StringEntity se = new StringEntity(xmlPost);
hPost.setEntity(se);
line = "";
try
{
response = oClientCloseable.execute(hPost);
rd = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
while ((line = rd.readLine()) != null)
{
System.out.println(line);
}
}
catch (ClientProtocolException e)
{
e.printStackTrace();
}
catch (ConnectionPoolTimeoutException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
HttpClientUtils.closeQuietly(response);
}
HttpDelete hDelete = new HttpDelete(pathDelete);
hDelete.setHeader("Content-Type", "application/xml");
try
{
response = oClientCloseable.execute(hDelete);
}
catch (ClientProtocolException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
HttpClientUtils.closeQuietly(response);
}
}
oClientCloseable.close();
The server daemon log emits the following, for whatever it is worth, when connecting.
HTTP connection from [192.168.20.86]...ALLOWED
POST [/linx] SIZE 248
LINK-18446744073709551615: 2 SEND-BMQs, 2 RECV-BMQs
THREAD-LINK_CONNECT-000, TID: 7F0F1B7FE700 READY
NODE connecting to [192.168.30.20]:9099...
LINK-0-CONTROL-NODE-0 connected to 192.168.30.20(192.168.30.20 IPv4 address: 192.168.30.20):9099
Auth accepted, protocol compatible
NODE connecting to [192.168.30.20]:9099...
This article seems the most relevant, as it talks about consuming (closing) connections, which ties into the response. That article is also out of date, as consumeContent is deprecated. It seems that response.close() is the proper way, but that closes the connection, and a new request then creates a new connection.
It seems that I need to somehow create one connection to the server daemon and then change the action (get, post, put, or delete).
Thoughts on how the code should change?
Here are some other links that I used:
link 1
link 2
link 3
I implemented the suggestion of Robert Rowntree (sorry, not sure how to properly reference the name) by replacing the beginning code with:
// Increase max total connection to 200 and increase default max connection per route to 20.
// Configure total max or per route limits for persistent connections
// that can be kept in the pool or leased by the connection manager.
PoolingHttpClientConnectionManager oConnectionMgr = new PoolingHttpClientConnectionManager();
oConnectionMgr.setMaxTotal(200);
oConnectionMgr.setDefaultMaxPerRoute(20);
oConnectionMgr.setMaxPerRoute(new HttpRoute(new HttpHost("192.168.20.120", 8080)), 20);
RequestConfig defaultRequestConfig = RequestConfig.custom()
.setSocketTimeout(5000)
.setConnectTimeout(5000)
.setConnectionRequestTimeout(5000)
.setStaleConnectionCheckEnabled(true)
.build();
//HttpClient client = HttpClientBuilder.create().setDefaultRequestConfig(defaultRequestConfig).build();
CloseableHttpClient oClientCloseable = HttpClientBuilder.create()
.setConnectionManager(oConnectionMgr)
.setDefaultRequestConfig(defaultRequestConfig)
.build();
I still saw the bunch of authenticates.
I contacted the vendor, shared the log from the modified version with them, and my code was clean.
My test sample created a connection (to a remote server), then deleted the connection, and repeated this however many times. Their code dumps the authenticate message each time a connection-creation request arrives.
I was pointed to what I technically already knew: the line that creates a new RESTful connection to the service is always "XXXXX connection allowed". There was one of those - two if you count my going to the browser-based interface afterwards to make sure that all my links were gone.
Sadly, I am not sure that I can use the Apache client. Apache does not support message bodies inside a GET request. To the simple-minded here (me, in this case), Apache does not allow:
GET http://www.example.com/whatevermethod:myport?arg1=data1&arg2=data2
Apache HttpClient --> HttpGet does not have a setEntity method. Research showed that done as a POST request instead, but the service is the way it is and will not change, so...
You can definitely use query parameters in Apache HttpClient:
URIBuilder builder = new URIBuilder("http://www.example.com/whatevermethod");
builder.addParameter("arg1", "data1");
URI uri = builder.build();
HttpGet get = new HttpGet(uri);
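If the Apache client turns out not to be usable at all, the same URI can be assembled with the JDK alone; withParams below is a hypothetical helper mirroring what URIBuilder.addParameter does, including the percent-encoding it performs:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class QueryUri {
    // Hypothetical helper: key/value pairs are passed alternately.
    static URI withParams(String base, String... kv) {
        StringBuilder q = new StringBuilder();
        for (int i = 0; i < kv.length; i += 2) {
            if (q.length() > 0) q.append('&');
            q.append(URLEncoder.encode(kv[i], StandardCharsets.UTF_8))
             .append('=')
             .append(URLEncoder.encode(kv[i + 1], StandardCharsets.UTF_8));
        }
        return URI.create(base + "?" + q);
    }

    public static void main(String[] args) {
        URI uri = withParams("http://www.example.com/whatevermethod",
                             "arg1", "data1", "arg2", "data2");
        System.out.println(uri); // http://www.example.com/whatevermethod?arg1=data1&arg2=data2
    }
}
```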
I'm trying to use server-side code based on Java NIO (non-blocking) from 'The Rox Java NIO Tutorial'. There are lots of incoming socket connections, and I would like to accept only 100. So if there are 100 active connections, new ones should be rejected/refused. But how do I do that? There is only the method ServerSocketChannel.accept(), which returns a SocketChannel object. Using that object I can call socketChannel.socket().close(), but the connection is already open. Here is part of the code:
@Override
public void run() {
while (true) {
try {
// Wait for an event one of the registered channels
this.selector.select();
// Iterate over the set of keys for which events are available
Iterator selectedKeys = this.selector.selectedKeys().iterator();
while (selectedKeys.hasNext()) {
SelectionKey key = (SelectionKey) selectedKeys.next();
selectedKeys.remove();
if (!key.isValid()) {
continue;
}
// Check what event is available and deal with it
if (key.isAcceptable()) {
this.accept(key);
} else if (key.isReadable()) {
this.read(key);
} else if (key.isWritable()) {
this.write(key);
}
}
} catch (Exception e) {
logger.warn("Reading data", e);
}
}
}
and the accept() method:
private void accept(SelectionKey key) throws IOException {
// For an accept to be pending the channel must be a server socket channel.
ServerSocketChannel serverSocketChannel = (ServerSocketChannel) key.channel();
// Accept the connection and make it non-blocking
if (noOfConnections < MAX_CONNECTIONS) {
SocketChannel socketChannel = serverSocketChannel.accept();
Socket socket = socketChannel.socket();
socket.setKeepAlive(true);
socketChannel.configureBlocking(false);
// Register the new SocketChannel with our Selector, indicating
// we'd like to be notified when there's data waiting to be read
socketChannel.register(this.selector, SelectionKey.OP_READ | SelectionKey.OP_WRITE);//listener for incoming data: READ from client, WRITE to client
noOfConnections++;
logger.info("Accepted: " + socket.getRemoteSocketAddress().toString());
} else {
// REJECT INCOMING CONNECTION, but how?
logger.warn("Server is full: " + noOfConnections + " / " + MAX_CONNECTIONS);
}
}
If a connection is not accepted, the accept() method is called over and over.
Thanks for help!
There is no way to accomplish that, but I doubt that that's what you really want, or at least what you really should do.
If you want to stop accepting connections, change the interestOps in the server socket channel's selection key to zero, and change it back to OP_ACCEPT when you are ready to accept again. In the interim, isAcceptable() will never be true, so the problem you describe won't occur.
However, that won't cause further connections to be refused: it will just leave them on the backlog queue where, in my opinion and that of the designers of TCP, they belong. There will be another failure behaviour if the backlog queue fills up; its effect on the client is system-dependent: connection refusals and/or timeouts.
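The interestOps toggle described above can be sketched without any clients; the key's state is visible directly on the ServerSocketChannel registration (port 0 picks an ephemeral port):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class PauseAccept {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(0));
        SelectionKey acceptKey = server.register(selector, SelectionKey.OP_ACCEPT);

        // Connection limit reached: stop watching for accepts, so
        // isAcceptable() can never fire and the accept loop stays quiet.
        acceptKey.interestOps(0);
        System.out.println("accept paused: " + (acceptKey.interestOps() == 0));

        // A connection was released: resume accepting.
        acceptKey.interestOps(SelectionKey.OP_ACCEPT);
        System.out.println("accept resumed: "
                + ((acceptKey.interestOps() & SelectionKey.OP_ACCEPT) != 0));

        server.close();
        selector.close();
    }
}
```

Pending connections meanwhile sit on the kernel backlog queue, as the answer notes; they are neither accepted nor refused until the queue itself overflows.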
I think any tuning of a backlog queue hardly ever would be a good solution. But probably, you can just stop listening.
Well, I managed this problem in the following way:
Pending connections on a socket are in a kind of "middle state", which means you cannot control/reject them.
The backlog socket parameter may be used/ignored/treated differently by a specific VM.
That means you have to accept a particular connection to receive the associated object and operate on it.
Use one thread to accept connections, and pass each accepted connection to a second thread for processing.
Create a variable for the number of active connections.
Now, while the number of active connections is less than the desired maximum, accept the connection, raise the number by 1, and pass it to the second thread for processing.
Otherwise, accept the connection and close it immediately.
Also, in the connection-processing thread, once it is finished, decrease the number of active connections by 1 to signal that one more channel is available.
EDIT: I just made a "stub" of the server mechanism using plain java.net sockets (blocking, not NIO).
It may be adapted to the OP's needs:
package servertest;
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.logging.Level;
import java.util.logging.Logger;
public class Servertest extends Thread {
final int MAXIMUM_CONNECTIONS = 3;
volatile int connectionnumber = 0; // written from several threads; an AtomicInteger would be safer
/**
* @param args the command line arguments
* @throws java.io.IOException
*/
public static void main(String[] args){
new Servertest().start();
}
@Override
public void run() {
try {
ServerSocket sc = new ServerSocket(33000, 50, InetAddress.getLoopbackAddress());
while (sc.isBound()) {
Socket connection = sc.accept();
if(connectionnumber < MAXIMUM_CONNECTIONS){
new ClientConnection(connection).start();
connectionnumber++;
} else {
//Optionally write some error response to client
connection.close();
}
}
} catch (IOException ex) {
Logger.getLogger(Servertest.class.getName()).log(Level.SEVERE, null, ex);
}
}
private class ClientConnection extends Thread{
private Socket connection;
public ClientConnection(Socket connection) {
this.connection=connection;
}
@Override
public void run() {
try {
//make user interaction
connection.close();
} catch (IOException ex) {
Logger.getLogger(Servertest.class.getName()).log(Level.SEVERE, null, ex);
}
connectionnumber--;
}
}
}