Java HttpsServer multi-threaded

I have set up an HttpsServer in Java. All of my communication works perfectly. I set up multiple contexts, load a self-signed certificate, and even start up based on an external configuration file.
My problem now is getting multiple clients to be able to hit my secure server. To do so, I would like to somehow multi-thread the requests that come in from the HttpsServer but cannot figure out how to do so. Below is my basic HttpsConfiguration.
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 0);
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(secureConnection.getKeyManager().getKeyManagers(), secureConnection.getTrustManager().getTrustManagers(), null);
server.setHttpsConfigurator(new SecureServerConfiguration(sslContext));
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
Where secureConnection is a custom class containing server setup and certificate information.
I attempted to set the executor to Executors.newCachedThreadPool() and a couple of others. However, they all produced the same result: each managed its threads differently, but the first request still had to finish before the second could be processed.
I also tried writing my own Executor:
public class AsyncExecutor extends ThreadPoolExecutor implements Executor
{
public static Executor create()
{
return new AsyncExecutor();
}
public AsyncExecutor()
{
super(5, 10, 10000, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(12));
}
@Override
public void execute(Runnable process)
{
System.out.println("New Process");
Thread newProcess = new Thread(process);
newProcess.setDaemon(false);
newProcess.start();
System.out.println("Thread created");
}
}
Unfortunately, it produced the same result as the other Executors.
To test, I am using Postman to hit the /test endpoint, which simulates a long-running task with a Thread.sleep(10000). While that is running, I use my Chrome browser to hit the root endpoint. The root page does not load until the 10-second sleep is over.
Any thoughts on how to handle multiple concurrent requests to the HTTPS server?
For ease of testing, I replicated my scenario using the standard HttpServer and condensed everything into a single java program.
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
public class Example
{
private final static int PORT = 80;
private final static int BACKLOG = 10;
/**
* To test hit:
* <p><b>http://localhost/test</b></p>
* <p>This will hit the endpoint with the thread sleep<br>
* Then hit:</p>
* <p><b>http://localhost</b></p>
* <p>I would expect this to come back right away. However, it does not come back until the
* first request finishes. This can be tested with only a basic browser.</p>
* @param args
* @throws Exception
*/
public static void main(String[] args) throws Exception
{
new Example().start();
}
private void start() throws Exception
{
HttpServer server = HttpServer.create(new InetSocketAddress(PORT), BACKLOG);
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
System.out.println("Server Started on " + PORT);
}
class RootHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
String body = "<html>Hello World</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
class TestHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
try
{
Thread.sleep(10000);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
String body = "<html>Test Handled</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
}

TL;DR: It's OK, just use two different browsers, or a specialized tool, to test it.
Your original implementation is OK and works as expected; no custom Executor is needed. For each request it executes a method of the "shared" handler class instance. It always picks up a free thread from the pool, so each method call is executed in a different thread.
The problem seems to be that when you use multiple windows of the same browser to test this behavior, for some reason the requests get executed in a serialized way (only one at a time). Tested with the latest Firefox, Chrome, Edge and Postman. Edge and Postman work as expected. The anonymous/incognito modes of Firefox and Chrome also help.
Same local URL opened at the same time from two Chrome windows: in the first, the page loaded after 5 s (I used Thread.sleep(5000), so that's OK). The second window loaded its response in 8.71 s, so there is a 3.71 s delay of unknown origin.
My guess? Probably some browser internal optimization or failsafe mechanism.
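To take the browser out of the picture entirely, the two endpoints can also be hit concurrently from code. Below is a minimal sketch, assuming Java 11+ and the example server above running on port 80; the root request should report a time well under the 10-second sleep if the server really is handling requests in parallel.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
public class ConcurrencyCheck {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        // Fire the slow request first, then the fast one, without waiting in between.
        CompletableFuture<Void> slow = timed(client, "http://localhost/test");
        CompletableFuture<Void> fast = timed(client, "http://localhost/");
        CompletableFuture.allOf(slow, fast).join();
    }
    private static CompletableFuture<Void> timed(HttpClient client, String url) {
        long start = System.currentTimeMillis();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenAccept(r -> System.out.println(url + " -> " + r.statusCode()
                        + " after " + (System.currentTimeMillis() - start) + " ms"));
    }
}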

Try specifying a non-zero maximum backlog (the second argument to create()):
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 10);

I did some experiments and what works for me is:
@Override
public void handle(HttpExchange exchange) throws IOException {
executor.submit(new SomeOtherHandler(exchange)); // hand the exchange off to a worker thread
}
public class SomeOtherHandler implements Runnable {
// run() does the long-running work and writes the response to the exchange it was given
}
where the executor is the one you created as a thread pool.
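For completeness, a minimal sketch of that hand-off, assuming the exchange is passed to the worker so the response can be written once the long-running work finishes (the pool size and the sleep are placeholders; the exchange must not be closed until the worker is done with it):
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class DispatchingHandler implements HttpHandler {
    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    @Override
    public void handle(HttpExchange exchange) {
        // Return from handle() right away; the worker owns the exchange from here on.
        executor.submit(() -> {
            try {
                Thread.sleep(10000); // placeholder for the long-running task
                byte[] body = "<html>Test Handled</html>".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            }
        });
    }
}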

Related

Sending a GET http request freezes my window [duplicate]

I'm fairly new to Java, so this may seem obvious to some. I've worked a lot with ActionScript, which is very much event based and I love that. I recently tried to write a small bit of Java code that does a POST request, but I've been faced with the problem that it's a synchronous request, so the code execution waits for the request to complete, time out or present an error.
How can I create an asynchronous request, where the code continues the execution and a callback is invoked when the HTTP request is complete? I've glanced at threads, but I'm thinking it's overkill.
If you are in a JEE 7 environment, you should have a decent implementation of JAX-RS around, which allows you to easily make asynchronous HTTP requests using its client API.
It would look like this:
public class Main {
public static Future<Response> getAsyncHttp(final String url) {
return ClientBuilder.newClient().target(url).request().async().get();
}
public static void main(String ...args) throws InterruptedException, ExecutionException {
Future<Response> response = getAsyncHttp("http://www.nofrag.com");
while (!response.isDone()) {
System.out.println("Still waiting...");
Thread.sleep(10);
}
System.out.println(response.get().readEntity(String.class));
}
}
Of course, this is just using futures. If you are OK with pulling in some more libraries, you could take a look at RxJava; the code would then look like this:
public static void main(String... args) {
final String url = "http://www.nofrag.com";
rx.Observable.from(ClientBuilder.newClient().target(url).request().async().get(String.class), Schedulers
.newThread())
.subscribe(
next -> System.out.println(next),
error -> System.err.println(error),
() -> System.out.println("Stream ended.")
);
System.out.println("Async proof");
}
And last but not least, if you want to reuse your async call, you might want to take a look at Hystrix, which, in addition to a bazillion other super cool things, would allow you to write something like this:
public class AsyncGetCommand extends HystrixCommand<String> {
private final String url;
public AsyncGetCommand(final String url) {
super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("HTTP"))
.andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
.withExecutionIsolationThreadTimeoutInMilliseconds(5000)));
this.url = url;
}
@Override
protected String run() throws Exception {
return ClientBuilder.newClient().target(url).request().get(String.class);
}
}
Calling this command would look like:
public static void main(String ...args) {
new AsyncGetCommand("http://www.nofrag.com").observe().subscribe(
next -> System.out.println(next),
error -> System.err.println(error),
() -> System.out.println("Stream ended.")
);
System.out.println("Async proof");
}
PS: I know the thread is old, but it felt wrong that no one mentions the Rx/Hystrix way in the up-voted answers.
You may also want to look at Async Http Client.
Note that Java 11 now offers a new HTTP API, java.net.http.HttpClient, which supports fully asynchronous operation using Java's CompletableFuture.
It supports both synchronous calls (send) and asynchronous calls (sendAsync).
Example of an async request (taken from the apidoc):
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create("https://example.com/"))
.timeout(Duration.ofMinutes(2))
.header("Content-Type", "application/json")
.POST(BodyPublishers.ofFile(Paths.get("file.json")))
.build();
client.sendAsync(request, BodyHandlers.ofString())
.thenApply(HttpResponse::body)
.thenAccept(System.out::println);
Based on a link to Apache HTTP Components on this SO thread, I came across the Fluent facade API for HTTP Components. An example there shows how to set up a queue of asynchronous HTTP requests (and get notified of their completion/failure/cancellation). In my case, I didn't need a queue, just one async request at a time.
Here's where I ended up (also using URIBuilder from HTTP Components, example here).
import java.net.URI;
import java.net.URISyntaxException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.http.client.fluent.Async;
import org.apache.http.client.fluent.Content;
import org.apache.http.client.fluent.Request;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.concurrent.FutureCallback;
//...
URIBuilder builder = new URIBuilder();
builder.setScheme("http").setHost("myhost.com").setPath("/folder")
.setParameter("query0", "val0")
.setParameter("query1", "val1")
...;
URI requestURL = null;
try {
requestURL = builder.build();
} catch (URISyntaxException use) {}
ExecutorService threadpool = Executors.newFixedThreadPool(2);
Async async = Async.newInstance().use(threadpool);
final Request request = Request.Get(requestURL);
Future<Content> future = async.execute(request, new FutureCallback<Content>() {
public void failed (final Exception e) {
System.out.println(e.getMessage() +": "+ request);
}
public void completed (final Content content) {
System.out.println("Request completed: "+ request);
System.out.println("Response:\n"+ content.asString());
}
public void cancelled () {}
});
You may want to take a look at this question: Asynchronous IO in Java?
It looks like your best bet, if you don't want to wrangle the threads yourself, is a framework. The previous post mentions
Grizzly, https://grizzly.dev.java.net/, and Netty, http://www.jboss.org/netty/.
From the netty docs:
The Netty project is an effort to provide an asynchronous event-driven network application framework and tools for rapid development of maintainable high performance & high scalability protocol servers & clients.
Apache HttpComponents also has an async HTTP client now:
/**
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpasyncclient</artifactId>
<version>4.0-beta4</version>
</dependency>
**/
import java.io.IOException;
import java.nio.CharBuffer;
import java.util.concurrent.Future;
import org.apache.http.HttpResponse;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;
import org.apache.http.nio.IOControl;
import org.apache.http.nio.client.methods.AsyncCharConsumer;
import org.apache.http.nio.client.methods.HttpAsyncMethods;
import org.apache.http.protocol.HttpContext;
public class HttpTest {
public static void main(final String[] args) throws Exception {
final CloseableHttpAsyncClient httpclient = HttpAsyncClients
.createDefault();
httpclient.start();
try {
final Future<Boolean> future = httpclient.execute(
HttpAsyncMethods.createGet("http://www.google.com/"),
new MyResponseConsumer(), null);
final Boolean result = future.get();
if (result != null && result.booleanValue()) {
System.out.println("Request successfully executed");
} else {
System.out.println("Request failed");
}
System.out.println("Shutting down");
} finally {
httpclient.close();
}
System.out.println("Done");
}
static class MyResponseConsumer extends AsyncCharConsumer<Boolean> {
@Override
protected void onResponseReceived(final HttpResponse response) {
}
@Override
protected void onCharReceived(final CharBuffer buf, final IOControl ioctrl)
throws IOException {
while (buf.hasRemaining()) {
System.out.print(buf.get());
}
}
@Override
protected void releaseResources() {
}
@Override
protected Boolean buildResult(final HttpContext context) {
return Boolean.TRUE;
}
}
}
It has to be made clear that the HTTP protocol is synchronous, and this has nothing to do with the programming language. A client sends a request and gets a synchronous response.
If you want asynchronous behavior over HTTP, it has to be built on top of HTTP (I don't know anything about ActionScript, but I suppose that is what ActionScript does too). There are many libraries that can give you such functionality (e.g. Jersey SSE). Note that they do somehow define dependencies between the client and the server, as they have to agree on the exact non-standard communication method on top of HTTP.
If you cannot control both the client and the server, or if you don't want dependencies between them, the most common approach to implementing asynchronous (e.g. event-based) communication over HTTP is the webhooks approach (you can check this for an example implementation in Java).
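As a rough illustration of the webhook idea (the port, path and payload here are made up): the client exposes a small HTTP endpoint of its own, sends its callback URL along with the original request, and the server POSTs the result to that URL whenever the work completes.
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
public class WebhookReceiver {
    public static void main(String[] args) throws Exception {
        // The callback endpoint the server will POST to once the long-running job is done.
        HttpServer callbackServer = HttpServer.create(new InetSocketAddress(9090), 0);
        callbackServer.createContext("/callback", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                String result = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                System.out.println("Job finished, server sent: " + result);
            }
            exchange.sendResponseHeaders(204, -1); // acknowledge with no body
            exchange.close();
        });
        callbackServer.start();
        // The initial request would carry http://client-host:9090/callback as the callback URL
        // and return immediately; the actual result arrives later on the endpoint above.
    }
}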
Hope I helped!
Here is a solution using Apache HttpClient and making the call in a separate thread. This solution is useful if you are only making one async call. If you are making multiple calls, I suggest using Apache HttpAsyncClient and placing the calls in a thread pool.
import java.io.IOException;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.BasicHttpClientResponseHandler;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
public class ApacheHttpClientExample {
public static void main(final String[] args) throws Exception {
try (final CloseableHttpClient httpclient = HttpClients.createDefault()) {
final HttpGet httpget = new HttpGet("http://httpbin.org/get");
Thread thread = new Thread(() -> {
try {
// execute the request and read the body as a String
final String responseBody = httpclient.execute(httpget, new BasicHttpClientResponseHandler());
System.out.println(responseBody);
} catch (IOException e) {
e.printStackTrace();
}
});
thread.start();
thread.join(); // keep the client open until the background request finishes
}
}
}

Java create/overwrite http server

I'm creating a plugin on a certain platform (the details are irrelevant) and need to create an HTTP endpoint. In normal circumstances you'd create an HTTP server and stop it whenever you're done using it or when the application stops; however, in my case I can't detect when the plugin is being uninstalled/reinstalled.
The problem
When someone installs my plugin twice, the second time it will throw an error because I'm trying to create an HTTP server on a port which is already in use. Since it's being reinstalled, I can't save the HTTP server in a static variable either. In other words, I need to be able to stop a previously created HTTP server without having any reference to it.
My attempt
I figured the only way to interact with the original reference to the HTTP server would be to create a thread whenever the HTTP server starts, and then override the interrupt() method to stop the server, but somehow I'm still receiving the 'port is already in use' error. I'm using Undertow as my HTTP server library, but this problem applies to any HTTP server implementation.
import io.undertow.Undertow;
import io.undertow.util.Headers;
public class SomeServlet extends Thread {
private static final String THREAD_NAME = "some-servlet-container-5391301";
private static final int PORT = 5839;
private Undertow server;
public static void listen() { // this method is called whenever my plugin is installed
deleteExistingServer();
new SomeServlet().start();
}
private static void deleteExistingServer() {
for (Thread t : Thread.getAllStackTraces().keySet()) {
if (t.getName().equals(THREAD_NAME)) {
t.interrupt();
}
}
}
@Override
public void run() {
createServer();
}
@Override
public void interrupt() {
try {
System.out.println("INTERRUPT");
this.server.stop();
} finally {
super.interrupt();
}
}
private void createServer() {
this.server = Undertow.builder()
.addHttpListener(PORT, "localhost")
.setHandler(exchange -> {
exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
exchange.getResponseSender().send("Hello World!");
})
.build();
this.server.start();
}
}
Desired behaviour
Whenever listen() is called, it will remove any previously existing http server and create a new one, without relying on storing the server on a static variable.
You could try com.sun.net.httpserver.HttpServer. Use http://localhost:8765/stop to stop the server and http://localhost:8765/test for a test request:
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
public class TestHttpServer {
public static void main(String[] args) throws IOException {
final HttpServer server = HttpServer.create();
server.bind(new InetSocketAddress(8765), 0);
server.createContext("/test", httpExchange -> {
String response = "<html>TEST!!!</html>";
httpExchange.sendResponseHeaders(200, response.length());
OutputStream os = httpExchange.getResponseBody();
os.write(response.getBytes());
os.close();
});
server.createContext("/stop", httpExchange -> server.stop(1));
server.start();
}
}
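In the plugin scenario from the question, listen() could then call that /stop endpoint of any previously installed instance before binding the port again. A best-effort sketch (the port and path are carried over from the example above and are otherwise assumptions):
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
public final class StopPreviousInstance {
    public static void stopIfRunning() {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:8765/stop").openConnection();
            conn.setConnectTimeout(1000);
            conn.setReadTimeout(1000);
            conn.getResponseCode(); // triggers the request; the old server shuts itself down
            conn.disconnect();
        } catch (IOException ignored) {
            // No previous instance was listening, or it died mid-response; either way the port is free now.
        }
    }
}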

Matlab Java Interoperability

Our web app acts as an integration layer that allows users to run Matlab code (Matlab is a scientific programming language) that has been compiled to Java and packaged as jar files, via the browser (the selected ones in the image above, except for remote_proxy-1.0.0.jar, which is not Matlab code; it is used for RMI).
The problem is that the Matlab Java runtime, contained inside the javabuilder-1.0.0.jar file, has a process-wide locking mechanism. If the first user sends an HTTP request to execute cdf_read-1.0.0.jar (or any Matlab-compiled-to-Java jar), subsequent requests block until the first one finishes, and that takes no less than 5 seconds because JNI is used to invoke the native Matlab code. The app server does spawn a new thread to serve each request, but because of the process-wide locking in the Matlab runtime those newly spawned threads just block waiting for the first request to be fulfilled, so our app can effectively serve only one user at a time.
So to work around this problem, for each such request we start a new JVM process, send the request to that process to run the job over RMI, return the result to the app server's process, and then destroy the spawned process. This solves the blocking issue, but it is not good at all in terms of memory use; this is a niche app, but the number of users is in the range of thousands. Below is the code used to start a new process running the BootStrap class, which starts a new RMI registry and binds a remote object to run the job.
package rmi;
import java.io.*;
import java.nio.file.*;
import static java.util.stream.Collectors.joining;
import java.util.stream.Stream;
import javax.enterprise.concurrent.ManagedExecutorService;
import org.slf4j.LoggerFactory;
//TODO: Remove sout
public class ProcessInit {
public static Process startRMIServer(ManagedExecutorService pool, String WEBINF, int port, String jar) {
ProcessBuilder pb = new ProcessBuilder();
Path wd = Paths.get(WEBINF);
pb.directory(wd.resolve("classes").toFile());
Path lib = wd.resolve("lib");
String cp = Stream.of("javabuilder-1.0.0.jar", "remote_proxy-1.0.0.jar", jar)
.map(e -> lib.resolve(e).toString())
.collect(joining(File.pathSeparator));
pb.command("java", "-cp", "." + File.pathSeparator + cp, "rmi.BootStrap", String.valueOf(port));
while (true) {
try {
Process p = pb.start();
pool.execute(() -> flushIStream(p.getInputStream()));
pool.execute(() -> flushIStream(p.getErrorStream()));
return p;
} catch (Exception ex) {
ex.printStackTrace();
System.out.println("Retrying....");
}
}
}
private static void flushIStream(InputStream is) {
try (BufferedReader br = new BufferedReader(new InputStreamReader(is))) {
br.lines().forEach(System.out::println);
} catch (IOException ex) {
LoggerFactory.getLogger(ProcessInit.class.getName()).error(ex.getMessage());
}
}
}
This class is used to start a new RMI registry so that each HTTP request to execute Matlab code can be run in a separate process. The reason we do this is that each RMI registry is bound to a process, so we need a separate registry for each JVM process.
package rmi;
import java.rmi.RemoteException;
import java.rmi.registry.*;
import java.rmi.server.UnicastRemoteObject;
import java.util.logging.*;
import remote_proxy.*;
//TODO: Remove sout
public class BootStrap {
public static void main(String[] args) {
int port = Integer.parseInt(args[0]);
System.out.println("Instantiating a task runner implementation on port: " + port );
try {
System.setProperty("java.rmi.server.hostname", "localhost");
TaskRunner runner = new TaskRunnerRemoteObject();
TaskRunner stub = (TaskRunner)UnicastRemoteObject.exportObject(runner, 0);
Registry reg = LocateRegistry.createRegistry(port);
reg.rebind("runner" + port, stub);
} catch (RemoteException ex) {
Logger.getLogger(BootStrap.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
This class allows us to submit the request to execute the Matlab code, return the result, and kill the newly spawned process.
package rmi.tasks;
import java.rmi.*;
import java.rmi.registry.*;
import java.util.Random;
import java.util.concurrent.*;
import java.util.logging.*;
import javax.enterprise.concurrent.ManagedExecutorService;
import remote_proxy.*;
import rmi.ProcessInit;
public final class Tasks {
/**
* @param pool This instance should be injected using @Resource(name = "java:comp/DefaultManagedExecutorService")
* @param task This is an implementation of the Task interface, this
* implementation should extend from MATLAB class and accept any necessary
* arguments, e.g Class1 and it must implement Serializable interface
* @param WEBINF WEB-INF directory
* @param jar Name of the jar that contains this MATLAB function
* @throws RemoteException
* @throws NotBoundException
*/
public static final <T> T runTask(ManagedExecutorService pool, Task<T> task, String WEBINF, String jar) throws RemoteException, NotBoundException {
int port = new Random().nextInt(1000) + 2000;
Future<Process> process = pool.submit(() -> ProcessInit.startRMIServer(pool, WEBINF, port, jar));
Registry reg = LocateRegistry.getRegistry(port);
TaskRunner generator = (TaskRunner) reg.lookup("runner" + port);
T result = generator.runTask(task);
destroyProcess(process);
return result;
}
private static void destroyProcess(Future<Process> process) {
try {
System.out.println("KILLING THIS PROCESS");
process.get().destroy();
System.out.println("DONE KILLING THIS PROCESS");
} catch (InterruptedException | ExecutionException ex) {
Logger.getLogger(Tasks.class.getName()).log(Level.SEVERE, null, ex);
System.out.println("DONE KILLING THIS PROCESS");
}
}
}
The questions:
Do I have to start a new separate RMI registry and bind a remote to it for each new process?
Is there a better way to achieve the same result?
You don't want JVM startup time to be part of the perceived transaction time, so I would start a large number of RMI JVMs ahead of time, depending on the expected number of concurrent requests, which could be in the hundreds or even thousands.
You only need one Registry: rmiregistry.exe. Start it on its default port and with an appropriate CLASSPATH so it can find all your stubs and the application classes they depend on.
Bind each remote object into that Registry with sequentially-increasing names of the general form runner%d.
Have your RMI client pick a 'runner' at random from the known range 1-N where N is the number of runners. You may need a more sophisticated mechanism than mere randomness to ensure that the runner is free at the time.
You don't need multiple Registry ports or even multiple Registries.
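A minimal sketch of that layout, assuming the single registry runs on the default port; the runner%d naming and pool size are illustrative. Each pre-started worker JVM binds itself into the shared registry, and the app server looks one up by name.
import java.rmi.NotBoundException;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.util.concurrent.ThreadLocalRandom;
public class RunnerLookup {
    private static final int RUNNER_COUNT = 50; // however many worker JVMs were pre-started
    // In each worker JVM, instead of creating a private registry per process:
    //   Registry reg = LocateRegistry.getRegistry();   // the one shared registry on port 1099
    //   reg.rebind("runner" + workerIndex, stub);      // runner1 .. runnerN
    // On the app-server side, pick a runner by name (random here; a real pool
    // would track which runners are currently free):
    public static Remote lookupRandomRunner() throws RemoteException, NotBoundException {
        Registry reg = LocateRegistry.getRegistry();
        int n = ThreadLocalRandom.current().nextInt(1, RUNNER_COUNT + 1);
        return reg.lookup("runner" + n); // cast to the TaskRunner remote interface at the call site
    }
}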

Netty 4.0.23 multiple hosts single client

My question is about creating multiple TCP clients to multiple hosts using the same event loop group in Netty 4.0.23 Final. I must admit that I don't quite understand Netty 4's client threading model, especially with the loads of confusing references to Netty 3.x implementations I hit during my research on the internet.
With the following code, I establish a connection with a single server and send random commands using a command queue:
public class TCPsocket {
private static final CircularFifoQueue CommandQueue = new CircularFifoQueue(20);
private final EventLoopGroup workerGroup;
private final TcpClientInitializer tcpHandlerInit; // all handlers sharable
public TCPsocket() {
workerGroup = new NioEventLoopGroup();
tcpHandlerInit = new TcpClientInitializer();
}
public void connect(String host, int port) throws InterruptedException {
try {
Bootstrap b = new Bootstrap();
b.group(workerGroup);
b.channel(NioSocketChannel.class);
b.remoteAddress(host, port);
b.handler(tcpHandlerInit);
Channel ch = b.connect().sync().channel();
ChannelFuture writeCommand = null;
for (;;) {
if (!CommandQueue.isEmpty()) {
writeCommand = ch.writeAndFlush(CommandExecute()); // commandExecute() fetches a command from the commandQueue and encodes it into a byte array
}
if (CommandQueue.isFull()) { // this will never happen ... or should never happen
ch.closeFuture().sync();
break;
}
}
if (writeCommand != null) {
writeCommand.sync();
}
} finally {
workerGroup.shutdownGracefully();
}
}
public static void main(String args[]) throws InterruptedException {
TCPsocket socket = new TCPsocket();
socket.connect("192.168.0.1", 2101);
}
}
In addition to executing commands off the command queue, this client keeps receiving periodic responses from the server, as a reply to an initial command that is sent as soon as the channel becomes active. In one of the registered handlers (in the TcpClientInitializer implementation), I have:
@Override
public void channelActive(ChannelHandlerContext ctx) {
ctx.writeAndFlush(firstMessage);
System.out.println("sent first message\n");
}
which activates a feature in the connected-to server, triggering a periodic packet that is returned from the server through the life span of my application.
The problem comes when I try to use this same setup to connect to multiple servers,
by looping through a string array of known server IPs:
public static void main(String args[]) throws InterruptedException {
String[] hosts = new String[]{"192.168.0.2", "192.168.0.4", "192.168.0.5"};
TCPsocket socket = new TCPsocket();
for (String host : hosts) {
socket.connect(host, 2101);
}
}
Once the first connection is established and the server (192.168.0.2) starts sending the designated periodic packets, no other connection is attempted, which (I think) is the result of the main thread waiting on the connection to die and therefore never running the second iteration of the for loop. The discussion in this question led me to think that the connection process is started in a separate thread, allowing the main thread to continue executing, but that's not what I see here. So what is actually happening? And how would I go about connecting to multiple hosts with the same client in Netty 4.0.23 Final?
Thanks in advance
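A sketch of the direction that reasoning suggests, assuming the shared handlers are marked @Sharable: connect to every host up front, only wait on the close futures at the end, and shut the event loop group down once. This would replace the connect() method of the TCPsocket class above (it reuses its workerGroup and tcpHandlerInit fields and additionally needs java.util.List and java.util.ArrayList).
// Sketch only: reuse one Bootstrap and one event loop group for all hosts, and do not
// block on closeFuture() or shut the group down inside the per-host connect step.
public void connectAll(String[] hosts, int port) throws InterruptedException {
    Bootstrap b = new Bootstrap()
            .group(workerGroup)
            .channel(NioSocketChannel.class)
            .handler(tcpHandlerInit);
    List<Channel> channels = new ArrayList<>();
    for (String host : hosts) {
        channels.add(b.connect(host, port).sync().channel()); // returns as soon as this host is connected
    }
    // ... drain the command queue, writing each command to the channel it belongs to ...
    for (Channel ch : channels) {
        ch.closeFuture().sync(); // only now wait for the connections to end
    }
    workerGroup.shutdownGracefully(); // shut the group down once, at the very end
}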

Netty slower than Tomcat

We just finished building a server to store data to disk and fronted it with Netty. During load testing we were seeing Netty scale to about 8,000 messages per second. Given our systems, this looked really low. For a benchmark, we wrote a Tomcat front-end and ran the same load tests. With these tests we were getting roughly 25,000 messages per second.
Here are the specs for our load testing machine:
Macbook Pro Quad core
16GB of RAM
Java 1.6
Here is the load test setup for Netty:
10 threads
100,000 messages per thread
Netty server code (pretty standard) - our Netty pipeline on the server is two handlers: a FrameDecoder and a SimpleChannelHandler that handles the request and response.
Client side JIO using Commons Pool to pool and reuse connections (the pool was sized the same as the # of threads)
Here is the load test setup for Tomcat:
10 threads
100,000 messages per thread
Tomcat 7.0.16 with default configuration using a Servlet to call the server code
Client side using URLConnection without any pooling
My main question is why there is such a huge difference in performance. Is there something obvious with respect to Netty that can get it to run faster than Tomcat?
Edit: Here is the main Netty server code:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);
server.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() {
RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
return Channels.pipeline(decoder, handler);
}
});
server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);
Channel channel = server.bind(new InetSocketAddress(port));
allChannels.add(channel);
Our handlers look like this:
public class RequestDecoder extends FrameDecoder {
@Override
protected ChannelBuffer decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
if (buffer.readableBytes() < 4) {
return null;
}
buffer.markReaderIndex();
int length = buffer.readInt();
if (buffer.readableBytes() < length) {
buffer.resetReaderIndex();
return null;
}
return buffer;
}
}
public class ContentStoreChannelHandler extends SimpleChannelHandler {
private final RequestHandler handler;
@Inject
public ContentStoreChannelHandler(RequestHandler handler) {
this.handler = handler;
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
ChannelBuffer in = (ChannelBuffer) e.getMessage();
in.readerIndex(4);
ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
out.writerIndex(8); // Skip the length and status code
boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
if (success) {
out.setInt(0, out.writerIndex() - 8); // length
out.setInt(4, 0); // Status
}
Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
Throwable throwable = e.getCause();
ChannelBuffer out = ChannelBuffers.dynamicBuffer(8);
out.writeInt(0); // Length
out.writeInt(Errors.generalException.getCode()); // status
Channels.write(ctx, e.getFuture(), out);
}
@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
NettyContentStoreServer.allChannels.add(e.getChannel());
}
}
UPDATE:
I've managed to get my Netty solution to within 4,000/second of Tomcat. A few weeks back I was testing a client-side PING in my connection pool as a safeguard against idle sockets, but I forgot to remove that code before I started load testing. This code effectively PINGed the server every time a socket was checked out from the pool (using Commons Pool). I commented that code out and I'm now getting 21,000/second with Netty and 25,000/second with Tomcat.
Although, this is great news on the Netty side, I'm still getting 4,000/second less with Netty than Tomcat. I can post my client side (which I thought I had ruled out but apparently not) if anyone is interested in seeing that.
The method messageReceived is executed using a worker thread that is possibly getting blocked by RequestHandler#handle which may be busy doing some I/O work.
You could try adding an ExecutionHandler backed by an OrderedMemoryAwareThreadPoolExecutor (recommended) into the channel pipeline to execute the handlers, or alternatively try dispatching your handler work to a new ThreadPoolExecutor and passing a reference to the socket channel for later writing the response back to the client. For example:
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
executor.submit(new Runnable() {
@Override
public void run() {
processHandlerAndRespond(e);
}
});
}
private void processHandlerAndRespond(MessageEvent e) {
ChannelBuffer in = (ChannelBuffer) e.getMessage();
in.readerIndex(4);
ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
out.writerIndex(8); // Skip the length and status code
boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
if (success) {
out.setInt(0, out.writerIndex() - 8); // length
out.setInt(4, 0); // Status
}
Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
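For the first (recommended) option, the pipeline factory from the question would gain a shared ExecutionHandler placed in front of the blocking handler, roughly like this (pool sizes are illustrative; ExecutionHandler and OrderedMemoryAwareThreadPoolExecutor come from org.jboss.netty.handler.execution):
// Created once and shared by every pipeline: 16 worker threads,
// 1 MB per-channel and 16 MB total queued-event memory limits.
final ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 16777216));
server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        // Everything downstream of executionHandler now runs on the separate pool
        // instead of an I/O worker thread.
        return Channels.pipeline(decoder, executionHandler, handler);
    }
});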
