Problem Description
The Servlet 3.0 API allows you to detach a request/response context and answer it later.
However, if I try to write a large amount of data, something like:
AsyncContext ac = getWaitingContext();
ServletOutputStream out = ac.getResponse().getOutputStream();
out.print(some_big_data);
out.flush();
It may actually block - and it does block in trivial test cases - for both Tomcat 7 and Jetty 8. The tutorials recommend creating a thread pool to
handle such a setup - which is generally the opposite of a traditional 10K architecture.
However, if I have 10,000 open connections and a thread pool of, say, 10 threads,
even 1% of clients with low-speed or stalled connections is enough
to tie up the thread pool and completely block the comet responses, or
at least slow them down significantly.
The expected practice is to get a "write-ready" notification or an I/O completion notification
and then continue to push the data.
How can this be done using the Servlet 3.0 API? That is, how do I get either:
an asynchronous completion notification on an I/O operation, or
non-blocking I/O with a write-ready notification?
If this is not supported by the Servlet 3.0 API, are there any web-server-specific APIs (like Jetty Continuations or Tomcat CometEvent) that allow handling such events truly asynchronously, without faking asynchronous I/O with a thread pool?
Does anybody know?
And if this is not possible, can you confirm it with a reference to documentation?
Problem demonstration in sample code
I have attached code below that emulates an event stream.
Notes:
it uses a ServletOutputStream that throws IOException to detect disconnected clients
it sends keep-alive messages to make sure clients are still there
I created a thread pool to "emulate" asynchronous operations.
In this example I explicitly defined a thread pool of size 1 to show the problem:
Start the application
From two terminals, run curl http://localhost:8080/path/to/app (twice)
Now send the data with curl -d m=message http://localhost:8080/path/to/app
Both clients receive the data
Now suspend one of the clients (Ctrl+Z) and send the message once again: curl -d m=message http://localhost:8080/path/to/app
Observe that the other, non-suspended client either receives nothing or, after the message is transferred, stops receiving keep-alive messages, because the other thread is blocked.
I want to solve this problem without using a thread pool, because with 1000-5000 open
connections I can exhaust the thread pool very quickly.
The sample code is below.
import java.io.IOException;
import java.util.HashSet;
import java.util.Iterator;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.LinkedBlockingQueue;
import javax.servlet.AsyncContext;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.ServletOutputStream;
@WebServlet(urlPatterns = "", asyncSupported = true)
public class HugeStreamWithThreads extends HttpServlet {
private long id = 0;
private String message = "";
private final ThreadPoolExecutor pool =
new ThreadPoolExecutor(1, 1, 50000L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
// it is explicitly small for demonstration purpose
private final Thread timer = new Thread(new Runnable() {
public void run()
{
try {
while(true) {
Thread.sleep(1000);
sendKeepAlive();
}
}
catch(InterruptedException e) {
// exit
}
}
});
class RunJob implements Runnable {
volatile long lastUpdate = System.nanoTime();
long id = 0;
AsyncContext ac;
RunJob(AsyncContext ac)
{
this.ac = ac;
}
public void keepAlive()
{
if(System.nanoTime() - lastUpdate > 1000000000L)
pool.submit(this);
}
String formatMessage(String msg)
{
StringBuilder sb = new StringBuilder();
sb.append("id");
sb.append(id);
for(int i=0;i<100000;i++) {
sb.append("data:");
sb.append(msg);
sb.append("\n");
}
sb.append("\n");
return sb.toString();
}
public void run()
{
String message = null;
synchronized(HugeStreamWithThreads.this) {
if(this.id != HugeStreamWithThreads.this.id) {
this.id = HugeStreamWithThreads.this.id;
message = HugeStreamWithThreads.this.message;
}
}
if(message == null)
message = ":keep-alive\n\n";
else
message = formatMessage(message);
if(!sendMessage(message))
return;
boolean once_again = false;
synchronized(HugeStreamWithThreads.this) {
if(this.id != HugeStreamWithThreads.this.id)
once_again = true;
}
if(once_again)
pool.submit(this);
}
boolean sendMessage(String message)
{
try {
ServletOutputStream out = ac.getResponse().getOutputStream();
out.print(message);
out.flush();
lastUpdate = System.nanoTime();
return true;
}
catch(IOException e) {
ac.complete();
removeContext(this);
return false;
}
}
};
private HashSet<RunJob> asyncContexts = new HashSet<RunJob>();
@Override
public void init(ServletConfig config) throws ServletException
{
super.init(config);
timer.start();
}
@Override
public void destroy()
{
for(;;){
try {
timer.interrupt();
timer.join();
break;
}
catch(InterruptedException e) {
continue;
}
}
pool.shutdown();
super.destroy();
}
protected synchronized void removeContext(RunJob ac)
{
asyncContexts.remove(ac);
}
// GET method is used to establish a stream connection
@Override
protected synchronized void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
// Content-Type header
response.setContentType("text/event-stream");
response.setCharacterEncoding("utf-8");
// Access-Control-Allow-Origin header
response.setHeader("Access-Control-Allow-Origin", "*");
final AsyncContext ac = request.startAsync();
ac.setTimeout(0);
RunJob job = new RunJob(ac);
asyncContexts.add(job);
if(id!=0) {
pool.submit(job);
}
}
private synchronized void sendKeepAlive()
{
for(RunJob job : asyncContexts) {
job.keepAlive();
}
}
// POST method is used to communicate with the server
@Override
protected synchronized void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException
{
request.setCharacterEncoding("utf-8");
id++;
message = request.getParameter("m");
for(RunJob job : asyncContexts) {
pool.submit(job);
}
}
}
The sample above uses threads to prevent blocking... However, if the number of blocked clients is larger than the size of the thread pool, it would block.
How could it be implemented without blocking?
I've found the Servlet 3.0 Asynchronous API tricky to implement correctly and helpful documentation to be sparse. After a lot of trial and error and trying many different approaches, I was able to find a robust solution that I've been very happy with. When I look at my code and compare it to yours, I notice one major difference that may help you with your particular problem. I use a ServletResponse to write the data and not a ServletOutputStream.
Here is my go-to asynchronous servlet class, adapted slightly for your some_big_data case:
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebInitParam;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import org.apache.log4j.Logger;
@javax.servlet.annotation.WebServlet(urlPatterns = { "/async" }, asyncSupported = true, initParams = { @WebInitParam(name = "threadpoolsize", value = "100") })
public class AsyncServlet extends HttpServlet {
private static final Logger logger = Logger.getLogger(AsyncServlet.class);
public static final int CALLBACK_TIMEOUT = 10000; // ms
/** executor service */
private ExecutorService exec;
@Override
public void init(ServletConfig config) throws ServletException {
super.init(config);
int size = Integer.parseInt(getInitParameter("threadpoolsize"));
exec = Executors.newFixedThreadPool(size);
}
@Override
public void service(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
final AsyncContext ctx = req.startAsync();
final HttpSession session = req.getSession();
// set the timeout
ctx.setTimeout(CALLBACK_TIMEOUT);
// attach listener to respond to lifecycle events of this AsyncContext
ctx.addListener(new AsyncListener() {
@Override
public void onComplete(AsyncEvent event) throws IOException {
logger.info("onComplete called");
}
@Override
public void onTimeout(AsyncEvent event) throws IOException {
logger.info("onTimeout called");
}
@Override
public void onError(AsyncEvent event) throws IOException {
logger.info("onError called: " + event.toString());
}
@Override
public void onStartAsync(AsyncEvent event) throws IOException {
logger.info("onStartAsync called");
}
});
enqueLongRunningTask(ctx, session);
}
/**
* if something goes wrong in the task, it simply causes a timeout condition that causes the async context listener to be invoked (after the fact)
* <p/>
* if the {@link AsyncContext#getResponse()} is null, that means this context has already timed out (and the context listener has been invoked).
*/
private void enqueLongRunningTask(final AsyncContext ctx, final HttpSession session) {
exec.execute(new Runnable() {
@Override
public void run() {
String some_big_data = getSomeBigData();
try {
ServletResponse response = ctx.getResponse();
if (response != null) {
response.getWriter().write(some_big_data);
ctx.complete();
} else {
throw new IllegalStateException(); // this is caught below
}
} catch (IllegalStateException ex) {
logger.error("Request object from context is null! (nothing to worry about.)"); // just means the context was already timeout, timeout listener already called.
} catch (Exception e) {
logger.error("ERROR IN AsyncServlet", e);
}
}
});
}
/** destroy the executor */
@Override
public void destroy() {
exec.shutdown();
}
}
During my research on this topic, this thread kept popping up, so I figured I'd mention it here:
Servlet 3.1 introduced async operations on ServletInputStream and ServletOutputStream. See ServletOutputStream.setWriteListener.
An example can be found at http://docs.oracle.com/javaee/7/tutorial/servlets013.htm
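To give a flavor of the 3.1 API, here is a minimal sketch of a WriteListener-driven non-blocking write (the servlet path and chunk count are made up, and this needs a Servlet 3.1 container such as Tomcat 8 or Jetty 9.1):
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/nonblocking", asyncSupported = true)
public class NonBlockingWriteServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/event-stream");
        final AsyncContext ac = req.startAsync();
        ac.setTimeout(0);
        final ServletOutputStream out = resp.getOutputStream();
        out.setWriteListener(new WriteListener() {
            private int chunks = 0;
            @Override
            public void onWritePossible() throws IOException {
                // isReady() == true guarantees the next write will not block;
                // when it returns false, the container invokes onWritePossible()
                // again once the client has drained the socket
                while (out.isReady()) {
                    if (chunks++ >= 1000) { ac.complete(); return; }
                    out.print("data: chunk " + chunks + "\n\n");
                }
            }
            @Override
            public void onError(Throwable t) {
                ac.complete();
            }
        });
    }
}
This is exactly the write-ready notification the question asks for: the container calls back when the socket can accept more data, so a slow client never ties up a thread.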
This might be helpful:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/async-servlet/async-servlets.html
We can't quite cause the writes to be asynchronous. We realistically have to live with the limitation that when we do write something out to a client, we expect to be able to do so promptly and are able to treat it as an error if we don't. That is, if our goal is to stream data to the client as fast as possible and use the blocking/non-blocking status of the channel as a way to control the flow, we're out of luck. But, if we're sending data at a low rate that a client should be able to handle, we are able at least to promptly disconnect clients that don't read quickly enough.
For example, in your application, we send the keepalives at a slow-ish rate (every few seconds) and expect clients to be able to keep up with all the events they're being sent. We splurge the data to the client, and if it can't keep up, we can disconnect it promptly and cleanly. That's a bit more limited than true asynchronous I/O, but it should meet your need (and incidentally, mine).
The trick is that the output-writing methods, which are declared only to throw IOException, actually do a bit more than that: in the implementation, all the calls to things that can be interrupt()ed are wrapped with something like this (taken from Jetty 9):
catch (InterruptedException x) {
throw (IOException) new InterruptedIOException().initCause(x);
}
(I also note that this doesn't happen in Jetty 8, where an InterruptedException is logged and the blocking loop is immediately retried. Presumably you need to make sure your servlet container is well-behaved to use this trick.)
That is, when a slow client causes a writing thread to block, we simply force the write to be thrown up as an IOException by calling interrupt() on the thread. Think about it: the non-blocking code would consume a unit of time on one of our processing threads to execute anyway, so using blocking writes that are just aborted (after say one millisecond) is really identical in principle. We're still just chewing up a short amount of time on the thread, only marginally less efficiently.
I've modified your code so that a watchdog job bounds the time spent in each write: it is scheduled just before we start the write, and cancelled if the write completes quickly, which it should.
A final note: in a well-implemented servlet container, causing the I/O to throw out ought to be safe. It would be nice if we could catch the InterruptedIOException and try the write again later. Perhaps we'd like to give slow clients a subset of the events if they can't keep up with the full stream. As far as I can tell, in Jetty this isn't entirely safe. If a write throws, the internal state of the HttpResponse object might not be consistent enough to handle re-entering the write safely later. I expect it's not wise to try to push a servlet container in this way unless there are specific docs I've missed offering this guarantee. I think the idea is that a connection is designed to be shut down if an IOException happens.
Here's the code, with a modified version of RunJob::run() using a grottily simple illustration (in reality, we'd want to use the main timer thread here rather than spinning up one thread per write, which is silly).
public void run()
{
String message = null;
synchronized(HugeStreamWithThreads.this) {
if(this.id != HugeStreamWithThreads.this.id) {
this.id = HugeStreamWithThreads.this.id;
message = HugeStreamWithThreads.this.message;
}
}
if(message == null)
message = ":keep-alive\n\n";
else
message = formatMessage(message);
final Thread curr = Thread.currentThread();
Thread canceller = new Thread(new Runnable() {
public void run()
{
try {
Thread.sleep(2000);
curr.interrupt();
}
catch(InterruptedException e) {
// exit
}
}
});
canceller.start();
try {
if(!sendMessage(message))
return;
} finally {
canceller.interrupt();
while (true) {
try { canceller.join(); break; }
catch (InterruptedException e) { }
}
}
boolean once_again = false;
synchronized(HugeStreamWithThreads.this) {
if(this.id != HugeStreamWithThreads.this.id)
once_again = true;
}
if(once_again)
pool.submit(this);
}
Is Spring an option for you? Spring MVC 3.2 has a class called DeferredResult, which will gracefully handle your "10,000 open connections / 10 server pool threads" scenario.
Example: http://blog.springsource.org/2012/05/06/spring-mvc-3-2-preview-introducing-servlet-3-async-support/
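A minimal sketch of how DeferredResult maps onto the question's publish/subscribe shape (class and mapping names are illustrative, not from the Spring example):
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.context.request.async.DeferredResult;

@Controller
public class EventController {
    private final Queue<DeferredResult<String>> waiting =
            new ConcurrentLinkedQueue<DeferredResult<String>>();

    // GET parks the request; returning a DeferredResult releases the servlet thread
    @RequestMapping(value = "/subscribe", method = RequestMethod.GET)
    @ResponseBody
    public DeferredResult<String> subscribe() {
        DeferredResult<String> result = new DeferredResult<String>();
        waiting.add(result);
        return result;
    }

    // POST completes every parked request with the new message
    @RequestMapping(value = "/publish", method = RequestMethod.POST)
    @ResponseBody
    public String publish(@RequestParam("m") String message) {
        DeferredResult<String> r;
        while ((r = waiting.poll()) != null) {
            r.setResult(message);
        }
        return "ok";
    }
}
Note that the actual write still happens on a container thread when setResult() fires, so this addresses thread-per-connection parking rather than slow-client back-pressure.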
I've had a quick look at your listing, so I may have missed some points.
The advantage of a thread pool is to share thread resources between several tasks over time. Maybe you can solve your problem by spacing out the keep-alive responses of different HTTP connections, instead of sending all of them at the same time, as sketched below.
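A minimal sketch of that idea against the question's code, where phaseOffsetNanos is a hypothetical field that each RunJob would initialize to a random value:
// each job fires in its own slot within the timer window instead of all at once
private synchronized void sendKeepAlive() {
    long now = System.nanoTime();
    for (RunJob job : asyncContexts) {
        if (now - job.lastUpdate > 1000000000L + job.phaseOffsetNanos)
            pool.submit(job);
    }
}
This spreads the writes out over time but does not change the worst case: a blocked client still pins a pool thread for the duration of its write.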
Related
I'm fairly new to Java, so this may seem obvious to some. I've worked a lot with ActionScript, which is very much event based and I love that. I recently tried to write a small bit of Java code that does a POST request, but I've been faced with the problem that it's a synchronous request, so the code execution waits for the request to complete, time out or present an error.
How can I create an asynchronous request, where the code continues the execution and a callback is invoked when the HTTP request is complete? I've glanced at threads, but I'm thinking it's overkill.
If you are in a JEE7 environment, you must have a decent implementation of JAX-RS hanging around, which would allow you to easily make asynchronous HTTP requests using its client API.
That would look like this:
public class Main {
public static Future<Response> getAsyncHttp(final String url) {
return ClientBuilder.newClient().target(url).request().async().get();
}
public static void main(String ...args) throws InterruptedException, ExecutionException {
Future<Response> response = getAsyncHttp("http://www.nofrag.com");
while (!response.isDone()) {
System.out.println("Still waiting...");
Thread.sleep(10);
}
System.out.println(response.get().readEntity(String.class));
}
}
Of course, this is just using futures. If you are OK with using a few more libraries, you could take a look at RxJava; the code would then look like this:
public static void main(String... args) {
final String url = "http://www.nofrag.com";
rx.Observable.from(ClientBuilder.newClient().target(url).request().async().get(String.class),
Schedulers.newThread())
.subscribe(
next -> System.out.println(next),
error -> System.err.println(error),
() -> System.out.println("Stream ended.")
);
System.out.println("Async proof");
}
And last but not least, if you want to reuse your async call, you might want to take a look at Hystrix, which - in addition to a bazillion super cool other things - would allow you to write something like this:
public class AsyncGetCommand extends HystrixCommand<String> {
private final String url;
public AsyncGetCommand(final String url) {
super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("HTTP"))
.andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
.withExecutionIsolationThreadTimeoutInMilliseconds(5000)));
this.url = url;
}
@Override
protected String run() throws Exception {
return ClientBuilder.newClient().target(url).request().get(String.class);
}
}
Calling this command would look like:
public static void main(String ...args) {
new AsyncGetCommand("http://www.nofrag.com").observe().subscribe(
next -> System.out.println(next),
error -> System.err.println(error),
() -> System.out.println("Stream ended.")
);
System.out.println("Async proof");
}
PS: I know this thread is old, but it felt wrong that no one mentioned the Rx/Hystrix way in the upvoted answers.
You may also want to look at Async Http Client.
Note that Java 11 now offers a new HTTP API, HttpClient, which supports fully asynchronous operation using Java's CompletableFuture.
It also supports a synchronous style: send is synchronous, while sendAsync is asynchronous.
Example of an async request (taken from the apidoc):
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create("https://example.com/"))
.timeout(Duration.ofMinutes(2))
.header("Content-Type", "application/json")
.POST(BodyPublishers.ofFile(Paths.get("file.json")))
.build();
client.sendAsync(request, BodyHandlers.ofString())
.thenApply(HttpResponse::body)
.thenAccept(System.out::println);
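For comparison, the blocking form of the same call (a fragment like the example above; send declares IOException and InterruptedException):
HttpClient client = HttpClient.newHttpClient();
HttpResponse<String> response = client.send(request, BodyHandlers.ofString());
System.out.println(response.body());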
Based on a link to Apache HTTP Components on this SO thread, I came across the Fluent facade API for HTTP Components. An example there shows how to set up a queue of asynchronous HTTP requests (and get notified of their completion/failure/cancellation). In my case, I didn't need a queue, just one async request at a time.
Here's where I ended up (also using URIBuilder from HTTP Components, example here).
import java.net.URI;
import java.net.URISyntaxException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.http.client.fluent.Async;
import org.apache.http.client.fluent.Content;
import org.apache.http.client.fluent.Request;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.concurrent.FutureCallback;
//...
URIBuilder builder = new URIBuilder();
builder.setScheme("http").setHost("myhost.com").setPath("/folder")
.setParameter("query0", "val0")
.setParameter("query1", "val1")
...;
URI requestURL = null;
try {
requestURL = builder.build();
} catch (URISyntaxException use) { /* handle the invalid URI here */ }
ExecutorService threadpool = Executors.newFixedThreadPool(2);
Async async = Async.newInstance().use(threadpool);
final Request request = Request.Get(requestURL);
Future<Content> future = async.execute(request, new FutureCallback<Content>() {
public void failed (final Exception e) {
System.out.println(e.getMessage() +": "+ request);
}
public void completed (final Content content) {
System.out.println("Request completed: "+ request);
System.out.println("Response:\n"+ content.asString());
}
public void cancelled () {}
});
You may want to take a look at this question: Asynchronous IO in Java?
It looks like your best bet, if you don't want to wrangle the threads yourself, is a framework. The previous post mentions
Grizzly (https://grizzly.dev.java.net/) and Netty (http://www.jboss.org/netty/).
From the netty docs:
The Netty project is an effort to provide an asynchronous event-driven network application framework and tools for rapid development of maintainable high performance & high scalability protocol servers & clients.
Apache HttpComponents also has an async HTTP client now:
/**
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpasyncclient</artifactId>
<version>4.0-beta4</version>
</dependency>
**/
import java.io.IOException;
import java.nio.CharBuffer;
import java.util.concurrent.Future;
import org.apache.http.HttpResponse;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;
import org.apache.http.nio.IOControl;
import org.apache.http.nio.client.methods.AsyncCharConsumer;
import org.apache.http.nio.client.methods.HttpAsyncMethods;
import org.apache.http.protocol.HttpContext;
public class HttpTest {
public static void main(final String[] args) throws Exception {
final CloseableHttpAsyncClient httpclient = HttpAsyncClients
.createDefault();
httpclient.start();
try {
final Future<Boolean> future = httpclient.execute(
HttpAsyncMethods.createGet("http://www.google.com/"),
new MyResponseConsumer(), null);
final Boolean result = future.get();
if (result != null && result.booleanValue()) {
System.out.println("Request successfully executed");
} else {
System.out.println("Request failed");
}
System.out.println("Shutting down");
} finally {
httpclient.close();
}
System.out.println("Done");
}
static class MyResponseConsumer extends AsyncCharConsumer<Boolean> {
@Override
protected void onResponseReceived(final HttpResponse response) {
}
@Override
protected void onCharReceived(final CharBuffer buf, final IOControl ioctrl)
throws IOException {
while (buf.hasRemaining()) {
System.out.print(buf.get());
}
}
@Override
protected void releaseResources() {
}
@Override
protected Boolean buildResult(final HttpContext context) {
return Boolean.TRUE;
}
}
}
It has to be made clear that the HTTP protocol is synchronous, and this has nothing to do with the programming language. A client sends a request and gets a synchronous response.
If you want asynchronous behavior over HTTP, it has to be built on top of HTTP (I don't know anything about ActionScript, but I suppose that is what ActionScript does too). There are many libraries that provide such functionality (e.g., Jersey SSE). Note that they do somewhat couple the client and the server, as they have to agree on the exact non-standard communication method above HTTP.
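For instance, with the JAX-RS 2.1 SSE server API (a minimal sketch; the resource path is made up):
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.sse.Sse;
import javax.ws.rs.sse.SseEventSink;

@Path("events")
public class SseResource {
    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public void stream(@Context SseEventSink sink, @Context Sse sse) {
        // the container keeps the connection open; events can also be pushed
        // later from another thread that holds on to the sink reference
        sink.send(sse.newEvent("hello"));
    }
}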
If you cannot control both the client and the server, or if you don't want dependencies between them, the most common approach to implementing asynchronous (e.g., event-based) communication over HTTP is the webhooks approach (you can check this for an example implementation in Java).
Hope I helped!
Here is a solution that uses Apache HttpClient and makes the call in a separate thread. It is useful if you are only making one async call. If you are making multiple calls, I suggest using Apache HttpAsyncClient and placing the calls in a thread pool.
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.core5.http.io.entity.EntityUtils;
public class ApacheHttpClientExample {
public static void main(final String[] args) throws Exception {
try (final CloseableHttpClient httpclient = HttpClients.createDefault()) {
final HttpGet httpget = new HttpGet("http://httpbin.org/get");
final Thread worker = new Thread(() -> {
try {
// the response handler consumes the entity and releases the connection
final String responseBody = httpclient.execute(httpget,
response -> EntityUtils.toString(response.getEntity()));
System.out.println(responseBody);
} catch (final Exception e) {
e.printStackTrace();
}
});
worker.start();
worker.join(); // keep the client open until the call finishes
}
}
}
I'm creating a plugin on a certain platform (the details are irrelevant) and need to create an HTTP endpoint. In normal circumstances you'd create an HTTP server and stop it whenever you're done with it or when the application stops; however, in my case I can't detect when the plugin is being uninstalled/reinstalled.
The problem
When someone installs my plugin twice, the second time it will throw an error because I'm trying to create an HTTP server on a port that is already in use. Since it's being reinstalled, I can't save the HTTP server in some static variable either. In other words, I need to be able to stop a previously created HTTP server without having any reference to it.
My attempt
I figured the only way to interact with the original reference to the HTTP server would be to create a thread whenever the HTTP server starts, and then override the interrupt() method to stop the server, but somehow I'm still receiving the 'port is already in use' error. I'm using Undertow as my HTTP server library, but this problem applies to any HTTP server implementation.
import io.undertow.Undertow;
import io.undertow.util.Headers;
public class SomeServlet extends Thread {
private static final String THREAD_NAME = "some-servlet-container-5391301";
private static final int PORT = 5839;
private Undertow server;
public static void listen() { // this method is called whenever my plugin is installed
deleteExistingServer();
new SomeServlet().start();
}
private static void deleteExistingServer() {
for (Thread t : Thread.getAllStackTraces().keySet()) {
if (t.getName().equals(THREAD_NAME)) {
t.interrupt();
}
}
}
@Override
public void run() {
createServer();
}
@Override
public void interrupt() {
try {
System.out.println("INTERRUPT");
this.server.stop();
} finally {
super.interrupt();
}
}
private void createServer() {
this.server = Undertow.builder()
.addHttpListener(PORT, "localhost")
.setHandler(exchange -> {
exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
exchange.getResponseSender().send("Hello World!");
})
.build();
this.server.start();
}
}
Desired behaviour
Whenever listen() is called, it will remove any previously existing http server and create a new one, without relying on storing the server on a static variable.
You could try com.sun.net.httpserver.HttpServer. Use http://localhost:8765/stop to stop the server and http://localhost:8765/test for a test request:
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
public class TestHttpServer {
public static void main(String[] args) throws IOException {
final HttpServer server = HttpServer.create();
server.bind(new InetSocketAddress(8765), 0);
server.createContext("/test", httpExchange -> {
String response = "<html>TEST!!!</html>";
httpExchange.sendResponseHeaders(200, response.length());
OutputStream os = httpExchange.getResponseBody();
os.write(response.getBytes());
os.close();
});
server.createContext("/stop", httpExchange -> server.stop(1));
server.start();
}
}
I have set up an HttpsServer in Java. All of my communication works perfectly. I set up multiple contexts, load a self-signed certificate, and even start up based on an external configuration file.
My problem now is getting multiple clients to be able to hit my secure server. To do so, I would like to somehow multi-thread the requests that come in from the HttpsServer, but I cannot figure out how to do so. Below is my basic HTTPS configuration.
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 0);
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(secureConnection.getKeyManager().getKeyManagers(), secureConnection.getTrustManager().getTrustManagers(), null);
server.setHttpsConfigurator(new SecureServerConfiguration(sslContext));
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
Where secureConnection is a custom class containing server setup and certificate information.
I attempted to set the executor to Executors.newCachedThreadPool() and a couple of other ones. However, they all produced the same result. Each managed the threads differently, but the first request had to finish before the second could be processed.
I also tried writing my own Executor:
public class AsyncExecutor extends ThreadPoolExecutor implements Executor
{
public static Executor create()
{
return new AsyncExecutor();
}
public AsyncExecutor()
{
super(5, 10, 10000, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(12));
}
@Override
public void execute(Runnable process)
{
System.out.println("New Process");
Thread newProcess = new Thread(process);
newProcess.setDaemon(false);
newProcess.start();
System.out.println("Thread created");
}
}
Unfortunately, with the same result as the other Executors.
To test, I am using Postman to hit the /test endpoint, which simulates a long-running task by doing a Thread.sleep(10000). While that is running, I use my Chrome browser to hit the root endpoint. The root page does not load until the 10-second sleep is over.
Any thoughts on how to handle multiple concurrent requests to the HTTPS server?
For ease of testing, I replicated my scenario using the standard HttpServer and condensed everything into a single java program.
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
public class Example
{
private final static int PORT = 80;
private final static int BACKLOG = 10;
/**
* To test hit:
* <p><b>http://localhost/test</b></p>
* <p>This will hit the endpoint with the thread sleep<br>
* Then hit:</p>
* <p><b>http://localhost</b></p>
* <p>I would expect this to come back right away. However, it does not come back until the
* first request finishes. This can be tested with only a basic browser.</p>
* @param args
* @throws Exception
*/
public static void main(String[] args) throws Exception
{
new Example().start();
}
private void start() throws Exception
{
HttpServer server = HttpServer.create(new InetSocketAddress(PORT), BACKLOG);
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
System.out.println("Server Started on " + PORT);
}
class RootHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
String body = "<html>Hello World</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
class TestHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
try
{
Thread.sleep(10000);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
String body = "<html>Test Handled</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
}
TL;DR: It's OK, just use two different browsers, or a specialized tool, to test it.
Your original implementation is OK and works as expected; no custom Executor is needed. For each request it executes a method of a "shared" handler class instance. It always picks a free thread from the pool, so each method call is executed in a different thread.
The problem seems to be that when you use multiple windows of the same browser to test this behavior, the requests for some reason get executed serially (only one at a time). Tested with the latest Firefox, Chrome, Edge and Postman: Edge and Postman work as expected, and the anonymous modes of Firefox and Chrome also help.
I opened the same local URL at the same time in two Chrome windows. The first window loaded the page after 5 s; I used Thread.sleep(5000), so that's OK. The second window loaded the response in 8.71 s, so there is a 3.71 s delay of unknown origin.
My guess? Probably some browser internal optimization or failsafe mechanism.
Try specifying a non-zero maximum backlog (the second argument to create()):
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 10);
I did some experiments and what works for me is:
public void handle(HttpExchange exchange) throws IOException {
executor.submit(new SomeOtherHandler(exchange)); // hand the exchange to the pool
}
public class SomeOtherHandler implements Runnable {
private final HttpExchange exchange;
public SomeOtherHandler(HttpExchange exchange) { this.exchange = exchange; }
public void run() { /* write the response to this.exchange here */ }
}
where the executor is the one you created as the thread pool.
We just finished building a server to store data to disk and fronted it with Netty. During load testing we saw Netty scale to about 8,000 messages per second. Given our systems, this looked really low. As a benchmark, we wrote a Tomcat front-end and ran the same load tests. With these tests we got roughly 25,000 messages per second.
Here are the specs for our load testing machine:
Macbook Pro Quad core
16GB of RAM
Java 1.6
Here is the load test setup for Netty:
10 threads
100,000 messages per thread
Netty server code (pretty standard) - our Netty pipeline on the server is two handlers: a FrameDecoder and a SimpleChannelHandler that handles the request and response.
Client side JIO using Commons Pool to pool and reuse connections (the pool was sized the same as the # of threads)
Here is the load test setup for Tomcat:
10 threads
100,000 messages per thread
Tomcat 7.0.16 with default configuration using a Servlet to call the server code
Client side using URLConnection without any pooling
My main question is why there is such a huge difference in performance. Is there something obvious with respect to Netty that can get it to run faster than Tomcat?
Edit: Here is the main Netty server code:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);
server.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() {
RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
return Channels.pipeline(decoder, handler);
}
});
server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);
Channel channel = server.bind(new InetSocketAddress(port));
allChannels.add(channel);
Our handlers look like this:
public class RequestDecoder extends FrameDecoder {
@Override
protected ChannelBuffer decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
if (buffer.readableBytes() < 4) {
return null;
}
buffer.markReaderIndex();
int length = buffer.readInt();
if (buffer.readableBytes() < length) {
buffer.resetReaderIndex();
return null;
}
return buffer;
}
}
public class ContentStoreChannelHandler extends SimpleChannelHandler {
private final RequestHandler handler;
@Inject
public ContentStoreChannelHandler(RequestHandler handler) {
this.handler = handler;
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
ChannelBuffer in = (ChannelBuffer) e.getMessage();
in.readerIndex(4);
ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
out.writerIndex(8); // Skip the length and status code
boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
if (success) {
out.setInt(0, out.writerIndex() - 8); // length
out.setInt(4, 0); // Status
}
Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
Throwable throwable = e.getCause();
ChannelBuffer out = ChannelBuffers.dynamicBuffer(8);
out.writeInt(0); // Length
out.writeInt(Errors.generalException.getCode()); // status
Channels.write(ctx, e.getFuture(), out);
}
@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
NettyContentStoreServer.allChannels.add(e.getChannel());
}
}
UPDATE:
I've managed to get my Netty solution to within 4,000/second of Tomcat. A few weeks back I was testing a client-side PING in my connection pool as a safeguard against idle sockets, but I forgot to remove that code before I started load testing. This code effectively PINGed the server every time a Socket was checked out from the pool (using Commons Pool). I commented that code out and I'm now getting 21,000/second with Netty and 25,000/second with Tomcat.
Although this is great news on the Netty side, I'm still getting 4,000/second less with Netty than with Tomcat. I can post my client side (which I thought I had ruled out, but apparently not) if anyone is interested in seeing it.
The messageReceived method is executed by a worker thread that is possibly being blocked by RequestHandler#handle, which may be busy doing I/O work.
You could try adding an OrderedMemoryAwareThreadPoolExecutor (recommended) into the channel pipeline for executing the handlers or, alternatively, try dispatching your handler work to a new ThreadPoolExecutor and passing a reference to the socket channel for later writing the response back to the client (see the pipeline sketch after the code below). Ex.:
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
executor.submit(new Runnable() {
@Override
public void run() {
processHandlerAndRespond(e);
}
});
}
private void processHandlerAndRespond(MessageEvent e) {
ChannelBuffer in = (ChannelBuffer) e.getMessage();
in.readerIndex(4);
ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
out.writerIndex(8); // Skip the length and status code
boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
if (success) {
out.setInt(0, out.writerIndex() - 8); // length
out.setInt(4, 0); // Status
}
Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
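For the first (recommended) option, the pipeline change is small. A sketch against the Netty 3 API, with the pool size and memory limits picked arbitrarily:
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

// built once and shared across all pipelines
final ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

// in getPipeline(): handlers downstream of executionHandler now run on the
// executor's threads, keeping the I/O worker threads free
return Channels.pipeline(decoder, executionHandler, handler);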
I have a home grown web server in my app. This web server spawns a new thread for every request that comes into the socket to be accepted. I want the web server to wait until a specific point is hit in the thread it just created.
I have been through many posts on this site and examples on the web, but can't get the web server to proceed after I tell the thread to wait. A basic code example would be great.
Is the synchronized keyword the correct way to go about this? If so, how can this be achieved? Code examples from my app are below:
Web Server
while (true) {
//block here until a connection request is made
socket = server_socket.accept();
try {
//create a new HTTPRequest object for every file request
HttpRequest request = new HttpRequest(socket, this);
//create a new thread for each request
Thread thread = new Thread(request);
//run the thread and have it return after complete
thread.run();
///////////////////////////////
wait here until notified to proceed
///////////////////////////////
} catch (Exception e) {
e.printStackTrace(logFile);
}
}
Thread code
public void run() {
//code here
//notify web server to continue here
}
Update - The final code is below. HttpRequest just calls resumeListener.resume() whenever I send a response header (of course also adding the interface as a separate class and the addResumeListener(ResumeListener rl) method in HttpRequest):
Web Server portion
// server infinite loop
while (true) {
//block here until a connection request is made
socket = server_socket.accept();
try {
final Object locker = new Object();
//create a new HTTPRequest object for every file request
HttpRequest request = new HttpRequest(socket, this);
request.addResumeListener(new ResumeListener() {
public void resume() {
//get control of the lock and release the server
synchronized(locker) {
locker.notify();
}
}
});
synchronized(locker) {
//create a new thread for each request
Thread thread = new Thread(request);
//run the thread and have it return after complete
thread.start();
//tell this thread to wait until HttpRequest releases
//the server
locker.wait();
}
} catch (Exception e) {
e.printStackTrace(Session.logFile);
}
}
You can use a java.util.concurrent.CountDownLatch with a count of 1 for this. Arrange for an instance to be created and shared by the parent and child thread (for example, create it in HttpRequest's constructor and make it retrievable through a member function). The server then calls await() on it, and the thread hits countDown() when it's ready to release its parent.
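A minimal sketch against the code in the question (the extra constructor parameter is an assumption, not part of the original HttpRequest):
import java.util.concurrent.CountDownLatch;

// in the accept loop:
final CountDownLatch latch = new CountDownLatch(1);
HttpRequest request = new HttpRequest(socket, this, latch); // ctor extended to take the latch
new Thread(request).start();
latch.await(); // parks the accept loop; declares InterruptedException

// in HttpRequest.run(), at the point where the server may resume accepting:
latch.countDown();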
You probably need to use a Java Condition. From the docs:
Conditions (also known as condition queues or condition variables) provide a means for one thread to suspend execution (to "wait") until notified by another thread that some state condition may now be true.
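A sketch of that approach applied to this handshake; the guard flag matters because await() is allowed to wake spuriously:
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class ReleaseGate {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition released = lock.newCondition();
    private boolean done = false;

    // called by the server thread after starting the request thread
    void awaitRelease() throws InterruptedException {
        lock.lock();
        try {
            while (!done)          // guard against spurious wakeups
                released.await();
        } finally {
            lock.unlock();
        }
    }

    // called by the request thread at its release point
    void release() {
        lock.lock();
        try {
            done = true;
            released.signal();
        } finally {
            lock.unlock();
        }
    }
}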
First of all, I echo the sentiment of others that re-inventing the wheel here will most likely lead to a variety of issues for you. However, if you want to go down this road anyway what you are trying to do is not difficult. Have you experimented with Jetty?
Maybe something like this:
public class MyWebServer {
public void foo() throws IOException {
ServerSocket serverSocket = new ServerSocket(8080); // port is illustrative
while (true) {
//block here until a connection request is made
final Socket socket = serverSocket.accept();
try {
final Object locker = new Object();
//create a new MyRequest object for every connection
MyRequest request = new MyRequest(socket);
request.addResumeListener(new ResumeListener() {
public void resume() {
// notify() requires holding the monitor
synchronized (locker) {
locker.notify();
}
}
});
synchronized(locker){
//create a new thread for each request
Thread thread = new Thread(request);
//start() the thread - not run()
thread.start();
//this thread will block until the MyRequest run method calls resume
locker.wait();
}
} catch (Exception e) {
}
}
}
}
public interface ResumeListener {
public void resume();
}
public class MyRequest implements Runnable{
private ResumeListener resumeListener;
public MyRequest(Socket socket) {
}
public void run() {
// do something
resumeListener.resume(); //notify server to continue accepting next request
}
public void addResumeListener(ResumeListener rl) {
this.resumeListener = rl;
}
}
Run under a debugger and set a breakpoint?
If unfeasible, then read a line from System.in?