Java: identify the actual HTTP connection

I'm developing an HTTP server using the HttpServer class. The code looks like the following:
public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8989), 0);
    server.createContext("/", new MyHandler());
    server.setExecutor(null); // creates a default executor
    server.start();
}

static class MyHandler implements HttpHandler {
    public void handle(HttpExchange httpExchange) throws IOException {
        /*
        Some code here
        */
    }
}
What I want is to find something (an id variable or an object) that identifies the actual connection inside the handler function.
If I set a breakpoint in the handler, debug the server, and then run a client, I can see the contents of httpExchange:
I think the connection attribute is a good choice, but I can't find how to get it (or its id) from httpExchange.
Is there any suggestion?
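The connection field of HttpExchange is private with no public accessor, so one workaround (a sketch, not an official API for the connection object) is to build a key from the exchange's remote address, which HttpExchange.getRemoteAddress() does expose. Note this identifies the client endpoint of the TCP connection, not the underlying socket object itself; the class and method names below are illustrative:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class ConnectionKeyHandler implements HttpHandler {

    // Builds a "host:port" key for the client side of the TCP connection.
    // Requests arriving over the same keep-alive connection share this key.
    static String connectionKey(InetSocketAddress remote) {
        return remote.getAddress().getHostAddress() + ":" + remote.getPort();
    }

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        String key = connectionKey(exchange.getRemoteAddress());
        byte[] body = ("connection: " + key).getBytes();
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(body);
        }
    }
}
```

Because client ports are reused after a connection closes, such a key is only unique among currently open connections, not over the server's lifetime.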

Related

Vert.x - How to get POST Value passed between Verticles?

This is the class that runs the verticles:
public class RequestResponseExample {
    public static void main(String[] args) throws InterruptedException {
        Logger LOG = LoggerFactory.getLogger(RequestResponseExample.class);
        final Vertx vertx = Vertx.vertx();
        final Handler<AsyncResult<String>> RequestHandler = dar -> vertx.deployVerticle(new ResponseVerticle());
        vertx.deployVerticle(new ResponseVerticle(), RequestHandler);
    }
}
This is the RequestVerticle class:
public class RequestVerticle extends AbstractVerticle {
    public static final Logger LOG = LoggerFactory.getLogger(RequestVerticle.class);
    static final String ADDRESS = "my.request.address";

    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.get("/Test").handler(rc -> rc.response().sendFile("index.html"));
        vertx.createHttpServer()
             .requestHandler(router)
             .listen(8080);
    }
}
This is the ResponseVerticle class. Here I'm having difficulty getting the value submitted from the HTML file:
public class ResponseVerticle extends AbstractVerticle {
    public static final Logger LOG = LoggerFactory.getLogger(RequestVerticle.class);

    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);
        // How to handle the POST value ?
        router.post("/Test/Result").handler(rc -> rc.end("The Post Value"));
        vertx.createHttpServer()
             .requestHandler(router)
             .listen(8080);
    }
}
When the user invokes POST /Test/Result and sends a POST value, you receive it in your ResponseVerticle class (third snippet). If you want to share that value with other verticles, you either store it somewhere from inside the handler method so other verticles can access it, or forward it immediately to another verticle via the event bus.
One possible solution is to create a third verticle (e.g. a StorageVerticle) with get and set methods. That way, ResponseVerticle invokes set to store the value it received, and RequestVerticle invokes get to fetch the value the user sent with the POST request.
The other solution, direct communication between verticles, uses event bus message exchange: one verticle publishes or sends a message, and any other verticle can register a consumer to receive it. You can find more on this here: https://vertx.io/docs/vertx-core/java/#_the_event_bus_api.
It is hard to say which approach is better; it depends on the case, and I have limited information here about the scope of the project.
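Since Vert.x isn't on the classpath here, the following is a plain-Java stand-in that sketches the publish/consume pattern the event bus provides; in real Vert.x code you would use vertx.eventBus().consumer(...) and vertx.eventBus().send(...) with the same address string, and the class name below is hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal stand-in for the Vert.x event bus: consumers register under an
// address string, and send() delivers a message to every consumer
// registered on that address.
public class TinyEventBus {
    private final Map<String, List<Consumer<String>>> consumers = new ConcurrentHashMap<>();

    public void consumer(String address, Consumer<String> handler) {
        consumers.computeIfAbsent(address, a -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void send(String address, String message) {
        consumers.getOrDefault(address, List.of()).forEach(h -> h.accept(message));
    }
}
```

In the question's code, ResponseVerticle's POST handler would send the posted value to an address such as "my.request.address", and any verticle interested in it would register a consumer on that same address.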

channelReadComplete called before channelRead

Does anyone know why channelReadComplete gets called before channelRead in some situations? I have a server that is pretty dumb right now, which another Netty server connects to. The first initializer does this:
final EventExecutorGroup taskGroup = new DefaultEventExecutorGroup(50);
ChannelPipeline pipeline = ctx.pipeline();
pipeline.addLast(taskGroup, "ServerHandler", new ServerHandler());
And then in ServerHandler
public class ServerHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("channelRead");
        // ctx.write(msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        System.out.println("channelReadComplete");
        // ctx.flush();
    }
}
I consistently get channelReadComplete before channelRead when the first Netty server connects to the second one. Is there a reason for this?

Netty - fire ReadTimeoutHandler if multiple reads in channel

My I/O flow is the following:
the client sends data #1 to the channel
the server (handler) fetches data from the database according to the client's data and sends it to the client
the client sends data #2 to the channel
the server (handler) fetches data from the database again, sends it back to the client, and closes the channel
If the first read on the channel takes too long, ReadTimeoutHandler fires an exception as expected. But if the first read is fast enough and the second read takes too long, no TimeoutException is thrown, and the handler waits five minutes before closing the channel. It seems that ReadTimeoutHandler only works for the first read on the channel. Is it even possible to get ReadTimeoutHandler to work with multiple reads on the same channel?
Netty version used: 4.0.12.
public class MyServer {
    private static final class MyInitializer extends ChannelInitializer<SocketChannel> {
        ...
        @Override
        public void initChannel(SocketChannel channel) throws Exception {
            channel.pipeline().addLast(
                new ReadTimeoutHandler(5, TimeUnit.SECONDS),
                new MyHandler(server, serverConnection));
        }
        ...
    }
}
public class MyHandler extends SimpleChannelInboundHandler<String> {
    private static final Logger LOG = LoggerFactory.getLogger(MyHandler.class);

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) throws Exception {
        Message message = database.getMessage(msg);
        ChannelFuture operation = ctx.writeAndFlush(message);
        if (message.isEnd()) operation.addListener(new CloseConverstationListener(ctx));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        if (cause instanceof ReadTimeoutException) {
            LOG.error("ReadTimeoutException");
        }
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        super.channelInactive(ctx);
    }

    private class CloseConverstationListener implements GenericFutureListener<ChannelFuture> {
        private final ChannelHandlerContext ctx;

        private CloseConverstationListener(ChannelHandlerContext ctx) {
            this.ctx = ctx;
        }

        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            future.channel().close().sync();
        }
    }
}
The ReadTimeoutHandler behavior is: if no read happens on the channel for the specified duration, it fires an exception and closes the channel. It is not for a delay in responding to or processing a read.
At the beginning of a channel read, a flag is set to true, and it is set back to false when the read completes. A scheduled task checks whether the channel is open and has not read anything for the specified duration; if so, it fires the exception and closes the connection.
"If first read in channel takes too long, ReadTimeoutHandler fires exception as expected."
The above does not sound correct to me.
If you want to time out based on a delay in writing the response, you might consider WriteTimeoutHandler instead.

Handle variable path components in Java HttpServer

I'm new to Java. I understood the example on this site and applied it.
What I'm trying to understand is how I can define a /:module/:action kind of route, where :module or :action can be anything, instead of registering a static URL like /test.
I read the documentation, but it seems that createContext accepts a static string rather than a regex.
public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
    server.createContext("/test", new MyHandler());
    server.setExecutor(null); // creates a default executor
    server.start();
}
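createContext does indeed match on literal path prefixes, so a common workaround (sketched here; the class and method names are illustrative) is to register one handler at the root path "/" and split the request URI into segments yourself:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.Arrays;

public class RoutingHandler implements HttpHandler {

    // Splits "/module/action" into {"module", "action"}; empty segments are dropped.
    static String[] segments(String path) {
        return Arrays.stream(path.split("/"))
                .filter(s -> !s.isEmpty())
                .toArray(String[]::new);
    }

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        String[] parts = segments(exchange.getRequestURI().getPath());
        String reply = parts.length == 2
                ? "module=" + parts[0] + ", action=" + parts[1]
                : "expected /:module/:action";
        byte[] body = reply.getBytes();
        exchange.sendResponseHeaders(parts.length == 2 ? 200 : 404, body.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(body);
        }
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
        server.createContext("/", new RoutingHandler()); // root context catches every path
        server.start();
    }
}
```

Because contexts match by longest prefix, you can still register more specific static contexts alongside the root one; the root handler only receives paths no other context claims.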

Can I make a Java HttpServer threaded/process requests in parallel?

I have built a simple HTTP server following tutorials I found online, using Sun's lightweight HttpServer.
Basically, the main function looks like this:
public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
    // Create the context for the server.
    server.createContext("/", new BaseHandler());
    server.setExecutor(null); // creates a default executor
    server.start();
}
And I have implemented the HttpHandler interface's method in BaseHandler to process the HTTP request and return a response.
static class BaseHandler implements HttpHandler {
    // Handler method
    public void handle(HttpExchange t) throws IOException {
        // Implementation of HTTP request processing:
        // read the request, get the parameters and print them
        // in the console, then build a response and send it back.
    }
}
I have also created a client that sends multiple requests via threads. Each thread sends the following request to the server:
http://localhost:8000/[context]?int="+threadID
On each client run, the requests seem to arrive at the server in a different order, but they are served serially.
What I wish to accomplish is for the requests to be processed in parallel, if that is possible.
Is it possible, for example, to run each handler in a separate thread, and if so, is that a good thing to do?
Or should I just drop Sun's lightweight server altogether and focus on building something from scratch?
Thanks for any help.
As you can see in ServerImpl, the default executor just runs the task in the calling thread:
private static class DefaultExecutor implements Executor {
    public void execute(Runnable task) {
        task.run();
    }
}
You must provide a real executor for your HttpServer, like this:
server.setExecutor(java.util.concurrent.Executors.newCachedThreadPool());
and your server will handle requests in parallel.
Careful: this is an unbounded executor; see Executors.newFixedThreadPool to limit the number of threads.
You used server.setExecutor(null), which runs every handler in the server's own dispatching thread, so requests are processed one at a time.
You only need to change the code as follows:
public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
    // Create the context for the server.
    server.createContext("/", new BaseHandler());
    server.setExecutor(Executors.newCachedThreadPool());
    server.start();
}
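To cap concurrency instead of using an unbounded cached pool, a fixed-size pool plugs in the same way. This is a sketch; the class name and helper method are hypothetical, but setExecutor and newFixedThreadPool are the real JDK APIs:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelServer {

    // Creates a server whose handlers run on a bounded worker pool,
    // so at most `threads` requests are processed concurrently.
    static HttpServer create(int port, int threads) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        server.setExecutor(pool);
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = create(8000, 8);
        server.createContext("/", exchange -> {
            byte[] body = ("handled by " + Thread.currentThread().getName()).getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (var os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

One caveat: the executor only parallelizes handler invocations; shared state touched inside the handler must still be made thread-safe by you.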
