I'm new to Netty and am looking at using it to make a simple HTTP proxy server that receives requests from a client, forwards the requests to another server, and then copies that server's response back into the response for the original request. One extra requirement is that I be able to support a timeout, so that if the proxied server takes too long to respond, the proxy responds by itself and closes the connection to the proxied server.
I've already implemented such an application using Jetty, but with Jetty I need too many threads to keep inbound requests from getting blocked (this is a lightweight app that uses very little memory or CPU, but the latency of the proxied server is high enough that bursts in traffic either cause queueing in the proxy server or require too many threads).
According to my understanding, I can use Netty to build a pipeline in which each stage performs a small amount of computation, then releases its thread and waits until data is ready for the next stage in the pipeline to be executed.
My question is, is there a simple example of such an application? What I have so far is a simple modification of the server code from the basic Netty tutorial, but it lacks all support for a client. I saw the Netty client tutorial, but am not sure how to mix code from the two to create a simple proxy app.
public static void main(String[] args) throws Exception {
    ChannelFactory factory =
        new NioServerSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool());
    ServerBootstrap bootstrap = new ServerBootstrap(factory);
    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        public ChannelPipeline getPipeline() {
            return Channels.pipeline(
                new HttpRequestDecoder(),
                new HttpResponseEncoder(),
                /*
                 * Is there something I can put here to make a
                 * request to another server asynchronously and
                 * copy the result to the response inside
                 * MySimpleChannelHandler?
                 */
                new MySimpleChannelHandler()
            );
        }
    });
    bootstrap.setOption("child.tcpNoDelay", true);
    bootstrap.setOption("child.keepAlive", true);
    bootstrap.bind(new InetSocketAddress(8080));
}

private static class MySimpleChannelHandler extends SimpleChannelHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        HttpRequest request = (HttpRequest) e.getMessage();
        HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        response.setContent(request.getContent());

        Channel ch = e.getChannel();
        ChannelFuture f = ch.write(response);
        f.addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) {
                Channel ch = future.getChannel();
                ch.close();
            }
        });
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        Channel ch = e.getChannel();
        ch.close();
    }
}
You would have to look at LittleProxy to see how they did it, as it is written on top of Netty.
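For a rough idea of the shape such a handler can take, here is a minimal, untested sketch using the same Netty 3.x APIs as the question. The backend address, the 10-second timeout, and the per-request ClientBootstrap (in practice you would share the client ChannelFactory) are illustrative assumptions, not LittleProxy's actual implementation:

private static final Timer TIMER = new HashedWheelTimer();

private static class ProxyHandler extends SimpleChannelHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        final HttpRequest request = (HttpRequest) e.getMessage();
        final Channel inbound = e.getChannel();

        ClientBootstrap cb = new ClientBootstrap(
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool()));
        cb.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(
                        new ReadTimeoutHandler(TIMER, 10),  // timeout in seconds
                        new HttpClientCodec(),
                        new SimpleChannelHandler() {
                            @Override
                            public void messageReceived(ChannelHandlerContext ctx2, MessageEvent e2) {
                                // Copy the backend response to the original client.
                                inbound.write(e2.getMessage())
                                       .addListener(ChannelFutureListener.CLOSE);
                                ctx2.getChannel().close();
                            }
                            @Override
                            public void exceptionCaught(ChannelHandlerContext ctx2, ExceptionEvent e2) {
                                // On ReadTimeoutException (or any error), answer for the backend.
                                HttpResponse gatewayTimeout = new DefaultHttpResponse(
                                        HttpVersion.HTTP_1_1, HttpResponseStatus.GATEWAY_TIMEOUT);
                                inbound.write(gatewayTimeout)
                                       .addListener(ChannelFutureListener.CLOSE);
                                ctx2.getChannel().close();
                            }
                        });
            }
        });
        cb.connect(new InetSocketAddress("proxied-host.example", 80))
          .addListener(new ChannelFutureListener() {
              public void operationComplete(ChannelFuture f) {
                  if (f.isSuccess()) {
                      f.getChannel().write(request);  // forward the original request
                  } else {
                      inbound.close();
                  }
              }
          });
    }
}

No thread blocks waiting for the backend: the response (or the timeout) arrives later as an event on the outbound channel's pipeline, which is exactly the stage-by-stage model described in the question.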
1. Background
I found that most of the HTTP client examples using Netty follow this code structure:
public void run() throws Exception {
    EventLoopGroup group = new NioEventLoopGroup();
    try {
        Bootstrap b = new Bootstrap();
        b.group(group)
         .channel(NioSocketChannel.class)
         .handler(new HttpSnoopClientInitializer(sslCtx));

        // Make the connection attempt.
        Channel ch = b.connect(host, port).sync().channel();

        // send something
        ch.writeAndFlush(XXXX);

        // Wait for the server to close the connection.
        ch.closeFuture().sync();
    } finally {
        // Shut down executor threads to exit.
        group.shutdownGracefully();
    }
}
So if I understand it correctly, each time I send a request I need to create a client and call client.run() on it. In other words, it seems I can only make one "fixed" request at a time.
2. My need
I need a long-standing client that can send multiple requests. More specifically, there will be another thread sending instructions to the client, and each time the client gets an instruction, it will send a request. Something like this:
Client client = new Client();
client.start();
client.sendRequest(request1);
client.sendRequest(request2);
...
client.shutDownGracefully(); // not sure if this shutdown is necessary or not
// because I need a long-standing client to wait for instructions to send requests
// in this sense it's kinda like a server.
3. What I've tried
I have tried something like this, from this link:
public MyClient(String host, int port) {
    System.out.println("Initializing client and connecting to server..");
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    Bootstrap b = new Bootstrap();
    b.group(workerGroup)
     .channel(NioSocketChannel.class)
     .option(ChannelOption.SO_KEEPALIVE, true)
     .handler(new ChannelInitializer<SocketChannel>() {
         @Override
         protected void initChannel(SocketChannel channel) throws Exception {
             channel.pipeline().addLast(new StringDecoder());
             channel.pipeline().addLast(new StringEncoder());
             channel.pipeline().addLast(new MyAppClientHandler());
         }
     });
    // channelFuture is a field of MyClient
    channelFuture = b.connect(host, port);
}

public ResponseFuture send(final String msg) {
    final ResponseFuture responseFuture = new ResponseFuture();
    channelFuture.addListener(new GenericFutureListener<ChannelFuture>() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            channelFuture.channel().pipeline().get(MyAppClientHandler.class).setResponseFuture(responseFuture);
            channelFuture.channel().writeAndFlush(msg);
        }
    });
    return responseFuture;
}

public void close() {
    channelFuture.channel().close();
}
The problem is that this code never calls workerGroup.shutdownGracefully(), so I'm guessing it can cause problems. Is there a way to split "start client", "send request", and "shutdown client" into separate methods? Thanks in advance!
The easiest solution is to use a ChannelPool, provided by Netty since 4.0.28.Final: https://netty.io/news/2015/05/07/4-0-28-Final.html
It provides graceful shutdown, maximum connections, and more out of the box.
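As a minimal, untested sketch of how the three phases can then be split (the address, pool size, and handlers are assumptions based on the question's code, not a fixed recipe):

// Start client: one Bootstrap and one pool, created once.
EventLoopGroup group = new NioEventLoopGroup();
Bootstrap b = new Bootstrap()
        .group(group)
        .channel(NioSocketChannel.class)
        .remoteAddress("localhost", 8080);          // assumed target

final ChannelPool pool = new FixedChannelPool(b, new AbstractChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) {
        // Called once per new connection: install the pipeline here.
        ch.pipeline().addLast(new StringDecoder(), new StringEncoder(), new MyAppClientHandler());
    }
}, 10);                                             // at most 10 pooled connections

// Send request: acquire a channel, write, then release it back to the pool.
pool.acquire().addListener(new FutureListener<Channel>() {
    @Override
    public void operationComplete(Future<Channel> future) {
        if (future.isSuccess()) {
            Channel ch = future.getNow();
            ch.writeAndFlush("some request");
            pool.release(ch);
        }
    }
});

// Shutdown client: close the pool first, then the event loop group.
// pool.close();
// group.shutdownGracefully();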
We currently have some trouble on a production server: it consumes way too much memory. One of the leaks could come from the Jersey client. I found the following two other questions and a how-to:
How to correctly share JAX-RS 2.0 client
Closing JAX RS Client/Response
https://blogs.oracle.com/japod/entry/how_to_use_jersey_client
What I take from them is that I should reuse the Client, and potentially also the WebTargets?
Also, closing responses is advised, but how can I do that with .request()?
Code example; this gets called about 1000 times per hour with different paths:
public byte[] getDocument(String path) {
    Client client = ClientBuilder.newClient();
    WebTarget target = client.target(config.getPublishHost() + path);
    try {
        byte[] bytes = target.request().get(byte[].class);
        LOGGER.debug("Document size in bytes: " + bytes.length);
        return bytes;
    } catch (ProcessingException e) {
        LOGGER.error(Constants.PROCESSING_ERROR, e);
        throw new FailureException(Constants.PROCESSING_ERROR, e);
    } catch (WebApplicationException e) {
        LOGGER.error(Constants.RESPONSE_ERROR, e);
        throw new FailureException(Constants.RESPONSE_ERROR, e);
    } finally {
        client.close();
    }
}
So my question is how to properly use the API to prevent leaks for the above example?
Client instances should be reused
Client instances are heavy-weight objects that manage the underlying client-side communication infrastructure. Hence initialization as well as disposal of a Client instance may be a rather expensive operation.
The documentation advises creating only a small number of Client instances and reusing them when possible. It also states that Client instances must be properly closed before being disposed to avoid leaking resources.
WebTarget instances could be reused
You can reuse WebTarget instances if you perform multiple requests to the same path, and reusing them is recommended when they carry some configuration, as in the sketch below.
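A small illustration (the base URL and SomeLoggingFilter are hypothetical; the point is that a derived target inherits the parent's configuration):

Client client = ClientBuilder.newClient();

// Configure the target once...
WebTarget documents = client.target("http://publish-host/documents")
        .register(SomeLoggingFilter.class);   // hypothetical per-target configuration

// ...then reuse it for several requests; path() derives new targets from it.
byte[] a = documents.path("a.pdf").request().get(byte[].class);
byte[] b = documents.path("b.pdf").request().get(byte[].class);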
Response instances should be closed if you don't read the entity
Response instances that contain an un-consumed entity input stream should be closed. This is typical for scenarios where only the response headers and the status code are processed, ignoring the response entity. See this answer for more details on closing Response instances.
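For example, a status-only check might look like this (a sketch; the URL is made up):

// Only the status code is needed, so the entity stream is never consumed
// and the Response must be closed explicitly to release the connection.
Response response = client.target("http://publish-host/health").request().get();
try {
    int status = response.getStatus();
    LOGGER.debug("Health check returned " + status);
} finally {
    response.close();
}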
Improving your code
For the situation mentioned in your question, you want to ensure that the Client instance is reused across all getDocument(String) invocations.
For instance, if your application is CDI based, create a Client instance when the bean is constructed and close it before the bean is destroyed. In the example below, the Client instance is stored in a singleton bean:
@Singleton
public class MyBean {

    private Client client;

    @PostConstruct
    public void onCreate() {
        this.client = ClientBuilder.newClient();
    }

    ...

    @PreDestroy
    public void onDestroy() {
        this.client.close();
    }
}
You don't need to (or maybe you can't) reuse the WebTarget instance (the requested path changes for each method invocation). And the Response instance is automatically closed when you read the entity into a byte[].
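Put together, the method from the question could be reworked along these lines (a sketch that assumes it lives inside the singleton bean above, so client is the shared field):

public byte[] getDocument(String path) {
    WebTarget target = client.target(config.getPublishHost() + path);
    try {
        // Reading the entity into byte[] consumes the stream,
        // so the underlying response is closed automatically.
        byte[] bytes = target.request().get(byte[].class);
        LOGGER.debug("Document size in bytes: " + bytes.length);
        return bytes;
    } catch (ProcessingException e) {
        LOGGER.error(Constants.PROCESSING_ERROR, e);
        throw new FailureException(Constants.PROCESSING_ERROR, e);
    } catch (WebApplicationException e) {
        LOGGER.error(Constants.RESPONSE_ERROR, e);
        throw new FailureException(Constants.RESPONSE_ERROR, e);
    }
    // No client.close() here: the shared instance is closed in @PreDestroy.
}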
Using a connection pool
A connection pool can be a good performance improvement.
As mentioned in my older answer, by default, the transport layer in Jersey is provided by HttpURLConnection. This support is implemented in Jersey via HttpUrlConnectorProvider. You can replace the default connector if you want to and use a connection pool for better performance.
Jersey integrates with Apache HTTP Client via the ApacheConnectorProvider. To use it, add the following dependency:
<dependency>
<groupId>org.glassfish.jersey.connectors</groupId>
<artifactId>jersey-apache-connector</artifactId>
<version>2.26</version>
</dependency>
And then create your Client instance as follows:
PoolingHttpClientConnectionManager connectionManager =
new PoolingHttpClientConnectionManager();
connectionManager.setMaxTotal(100);
connectionManager.setDefaultMaxPerRoute(5);
ClientConfig clientConfig = new ClientConfig();
clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, connectionManager);
clientConfig.connectorProvider(new ApacheConnectorProvider());
Client client = ClientBuilder.newClient(clientConfig);
For additional details, refer to Jersey documentation about connectors.
Use the following example from this link to close the Response in the completed method: https://jersey.github.io/documentation/latest/async.html#d0e10209
final Future<Response> responseFuture = target().path("http://example.com/resource/")
        .request().async().get(new InvocationCallback<Response>() {
            @Override
            public void completed(Response response) {
                System.out.println("Response status code "
                        + response.getStatus() + " received.");
                // here you can close the response
                response.close();
            }

            @Override
            public void failed(Throwable throwable) {
                System.out.println("Invocation failed.");
                throwable.printStackTrace();
            }
        });
tip 1 (Response or String):
You can only close the response when it is of type Response, not String.
tip 2 (Auto-closing):
Referring to this question, when you read the entity the response is closed automatically:
String responseAsString = response.readEntity(String.class);
tip 3 (connection pooling):
Referring to this question, you can use connection pools for better performance. Example:
public static JerseyClient getInstance() {
    return InstanceHolder.INSTANCE;
}

private static class InstanceHolder {

    private static final JerseyClient INSTANCE = createClient();

    private static JerseyClient createClient() {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.property(ClientProperties.ASYNC_THREADPOOL_SIZE, 200);
        clientConfig.property(ClientProperties.READ_TIMEOUT, 10000);
        clientConfig.property(ClientProperties.CONNECT_TIMEOUT, 10000);

        PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setMaxTotal(200);
        connectionManager.setDefaultMaxPerRoute(100);
        clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, connectionManager);
        clientConfig.connectorProvider(new ApacheConnectorProvider());

        JerseyClient client = JerseyClientBuilder.createClient(clientConfig);
        //client.register(RequestLogger.requestLoggingFilter);
        return client;
    }
}
ATTENTION! With this solution, if you don't close the response, you cannot send more than 100 requests to the same host (setDefaultMaxPerRoute(100)).
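So a typical call with the pooled client should always consume or close the response, e.g. (a sketch; the URL is made up):

Response response = getInstance()
        .target("http://example.com/api/items")
        .request()
        .get();
try {
    // Reading the entity consumes the stream and closes the response...
    String body = response.readEntity(String.class);
    System.out.println(body);
} finally {
    // ...but close() is idempotent, and this also covers the
    // paths where the entity was never read.
    response.close();
}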
I have an echo server example from the official Netty repository:
Echo Server
How can I add the ability to connect to it and stream to it from a WebSocket?
Here is my ServerHandler code:
public class ServerHandler extends ChannelInboundHandlerAdapter
{
    @Override
    public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
        super.channelRegistered(ctx);
        // !!!!! Think here should be WebSocket Handshake?
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg)
    {
        System.out.println(msg);
        ctx.write(msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx)
    {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
    {
        // Close the connection when an exception is raised.
        cause.printStackTrace();
        ctx.close();
    }
}
Right now the Chrome console says: WebSocket connection to 'ws://127.0.0.1:8080/' failed: Error during WebSocket handshake: Invalid status line
Netty servers do not automatically handle all protocols, so you would need to add support for WebSockets.
I find the best place to start is to examine the relevant examples in Netty's xref page. Scroll down the package list until you get to the io.netty.example packages. In that list you will find a package called io.netty.example.http.websocketx.server. There is a fairly simple and well laid out example on how to implement a websocket server, or just the handler.
WebSocket servers are slightly more complicated than other servers in that they must start life as HTTP servers, because the protocol specifies that a WebSocket connection must be initiated by "upgrading" an HTTP connection; but as I said, the example referenced above makes this fairly clear.
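The core of that example boils down to a pipeline like the following (an untested sketch for Netty 4.x; the "/ws" path and EchoFrameHandler are illustrative, and WebSocketServerProtocolHandler takes care of the HTTP upgrade handshake for you):

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     protected void initChannel(SocketChannel ch) {
         ch.pipeline().addLast(
                 new HttpServerCodec(),                      // speak HTTP first
                 new HttpObjectAggregator(65536),            // aggregate the upgrade request
                 new WebSocketServerProtocolHandler("/ws"),  // performs the handshake
                 new EchoFrameHandler());                    // sees only WebSocket frames
     }
 });

After the handshake, the last handler only ever sees frames, so echoing becomes:

class EchoFrameHandler extends SimpleChannelInboundHandler<TextWebSocketFrame>
{
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, TextWebSocketFrame frame)
    {
        System.out.println(frame.text());
        ctx.writeAndFlush(new TextWebSocketFrame(frame.text()));
    }
}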
So, I found a solution! It is not fully compliant with the WebSocket specification, but who cares, it works as I expected!
public void channelRead(ChannelHandlerContext ctx, Object msg)
{
    DefaultHttpRequest httpRequest = null;
    if (msg instanceof DefaultHttpRequest)
    {
        httpRequest = (DefaultHttpRequest) msg;

        // Handshake
        WebSocketServerHandshakerFactory wsFactory = new WebSocketServerHandshakerFactory("ws://127.0.0.1:8080/", null, false);
        final Channel channel = ctx.channel();
        final WebSocketServerHandshaker handshaker = wsFactory.newHandshaker(httpRequest);
        if (handshaker == null) {
            // No handshaker for the requested WebSocket version;
            // an error response should be sent here.
        } else {
            ChannelFuture handshake = handshaker.handshake(channel, httpRequest);
        }
    }
}
Do not forget to add
p.addLast(new HttpRequestDecoder(4096, 8192, 8192, false));
p.addLast(new HttpResponseEncoder());
to your pipeline.
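Note that this only performs the handshake; once it completes, subsequent messages arrive as WebSocketFrames rather than HTTP objects, so channelRead needs a second branch along these lines (an untested sketch that echoes text frames):

else if (msg instanceof WebSocketFrame)
{
    if (msg instanceof TextWebSocketFrame)
    {
        TextWebSocketFrame frame = (TextWebSocketFrame) msg;
        System.out.println(frame.text());
        ctx.writeAndFlush(new TextWebSocketFrame(frame.text())); // echo back
    }
}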
Is there any mechanism/API by which I can control the TPS (transactions per second) from an HTTP client?
From the HTTP client, I need to control the number of hits to REST services (my HTTP client should hit the server in a controlled manner).
You can close it immediately by adding Netty's IP-filtering handler to the server pipeline as the first handler. It will also stop propagating upstream channel state events for filtered connections.
@ChannelHandler.Sharable
public class FilterIPHandler extends IpFilteringHandlerImpl {

    private final Set<InetSocketAddress> deniedIP;

    public FilterIPHandler(Set<InetSocketAddress> deniedIP) {
        this.deniedIP = deniedIP;
    }

    @Override
    protected boolean accept(ChannelHandlerContext ctx, ChannelEvent e, InetSocketAddress inetSocketAddress) throws Exception {
        return !deniedIP.contains(inetSocketAddress);
    }
}
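Hypothetical wiring (MyBusinessHandler stands in for whatever handlers the server already uses); the filter goes first so denied connections never reach the rest of the pipeline:

bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        ChannelPipeline p = Channels.pipeline();
        p.addLast("ipFilter", new FilterIPHandler(deniedAddresses)); // must be first
        p.addLast("decoder", new HttpRequestDecoder());
        p.addLast("encoder", new HttpResponseEncoder());
        p.addLast("handler", new MyBusinessHandler());
        return p;
    }
});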
I'm trying to write an HTTP client that uses HTTP keep-alive connections. When I connect from the ClientBootstrap I get the channel. Can I reuse it for sending multiple HTTP requests? Are there any examples demonstrating the HTTP keep-alive functionality?
Also, I have another question. Right now my client works without keep-alive connections. I'm calling channel.close() in the messageReceived method of the ClientHandler, but it seems the connections are not getting closed, and after some time the sockets run out and I get a BindException. Any pointers will be really appreciated.
Thanks
As long as the Connection header is not set to CLOSE (and possibly the HttpVersion is 1.1, though I'm uncertain about that) by a line of code similar to this...
request.setHeader(HttpHeaders.Names.CONNECTION, HttpHeaders.Values.CLOSE);
...your channel should remain open for multiple request/response pairs.
Here is some example code that I whipped up today to test it. You can bounce any number of requests off of Google prior to the channel closing:
public class TestHttpClient {

    static class HttpResponseReader extends SimpleChannelUpstreamHandler {
        int remainingRequests = 2;

        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
            HttpResponse response = (HttpResponse) e.getMessage();
            System.out.println("Beginning -------------------");
            System.out.println(new String(response.getContent().slice(0, 50).array()));
            System.out.println("End -------------------\n");

            if(remainingRequests-- > 0)
                sendRequest(ctx.getChannel());
            else
                ctx.getChannel().close();
        }
    }

    public static void main(String[] args) {
        ClientBootstrap bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory());
        bootstrap.setPipeline(Channels.pipeline(
                new HttpClientCodec(),
                new HttpResponseReader()));

        // bootstrap.setOption("child.keepAlive", true); // no apparent effect

        ChannelFuture future = bootstrap.connect(new InetSocketAddress("google.com", 80));
        Channel channel = future.awaitUninterruptibly().getChannel();
        channel.getCloseFuture().addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) throws Exception {
                // this winds up getting called immediately after the receipt of the first message by HttpResponseReader!
                System.out.println("Channel closed");
            }
        });

        sendRequest(channel);

        while(true) {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private static void sendRequest(Channel channel) {
        // Prepare the HTTP request.
        HttpRequest request = new DefaultHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.GET, "http://www.google.com");
        request.setHeader(HttpHeaders.Names.HOST, "google.com");
        request.setHeader(HttpHeaders.Names.ACCEPT_ENCODING, HttpHeaders.Values.GZIP);
        channel.write(request);
    }
}
}