Receiving data causes "too many open files" - Java

In my client I receive a lot of input via ZeroMQ, and it needs to be updated constantly. My server is written in Python, but that should not matter. This is what I do in my MainActivity:
public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        /********************************NETWORK********************************/
        new NetworkCall().execute("");
    }

    private class NetworkCall extends AsyncTask<String, Void, String> {

        @Override
        protected String doInBackground(String... params) {
            while (true) {
                try {
                    ZMQ.Context context = ZMQ.context(1);

                    // Connect to server
                    ZMQ.Socket requester = context.socket(ZMQ.REQ);
                    String address = "tcp://xxx.xx.xx.xx";
                    int port = 5000;
                    requester.connect(address + ":" + port);

                    // Initialize poll set
                    ZMQ.Poller poller = new ZMQ.Poller(1);
                    poller.register(requester, ZMQ.Poller.POLLIN);

                    requester.send("COORDINATES");

                    //while (true) {
                    String data;
                    poller.poll();
                    data = requester.recvStr();
                    System.out.println(data);
                    if (data == null) {
                        try {
                            Thread.sleep(100);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                    requester.close();
                } catch (IllegalStateException ise) {
                    ise.printStackTrace();
                }
            }
        }

        @Override
        protected void onPostExecute(String result) {
        }

        @Override
        protected void onPreExecute() {
        }

        @Override
        protected void onProgressUpdate(Void... values) {
        }
    }
}
After executing this code on my device, I receive about 5-9 input data strings from the server, but then the following exception appears:
E/AndroidRuntime: FATAL EXCEPTION: AsyncTask #2
Process: com.example.viktoria.gazefocus, PID: 31339
java.lang.RuntimeException: An error occurred while executing doInBackground()
at android.os.AsyncTask$3.done(AsyncTask.java:353)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:383)
at java.util.concurrent.FutureTask.setException(FutureTask.java:252)
at java.util.concurrent.FutureTask.run(FutureTask.java:271)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:245)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
at java.lang.Thread.run(Thread.java:764)
Caused by: com.example.viktoria.gazefocus.zmq.ZError$IOException: java.io.IOException: Too many open files
at com.example.viktoria.gazefocus.zmq.Signaler.makeFdPair(Signaler.java:94)
at com.example.viktoria.gazefocus.zmq.Signaler.<init>(Signaler.java:50)
at com.example.viktoria.gazefocus.zmq.Mailbox.<init>(Mailbox.java:51)
at com.example.viktoria.gazefocus.zmq.Ctx.<init>(Ctx.java:128)
at com.example.viktoria.gazefocus.zmq.ZMQ.zmq_ctx_new(ZMQ.java:244)
at com.example.viktoria.gazefocus.zmq.ZMQ.zmqInit(ZMQ.java:277)
at org.zeromq.ZMQ$Context.<init>(ZMQ.java:269)
at org.zeromq.ZMQ.context(ZMQ.java:254)
at com.example.viktoria.gazefocus.MainActivity$NetworkCall.doInBackground(MainActivity.java:73)
at com.example.viktoria.gazefocus.MainActivity$NetworkCall.doInBackground(MainActivity.java:67)
at android.os.AsyncTask$2.call(AsyncTask.java:333)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:245) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636) 
at java.lang.Thread.run(Thread.java:764) 
Caused by: java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.makePipe(Native Method)
at sun.nio.ch.PipeImpl.<init>(PipeImpl.java:42)
at sun.nio.ch.SelectorProviderImpl.openPipe(SelectorProviderImpl.java:50)
at java.nio.channels.Pipe.open(Pipe.java:155)
at com.example.viktoria.gazefocus.zmq.Signaler.makeFdPair(Signaler.java:91)
at com.example.viktoria.gazefocus.zmq.Signaler.<init>(Signaler.java:50) 
at com.example.viktoria.gazefocus.zmq.Mailbox.<init>(Mailbox.java:51) 
at com.example.viktoria.gazefocus.zmq.Ctx.<init>(Ctx.java:128) 
at com.example.viktoria.gazefocus.zmq.ZMQ.zmq_ctx_new(ZMQ.java:244) 
at com.example.viktoria.gazefocus.zmq.ZMQ.zmqInit(ZMQ.java:277) 
at org.zeromq.ZMQ$Context.<init>(ZMQ.java:269) 
at org.zeromq.ZMQ.context(ZMQ.java:254) 
at com.example.viktoria.gazefocus.MainActivity$NetworkCall.doInBackground(MainActivity.java:73) 
at com.example.viktoria.gazefocus.MainActivity$NetworkCall.doInBackground(MainActivity.java:67) 
at android.os.AsyncTask$2.call(AsyncTask.java:333) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:245) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636) 
at java.lang.Thread.run(Thread.java:764) 
Apparently too many files are open. After some research (I'm developing on Ubuntu 16.04) I raised the limit with ulimit -n 10000, but the exception still happens. Sometimes I get more input data before it occurs, sometimes less. Putting something like Executor executor = Executors.newFixedThreadPool(5); into the onCreate() method doesn't change anything either.
How to overcome this issue?
Thanks for reading!

You have a leak because you're not closing / ending / freeing something. I think that the context has to be terminated: context.term() after you close the requester...
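A minimal sketch of that cleanup, assuming the per-iteration structure otherwise stays as in the question (the address placeholder is the question's own):
    ZMQ.Context context = ZMQ.context(1);
    ZMQ.Socket requester = context.socket(ZMQ.REQ);
    try {
        requester.connect("tcp://xxx.xx.xx.xx:5000");
        requester.send("COORDINATES");
        String data = requester.recvStr();
        System.out.println(data);
    } finally {
        requester.close();   // releases the socket's file descriptors
        context.term();      // releases the context's internal pipes (Signaler/Mailbox)
    }
Without the context.term(), every iteration of the loop leaks the descriptors that ZMQ.context(1) opened, which is exactly what the "Too many open files" trace shows.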

Well, in distributed-system design, the setup costs of the signalling / messaging infrastructure are not negligible. Some use-cases are more forgiving, some less.
Always getting a new Context() instance per method-call and throwing it away right after with a clean-up call to its .term() method is for sure better than having a hung app or a frozen device, yet it is far from a fair design with respect to process latencies and an "ecology" of resources.
Better to first set up a semi-persistent infrastructure of resources ( each Context()-instance is typically a very expensive toy to instantiate ( API 4.2+ as of 2018-Q1 ); less so for the Socket()-instances, but similar for the Poller() and all its internal AccessPoint(s) registration-hooks, yet the principle may extend to them too ).
Early re-factoring of the code will help avoid treating expensive resources as "consumable disposables".
The section:
    while (true) {
        try {
            ZMQ.Context context = ZMQ.context(1);
            // Connect to server
            ZMQ.Socket requester = context.socket( ZMQ.REQ );
            String address = "tcp://xxx.xx.xx.xx";
            int port = 5000;
            requester.connect( address + ":" + port );
            ...
        }
        ...
    }
is exactly such a resource-devastating anti-pattern, together with repetitive latencies and even risks of remote hang-ups, remote rejections and similar issues.
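A rough sketch of such a refactoring of the question's doInBackground(), assuming the jeromq API used there, with the Context, Socket and Poller instantiated once outside the receive loop and released once at the end:
    // One Context, one Socket, one Poller for the lifetime of the task:
    ZMQ.Context context = ZMQ.context(1);
    ZMQ.Socket requester = context.socket(ZMQ.REQ);
    requester.connect("tcp://xxx.xx.xx.xx:5000");
    ZMQ.Poller poller = new ZMQ.Poller(1);
    poller.register(requester, ZMQ.Poller.POLLIN);

    try {
        while (!Thread.currentThread().isInterrupted()) {
            requester.send("COORDINATES");   // REQ requires strict send/recv alternation
            poller.poll();                   // block until a reply is readable
            String data = requester.recvStr();
            System.out.println(data);
        }
    } finally {
        requester.close();                   // descriptors are freed exactly once
        context.term();
    }
This way the file-descriptor count stays constant no matter how long the loop runs, and the per-request cost drops to a single send/recv round-trip.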

Related

"INVITE SESSION ALREADY TERMINATED ERROR" while trying to handle incoming call via pjsip(PJSUA2)

I have successfully made an outgoing call via PJSIP. Now I am facing a problem while trying to handle an incoming call.
Thread isanycall = new Thread(new Runnable() {
    @Override
    public void run() {
        while (true) {
            if (Global.isanycall == 1) {
                sipOperationIncoming(username, pwd, ip, number.getText().toString());
                Global.isanycall = 0;
            }
        }
    }
});
isanycall.start();
This code is checking if there is any incoming call.
System.out.println("Incoming call handler");
//sip operation started
registration=SipRegistration.getSipRegistration(uname,pwd,ip);
registration.answerCall(da);
//sip operation ended
This code block is just responsible for calling the answerCall function, which is as follows:
public void answerCall(DialerActivity activity) {
    call = new MyCall(myacc, 1, this.ep, activity);
    CallOpParam prm = new CallOpParam();
    prm.setStatusCode(pjsip_status_code.PJSIP_SC_RINGING);
    try {
        call.answer(prm);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Now the exception I am getting is
java.lang.Exception: Title: pjsua_call_answer2(id, param.p_opt, prm.statusCode, param.p_reason, param.p_msg_data)
10-27 12:11:19.839 10090-10384/com.skyteloutsourcing.callnxt W/System.err: Code: 171140
10-27 12:11:19.839 10090-10384/com.skyteloutsourcing.callnxt W/System.err: Description: INVITE session already terminated (PJSIP_ESESSIONTERMINATED)
What can be the reason?
Solved it. I was answering with a different call ID than the call ID of the incoming call. :)
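A rough sketch of that fix with the stock PJSUA2 Java bindings, assuming a MyCall subclass whose constructor takes the account and the call ID (as in the pjsua2 sample app); the point is that the ID comes from the onIncomingCall callback rather than a hard-coded value:
    public class MyAccount extends Account {
        @Override
        public void onIncomingCall(OnIncomingCallParam prm) {
            // Use the incoming call's own ID, not a hard-coded 1.
            MyCall call = new MyCall(this, prm.getCallId());
            CallOpParam param = new CallOpParam();
            param.setStatusCode(pjsip_status_code.PJSIP_SC_RINGING);
            try {
                call.answer(param);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }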
I also faced this error when I didn't add this check:
if (ci.state == pjsip_inv_state.PJSIP_INV_STATE_DISCONNECTED) {
    currentCall.delete()
    currentCall = null
}

Java/Android: Socket closed when offloading work to a thread pool

I'm experiencing a very puzzling error writing a thread-pooled TCP server on Android. Basically, my code is structured as follows:
Standard server loop (blocking call to socket.accept() in a loop within its own thread), calling a handler upon an incoming connection:
socket = mServerSocket.accept();
myHandler.onIncomingConnection(socket);
The handler offloads all further processing of the incoming connection(s) to a thread pool:
public class X {
    private final ExecutorService receiveThreadPool = Executors.newSingleThreadExecutor();
    [...]
    private final ConnectionHandler mHandler = new MyServer.ServerHandler() {
        @Override
        public void onIncomingConnection(final Socket socket) {
            MLog.vv(TAG, "Socket status: " + (socket.isBound() ? "bound" : "unbound") + ", "
                    + (socket.isConnected() ? "connected" : "unconnected") + ", "
                    + (socket.isClosed() ? "closed" : "not closed") + ", "
                    + (socket.isInputShutdown() ? "input shutdown" : "input not shut down") + ".");
            // Process result
            receiveThreadPool.execute(new Runnable() {
                @Override
                public void run() {
                    MLog.vv(TAG, "Socket status: " +
                            (socket.isBound() ? "bound" : "unbound") + ... ); // same code as above
                    BufferedOutputStream out = null;
                    try {
                        out = new BufferedOutputStream(socket.getOutputStream());
                        out.write(HELLO_MESSAGE);
                        out.flush();
                        [rest omitted...]
                    } catch (IOException e) {
                        [...]
                    } finally {
                        [close resources...]
                    }
                }
            });
        }
    };
}
Note that the socket is declared final in the handler method's signature, making it accessible from within the anonymous inner Runnable class. However, the first write, out.write(HELLO_MESSAGE);, fails due to a closed-socket exception. logcat output:
02-16 17:49:26.383 14000-14057/mypackage:remote V/ManagementServer﹕ Incoming connection from /192.168.8.33:47764
02-16 17:49:26.383 14000-14057/mypackage:remote V/ManagementServer﹕ Socket status: bound, connected, not closed, input not shut down.
02-16 17:49:26.393 14000-14077/mypackage:remote V/ManagementServer﹕ Socket status: bound, unconnected, closed, input not shut down.
02-16 17:49:26.398 14000-14077/mypackage:remote E/ManagementServer﹕ Error communicating with client:
java.net.SocketException: Socket is closed
at java.net.Socket.checkOpenAndCreate(Socket.java:665)
at java.net.Socket.getInputStream(Socket.java:359)
at net.semeion.tusynctest.network.ManagementServer$1$1.run(ManagementServer.java:79)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
As shown in the log output, somehow the socket changes its status from connected to unconnected/closed right after offloading the Runnable into the thread pool. If I just remove the ThreadPool.execute lines, everything works as expected. I have also tried to create my own static Runnable class within the outer class X, passing the socket as a parameter to the Runnable's constructor. However, this triggers the same problem.
Am I missing something here? In theory, this should work like a charm; but somehow it seems that, upon starting the thread and returning from the handler's onIncomingConnection method, something happens to the socket instance.
Further facts:
It's not the client's fault either - to the client, it looks like the server closed the socket. And as said, commenting out the thread pool on the server side fixes the problem.
You may notice the ":remote" in the logcat output - the server thread is created from within a background service that itself runs in a separate process.
At the moment, I'm employing a SingleThreadExecutor for the pool, just for testing. The type of executor should make no difference IMHO.
I tried to use the socket's OutputStream directly (unbuffered), which has not made a difference. The log statements show that the socket itself changes its status.
If I initialise the BufferedOutputStream in the handler right before executing the thread, this yields a strange "bad file number" SocketException at the out.flush(); line (possibly this is an IPC problem?!):
java.net.SocketException: sendto failed: EBADF (Bad file number)
at libcore.io.IoBridge.maybeThrowAfterSendto(IoBridge.java:546)
at libcore.io.IoBridge.sendto(IoBridge.java:515)
at java.net.PlainSocketImpl.write(PlainSocketImpl.java:504)
at java.net.PlainSocketImpl.access$100(PlainSocketImpl.java:37)
at java.net.PlainSocketImpl$PlainSocketOutputStream.write(PlainSocketImpl.java:266)
at java.io.BufferedOutputStream.flushInternal(BufferedOutputStream.java:185)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:85)
at net.semeion.tusynctest.network.ManagementServer$1$1.run(ManagementServer.java:88)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
Caused by: android.system.ErrnoException: sendto failed: EBADF (Bad file number)
at libcore.io.Posix.sendtoBytes(Native Method)
at libcore.io.Posix.sendto(Posix.java:206)
at libcore.io.BlockGuardOs.sendto(BlockGuardOs.java:278)
at libcore.io.IoBridge.sendto(IoBridge.java:513)
Thanks for any hints :)
I've found the problem now. Embarrassingly, I had overlooked the most obvious source of trouble, the server loop in my Server class:
new Thread(new Runnable() {
    @Override
    public void run() {
        while (mReceiving) {
            Socket recSocket = null;
            try {
                recSocket = mServerSocket.accept();
                // Process connection
                mTcpServerHandler.onIncomingConnection(recSocket);
            } catch (IOException e) {
                // ...
            } finally {
                if (recSocket != null) {
                    try {
                        recSocket.close();
                    } catch (IOException e) {
                        // log, ignore...
                    }
                }
            }
        }
    }
}).start();
So when the handler offloads the processing to another thread, onIncomingConnection() returns immediately and the finally block of the server loop closes the socket (which was intended originally, but doesn't fit the thread-pool approach). Too obvious, sorry for bothering :)
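A minimal sketch of the corrected accept loop under that fix, assuming the pooled worker now owns the socket (mReceiving, mServerSocket and mTcpServerHandler are the fields from the snippet above):
    new Thread(new Runnable() {
        @Override
        public void run() {
            while (mReceiving) {
                try {
                    Socket recSocket = mServerSocket.accept();
                    // Hand the socket over; the pooled worker is now responsible for closing it.
                    mTcpServerHandler.onIncomingConnection(recSocket);
                } catch (IOException e) {
                    // log and keep accepting
                }
            }
        }
    }).start();
The Runnable submitted to receiveThreadPool would then close the socket in its own finally block instead of the accept loop doing it.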

How to use one single Bootstrap to connect to multiple servers in Netty

I don't know whether there is a performance issue if I create (new) a Bootstrap every time I connect to a remote server, so I want to use a single Bootstrap instance to connect to multiple servers. My code is below:
Bootstrap b = new Bootstrap();
b.group(new NioEventLoopGroup()).handler(new TestHandler()).channel(NioSocketChannel.class)
        .option(ChannelOption.AUTO_READ, false);

ChannelFutureListener listener = new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // connection complete, start to read first data
            System.out.println("connection established + channel: " + future.channel());
        } else {
            // Close the connection if the connection attempt has failed.
            System.out.println("connection failed");
        }
    }
};

String[] urls = { "www.google.com", "www.stackoverflow.com", "www.yahoo.com" };
for (String s : urls) {
    b = b.clone();
    b.remoteAddress(s, 80);
    b.connect().addListener(listener);
}
System.in.read();
Unfortunately, it crashed with:
connection established + channel: [id: 0x5df86eec, /192.168.126.136:60414 => www.google.com/173.194.127.209:80]
Exception in thread "main" java.lang.IllegalStateException: channel not registered to an event loop
at io.netty.channel.AbstractChannel.eventLoop(AbstractChannel.java:107)
at io.netty.channel.nio.AbstractNioChannel.eventLoop(AbstractNioChannel.java:102)
at io.netty.channel.nio.AbstractNioChannel.eventLoop(AbstractNioChannel.java:41)
at io.netty.channel.CompleteChannelFuture.executor(CompleteChannelFuture.java:48)
at io.netty.util.concurrent.CompleteFuture.addListener(CompleteFuture.java:49)
at io.netty.channel.CompleteChannelFuture.addListener(CompleteChannelFuture.java:56)
at Test3.main(Test3.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
I think I am using the Bootstrap in the wrong way. If so, should I create a brand-new Bootstrap every time?
At least, I should use the same NioEventLoopGroup, right?
Alright, it's my fault, I forgot to add a ChannelInitializer. Change it to the code below:
b.group(new NioEventLoopGroup()).handler(new ChannelInitializer<NioSocketChannel>() {
    @Override
    protected void initChannel(NioSocketChannel ch) throws Exception {
        ch.pipeline().addLast(new TestHandler());
    }
})
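Putting it together, a rough sketch of the whole setup with the initializer in place; listener and TestHandler are the ones from the question, and the connect loop is otherwise unchanged:
    Bootstrap b = new Bootstrap();
    b.group(new NioEventLoopGroup())                    // one shared event-loop group
     .channel(NioSocketChannel.class)
     .option(ChannelOption.AUTO_READ, false)
     .handler(new ChannelInitializer<NioSocketChannel>() {
         @Override
         protected void initChannel(NioSocketChannel ch) throws Exception {
             ch.pipeline().addLast(new TestHandler());  // fresh handler per channel
         }
     });

    for (String host : new String[] { "www.google.com", "www.stackoverflow.com", "www.yahoo.com" }) {
        b.clone()                                       // clone shares the group, channel type and options
         .remoteAddress(host, 80)
         .connect()
         .addListener(listener);
    }
Cloning (or even reusing) the one Bootstrap is cheap; the expensive, shareable resource is the NioEventLoopGroup, which this way is created only once.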

Java.io.IOException, "bad file number" USB connection

I'm setting up a USB accessory connection between my Android phone and another device. Just sending bytes back and forth for now to test. I get some definite communication going at first, but it always ends up dying with "java.io.IOException: write failed: EBADF (Bad file number)" after a second or so. Sometimes the reading stays alive but the writing dies; other times both die.
I'm not doing anything super fancy, reading and writing just like the Google documentation:
Initial connection (inside a broadcast receiver, I know this part works at least initially):
if (action.equals(ACTION_USB_PERMISSION)) {
    ParcelFileDescriptor pfd = manager.openAccessory(accessory);
    if (pfd != null) {
        FileDescriptor fd = pfd.getFileDescriptor();
        mIn = new FileInputStream(fd);
        mOut = new FileOutputStream(fd);
    }
}
Reading:
Thread thread = new Thread(new Runnable() {
    @Override
    public void run() {
        byte[] buf = new byte[BUF_SIZE];
        while (true) {
            try {
                int recvd = mIn.read(buf);
                if (recvd > 0) {
                    byte[] b = new byte[recvd];
                    System.arraycopy(buf, 0, b, 0, recvd);
                    //Parse message
                }
            } catch (IOException e) {
                Log.e("read error", "failed to read from stream");
                e.printStackTrace();
            }
        }
    }
});
thread.start();
Writing:
synchronized (mWriteLock) {
    if (mOut != null && byteArray.length > 0) {
        try {
            //mOut.flush();
            mOut.write(byteArray, 0, byteArray.length);
        } catch (IOException e) {
            Log.e("error", "error writing");
            e.printStackTrace();
            return false;
        }
    } else {
        Log.e(TAG, "Can't send data, serial stream is null");
        return false;
    }
}
Error stacktrace:
java.io.IOException: write failed: EBADF (Bad file number)
W/System.err(14028): at libcore.io.IoBridge.write(IoBridge.java:452)
W/System.err(14028): at java.io.FileOutputStream.write(FileOutputStream.java:187)
W/System.err(14028): at com.my.android.transport.MyUSBService$5.send(MyUSBService.java:468)
W/System.err(14028): at com.my.android.transport.MyUSBService$3.onReceive(MyUSBService.java:164)
W/System.err(14028): at android.app.LoadedApk$ReceiverDispatcher$Args.run(LoadedApk.java:781)
W/System.err(14028): at android.os.Handler.handleCallback(Handler.java:608)
W/System.err(14028): at android.os.Handler.dispatchMessage(Handler.java:92)
W/System.err(14028): at android.os.Looper.loop(Looper.java:156)
W/System.err(14028): at android.app.ActivityThread.main(ActivityThread.java:5045)
W/System.err(14028): at java.lang.reflect.Method.invokeNative(Native Method)
W/System.err(14028): at java.lang.reflect.Method.invoke(Method.java:511)
W/System.err(14028): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
W/System.err(14028): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
W/System.err(14028): at dalvik.system.NativeStart.main(Native Method)
W/System.err(14028): Caused by: libcore.io.ErrnoException: write failed: EBADF (Bad file number)
W/System.err(14028): at libcore.io.Posix.writeBytes(Native Method)
W/System.err(14028): at libcore.io.Posix.write(Posix.java:178)
W/System.err(14028): at libcore.io.BlockGuardOs.write(BlockGuardOs.java:191)
W/System.err(14028): at libcore.io.IoBridge.write(IoBridge.java:447)
W/System.err(14028): ... 13 more
I have logging all over the place and thus I know it's not anything too obvious, such as another permission request being received (and thus the file streams being reinitialized mid-read). The streams aren't closing either, because I never have that happen anywhere in my code (for now). I'm not getting any detached or attached events either (I log that if it happens). Nothing seems too out of the ordinary; it just dies.
I thought maybe it was a concurrency issue, so I played with locks and sleeps, but nothing I tried worked. I don't think it's a throughput issue either, because it still happens when I sleep on every read (on both ends) and read one single packet at a time (a very low bitrate). Is there a chance the buffer is being overrun on the other end somehow? How would I go about clearing this? I do have access to the other end's code; it is an Android device as well, using host mode. In case it matters, I can post that code too - standard bulk transfers.
Does the phone just have lackluster support for Android Accessory Mode? I've tried two phones and they both fail similarly, so I doubt it's that.
I'm wondering what causes this error in general when writing or reading from USB on Android?
I got the same problem in my code, and I found that it happens because the FileDescriptor object was garbage-collected.
I fixed the issue by adding a ParcelFileDescriptor field to the Activity (or Service).
I checked your first code snippet and the code you based it on, and the latter keeps the ParcelFileDescriptor as a field of the Thread.
I think if you edit your code as below, it will work.
ParcelFileDescriptor mPfd;
...
if (action.equals(ACTION_USB_PERMISSION)) {
    mPfd = manager.openAccessory(accessory);
    if (mPfd != null) {
        FileDescriptor fd = mPfd.getFileDescriptor();
        mIn = new FileInputStream(fd);
        mOut = new FileOutputStream(fd);
    }
}
It ended up being a threading issue. I needed to properly segregate the writing onto its own thread as well, not just the reading.
I ended up using this code as a basis.
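A minimal sketch of what segregating the writes can look like, assuming a single-threaded executor owns all writes to mOut (mOut and TAG are the question's own fields; sendAsync is a hypothetical helper name):
    // All writes go through one dedicated thread, so they never race with the UI/broadcast threads.
    private final ExecutorService mWriteExecutor = Executors.newSingleThreadExecutor();

    private void sendAsync(final byte[] byteArray) {
        mWriteExecutor.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    if (mOut != null && byteArray.length > 0) {
                        mOut.write(byteArray, 0, byteArray.length);
                        mOut.flush();
                    }
                } catch (IOException e) {
                    Log.e(TAG, "error writing", e);
                }
            }
        });
    }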
OK, a few things I noticed that seem different from what I do for Open Accessory Mode (I mostly followed the USB accessory documentation, so it should be very similar). First, as far as I know, your mIn.read(buf); should be mIn.read(buf, 0, 64);.
Also, you should declare a thread field, e.g. USB_Thread myThread;, in your class declarations. Then within your BroadcastReceiver, after creating the new FileInput/OutputStream, have myThread = new USB_Thread(myHandler, myInputStream); followed by myThread.start();.
I also noticed that you are communicating directly with the UI from your thread. From what I have read, you should use a Handler instead: the thread communicates with the Handler, which then communicates back to your UI.
Here is an example of my handler and thread:
final Handler mHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
    }
};

private class USB_Thread extends Thread {
    Handler thisHandler;
    FileInputStream thisInputStream;

    USB_Thread(Handler handler, FileInputStream instream) {
        thisHandler = handler;
        thisInputStream = instream;
    }

    @Override
    public void run() {
        while (true) {
            try {
                if ((thisInputStream != null) && (dataReceived == false)) {
                    Message msg = thisHandler.obtainMessage();
                    int bytesRead = thisInputStream.read(USB_Data_In, 0, 63);
                    if (bytesRead > 0) {
                        dataReceived = true;
                        thisHandler.sendMessage(msg);
                    }
                }
            } catch (IOException e) {
            }
        }
    }
}
Also, there are some demo Open Accessory applications here. They may help with your understanding of accessory mode.
There are also known issues with an application not receiving the ACTION_USB_ACCESSORY/DEVICE_ATTACHED broadcast when the receiver is registered programmatically; it will only receive it via the manifest file. You can find more on this here and here.
I actually hadn't tested putting the dataReceived variable in the handler; I had only recently changed that part of my code. I tested it and it didn't work, so, trying to remember what I had read, I think the restriction was not about variables shared between threads but about calling something like .setText() from a background thread. I have updated my code to set dataReceived = true in the thread. The handler would then be used for updating items on the UI, such as TextViews, etc.
Thread
FileDescriptor fd = mFileDescriptor.getFileDescriptor();
mInputStream = new FileInputStream(fd);
mOutputStream = new FileOutputStream(fd);
usbThread = new USB_Thread(mHandler, mInputStream);
usbThread.start();

Netty slower than Tomcat

We just finished building a server to store data to disk and fronted it with Netty. During load testing we were seeing Netty scaling to about 8,000 messages per second. Given our systems, this looked really low. For a benchmark, we wrote a Tomcat front-end and run the same load tests. With these tests we were getting roughly 25,000 messages per second.
Here are the specs for our load testing machine:
Macbook Pro Quad core
16GB of RAM
Java 1.6
Here is the load test setup for Netty:
10 threads
100,000 messages per thread
Netty server code (pretty standard) - our Netty pipeline on the server is two handlers: a FrameDecoder and a SimpleChannelHandler that handles the request and response.
Client side JIO using Commons Pool to pool and reuse connections (the pool was sized the same as the # of threads)
Here is the load test setup for Tomcat:
10 threads
100,000 messages per thread
Tomcat 7.0.16 with default configuration using a Servlet to call the server code
Client side using URLConnection without any pooling
My main question is: why such a huge difference in performance? Is there something obvious with respect to Netty that could get it to run faster than Tomcat?
Edit: Here is the main Netty server code:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);
server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        return Channels.pipeline(decoder, handler);
    }
});

server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);

Channel channel = server.bind(new InetSocketAddress(port));
allChannels.add(channel);
Our handlers look like this:
public class RequestDecoder extends FrameDecoder {
    @Override
    protected ChannelBuffer decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 4) {
            return null;
        }
        buffer.markReaderIndex();
        int length = buffer.readInt();
        if (buffer.readableBytes() < length) {
            buffer.resetReaderIndex();
            return null;
        }
        return buffer;
    }
}

public class ContentStoreChannelHandler extends SimpleChannelHandler {
    private final RequestHandler handler;

    @Inject
    public ContentStoreChannelHandler(RequestHandler handler) {
        this.handler = handler;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer in = (ChannelBuffer) e.getMessage();
        in.readerIndex(4);

        ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
        out.writerIndex(8); // Skip the length and status code

        boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
        if (success) {
            out.setInt(0, out.writerIndex() - 8); // length
            out.setInt(4, 0); // Status
        }

        Channels.write(e.getChannel(), out, e.getRemoteAddress());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        Throwable throwable = e.getCause();
        ChannelBuffer out = ChannelBuffers.dynamicBuffer(8);
        out.writeInt(0); // Length
        out.writeInt(Errors.generalException.getCode()); // status
        Channels.write(ctx, e.getFuture(), out);
    }

    @Override
    public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
        NettyContentStoreServer.allChannels.add(e.getChannel());
    }
}
UPDATE:
I've managed to get my Netty solution to within 4,000/second of Tomcat. A few weeks back I was testing a client-side PING in my connection pool as a safeguard against idle sockets, but I forgot to remove that code before I started load testing. This code effectively PINGed the server every time a Socket was checked out from the pool (using Commons Pool). I commented that code out and I'm now getting 21,000/second with Netty and 25,000/second with Tomcat.
Although this is great news on the Netty side, I'm still getting 4,000/second less with Netty than with Tomcat. I can post my client side (which I thought I had ruled out, but apparently not) if anyone is interested in seeing it.
The method messageReceived is executed on a worker thread that is possibly getting blocked by RequestHandler#handle, which may be busy doing some I/O work.
You could try adding an OrderedMemoryAwareThreadPoolExecutor (recommended) into the channel pipeline for executing the handlers, or alternatively try dispatching your handler work to a new ThreadPoolExecutor, passing a reference to the socket channel for later writing the response back to the client. Example:
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
    executor.submit(new Runnable() {
        @Override
        public void run() {
            processHandlerAndRespond(e);
        }
    });
}
private void processHandlerAndRespond(MessageEvent e) {
    ChannelBuffer in = (ChannelBuffer) e.getMessage();
    in.readerIndex(4);

    ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
    out.writerIndex(8); // Skip the length and status code

    boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
    if (success) {
        out.setInt(0, out.writerIndex() - 8); // length
        out.setInt(4, 0); // Status
    }

    Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
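For the first (recommended) option, a rough sketch of wiring an ExecutionHandler into the existing pipeline factory (Netty 3 API, org.jboss.netty.handler.execution.*; the pool sizes here are illustrative, not tuned values):
    // One shared ExecutionHandler so handler work runs off the I/O worker threads.
    final ExecutionHandler executionHandler = new ExecutionHandler(
            new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

    server.setPipelineFactory(new ChannelPipelineFactory() {
        public ChannelPipeline getPipeline() {
            RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
            ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
            // Everything downstream of executionHandler runs in the thread pool,
            // while per-channel event ordering is preserved.
            return Channels.pipeline(decoder, executionHandler, handler);
        }
    });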