I have built a custom message server in Java that takes a stream of messages and delivers each message to its client (1:1; the message is dropped if the client is not connected - very simple). I am running Tomcat 7 on Win7 x64 with Java 7 and am using the NIO connector (I implemented a Comet servlet).
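For reference, the servlet is essentially the minimal Comet pattern (a simplified sketch against Tomcat 7's org.apache.catalina.comet API; the message routing is application-specific and omitted):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import org.apache.catalina.comet.CometEvent;
import org.apache.catalina.comet.CometProcessor;

public class PushServlet extends HttpServlet implements CometProcessor {
    public void event(CometEvent event) throws IOException, ServletException {
        switch (event.getEventType()) {
            case BEGIN:
                // keep the response open and register it with the message
                // router, keyed by client id (application-specific)
                break;
            case READ:
                // client data available; unused by this push-only server
                break;
            case END:
            case ERROR:
                event.close(); // connection gone; its messages get dropped
                break;
        }
    }
}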
It works great, but I am now looking into scaling that beast, and I am currently seeing about 85 KB of RAM allocated per connected client: 10,000 clients at just under 900 MB, scaling linearly. (I am not doing anything yet but holding the connections.) That's quite a lot in my opinion, so I am wondering whether there are tweaks to make Tomcat or Java use less memory in their NIO implementation. All the Tomcat settings I have tried so far have not affected this at all.
Does anybody have experience with putting Java or Tomcat on a memory diet regarding socket connections?
UPDATE:
I am now down to under 70 KB per connection by trimming the socket buffers and some other Tomcat internals; I am not yet sure how this influences throughput. I've also tried it on 32-bit and 64-bit Linux with the same result.
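The trimming was along these lines (a sketch only; these are documented NIO-connector attributes in Tomcat 7, but the values here are illustrative and should be validated against your throughput):

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           socket.rxBufSize="8192"
           socket.txBufSize="8192"
           socket.appReadBufSize="1024"
           socket.appWriteBufSize="1024" />

Shrinking the per-connection application buffers (socket.appReadBufSize / socket.appWriteBufSize) should matter most for the per-client figure, since those are allocated per connection, at the cost of more read passes for large requests.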
After quite some research and experimentation, I came to the conclusion that it is simply not possible to handle a huge number of concurrent connections in Tomcat with a reasonable amount of memory. (I'd still be happy to be proven wrong here, by the way.)
However, there is a savior:
Netty: http://www.jboss.org/netty/downloads
It's a Java IO framework built on Java's NIO architecture, and it seems very well designed and written. You can stack lightweight modules together to create a mini web server, or simply handle the TCP connections yourself asynchronously.
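A minimal "just hold the connections" server in the Netty 3.x of that era looks roughly like this (a sketch; the real routing logic is omitted):

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.*;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class HoldServer {
    public static void main(String[] args) {
        // one boss pool for accepting, one worker pool for IO
        ServerBootstrap bootstrap = new ServerBootstrap(
            new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));
        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new SimpleChannelUpstreamHandler() {
                    @Override
                    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                        // a real server would look up the target client here
                        // and deliver the message, or drop it if not connected
                    }
                });
            }
        });
        bootstrap.setOption("child.tcpNoDelay", true);
        bootstrap.bind(new InetSocketAddress(8080));
    }
}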
I ran a load test on EC2 and made it to a mind-blowing 7 MILLION connections at only 1.5 GB of RAM! (As with the Tomcat test, I did nothing but store the connections, so a real app will of course consume a bit more memory, but 200 bytes per connection of "overhead" is nothing!) It only stopped there because I limited the Java VM to 1.5 GB; I am sure a C10M test would be easily doable.
Big kudos to Netty and the Java VM guys! I'm impressed.
Related
Wowza is causing us trouble: it doesn't scale past 6k concurrent users and sometimes freezes at a few hundred. It crashes and starts killing sessions, and we have to step in and restart Wowza multiple times per streaming event.
Our server specs:
DL380 Gen10
2x Intel Xeon Silver 4110 / 2.1 GHz
64 GB RAM
300 GB HDD
Dedicated 10 Gb network
Some servers running CentOS 6, others CentOS 7
Java version 1.8.0_20 64-bit
Wowza Streaming Engine 4.2.0
I asked and was told:
Wowza scales easily to millions if you put a CDN in front of it (which
is trivially easy to do). 6K users off a single instance simply ain’t
happening. For one, Java maxes out at around 4Gbps per JVM instance.
So even if you have a 10G NIC on the machine, you’ll want to run
multiple instances if you want to use the full bandwidth.
And:
How many 720p streams can you do on a 10 Gb network @ 2 Mbps?
Without network overhead, it’s about 5,000
With the limitation of Java at 4 Gbps, it's only 2,000 per instance.
Then if you do manage to utilize that 10Gb network and saturate it,
what happens to all other applications people are accessing on other
servers?
If they want more streams, they need edge servers in multiple data
centers or have to somehow get more 10 Gb networks installed.
That’s for streaming only. No idea what transcoding would add in terms
of CPU load and disk IO.
So I began looking for an alternative to Wowza. Due to the nature of our business, we can't use a CDN or cloud hosting except with a very few clients. Everything must be hosted in-house, in the client's datacenter.
I found this article and reached out to the author to ask him about Flussonic and how it compares to Wowza. He said:
I can't speak to the 4 Gbps limit that you're seeing in Java. It's
also possible that your Wowza instance is configured incorrectly. We'd
need to look at your configuration parameters to see what's happening.
We've had great success scaling both Wowza and Flussonic media servers
by pairing them with our peer-to-peer (p2p) CDN service. That was the
whole point of the article we wrote in 2016.
Because we reduce the number of HTTP requests that the server has to
handle by up to 90% (or more), we increase the capacity of each server
by 10x - meaning each server can handle 10x the number of concurrent
viewers.
Even on the Wowza forums, some people say that Java maxes out at around 5 Gbps per JVM instance. Others say that this number is incorrect, or simply made up.
Due to the nature of our business, this number, as silly as it is, matters a great deal to us. If Java cannot handle more than 7k viewers per instance, we need to hold meetings and discuss what to do about Wowza.
So, is it true that Java maxes out at around 4 or 5 Gbps per JVM instance?
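To ground the discussion, I put together a crude loopback test to see how much one JVM can push through the plain socket stack (a sketch, not a streaming workload; loopback bypasses the NIC, so this only bounds the JVM side of the question):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ThroughputCheck {
    public static void main(String[] args) throws Exception {
        final ServerSocket server = new ServerSocket(9999);
        new Thread(new Runnable() {
            public void run() {
                try {
                    Socket s = server.accept();
                    InputStream in = s.getInputStream();
                    byte[] sink = new byte[64 * 1024];
                    while (in.read(sink) >= 0) { /* discard */ }
                } catch (Exception e) { e.printStackTrace(); }
            }
        }).start();

        Socket s = new Socket("127.0.0.1", 9999);
        OutputStream out = s.getOutputStream();
        byte[] buf = new byte[64 * 1024];
        long bytes = 0;
        long start = System.nanoTime();
        while (System.nanoTime() - start < 10_000_000_000L) { // run ~10 s
            out.write(buf);
            bytes += buf.length;
        }
        // bits per nanosecond == gigabits per second
        double gbps = bytes * 8.0 / (System.nanoTime() - start);
        System.out.printf("~%.2f Gbps over loopback%n", gbps);
        s.close();
    }
}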
I've asked for help on this before, here, but the issue still exists and the previously accepted answer doesn't explain it. I've also read nearly every article and SO thread on this topic, and most point to application leaks or modest overhead, neither of which I believe explains what I'm seeing.
I have a fairly large web service (the application alone is 600 MB) which, when left alone, grows to 5 GB or more as reported by the OS. The JVM, however, is limited to 1 GB (Xms, Xmx).
I've done extensive memory testing and found no leaks whatsoever. I've also run Oracle's Java Mission Control (essentially the JMX console) and verified that actual JVM use is only about 1 GB. That means about 4 GB are being consumed by Tomcat itself, native memory, or the like.
I don't think JNI or the like is to blame, as this particular installation has been mostly idle. All it's been doing is periodically checking the database for work requests and monitoring its own resource consumption. Also, this wasn't a problem until recently, after years of use.
The JMX Console does report a high level of fragmentation (70%). But can that alone explain the additional memory consumption?
Most important, though, is not so much why this is happening, but how I can fix or configure things so that it stops happening. Any suggestions?
Here are some of the details of the environment:
Windows Server 2008 R2 64-bit
Java 1.7.0_25
Tomcat 7.0.55
JvmMx, JvmMs = 1000 (1 GB)
Thread count: 60
Perm Gen: 90MB
Also seen on:
Windows 2012 R2 64-bit
Java 1.8.0_121
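One diagnostic still open to me on the Java 8 installs is HotSpot's Native Memory Tracking, which breaks the JVM's own native allocations down by category (thread stacks, code cache, GC structures, internal buffers). Start the JVM with:

-XX:NativeMemoryTracking=summary

then query the running process with:

jcmd <pid> VM.native_memory summary

If NMT's total stays near 1 GB while the OS reports 5 GB, the growth comes from outside the JVM's bookkeeping (e.g., direct malloc calls from native libraries, which NMT does not see), which narrows the search considerably. (NMT adds a few percent of overhead, so it's a diagnostic setting, not a permanent one; for Tomcat as a Windows service the flag goes into the service's Java options.)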
I'm attempting to detect a bottleneck in our servers, and I'm having a hard time deciding where to start. The symptoms on the machine before it crashes are dropped connections (timeouts: what a client sees when a response takes too long, possibly indicating that the server-side code couldn't get a processor allocated and the request couldn't be handled) and out-of-memory errors. The latter is actually reported by the JVM in an error log, but I have a hard time believing that both RAM and the available CPUs are turning out to be bottlenecks at the same time. (In each of the previous crashes, it has consistently failed in the manner described above.) We have our own in-house server code, and I wouldn't call it analogous to Apache or any other server I've seen. (Sorry if this makes giving advice that much more difficult.)
I'd like to take some time to create a somewhat controlled test locally. I'm running the server, and I'm writing a program that will request different things from it. What's a good way to monitor RAM/CPU? I'm currently using Java's VisualVM, but its monitors stop responding when I hammer the server with some of the tests.
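One idea I'm considering is having the server log its own numbers from a background thread via the standard MXBeans, which keeps working even when the GUI tools stall (a sketch; the com.sun.management cast is an Oracle/Sun-JVM extension available in Java 7):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class ResourceLogger implements Runnable {
    public void run() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        while (!Thread.currentThread().isInterrupted()) {
            // one line per second: heap used, non-heap used, process CPU load
            System.out.printf("heap=%dMB nonheap=%dMB cpu=%.1f%%%n",
                mem.getHeapMemoryUsage().getUsed() >> 20,
                mem.getNonHeapMemoryUsage().getUsed() >> 20,
                os.getProcessCpuLoad() * 100);
            try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
        }
    }
}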
Any ideas would be greatly appreciated. As I mentioned, I'm trying to gather as much useful data as possible to help me troubleshoot further. In general, when a bottleneck issue like this arises, what are some good strategies to take? The live servers all run Windows Server 2008, with Java 1.7.0_03. My local box runs Windows 7, also with Java 1.7.0_03. I don't want to make too many assumptions, but I think it's reasonable to treat Server 2008 and Windows 7 as quite similar (the OS architecture is the same). Aside from that, my local box has hardware identical to our servers.
For Windows, you need to:
1) Establish a "performance baseline"
2) Try to reproduce the bottleneck, and compare the behavior under stress with the baseline
3) Identify the cause of the bottleneck, and correct it.
These links will help:
http://technet.microsoft.com/en-us/library/cc781394%28v=ws.10%29.aspx
http://msdn.microsoft.com/en-us/library/windows/hardware/gg463394.aspx
You should also look at these TCP/IP registry settings:
Increase the range of ephemeral ports (example command below):
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, MaxUserPort
Consider disabling auto-tuning:
http://geekswithblogs.net/cajunmcse/archive/2010/08/20/windows-2008-exchange-and-tcpip-auto-tuning.aspx
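For example, the MaxUserPort change above can be applied with (REG_DWORD, documented maximum 65534; a reboot is required):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f

and receive-window auto-tuning can be switched off with:

netsh interface tcp set global autotuninglevel=disabled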
We have been profiling and profiling our application to reduce latency as much as possible. Our application consists of three separate Java processes, all running on the same server, which pass messages to each other over TCP/IP sockets.
We have reduced the processing time in the first component to 25 μs, but we see that the TCP/IP socket write (on localhost) to the next component invariably takes about 50 μs. We also see one anomalous behavior: the component that is accepting the connection can write faster (i.e. < 50 μs). Right now, all the components run at < 100 μs, with the exception of the socket communications.
Not being a TCP/IP expert, I don't know what could be done to speed this up. Would Unix domain sockets be faster? Memory-mapped files? What other mechanisms could be a faster way to pass the data from one Java process to another?
UPDATE 6/21/2011
We created two benchmark applications, one in Java and one in C++, to benchmark TCP/IP more tightly and to compare. The Java app used NIO (blocking mode), and the C++ app used the Boost.Asio TCP library. The results were more or less equivalent, with the C++ app about 4 μs faster than Java (though Java beat C++ in one of the tests). Both versions also showed a lot of variability in the time per message.
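The Java side was essentially a blocking ping-pong loop like the following (a reconstruction for illustration, not our exact harness; the echo peer runs in a second thread here to keep it self-contained):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class PingPong {
    static final int MSG = 64, ITERS = 100000;

    public static void main(String[] args) throws Exception {
        final ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(9999));
        new Thread(new Runnable() {
            public void run() {
                try {
                    SocketChannel peer = server.accept();
                    peer.socket().setTcpNoDelay(true);
                    ByteBuffer b = ByteBuffer.allocateDirect(MSG);
                    while (true) { // echo everything back
                        b.clear();
                        while (b.position() < MSG) peer.read(b);
                        b.flip();
                        while (b.hasRemaining()) peer.write(b);
                    }
                } catch (Exception e) { /* peer closed */ }
            }
        }).start();

        SocketChannel ch = SocketChannel.open(new InetSocketAddress("127.0.0.1", 9999));
        ch.socket().setTcpNoDelay(true); // Nagle off: essential for small messages
        ByteBuffer buf = ByteBuffer.allocateDirect(MSG);
        long[] rtt = new long[ITERS];
        for (int i = 0; i < ITERS; i++) {
            long t0 = System.nanoTime();
            buf.clear();
            while (buf.hasRemaining()) ch.write(buf);  // send MSG bytes
            buf.clear();
            while (buf.position() < MSG) ch.read(buf); // wait for full echo
            rtt[i] = System.nanoTime() - t0;
        }
        java.util.Arrays.sort(rtt);
        System.out.println("median one-way ~" + rtt[ITERS / 2] / 2 + " ns");
    }
}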
I think we agree with the basic conclusion that a shared-memory implementation is going to be the fastest. (Although we would also like to evaluate the Informatica product, provided it fits the budget.)
If using native libraries via JNI is an option, I'd consider implementing IPC the usual way (search for IPC, mmap, shm_open, etc.).
There's a lot of overhead associated with using JNI, but at least it's a little less than the full system calls needed to do anything with sockets or pipes. You'll likely be able to get down to about 3 microseconds one-way latency using a polling shared-memory IPC implementation via JNI. (Make sure to use the -Xcomp JVM option or adjust the compilation threshold, too; otherwise your first 10,000 samples will be terrible. It makes a big difference.)
I'm a little surprised that a TCP socket write is taking 50 microseconds - most operating systems optimize TCP loopback to some extent. Solaris actually does a pretty good job of it with something called TCP Fusion. And if there has been any optimization for loopback communication at all, it's usually been for TCP. UDP tends to get neglected - so I wouldn't bother with it in this case. I also wouldn't bother with pipes (stdin/stdout or your own named pipes, etc.), because they're going to be even slower.
And generally, a lot of the latency you're seeing is likely coming from signaling - either waiting on an IO selector like select() in the case of sockets, or waiting on a semaphore, or waiting on something. If you want the lowest latency possible, you'll have to burn a core sitting in a tight loop polling for new data.
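To make that concrete, here is a rough pure-Java approximation of the polling approach, using a MappedByteBuffer over a tmpfs file in place of real shm_open/JNI shared memory (the path and layout are invented for illustration; note that MappedByteBuffer accesses carry no volatile semantics, so a production version needs proper memory barriers):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ShmReader {
    public static void main(String[] args) throws Exception {
        // both processes map the same file; on Linux, /dev/shm lives in RAM
        RandomAccessFile file = new RandomAccessFile("/dev/shm/ipc-demo", "rw");
        MappedByteBuffer shm = file.getChannel()
            .map(FileChannel.MapMode.READ_WRITE, 0, 4096);
        long lastSeq = shm.getLong(0);
        while (true) {                   // burn the core: no sleep, no select()
            long seq = shm.getLong(0);   // writer bumps this after the payload
            if (seq != lastSeq) {
                lastSeq = seq;
                int len = shm.getInt(8); // payload length at a fixed offset
                byte[] msg = new byte[len];
                for (int i = 0; i < len; i++) msg[i] = shm.get(12 + i);
                // hand msg to the next pipeline stage here
            }
        }
    }
}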
Of course, there's always the commercial off-the-shelf route - which I happen to know for a certainty would solve your problem in a hurry - but of course it does cost money. And in the interest of full disclosure: I do work for Informatica on their low-latency messaging software. (And my honest opinion, as an engineer, is that it's pretty fantastic software - certainly worth checking out for this project.)
"The O'Reilly book on NIO (Java NIO, page 84), seems to be vague about
whether the memory mapping stays in memory. Maybe it is just saying
that like other memory, if you run out of physical, this gets swapped
back to disk, but otherwise not?"
Linux: the mmap() call allocates pages in the OS page cache, which are periodically flushed to disk and can be evicted (based on Clock-Pro, an approximation of the LRU algorithm). So the answer to your question is yes: a memory-mapped buffer can, in theory, be evicted from memory unless it is locked with mlock(). That is the theory; in practice, I think it is hardly possible if your system is not swapping, and if it is, the first victims are the page buffers.
See my answer to "fastest (low latency) method for Inter Process Communication between Java and C/C++" - with memory-mapped files (shared memory), Java-to-Java latency can be reduced to 0.3 microseconds.
Memory-mapped files are not a viable solution for low-latency IPC at all: if a mapped segment of memory gets updated, it will eventually be synced to disk, introducing an unpredictable delay measured in milliseconds at least. For low latency, one can try combining shared memory with message queues (notifications), or shared memory with semaphores. This works on all Unixes, especially the System V flavors (not POSIX), but if you run the application on Linux you are pretty safe with POSIX IPC (most features are available in 2.6 kernels). Yes, you will need JNI to get this done.
UPD: I forgot that this is JVM-to-JVM IPC, and we already have GCs that we cannot fully control, so introducing additional pauses of several ms from the OS flushing file buffers to disk may be acceptable.
Check out https://github.com/pcdv/jocket
It's a low-latency replacement for local Java sockets that uses shared memory.
RTT latency between two processes is well below 1 μs on a modern CPU.
I've got a somewhat dated Java EE application running on Sun Application Server 8.1 (aka SJSAS, the precursor to GlassFish). With 500+ simultaneous users the application becomes unacceptably slow, and I'm trying to help identify where most of the execution time is spent and what can be done to speed it up. So far we've been experimenting and measuring with LoadRunner, the app server logs, Oracle Statspack, snoop, adjusting the app server acceptor and session (worker) threads, adjusting Hibernate batch size and join-fetch use, etc., but after some initial gains we're struggling to improve matters further.
OK, with that introduction to the problem, here's the real question: if you had a slow Java EE application running on a box whose CPU and memory use never went above 20%, and while running with 500+ users you observed two things - 1) that requesting even static files from within the same app server JVM process was exceedingly slow, and 2) that requesting a static file outside of the app server JVM process, but on the same box, was fast - what would you investigate?
My thoughts initially jumped to the application server threads, both acceptor and session threads: perhaps even requests for static files were being queued waiting for an available thread, and if the CPU/memory weren't really taxed, more threads were in order. But then we raised both the acceptor and session thread counts substantially and there was no improvement.
Clarification Edits:
1) Static files should be served by a web server rather than an app server. In our case this (unfortunately) is not the configuration, and I am using that fact to observe the app server's performance on files it doesn't execute - thereby excluding any database performance costs, etc.
2) I don't think there is a proxy between the requesters and the app server, but even if there were, it doesn't seem to be overloaded, because static files requested from the same machine but outside of the application's JVM instance return immediately.
3) The JVM heap size (Xmx) is set to 1GB.
Thanks for any help!
SunONE itself is a pain in the ass. I had the very same problem, and you know what? Simply redeploying the same application to WebLogic reduced memory and CPU consumption by about 30%.
SunONE is a reference-implementation server and shouldn't be used in production (I don't know about GlassFish).
I know this answer doesn't really help, but I've noticed considerable pauses even in very simple operations, such as getting a bean instance from a pool.
Maybe trying to deploy JBoss or WebLogic on the same machine would give you a hint?
P.S. You shouldn't serve static content from the application server (though I do it too sometimes, when CPU is abundant).
P.P.S. 500 concurrent users is quite a high load; I'd definitely put SunONE behind a caching proxy or an Apache that serves the static content.
After using a Sun performance monitoring tool, we found that the garbage collector was running every couple of seconds and that only about 100 MB of the 1 GB heap was being used. So we tried adding the following JVM options, and so far this new configuration has greatly improved performance:
-XX:+DisableExplicitGC -XX:+AggressiveHeap
See http://java.sun.com/docs/performance/appserver/AppServerPerfFaq.html
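If you experiment with these options, the era's standard GC logging flags make it easy to confirm that the collection frequency actually dropped:

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps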
Our lesson: don't leave JVM option tuning and garbage collection adjustments until the end. If you're having performance trouble, look at these settings early in your troubleshooting process.