For the past few months, a developer and I have been working on a screensharing applet that streams to a media server like Wowza or Red5, but no matter what we do, we have about 5 seconds of latency, which is too long for a live application where people are interacting with each other. We've tried xuggle, different encoders, different players, different networks, different media servers, and even streaming locally, there's significant latency.
So, I'm beginning to wonder…
Is Java fast enough to do live screensharing?
I've seen lots of screen recording applets written in Java, but none of them are streaming live. Everything that's done live, such as GoToMeeting, seems to use C++. I'm thinking maybe there's a reason.
It's not a compression problem. Using ScreenVideo, we've compressed an hour-long stream down to about 100 MB, and we have plenty of bandwidth. The processor isn't overloaded doing the compression, either, but it seems to be taking too much time. We are getting the best results from some code pulled out of BigBlueButton, but still, the latency is terrible.
Streaming the WebCam, on the other hand, is nice and snappy. Almost no latency at all. So, the problem is the applet.
The only other idea I can think of is somehow emulating a WebCam with Java. Not sure if that would be faster or not.
Ideas? Or should I just give up on Java and do this in C++? I would hate to do that, because then I would have to create different versions for different platforms, but if it's the only way, it's the only way.
Many video streaming subsystems deliberately buffer so that a blip in connectivity doesn't impact the video, but that makes more sense in a recorded media scenario.
Make sure these systems have buffering turned off or turned down.
Also, while this isn't exactly scientific, you could run an app like Wireshark on the outgoing and incoming computers and try to see how long the traffic actually takes. If it's very fast, then I'd more seriously consider that buffering is the issue.
If you are on Windows, just watching the Task Manager Networking tab might prove this one way or the other (rather than installing something like Wireshark, which isn't difficult either... just trying to suggest a faster way to check).
I'm developing a game for Android. It uses a SurfaceView and the standard 2D drawing APIs provided. When I first released the game, I was doing all sorts of daft things like re-drawing 9-patches on each frame, and likewise with text. I have since optimised much of this by drawing to Bitmap objects and drawing those each frame, only re-drawing onto the Bitmap objects when required.
I've received complaints about battery drain before, and following my modifications I'd like to know (scientifically) if I've made any improvements. Unfortunately, I don't have any prior data to go by, so it would be most useful to compare the performance to some other game.
I've been running Traceview, and using its results mostly to identify CPU-time-consuming methods.
So -- what's the best way of determining my app's battery performance, and what's a good benchmark?
I know I can look at the %s of different apps through the settings, but this is again unscientific, as the figure I get from this also depends on what's happening in all of the other apps. I've looked through (most of) Google's documentation, and although the message is clear that you should be saving battery (and it gives the occasional tip as to how), there is little indication of how I can measure how well my app is performing. The last thing I want is more complaints of battery drain in the Android Market!
Thanks in advance.
EDIT
Thanks for all your helpful advice/suggestions. What I really want to know is how I can use the data coming from Traceview (i.e. CPU time in ms spent on each frame of the game) to determine battery usage (if this is at all possible). Reading back on my original question, I can see I was a bit vague. Thanks again.
Here is my suggestion:
I watch power consumption while developing my apps (that sometimes poll the sensors at rates of <25ns) using PowerTutor. Check it out; it sounds like this may be what you are looking for. The app tells you what you are using in mW, J, or relative to the rest of the system. Also, results are broken down by CPU, WiFi, display, and other installed radios. The only catch is that it is written for a specific phone model, but I use it with great success on my EVO 4G, Galaxy S (Sprint Epic), and Hero.
Good luck,
-Steve
There is a possibility that your game is draining battery. I believe there are several possible reasons:
Your application is a game. Games drain battery quickly.
You're iterating with help from a Thread. Have you limited the FPS so the CPU skips unnecessary iterations? Since you're working with 2D I assume you're using a SurfaceView. 60 FPS will be enough for a real-time game (see the sketch after this list).
You don't stop the Thread when your application terminates, so the loop keeps running even when your application isn't alive.
Do you have a lock on the loop that calls wait() during onPause()?
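To illustrate the points above, here is a minimal sketch of a frame-limited SurfaceView loop that waits during onPause() and stops cleanly when the activity goes away. Class and method names (GameThread, drawFrame) are illustrative, not from the question:

    // Hypothetical game loop thread for a SurfaceView-based game.
    class GameThread extends Thread {
        private static final long FRAME_MS = 1000 / 60;   // cap at ~60 FPS
        private final android.view.SurfaceHolder surfaceHolder;
        private volatile boolean running = true;          // cleared when the activity finishes
        private volatile boolean paused = false;          // toggled from onPause()/onResume()

        GameThread(android.view.SurfaceHolder holder) { this.surfaceHolder = holder; }

        @Override
        public void run() {
            while (running) {
                synchronized (this) {
                    while (paused && running) {
                        try { wait(); } catch (InterruptedException e) { return; }
                    }
                }
                long start = System.currentTimeMillis();
                android.graphics.Canvas canvas = surfaceHolder.lockCanvas();
                if (canvas != null) {
                    try {
                        drawFrame(canvas);                 // the game's actual rendering
                    } finally {
                        surfaceHolder.unlockCanvasAndPost(canvas);
                    }
                }
                long sleep = FRAME_MS - (System.currentTimeMillis() - start);
                if (sleep > 0) {
                    try { Thread.sleep(sleep); } catch (InterruptedException e) { return; }
                }
            }
        }

        synchronized void pauseLoop()  { paused = true; }
        synchronized void resumeLoop() { paused = false; notifyAll(); }
        synchronized void shutdown()   { running = false; paused = false; notifyAll(); }

        private void drawFrame(android.graphics.Canvas canvas) { /* drawing goes here */ }
    }

Call pauseLoop() from the activity's onPause(), resumeLoop() from onResume(), and shutdown() (followed by join()) when the surface is destroyed, so no frames are drawn and no CPU is burned while the game is in the background.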
The people commenting that your game is leaking battery are probably referring to when your application isn't in use. Otherwise it would be weird, because every game on the Android Market drains battery - more or less.
If you're trying to gauge the "improvement over your previous version", I don't think it makes sense to compare to another game! Unless those two games do the exact thing, this is as unscientific as it gets.
Instead, I would grab the previous version of your app from source control, run it, measure it, and then run it with the latest code and compare it again.
To compare, you could for example use the command line tool "top" (definitely available in busybox if your phone is rooted, not sure if it comes with a stock phone. Probably not). That shows you the CPU usage of your process in percent.
There is a battery profiler developed by Qualcomm called Trepn Profiler: https://developer.qualcomm.com/mobile-development/increase-app-performance/trepn-profiler
how I can use the data coming from Traceview (i.e. CPU time in ms spent on each frame of the game) to determine battery usage (if this is at all possible)
In theory it would be possible to extrapolate the battery usage of your app by looking at the power consumption on a frame-by-frame basis. The best way to accomplish this would be to evaluate the power consumption of the CPU (only) for a given period (say two seconds) while your app is running its most CPU-intensive operation (GPU power usage could be gleaned this way as well), while recording Traceview data (such as frames per second or flops per second), giving you the traffic across the CPU/GPU for a given millisecond. Using this data you could accurately calculate the average peak power consumption for your app by running the above test a few times.
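As a rough back-of-the-envelope illustration of that extrapolation (the power figure below is an assumption, not a measurement): if the CPU draws roughly 600 mW while your game loop is busy, and Traceview shows about 8 ms of CPU time per frame at 60 FPS, then the CPU cost of the game is roughly 0.6 W x 0.008 s x 60 frames ≈ 0.29 J per second of play, i.e. on the order of 290 mW attributable to the CPU alone.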
Here is why I say it is theory only: There are many variables to consider:
The number and nature of other processes running at the time of the above test (processor intensive)
Method of evaluating the power draw across the CPU/GPU (while tools such as PowerTutor are effective for evaluating power consumption, in this case the evaluation would not be as effective because of the need to collect time-stamped power usage data. Additionally, just about any method of collecting power data introduces some overhead (Schrödinger's cat), but how much that matters depends on the level of accuracy you require/desire.)
The reason for the power consumption information - if you are looking to define the power consumption of your app for testing or beta testing/evaluation purposes, then it is a feasible task with some determination and the proper tools. If you are looking to gain usable information about power consumption "in the wild", on users' devices, then I would say it is plausible but not realistic. The variables involved would make even the most determined and dedicated researcher faint. You would have to test on every possible combination of device/Android version in the wild. Additionally, the combinations of running processes/threads and installed apps are likely incalculable.
I hope this provides some insight to your question, although I may have gone deeper into it than needed.
-Steve
For anyone looking, one resource we've been using that is extremely helpful is a free app from AT&T called ARO.
Give it a look: ARO
It has helped me before and I don't see it mentioned very often so thought I'd drop it here for anyone looking.
"I know I can look at the %s of
different apps through the settings,
but this is again unscientific, as the
figure I get from this also depends on
what's happening in all of the other
apps."
The first thing I'd do is hunt for an app already out there that has a known, consistent battery usage, and then you can just use that as a reference to determine your app's usage.
If there is no such app, you will have to hope for an answer from someone else... and if you are successful making such an app, I would suggest selling your new "battery usage reference" app so that other programmers could use it. :)
I know this question is old and it's late, but for anyone who comes here looking for a solution I suggest you take a look at JouleUnit test:
http://dnlkntt.wordpress.com/2013/09/28/how-to-test-energy-consumption-on-android-devices/
It integrates into eclipse and gives you a great amount of detail about how much battery your app is consuming.
I know of three options that can help you get a scientific measurement:
Use hardware specifically built for this: the Monsoon High Voltage Power Monitor.
https://msoon.github.io/powermonitor/PowerTool/doc/Power%20Monitor%20Manual.pdf
Download and install Trepn Profiler (a tool from Qualcomm) on your phone. You won't need a computer for reporting; reports are live and realtime on the phone. You can download Trepn Profiler from the following link: https://play.google.com/store/apps/details?id=com.quicinc.trepn&hl=en_US
Please take note that on recent phones (Android 6+) it works in estimation mode. If you need accurate numbers, you need one of a select list of supported devices. Check the following link for the list:
https://developer.qualcomm.com/software/trepn-power-profiler/faq
You can profile apps separately, and the whole system.
Use Batterystats and Battery Historian from Google.
https://developer.android.com/studio/profile/battery-historian
I'm writing a large-scale financial application, mostly in Java. Now, to get some data, I need to write a small script (<200 LOC) to download CSV files (over 20,000 of them) and store them to disk. I need this to be fast, but a few minutes doesn't make a difference to me. I was planning to write it in Java, which isn't very hard, but I would be done a lot faster if I wrote it in Ruby, so I was wondering if there would be a large difference in speed between Ruby (or JRuby) and Java. The 20,000 files are each about half a megabyte, and the server I'm downloading from isn't keen to give away data (it's completely legal, don't worry about that), so my application has to randomly sleep in between requests, and, if the website denies a request, it has to sleep for 3 minutes.
Recommendations for any other easier-than-Java languages are welcome.
Use whatever makes you comfortable. Language implementation speed probably won't be an issue there, network speed and the sleeps you have to put in will be a bottleneck anyway.
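For what it's worth, here is a rough Java sketch of the kind of loop described in the question - sequential downloads with a random pause between requests and a three-minute back-off when the server refuses one. The urls.txt input file, the sleep ranges, and the use of HttpURLConnection are my assumptions, not anything specified above:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.*;
    import java.util.List;
    import java.util.Random;

    public class CsvFetcher {
        public static void main(String[] args) throws Exception {
            Random random = new Random();
            Files.createDirectories(Paths.get("data"));
            List<String> urls = Files.readAllLines(Paths.get("urls.txt")); // one URL per line
            for (String u : urls) {
                Path target = Paths.get("data", u.substring(u.lastIndexOf('/') + 1));
                while (true) {
                    HttpURLConnection conn = (HttpURLConnection) new URL(u).openConnection();
                    if (conn.getResponseCode() == 200) {
                        try (InputStream in = conn.getInputStream()) {
                            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
                        }
                        break;
                    }
                    conn.disconnect();
                    Thread.sleep(3 * 60 * 1000L);            // request denied: wait 3 minutes
                }
                Thread.sleep(500 + random.nextInt(2000));    // random pause between requests
            }
        }
    }

Almost all of the run time here is network I/O and deliberate sleeping, which is why the language choice barely matters.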
Sounds like your app will be I/O bound, so the speed of the language is not terribly important.
In a language like Ruby or Python, I would expect this to be more like 20 LOC or less. Especially since you have a limited request rate, there is no point using simultaneous connections to try to speed things up.
If you have a bunch of machines with different IP addresses (or one machine with several external addresses), you could split the job across those to speed things up, since the rate limiting is usually by IP address.
Where do your URLs come from?
We are doing some Java stress runs (involving network I/O). Initially things are all fine and the system responds very fast (average latency in the test is 2 ms). But hours later, when I redo the same test, I observe that the performance goes down (20-60 ms). It's the same JAR files, the same JVM, and the same LAN over which the stress is running. I don't understand the reason for this behavior.
The LAN is 1 Gbps, and for the stress requirements I'm sure we are not using all of it.
So my questions:
Could it be because of some switches in the LAN?
Does the machine slow down after some time? (The machines were restarted about 6 months back, well before the stress runs started; they are RHEL5, 64-bit quad-core Xeons.)
What is the general way to debug such issues?
A few questions...
How much of the environment is under your control and are you putting any measures in place to ensure it's consistent for each run? i.e. are you sharing the network with other systems, is the machine you're using being used solely for your stress testing?
The way I'd look at this is to start gathering details on what your machine and code are up to. That means using perfmon (Windows) or sar (Unix) to find out what the OS and hardware are doing, and getting a profiler attached to make sure your code is doing the same thing and to help pinpoint where the bottleneck is occurring from a code perspective.
Nothing terribly detailed but something I hope that will help get you started.
The general way is "measure everything". This, in particular, might mean:
Ensure the time on all servers is the same (use NTP or something similar);
Measure how long it took to generate the request (what if the request generator has a bug?);
Measure when the request left the client machine(s), or at least how long the I/O took. Sometimes it is enough to know the average time across many requests.
Measure when the request arrived.
Measure how long it took to generate a response.
Measure how long it took to send the response.
You can probably start from the 5th element, as this is (you believe) your critical chain. But it is best to log as much as you can - as according to what you've said yourself, it takes days to produce different results.
If you don't want to modify your code, look for cases where you can sniff data without intervening (e.g. define a servlet filter in your web.xml).
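If the system under test is servlet-based, a minimal sketch of such a non-intrusive timing filter (the class name and the System.out logging are mine; declare it in web.xml as usual) could look like this:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    // Logs the server-side processing time of every request it wraps.
    public class TimingFilter implements Filter {
        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            long start = System.nanoTime();
            try {
                chain.doFilter(req, resp);
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1000000;
                System.out.println("request processed in " + elapsedMs + " ms");
            }
        }

        public void destroy() { }
    }

Comparing these numbers early in a run against numbers taken hours later tells you whether the slowdown is inside the server at all, or somewhere on the network.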
Is Java a suitable alternative to C / C++ for realtime audio processing?
I am considering an app with ~100 (at max) tracks of audio with delay lines (30 s @ 48 kHz), filtering (512-point FIR?), and other DSP-type operations occurring on each track simultaneously.
The operations would be converted and performed in floating point.
The system would probably be a quad core 3GHz with 4GB RAM, running Ubuntu.
I have seen articles about Java being much faster than it used to be, coming close to C/C++, and now having realtime extensions as well. Is this reality? Does it require hard-core coding and tuning to achieve the 50-100% of C's performance that some are spec'ing?
I am really looking for a sense if this is possible and a heads up for any gotchas.
For an audio application you often have only very small parts of code where most of the time is spent.
In Java, you can always use JNI (the Java Native Interface) and move your computationally heavy code into a C module (or assembly using SSE if you really need the power). So I'd say use Java and get your code working. If it turns out that you don't meet your performance goal, use JNI.
90% of the code will most likely be glue code and application stuff anyway. But keep in mind that you lose some of the cross-platform features that way. If you can live with that, JNI will always leave the door open for native-code performance.
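As a rough illustration of that split (the names here are hypothetical), the Java side is little more than a native method declaration plus a library load; the FIR inner loop would then live in C behind it:

    // Hypothetical wrapper: the heavy DSP kernel is implemented in C and loaded via JNI.
    public class NativeDsp {
        static {
            System.loadLibrary("dsp");   // loads libdsp.so / dsp.dll built from the C side
        }

        // Implemented in C; applies an FIR filter to 'input' and writes the result into 'output'.
        public static native void fir(float[] coefficients, float[] input, float[] output);
    }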
Java is fine for many audio applications. Contrary to some of the other posters, I find Java audio a joy to work with. Compare the API and resources available to you to the horrendous, barely documented mindf*k that is CoreAudio and you'll be a believer. Java audio suffers from some latency issues, though for many apps this is irrelevant, and from a lack of codecs. There are also plenty of people who've never bothered to take the time to write good audio playback engines (hint: never close a SourceDataLine, instead write zeros to it), and who subsequently blame Java for their problems. From an API point of view, Java audio is very straightforward, very easy to use, and there is lots and lots of guidance over at jsresources.org.
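To illustrate the "write zeros instead of closing the line" hint, a bare-bones javax.sound.sampled playback loop might look roughly like this (the format, buffer sizes, and nextAudioChunk are arbitrary placeholders of mine):

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.SourceDataLine;

    public class PlaybackLoop {
        public static void main(String[] args) throws Exception {
            AudioFormat format = new AudioFormat(44100f, 16, 2, true, false); // 44.1 kHz, 16-bit stereo
            SourceDataLine line = AudioSystem.getSourceDataLine(format);
            line.open(format, 8192);
            line.start();

            byte[] buffer = new byte[4096];
            byte[] silence = new byte[4096];            // stays all zeros
            while (true) {
                int read = nextAudioChunk(buffer);      // your mixer/decoder fills the buffer
                if (read > 0) {
                    line.write(buffer, 0, read);
                } else {
                    // Nothing to play right now: keep the line open and feed it silence
                    // rather than closing it, so playback resumes without a glitch.
                    line.write(silence, 0, silence.length);
                }
            }
        }

        private static int nextAudioChunk(byte[] buffer) {
            return 0; // placeholder for the real audio source
        }
    }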
Sure, why not?
The crucial questions (independent of language, this is from queueing theory) are:
what is the maximum throughput you need to handle (you've specified 100 x 48kHz, is that mono or stereo, how many bits equivalent at that frequency?)
can your Java routines keep up with this rate on the average?
what is the maximum permissible latency?
If your program can keep up with the throughput on the average, and you have enough room for latency, then you should be able to use queues for inputs and outputs, and the only parts of the program that are critical for timing are the pieces that put the data into the input queue and take it out of the output queue and send it to a DAC/speaker/whatever.
Delay lines have a low computational load; you just need enough memory (+ memory bandwidth)... in fact you should probably just use the input/output queues for it, i.e. start putting data into the input queue immediately, and start taking data out of the output queue 30 s later. (If it's not there, your program is too slow.)
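A sketch of that queue-based layout (queue capacities and block sizes are arbitrary here): the capture thread puts blocks on an input queue, the processing thread moves them to an output queue, and the 30 s delay line falls out of simply not draining the output queue for the first 30 seconds.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class TrackPipeline {
        // Each element is one block of samples for this track.
        private final BlockingQueue<float[]> input = new ArrayBlockingQueue<float[]>(1024);
        private final BlockingQueue<float[]> output = new ArrayBlockingQueue<float[]>(1024);

        // Called by the capture thread as samples arrive.
        public void offerInput(float[] block) throws InterruptedException {
            input.put(block);
        }

        // Called by the playback thread, starting ~30 s after capture begins.
        public float[] takeOutput() throws InterruptedException {
            return output.take();
        }

        // Processing thread: only has to keep up with the average input rate.
        public void processLoop() throws InterruptedException {
            while (true) {
                float[] block = input.take();
                output.put(applyFir(block));   // the expensive part
            }
        }

        private float[] applyFir(float[] block) {
            return block; // placeholder for the real 512-tap FIR
        }
    }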
FIRs are more expensive, that's probably going to be the bottleneck (& what you'd want to optimize) unless you have some other ugly nasty operation in mind.
I think latency will be your major problem - it is already quite hard to maintain low latency in C/C++ on modern OSes, and Java surely adds to the problem (garbage collector). The general design for "real-time" audio processing is to have your processing threads running at real-time scheduling (SCHED_FIFO on Linux kernels, the equivalent on other OSes), and those threads should never block. This means no system calls, no malloc, no I/O of course, etc. Even paging is a problem (getting a page from disk to memory can easily take several ms), so you should lock some pages to be sure they are never swapped out.
You may be able to do those things in Java, but Java makes it more complicated, not easier. I would look into a mixed design, where the core would be in C and the rest (GUI, etc.) would be in Java if you want.
One thing I didn't see in your question is whether you need to play out these processed samples or if you're doing something else with them (encoding them into a file, for example). I'd be more worried about the state of Java's sound engine than in how fast the JVM can crunch samples.
I pushed pretty hard on javax.sound.sampled a few years back and came away deeply unimpressed -- it doesn't compare with equivalent frameworks like OpenAL or Mac/iPhone's Core Audio (both of which I've used at a similar level of intensity). javax.sound.sampled requires you to push your samples into an opaque buffer of unknown duration, which makes synchronization nigh impossible. It's also poorly documented (very hard to find examples of streaming indeterminate-length audio over a Line as opposed to the trivial examples of in-memory Clips), has unimplemented methods (DataLine.getLevel()... whose non-implementation isn't even documented), and to top it off, I believe Sun laid off the last JavaSound engineer years ago.
If I had to use a Java engine for sound mixing and output, I'd probably try to use the JOAL bindings to OpenAL as a first choice, since I'd at least know the engine was currently supported and capable of very low-latency. Though I suspect in the long run that Nils is correct and you'll end up using JNI to call the native sound API.
Yes, Java is great for audio applications. You can use Java and access audio layers via ASIO and have really low latency (64 samples of latency, which is next to nothing) on the Windows platform. It means you will have lip-sync on video/movies. There is more latency on Mac, as there is no ASIO to "shortcut" the combination of OS X and "Java on top", but it is still OK. Linux also, but I am more ignorant there. See soundpimp.com for a practical (and world-first) example of Java and ASIO working in perfect harmony. Also see the NRK Radio&tv Android app containing a software MP3 decoder (written in Java). You can do most audio things with Java, and then use a native layer if it is extra time-critical.
Check out a library called Jsyn.
http://www.softsynth.com/jsyn/
Why not spend a day and write a simple Java application that does minimal processing, and validate whether the performance is adequate?
From http://www.jsresources.org/faq_performance.html#java_slow
Let's collect some eternal wisdom:
The earth is flat.
and, not to forget: Java is slow.
As several applications prove (see links section), Java is sufficient to build audio editors, multitrack recording systems and MIDI processing software. Try it out!
I'm developing a Java application that streams music via HTTP, and one problem I've come up against is that while the app is reading the audio file from disk and sending it to the client it usually maxes out the CPU at 90-100% (which can cause users problems running other apps).
Is it possible to control the thread doing this work to use less CPU, or does this need to be controlled by the OS? Are there any techniques for managing how intensive your application is at present?
I know you can start threads with a high/low priority, but this doesn't seem to have any effect for me in this scenario.
(I can't get my head past "I've asked the computer to do something, so it's obviously going to do it as fast as it can...")
Thanks!
rod.
That task (reading a file from the disk and sending it via HTTP) should not use any significant amount of CPU, especially at the bitrates required for music streaming (unless you're talking about multi-channel uncompressed PCM or something like that, but even then it should be I/O-bound and not use a lot of CPU).
You're probably doing the reading/writing in a very inefficient way. Do you read/write each byte separately or are you using some kind of buffer?
I would check how much buffering you are using. If you read/write one byte at a time you will consume a lot of CPU. However, if you are reading/writing blocks of, say, 4 kB it shouldn't use much CPU at all. If your network is the internet, your CPU shouldn't be much over 10% for a single client.
One approximation for the buffer size is the bandwidth * delay. e.g. if you expect users to stream at 500 KB/s and there is a network latency of up to 0.1 sec, then the buffer size should be around 50 KB.
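For example, a hedged sketch of the kind of block-wise copy that keeps the CPU nearly idle (the 4 kB figure follows the rule of thumb above; the class and method names are mine):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class StreamUtil {
        // Copies the audio file to the client in 4 kB chunks instead of byte by byte.
        public static void streamFile(String path, OutputStream clientOut) throws IOException {
            byte[] buffer = new byte[4096];
            InputStream in = new FileInputStream(path);
            try {
                int read;
                while ((read = in.read(buffer)) != -1) {
                    clientOut.write(buffer, 0, read);
                }
                clientOut.flush();
            } finally {
                in.close();
            }
        }
    }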
You can lower its priority using methods in Thread (via Thread.currentThread() if necessary).
You can also put delays in its processing loop (Thread.sleep()), as in the sketch below.
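For instance, a minimal illustration of both points (ThrottledStreamer and sendNextChunk are made-up names):

    public class ThrottledStreamer implements Runnable {
        private volatile boolean streaming = true;

        public void run() {
            // Lower this thread's priority so the OS favours interactive apps.
            Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
            while (streaming) {
                sendNextChunk();              // one iteration of the streaming loop
                try {
                    Thread.sleep(5);          // small pause so other apps get CPU time
                } catch (InterruptedException e) {
                    return;
                }
            }
        }

        public void stop() { streaming = false; }

        private void sendNextChunk() { /* read from disk, write to the client */ }
    }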
Other than that, let the O/S take care of it. If your program can use 100% CPU, and nothing else needs the CPU your app might as well use it rather than letting the O/S idle task have it.
It's also true that streaming data should be I/O bound, so you should definitely review what's being done between reading the data and sending it. Are you reading/sending byte by byte, unbuffered, for example?
EDIT: In response to marr75's comment, I am absolutely not advocating that you write poor, inefficient code which wastes CPU resources - There is an article on my web site which clearly conveys what I think about that mind-set. Rather, what I am saying is that if your code legitimately needs the CPU, and you've prioritized it to behave nicely if the user wants to do other things, then there is no point at all in artificially delaying the outcome just to avoid pegging the CPU - that only does the user the disservice of making them wait longer for the end result, which they presumably want as quickly as possible.
Do you have one or more of:
Software RAID
Compressed folder
Intrusive virus checker
Loopback file system
I don't think you can lower the priority without losing functionality (streaming music). Your program gets this much CPU from the OS because it needs it. It's not like the OS is giving CPU time away for no reason or because "it's in the mood for it".
If you think you can do the task without using that much CPU, profile your app, find out where the high CPU utilization takes place, and then try to improve your code.
I think you are doing the streaming in an inefficient way, but I will say that streaming CAN be a CPU-intensive task.
I repeat: don't think about reducing the CPU utilization by lowering the priority of the process or telling the OS "don't give that much CPU time to this process". That's the wrong intuition, in my eyes. Reduce the CPU utilization by improving the algorithms and code after profiling.
A good start in profiling java is this article: http://www.ibm.com/developerworks/edu/os-dw-os-ecl-tptp.html
In addition to the information given above: the JVM is free in how it uses OS threads. The Thread in your Java application might run in a separate OS thread, or it might share that thread with other Threads. Check the documentation of the JVM you are using for additional information.
Ok, thanks for the advice guys! Looks like I'm just going to have to look into trying to improve the efficiency of the way my app is streaming (though not sure this is going to go far as I'm basically just reading the file from disk and writing it to the client...).
VisualVM is very easy to use to find out where your CPU time is being spent for Java applications, and it is included in the latest versions of the JDK (named jvisualvm.exe on Windows)
Following up on my "well thought out buffers" comment, a good rule of thumb for TCP buffering is:
buffer size = 2 * bandwidth * delay
So if you want to stream 214kbps music (around 27kB/s) and have, let's say 60ms of latency, you're looking at 3.24 kilobytes, and rounding off to a nice 4kB buffer will work very well for you on a wide range of systems.
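If you'd rather compute it in code than by hand, it's a one-liner (the numbers in main mirror the example above):

    public class BufferSizing {
        // buffer size = 2 * bandwidth * delay
        static int tcpBufferBytes(double bytesPerSecond, double delaySeconds) {
            return (int) Math.ceil(2 * bytesPerSecond * delaySeconds);
        }

        public static void main(String[] args) {
            // ~27 kB/s at 60 ms of latency -> 3240 bytes, which you'd round up to a 4 kB buffer.
            System.out.println(tcpBufferBytes(27000, 0.060) + " bytes");
        }
    }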