High-performance file I/O in Android - Java

I'm creating an app which communicates with an ECG monitor. Data is read at a rate of 250 samples per second. Each packet from the ECG monitor contains 80 bytes, and packets arrive 40 times per second.
I've tried using a RandomAccessFile, but packets were lost in both synchronous RandomAccessFile(outputFile, "rws") and asynchronous RandomAccessFile(outputFile, "rw") mode.
In a recent experiment I've tried using a MappedByteBuffer. This should be extremely fast, but when I create the buffer I have to specify a size, map(FileChannel.MapMode.READ_WRITE, 0, 10485760) for a 10 MB buffer, and this results in a file that is always 10 MB in size. Is it possible to use a MappedByteBuffer where the file size is only the actual amount of data stored?
Or is there another way to achieve this? Is it naive to write to a file this often?
On a side note, this wasn't an issue at all on iOS; it can be achieved with no buffering at all.
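One possible workaround (a sketch only, not something from the original post) is to map a generous region, track how many bytes were actually written, and truncate the file to that length when recording stops. The method below assumes the 10 MB figure from the question and an outputFile supplied by the caller:
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: map a generous region up front, then shrink the file to the bytes actually written.
static void recordToMappedFile(File outputFile, Iterable<byte[]> packets) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(outputFile, "rw");
         FileChannel channel = raf.getChannel()) {
        MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 10 * 1024 * 1024);
        for (byte[] packet : packets) {      // each packet is 80 bytes, arriving 40 times per second
            buffer.put(packet);
        }
        long bytesWritten = buffer.position();
        buffer.force();                      // flush the mapped changes to storage
        channel.truncate(bytesWritten);      // cut the 10 MB file down to its real length
        // Note: truncating while the mapping is still live works on Linux/Android,
        // but is not guaranteed on every platform.
    }
}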

Related

Memory-friendly way of writing an InputStream to a File

I'm trying to write a bulk downloader for images. Getting the InputStream from a URLConnection is easy enough, but downloading all the files takes a while. Using multithreading certainly speeds it up, but having a lot of threads download files could use a lot of memory. Here's what I found:
Let in be the InputStream, file the target File and fos a FileOutputStream to file
The simple way
fos.write(in.readAllBytes());
Reads the whole file and writes the returned byte[]. Probably usable for getting a website's source, but no good for bigger files such as images.
Writing chunks
byte[] buffer = new byte[bufsize];
int read;
while ((read = in.read(buffer, 0, bufsize)) >= 0) {
fos.write(buffer, 0, read);
}
Seems better to me.
in.transferTo(fos)
in.transferTo(fos);
Writes chunks internally, as seen above.
Files.copy()
Files.copy(in, file.toPath(), StandardCopyOption.REPLACE_EXISTING);
Appears to use native implementations.
Which one of these should I use to minimize memory usage when done dozens of times in parallel?
This is a small project for fun; external libraries are overkill for it, IMO. Also, I can't use ImageIO, since it can't handle webms, some pngs/jpgs, and animated gifs.
EDIT:
This question was based on the assumption that concurrent writing is possible. However, it doesn't seem like that is the case. I'll probably get the image links concurrently and then download them one after another. Thanks for the answers anyways!
The short answer is: from the memory usage perspective, the best solution is to use the version which reads and stores data in chunks.
The buffer size should basically be chosen taking into account the number of simultaneous downloads, available memory, download speed, and the efficiency of the target drive in terms of data transfer rate and IOPS.
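For reference, here is the chunked variant from the question wrapped up with the buffer size as an explicit parameter (the 64 KB value mentioned in the comment is only an illustration, not a tuned recommendation):
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copies a stream in fixed-size chunks; memory use stays at roughly one buffer per download.
// Call e.g. with bufferSize = 64 * 1024 and adjust after measuring.
static long copyInChunks(InputStream in, OutputStream out, int bufferSize) throws IOException {
    byte[] buffer = new byte[bufferSize];
    long total = 0;
    int read;
    while ((read = in.read(buffer, 0, buffer.length)) >= 0) {
        out.write(buffer, 0, read);
        total += read;
    }
    return total;
}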
The long answer is that concurrent download of files doesn't necessarily mean the download will be faster.
Whether simultaneous downloads actually speed up the overall download time mostly depends on:
the number of hosts you're downloading from
the internet connection speed of the host you're downloading from, limited by the speed of that host's network adapter
the speed of your own internet connection, limited by the speed of your network adapter
the IOPS of the storage of the host you're downloading from
the IOPS of the storage you're downloading onto
the transfer rate of the storage on the host you're downloading from
the transfer rate of the storage you're downloading onto
Performance of the local and remote hosts. For instance, some older or low-cost Android devices could be limited by CPU speed.
For instance, if the source host has a single HDD and a single connection already saturates the link, then it is useless to use multiple connections, as that would make the download slower by adding the overhead of switching between transferred files.
It could also be that the source host imposes a speed limit per connection, in which case multiple connections could speed things up.
An HDD typically offers around 80 IOPS and a transfer rate of about 80 MB/s, and either can limit the download/upload speed. So in practice you can't read or write more than roughly 80 files per second on such a disk, nor exceed the transfer limit of around 80 MB/s; of course this depends heavily on the disk model.
An SSD typically offers tens of thousands of IOPS and transfer rates above 400 MB/s, so the limits are much higher, but for really fast internet connections they still matter.
I found a time-based (i.e. performance) comparison on the internet here: journaldev.com/861/java-copy-file
However, if you are focused on memory, you could try to measure the memory consumption yourself using something like the code proposed by pasha701 here:
Runtime runtime = Runtime.getRuntime();
long usedMemoryBefore = runtime.totalMemory() - runtime.freeMemory();
System.out.println("Used Memory before" + usedMemoryBefore);
// copy file method here
long usedMemoryAfter = runtime.totalMemory() - runtime.freeMemory();
System.out.println("Memory increased:" + (usedMemoryAfter-usedMemoryBefore));
Note that the returned values are in bytes; divide by 1,000,000 to get values in MB.

How to speed up data transfer over socket?

Currently I am using this code on both the server and client side. The client is an Android device.
BufferedOutputStream os = new BufferedOutputStream(socket.getOutputStream(),10000000);
BufferedInputStream sin = new BufferedInputStream(socket.getInputStream(),10000000);
os.write("10000000\n".getBytes());
os.flush();
for (int i =0;i<10000000;i++){
os.write((sampleRead[i]+" ").getBytes());
}
os.flush();
The problem is that this code takes about 80 seconds to transfer data from the Android client to the server, while it takes only 8 seconds to transfer the data back from the server to the client. The code is the same on both sides and the buffer is the same too. I also tried different buffer sizes, but the problem is with this segment:
for (int i =0;i<10000000;i++){
os.write((sampleRead[i]+" ").getBytes());
}
The buffering takes most of the time, while the actual transfer takes only about 6-7 seconds on a 150 Mbps hotspot connection. What could be the problem and how can I solve it?
First of all, as a commenter has already noted, using a monstrously large buffer is likely to be counterproductive. Once your stream buffer is bigger than the size of a network packet, app-side buffering loses its effectiveness. (The data in your "big" buffer needs to be split into packet-sized chunks by the TCP/IP stack before it goes onto the network.) Indeed, if the app-side buffer is really large, you may find that your data gets stuck in the buffer for a long time waiting for the buffer to fill ... while the network is effectively idle.
(The Buffered... readers, writers and streams are primarily designed to avoid lots of syscalls that transfer tiny amounts of data. Above 10K or so, the buffering doesn't help performance much.)
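As a rough illustration of that point (the 16 KB figure below is just an example, not a tuned value):
// A buffer in the tens of kilobytes is plenty; the TCP/IP stack splits it into packets anyway.
BufferedOutputStream os = new BufferedOutputStream(socket.getOutputStream(), 16 * 1024);
BufferedInputStream sin = new BufferedInputStream(socket.getInputStream(), 16 * 1024);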
The other thing to note is that in a lot of OS environments, the network throughput is actually limited by virtualization and default network stack tuning parameters. To get better throughput, you may need to tune at the OS level.
Finally, if your data is going over a network path that is congested, has high end-to-end latency, or includes links with a constrained data rate, then you are unlikely to get fast transfers no matter how you tune things.
(Compression might help ... if you can afford the CPU overhead at both ends ... but some data links already do compression transparently.)
You could compress the data transfer; it will save a lot of memory, and transferring a compressed stream of data is cheaper. For that you need to implement compression logic on the client side and decompression logic on the server side; see GZIPInputStream. Also, try reducing the buffer size: it is huge for a mobile device.
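A minimal sketch of that idea, assuming the same socket variable as in the question and a placeholder payload byte array (compress on the sender, decompress on the receiver):
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Client side: everything written to 'os' is compressed before it hits the network.
GZIPOutputStream os = new GZIPOutputStream(socket.getOutputStream(), 16 * 1024);
os.write(payload);   // 'payload' stands in for the encoded samples
os.finish();         // flushes the final compressed block to the socket

// Server side: reads are decompressed transparently.
GZIPInputStream in = new GZIPInputStream(socket.getInputStream(), 16 * 1024);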

Android Bluetooth OutputStream slow write

I'm building a client-server Android application. One app is installed on Google Glass and sends video frames captured by the camera over a Bluetooth connection. Another application is installed on an Android device and reads the video frames. When reading from the stream with len = mmInStream.read(buffer, 0, bufferSize); it seems like the maximum byte count read is 990. It doesn't matter if I set bufferSize to an extremely large number.
I'm sending a 320x240 image frame with 4 channels, so this is a total of 307200 bytes. When reading the entire image, it is read in chunks of 990 bytes, which I think affects the speed of my app. It takes 1-3 seconds to read all the data. Is there a way to change the maximum bytes read? Is this an application setting or controlled by the Android OS? I'm not sure if reading all data at once would affect performance, but I am just curious.
UPDATE:
I notice the same thing when sending the data from Google Glass using OutputStream. It takes about 2 seconds to write to the OutputStream. Is this normal performance for a Bluetooth connection? Is there a better way to transmit camera-captured frames between two devices?
UPDATE 2:
I think the delay is in the write speed. Writing the data to the stream takes about 2 seconds. When the other app is trying to read the data, it probably waits for the complete data to be written to the stream. Still not sure if this is as expected or can still be improved.
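For context, accumulating those small reads into one complete frame usually looks something like the sketch below (the 307200-byte frame size comes from the question; the rest is illustrative):
// Keep reading until a whole 320x240x4 frame (307200 bytes) has arrived,
// regardless of how many bytes each read() call returns (here ~990 at a time).
byte[] frame = new byte[307200];
int total = 0;
while (total < frame.length) {
    int len = mmInStream.read(frame, total, frame.length - total);
    if (len < 0) {
        break;   // stream closed before the frame was complete
    }
    total += len;
}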

GZIP compressing HTTP response before sending to the client

I have gzipped the response using a filter. The data has been compressed from 50 MB to 5 MB; however, it didn't save much time. The time taken has gone down from 12 seconds to 10 seconds. Is there anything else that can be done to reduce the time?
Initially, the data transfer over the network took 9 seconds; now it takes about 6 seconds after compression plus roughly 1 second to decompress.
What else can be done?
For the filter itself, the possible measures are few (a sketch follows the list below):
There are different compression levels; the more compression, the slower. The default level of GZIPOutputStream should be fast enough.
GZIPOutputStream has constructors that take an internal buffer size.
Then there is buffered streaming, rather than byte-wise int read().
Code review for plausibility: the original Content-Length header must be removed.
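The sketch below combines the first two points: an explicit internal buffer size and a faster compression level (the 8 KB buffer, the class name, and the servlet-style response variable are assumptions for illustration):
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.Deflater;
import java.util.zip.GZIPOutputStream;

// GZIPOutputStream with an explicit internal buffer and a faster compression level.
class FastGzipOutputStream extends GZIPOutputStream {
    FastGzipOutputStream(OutputStream out) throws IOException {
        super(out, 8 * 1024);              // second constructor argument = internal buffer size
        def.setLevel(Deflater.BEST_SPEED); // 'def' is the Deflater inherited from DeflaterOutputStream
    }
}

// In the filter, wrap the response stream once, and remember to drop the original
// Content-Length header:
// OutputStream out = new FastGzipOutputStream(response.getOutputStream());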
For the static content:
.bmp files are a waste of space
.pdf files can be optimized when images repeat, and with respect to fonts
.docx is a zip format, so inner image files might be optimized too
For dynamic content generation:
Fixed documents can be stored pre-compressed (xxxxxx.yyy.gz) with a timestamp, so the generation time falls away. This is only of interest after measuring the real bottleneck, which is likely the network.
The code for delivery should be fast. In general, chain streams: try not to write to a ByteArrayOutputStream, but immediately to a BufferedOutputStream wrapping the original output stream. Check that the buffering is not done twice; some wrapping streams check whether the wrapped stream is already an instance of a buffered stream.
Production environment:
Maybe you even need throttling (slowing down delivery) in order to serve multiple simultaneous requests.
You may need to do the delivery on another server.
Buy more speed from the provider. Ask the provider whether the throughput was too high and whether they slowed things down.

Real time audio processing in Android

I am using AudioRecord.read to capture PCM data into a byte array.
However, I found that I am restricted to initializing the AudioRecord object with a buffer of at least 3904 when the sampling rate is 44100.
Since I need to perform an FFT on the data, I increased the number of samples to 4096.
As a result, the callback runs every 40-60 ms with setPositionNotificationPeriod set to 500; decreasing it further doesn't change anything.
I'm wondering whether this is the fastest callback time with the configuration below:
Sampling Rate: 44100
Channel: Mono
Encoding: PCM 16 BIT
BufferSize: 4096
(I'm not sure if it is 4096 or 2048, since I read 4096 bytes every time and that only fills 2048 two-byte samples.)
Even if 40-60 ms is acceptable, I then perform the FFT, which blocks each callback for around 200-300 ms. And there is still a lot of noise affecting the accuracy.
I'm using these source code: FFT in Java and Complex class
Is there any other option that is fast, reliable, and consumes less memory for FFT processing?
I found that the above classes allocate too many objects and produce loads of garbage collection messages.
To conclude, I have 3 questions:
Is the initial bufferSize equal to the amount of data I can read from the .read method?
Is 40-60 ms the limit for capturing audio data at a 44100 sampling rate?
Could you suggest an FFT library that gives better performance? (Would it be better to use a C library?)
Sorry for my bad English, and thank you for spending your time on my question.
P.S. I tried it on iOS and it can take just 512 samples at a 44100 sampling rate, so every callback takes only around 10 ms.
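For reference, a bare-bones version of the configuration described in the question (the buffer sizes and the notification period below are illustrative, not recommendations):
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Minimum buffer the device accepts for 44.1 kHz mono 16-bit PCM (3904 bytes in the question).
int minBuf = AudioRecord.getMinBufferSize(44100,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
        Math.max(minBuf, 4096));

recorder.setPositionNotificationPeriod(2048);   // the period is measured in frames, not bytes
recorder.startRecording();

byte[] audio = new byte[4096];                  // 4096 bytes = 2048 16-bit mono samples
int read = recorder.read(audio, 0, audio.length);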
Regarding question #3: Maybe not as fast as a native library, but I've started using these classes, and they seem to be fine for real-time work (although I am reading from files, not the microphone): FFTPack.
The most common native library is KissFFT, which you can find compiled for Android as part of libGDX.
