I'm designing a server-side application that takes an image from a user, processes it, and sends it back over the network. Since the network connection might be quite slow, I'd like to speed things up by starting to process parts of the image while it is still being sent over the network and send parts of the processed image back to the client while other parts are still being processed.
Is this possible, preferably using the javax.imageio classes?
EDIT: I am mostly interested in writing PNG files. Wikipedia says: "IDAT contains the image, which may be split among multiple IDAT chunks. Doing so increases filesize slightly, but makes it possible to generate a PNG in a streaming manner."
This strongly depends on the encoding of the image. Some image formats require the whole file to be available before you can decode it. Others, like GIF and some PNG encodings (as far as I remember), decode to individual blocks which can then be processed.
You most likely need to write custom decoders, which may be quite a bit of work if you are not intimately familiar with the formats, and you need to support several.
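That said, javax.imageio does give you one hook short of a custom decoder: an ImageReader fires IIOReadUpdateListener callbacks as pixels are decoded, and whether those updates arrive incrementally while bytes are still coming in depends entirely on the format plugin. A rough sketch of the hookup (names illustrative, untested against a slow socket):

    import java.awt.image.BufferedImage;
    import java.io.InputStream;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReader;
    import javax.imageio.event.IIOReadUpdateListener;
    import javax.imageio.stream.ImageInputStream;

    public class ProgressiveRead {
        public static BufferedImage read(InputStream network) throws Exception {
            ImageInputStream iis = ImageIO.createImageInputStream(network);
            ImageReader reader = ImageIO.getImageReaders(iis).next();
            reader.setInput(iis);
            reader.addIIOReadUpdateListener(new IIOReadUpdateListener() {
                public void imageUpdate(ImageReader src, BufferedImage img,
                        int minX, int minY, int w, int h,
                        int periodX, int periodY, int[] bands) {
                    // the region (minX, minY, w, h) has just been decoded;
                    // kick off processing of that part here
                }
                public void passStarted(ImageReader src, BufferedImage img, int pass,
                        int minPass, int maxPass, int minX, int minY,
                        int periodX, int periodY, int[] bands) {}
                public void passComplete(ImageReader src, BufferedImage img) {}
                public void thumbnailPassStarted(ImageReader src, BufferedImage img, int pass,
                        int minPass, int maxPass, int minX, int minY,
                        int periodX, int periodY, int[] bands) {}
                public void thumbnailUpdate(ImageReader src, BufferedImage img,
                        int minX, int minY, int w, int h,
                        int periodX, int periodY, int[] bands) {}
                public void thumbnailPassComplete(ImageReader src, BufferedImage img) {}
            });
            return reader.read(0); // blocks, firing updates as data is decoded
        }
    }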
Perhaps you should work on an upload progress bar instead?
We are constantly transferring gigabytes of compressed Tiffs overseas and it takes a long time for each batch of images to transfer. It is not uncommon for a batch to take over 6 hours to transfer. I would like to reduce the time to transfer a batch of images.
I understand that videos compress really well because most of the time each frame is generally very similar to the one before it and compression algorithms take advantage of that. In our scenario, the images often look similar to one another. Are there any image compression libraries I can use to take advantage of the fact that there is a lot of redundancy across images? Ideally I would want lossless compression.
Would it work if I turned the images into a video before transferring them and then turned them back to images on the other side? If this would work, what libraries would you recommend? I need to be able to call this from Java and preferably run it on Linux, but the library does not need to be written in Java. Windows could also be a possibility.
What I would try first:
1. Start from uncompressed tiffs (otherwise it will be hard to find similarities).
2. tar them together (so they are contained in a single file; this can of course be a specific range of images).
3. Use a compression algorithm of your choice to see which one yields the best results (on the single file).
Easy enough to try out without much effort (see the sketch below). How well it works depends on the source images themselves (and the compression algorithm used).
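A minimal sketch of steps 2-3, assuming Apache Commons Compress on the classpath (plain gzip here; swap in whichever compressor you are testing):

    import java.io.OutputStream;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
    import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
    import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;

    public class TarAndCompress {
        public static void main(String[] args) throws Exception {
            Path dir = Paths.get("uncompressed-tiffs"); // hypothetical input folder
            try (OutputStream fos = Files.newOutputStream(Paths.get("batch.tar.gz"));
                 GzipCompressorOutputStream gz = new GzipCompressorOutputStream(fos);
                 TarArchiveOutputStream tar = new TarArchiveOutputStream(gz);
                 DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.tif")) {
                for (Path p : files) {
                    tar.putArchiveEntry(new TarArchiveEntry(p.toFile(), p.getFileName().toString()));
                    Files.copy(p, tar); // write the file's bytes into the archive
                    tar.closeArchiveEntry();
                }
            }
        }
    }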
Alternative approach if the above does not yield enough results:
1. Make sure you have all uncompressed images.
2. Send over the first image.
3. Do a binary diff (or maybe diff the hexdump) against the next image (sketched below).
4. Send over the diff file and apply it at the receiving end to reconstruct the image.
5. Repeat steps 3-4 for every image.
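A toy sketch of step 3 for equally sized uncompressed files: XOR consecutive images so similar pixels become runs of zeros, then gzip the result. A real diff tool (xdelta, bsdiff) also copes with insertions and deletions; this only handles in-place changes:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.zip.GZIPOutputStream;

    public class XorDelta {
        static void writeDelta(Path previous, Path current, Path deltaOut) throws IOException {
            byte[] a = Files.readAllBytes(previous);
            byte[] b = Files.readAllBytes(current);
            if (a.length != b.length) throw new IOException("images must be the same size");
            byte[] delta = new byte[a.length];
            for (int i = 0; i < a.length; i++) delta[i] = (byte) (a[i] ^ b[i]);
            try (OutputStream out = new GZIPOutputStream(Files.newOutputStream(deltaOut))) {
                out.write(delta); // receiver XORs this against the previous image
            }
        }
    }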
I personally don't think you will easily get good (lossless) results by using video compression algorithms (after all, they are specifically tailored to a different purpose).
I'm developing a download-anything app for Android and it works fine in most cases.
I have come across sites that have URLs with what seems to be a long hash signature at the end. But the standard video app for Android, and my web browser, are able to play it directly, streaming.
I have no clue as to how to stream this to a file (progressive download?), which should be possible. The URL parameter after '?' is used for something. As Jessica pointed out, the URL below is probably used for RTMP streams with rtmp://....
URL example (host domain edited out):
http://blush.im.54ca3830.919727.x.yesitisporn.com/videos/3gp/d/b/f/filthysite.com_dbf7f0a9c3913d4d0e09a36fe8ab3aba.mp4?e=1348368010&ri=1024&rs=85&h=c81c6707b13714ac65b651ba2939d94a
In the URL above there is a link to an mp4 video file. Trying to download it with the shorter URL http://blush.im.54ca3830.919727.x.yesitisporn.com/videos/3gp/d/b/f/filthysite.com_dbf7f0a9c3913d4d0e09a36fe8ab3aba.mp4 does not work; it returns an empty document.
Since popular video apps and browsers pick up these types of HTTP links just fine for playback, there should be a standard way of getting the byte stream and writing it to a file. Thanks for any help!
In response to the question as originally posed:
It is quite common to add URL parameters, splitting the URL from the parameters with a question mark and separating the parameters with ampersands. Take the substring of everything up to the first non-escaped question mark in the URL, if one is present; otherwise use the entire string.
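A minimal sketch of that substring logic (ignoring escaping for brevity):

    // Everything up to the first '?' if present, otherwise the whole URL.
    static String stripQuery(String url) {
        int q = url.indexOf('?');
        return q >= 0 ? url.substring(0, q) : url;
    }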
Based on new feedback:
Like I said in my comment, and as confirmed by your tests without the parameters, I think you're barking up the wrong tree by trying to change the URL. I suspect the reason you can't save these specific streams is that something about the file format or server configuration differs from the ones that work. In particular, my first thought is that those URLs may be served by a streaming server (example: Icecast), not a normal file-based HTTP server.

Advantages of a streaming server include serving different bandwidth versions of a stream on the fly, instant seeking to any part of the file, and so forth. The downside for people trying to build download-anything applications is that those servers don't send the data as a single file; they send it in chunks. Without getting too technical: a chunk might contain the first frame plus a bunch of diffs describing the next several frames of video and audio, then repeat. As it does this, the server can throttle the quality it sends depending on the latency it sees or the resolution of your screen, or resize what it sends if you resize the window. This sort of streaming works particularly well for live events, but it has advantages for recorded events as well, particularly random seeking.

To complicate the matter of capturing the data, some streaming servers transmit the video data via the RTMP, RTSP, or MMS protocols instead of over HTTP. HTTP pseudo-streaming or a straight HTTP download is going to be a lot easier to save than an RTMP stream. With some streaming types you pretty much have to recreate the file from the individual packets, or transcode it from what plays on the screen in real time. So you may need to spend some time learning about the different streaming protocols to figure out the best way to save the specific stream you're looking at.
I'm writing a Java program and I'd like to convert an ogg file into an mp3 file.
I've spent a lot of time trying to find a good library to do that, but without success so far.
I think I'll need an ogg decoder (jorbis?) and an mp3 encoder (lameOnJ?).
Moreover, once the conversion is done, I need to set some tags in the file (artist/track tag, etc).
This is a windows and OS X app.
Could you give me any hint about how to proceed, with examples if possible.
Thanks
You have lots of choices, and it depends on how much effort you want to put in, and what constraints you have regarding the execution platform.
Many developers would simply make Runtime.exec() calls to external decode/encode/tagging executables, writing the intermediate files to disk. This is slightly clunky, but once it's set up properly, it works.
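For example, a sketch of shelling out with ProcessBuilder, assuming ffmpeg is installed and on the PATH (any decode/encode tool pair would do):

    import java.io.IOException;

    public class ExternalConvert {
        public static void convert(String oggPath, String mp3Path)
                throws IOException, InterruptedException {
            Process p = new ProcessBuilder("ffmpeg", "-y", "-i", oggPath,
                    "-codec:a", "libmp3lame", "-q:a", "2", mp3Path)
                    .inheritIO()
                    .start();
            if (p.waitFor() != 0) {
                throw new IOException("ffmpeg exited with an error");
            }
        }
    }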
A more sophisticated option is to use libraries such as the ones you've found. You can still use the filesystem to temporarily store the uncompressed version.
You can, however, avoid storing the intermediate file, and maybe make things faster, by pipelining: you feed the output of the decoder into the input of the encoder and set them both going.
The details of this depend on the API. If you're lucky, they can work with chunks, and you may be able to manage them in a single thread.
If they work with streams, you might need to get your hands dirty and work with threads: one thread for the decoder, one for the encoder.
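A sketch of that two-thread pipeline using piped streams; decodeOgg and encodeMp3 are placeholders for whichever library API you settle on:

    import java.io.*;

    public class Pipeline {
        public static void main(String[] args) throws Exception {
            PipedOutputStream decodedOut = new PipedOutputStream();
            PipedInputStream decodedIn = new PipedInputStream(decodedOut, 1 << 16);

            Thread decoder = new Thread(() -> {
                try (OutputStream out = decodedOut) {
                    decodeOgg(new FileInputStream("in.ogg"), out); // writes raw PCM
                } catch (IOException e) { e.printStackTrace(); }
            });
            Thread encoder = new Thread(() -> {
                try (InputStream in = decodedIn) {
                    encodeMp3(in, new FileOutputStream("out.mp3")); // reads raw PCM
                } catch (IOException e) { e.printStackTrace(); }
            });

            decoder.start(); encoder.start();
            decoder.join(); encoder.join();
        }

        // Placeholders for the real decoder/encoder calls.
        static void decodeOgg(InputStream in, OutputStream out) throws IOException { /* ... */ }
        static void encodeMp3(InputStream in, OutputStream out) throws IOException { /* ... */ }
    }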
I am creating an application that requires a lot of image thumbnails (~3000, 5-25KB). Because speed is essential I plan on loading these images into memory when the application starts. At runtime, new thumbnails will be downloaded and added to the collective.
I could store them all in a folder, but reading thousands of files into memory when a program starts hardly seems efficient.
My second option would be to save them in some kind of (compressed) archive. This would make storage itself and loading more efficient (I think). However, new files will be added regularly, and that will probably not go as smoothly as just saving them in a folder.
Is storing a cache of small files in a (compressed) archive a bad idea or not? Are ZIP files the way to go? Would I be better off using uncompressed archives (and if so, what kind)?
All image files will be JPEGs.
Thanks in advance!
EDIT: I am considering dropping the "load everything into memory on application start" idea. This would simplify my question a little. My initial idea to put everything in one big file now seems less beneficial, since the problem of many files in one directory can be solved by hashing into subdirectories.
Small files don't compress especially well, so you may not gain much compression.
While loading the files will be fast because they are smaller, decompression adds time. You'd have to experiment to see which is faster.
I would think the real issues would relate to the efficiency of the file system when it comes to iterating over all the little files, especially if they are all in one folder. Windows is notorious for being pretty inefficient when folders contain lots of files.
I would consider doing something like writing them out into one file, uncompressed, that could be streamed into memory -- maybe not necessarily contiguous memory, as that might be a problem. But the idea would be to put them all in one file. Then write some kind of index that ties a file name or other identifier to an offset from which the location of the image in memory could be determined.
New images could be added at the end, and the index updated appropriately.
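A minimal sketch of that packed-file idea (index persistence left out; all names illustrative):

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.HashMap;
    import java.util.Map;

    public class ThumbnailPack {
        private final RandomAccessFile pack;
        private final Map<String, long[]> index = new HashMap<>(); // name -> {offset, length}

        public ThumbnailPack(File packFile) throws IOException {
            pack = new RandomAccessFile(packFile, "rw");
        }

        // New images are appended at the end and the index updated.
        public synchronized void add(String name, byte[] jpegBytes) throws IOException {
            long offset = pack.length();
            pack.seek(offset);
            pack.write(jpegBytes);
            index.put(name, new long[] { offset, jpegBytes.length });
        }

        public synchronized byte[] get(String name) throws IOException {
            long[] entry = index.get(name);
            if (entry == null) return null;
            byte[] buf = new byte[(int) entry[1]];
            pack.seek(entry[0]);
            pack.readFully(buf);
            return buf;
        }
    }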
It isn't fancy, but fancy is what you're trying to avoid: an archive or even a file system gives you lots of power and flexibility, but at the cost of efficiency. When you know what you want to do, sometimes simple is better.
I would consider implementing one solution that reads files from a folder and another that divides the files into subfolders and sub-subfolders so there are no more than 100 or so files in any given folder, then timing both so you have something to compare. I would think a simple indexed file would be fast enough that you wouldn't even need to pre-load the images like you're suggesting; just retrieve them as you need them and keep them around once they're in memory.
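For the subfolder variant, a sketch of hashing names into a fixed number of buckets (256 here, arbitrarily):

    import java.io.File;

    public class ThumbBuckets {
        // "cat.jpg" might land in thumbs/a3/cat.jpg, so no single
        // directory accumulates thousands of files.
        static File locate(File root, String name) {
            String bucket = String.format("%02x", name.hashCode() & 0xff);
            return new File(new File(root, bucket), name);
        }
    }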
All disk-based storage, and most databases, allocate space in chunks. The chunks on large-capacity disks can be large. If you have 5 KB files and a 32 KB disk chunk, you end up with about 85% wasted space on your storage.
Using an archive won't compress JPEGs much, because the JPEG encoding algorithm already does that. It will, however, save you the wasted space on the storage media. It does make things more complicated and perhaps a little slower.
In my opinion the zip file approach is a bad idea, because you will slow everything down with the process of loading the zip file and unzipping it to extract each image.
I think the point of a thumbnail image is that it is by nature small, so your app plus hardware can load it as fast as possible. So I believe it is a better idea to load each image as you need it.
Well, if you have small, "geometric" pictures, you may implement them as objects of type javax.swing.Icon rather than as images loaded from the filesystem.
http://download.oracle.com/javase/6/docs/api/javax/swing/Icon.html
http://download.oracle.com/javase/tutorial/uiswing/components/icon.html
So you would implement one or more objects which draw themselves onto a Graphics surface using the Graphics drawing primitives, instead of copying pixels.
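A minimal self-drawing icon might look like this:

    import java.awt.Color;
    import java.awt.Component;
    import java.awt.Graphics;
    import javax.swing.Icon;

    // No pixels are loaded from disk; the shape is drawn on demand.
    public class DotIcon implements Icon {
        private final Color color;
        private final int size;

        public DotIcon(Color color, int size) {
            this.color = color;
            this.size = size;
        }

        @Override public int getIconWidth()  { return size; }
        @Override public int getIconHeight() { return size; }

        @Override public void paintIcon(Component c, Graphics g, int x, int y) {
            g.setColor(color);
            g.fillOval(x, y, size, size);
        }
    }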
If this is a web application then the best performance boost you can get is setting good HTTP caching headers. Having a unique URL for every image (and different URLs for different versions of the same image) makes it possible to set VERY far-future expires headers, because changing the image changes the URL, which forces a refetch.
I wouldn't compress, because JPEG can't be compressed much further and it would only cost CPU time.
I would recommend simply storing the images in the filesystem and considering libraries like jawr, or implementing your own caching strategy.
I know this question has already been answered, but I think you need more options other than zipping.
While zip is good, it doesn't really help much for JPEG, since JPEG is already compressed.
Other things you may want to consider:
Put the images in a Content Delivery Network (CDN).
Compress components with gzip (meaning the server will automatically zip every response), and you don't need to write any code to unzip it later; it's handled by the browser automatically.
Since you mention JPEG, you may want to use jpegtran. Run jpegtran on all your JPEGs.
This tool does lossless JPEG operations such as rotation and can also be used to optimize and remove comments and other useless information (such as EXIF data) from your images.
jpegtran -copy none -optimize -perfect src.jpg dest.jpg
Use image sprites. Instead of asking the browser to download many images at the same time, ask it to download only one.
For the details, read: http://developer.yahoo.com/performance/rules.html#opt_images
For a basic examination of how to improve your website's performance, you can try installing YSlow (a plugin that detects inefficient code) in Firefox.
Hope that helps.
I want to take the input from a standard, off-the-shelf, IP-based webcam (which has yet to be decided, so the API is not yet clear), manipulate it a little, and then pump it back out so that others can view my manipulated image.
Given that this is a little vague, which technologies can you recommend?
I am thinking of using an Android slate to save costs, so it's probably Java coding. So, how best to get an image stream (plus audio), modify the video stream, and send the modified video plus the unmodified audio?
I might also add file transfer & IM chat into the mix ...
FOSS solutions highly welcomed
Most IP cameras produce RTP/RTSP with an MJPEG, MPEG-4, or H.264 encoded stream.
You would need to write an RTP/RTSP client and a decoder for the particular stream, then manipulate the images, re-encode the stream, and serve it over some standard protocol (again, probably RTP/RTSP).
That is not something Android devices are powerful enough to do. Also, there are no pure Java libs that can do this.
What you should use is Xuggler. If you need to serve streams to Flash and/or iPhone you should add Wowza or Red5.
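A sketch of Xuggler's mediatool API with a hypothetical camera URL; to manipulate frames you would insert a MediaToolAdapter subclass between the reader and the writer instead of wiring them together directly:

    import com.xuggle.mediatool.IMediaReader;
    import com.xuggle.mediatool.IMediaWriter;
    import com.xuggle.mediatool.ToolFactory;

    public class CameraRelay {
        public static void main(String[] args) {
            // Xuggler's FFmpeg backend handles the RTSP/RTP demuxing and decoding.
            IMediaReader reader = ToolFactory.makeReader("rtsp://camera.local/stream");
            IMediaWriter writer = ToolFactory.makeWriter("out.flv", reader);
            reader.addListener(writer);

            // Each readPacket() call decodes a packet and pushes it down the chain.
            while (reader.readPacket() == null) {
                // loop until end of stream or error
            }
        }
    }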