I'm writing a Java program and I'd like to convert an OGG file into an MP3 file.
I've spent a lot of time trying to find a good library to do that, but so far without success.
I think I'll need an OGG decoder (jorbis?) and an MP3 encoder (lameOnJ?).
Moreover, once the conversion is done, I need to set some tags in the file (artist/track tag, etc).
This is a windows and OS X app.
Could you give me any hints about how to proceed, with examples if possible.
Thanks
You have lots of choices, and it depends on how much effort you want to put in, and what constraints you have regarding the execution platform.
Many developers would simply make Runtime.exec() (or ProcessBuilder) calls to external decode/encode/tagging executables, writing the intermediate files to disk. This is slightly clunky, but once it's set up properly, it works.
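As a minimal sketch of that approach, the snippet below shells out to ffmpeg (which can do the OGG decode, MP3 encode, and tagging in one step). It assumes ffmpeg is installed and on the PATH; the file names are placeholders.

```java
import java.util.List;

public class OggToMp3 {
    // Build the external converter command. ffmpeg's "-i input output" form
    // is standard; "-y" overwrites an existing output file.
    // Assumption: ffmpeg is installed and on the PATH.
    static List<String> buildCommand(String in, String out) {
        return List.of("ffmpeg", "-y", "-i", in, out);
    }

    static int convert(String in, String out) throws Exception {
        Process p = new ProcessBuilder(buildCommand(in, out))
                .inheritIO()   // show ffmpeg's progress output in our console
                .start();
        return p.waitFor();    // 0 on success
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 2) System.exit(convert(args[0], args[1]));
    }
}
```

Tags can be set in the same ffmpeg invocation with `-metadata artist=... -metadata title=...`, which sidesteps the need for a separate tagging library.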
A more sophisticated option is to use libraries such as the ones you've found. You can still use the filesystem to temporarily store the uncompressed version.
You can, however, avoid storing the intermediate step -- and maybe make it faster -- by pipelining. You need to feed the output of the decoder as the input of the encoder, and set them both going.
The details of this depend on the API. If you're lucky, the libraries can work with chunks, and you may be able to manage them in a single thread.
If they work with streams, you might need to get your hands dirty and work with threads. One thread for the encoder, one for the decoder.
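The two-thread pattern can be sketched with the JDK's piped streams. This is only the plumbing: the pass-through writes stand in for the real decode (OGG to PCM) and encode (PCM to MP3) calls, which would come from whatever libraries you pick.

```java
import java.io.*;

public class PipelineDemo {
    // Two-thread pipeline: a "decoder" thread writes raw bytes into a
    // PipedOutputStream while an "encoder" thread reads them from the
    // connected PipedInputStream. The pass-through copies stand in for
    // real decode/encode work.
    public static byte[] run(byte[] compressedInput) throws Exception {
        PipedOutputStream decoded = new PipedOutputStream();
        PipedInputStream toEncoder = new PipedInputStream(decoded, 64 * 1024);

        Thread decoder = new Thread(() -> {
            try (OutputStream out = decoded) {
                out.write(compressedInput);   // real code: OGG -> PCM here
            } catch (IOException e) { throw new UncheckedIOException(e); }
        });

        ByteArrayOutputStream mp3 = new ByteArrayOutputStream();
        Thread encoder = new Thread(() -> {
            try (InputStream in = toEncoder) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    mp3.write(buf, 0, n);     // real code: PCM -> MP3 here
                }
            } catch (IOException e) { throw new UncheckedIOException(e); }
        });

        decoder.start(); encoder.start();
        decoder.join(); encoder.join();
        return mp3.toByteArray();
    }
}
```

The pipe's buffer (64 KB here) provides the backpressure: if the encoder falls behind, the decoder's writes block until there is room, so memory use stays bounded.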
I was able to follow the examples of how to encode video with io.humble easily enough. But, the only example of including audio that I can find simply encodes audio at the beginning of the video. I can't figure out how to encode samples at arbitrary locations. Using setTimestamp doesn't do anything.
Here is the example I found:
https://www.javatips.net/api/myLib-master/myLib.AGPLv3/myLib.humble.test/src/test/java/com/ttProject/humble/test/BeepSoundTest.java
If I modify the beepSamples() method to increase the "sampleNum" value, I can create a longer tone. But calling the method multiple times or setting samples.setTimestamp() to other values or calling setTimestamp() on the packets, all do nothing.
No matter what I do, the audio always shows up at the beginning of the video.
Ultimately, I want to be able to load arbitrary MP3 files of various audio clips and merge them into the audio stream of the video at specific timestamps. But I can't even get this example to encode at different points in the video stream.
The author of this tool unfortunately is not interested in maintaining it or providing examples. Luckily, I found JavaCV which is an alternative that turned out to be really easy to use.
So to anyone else having this problem, I recommend switching to JavaCV. Other options are JCodec and Xuggler, but Xuggler is deprecated (same author as io.humble) and JCodec is reportedly slow and produces much larger files.
If you need support with these kinds of projects, I maintain a fork of Xuggler (https://github.com/olivierayache/xuggle-xuggler) and can provide help on these topics.
We are constantly transferring gigabytes of compressed Tiffs overseas and it takes a long time for each batch of images to transfer. It is not uncommon for a batch to take over 6 hours to transfer. I would like to reduce the time to transfer a batch of images.
I understand that videos compress really well because most of the time each frame is generally very similar to the one before it and compression algorithms take advantage of that. In our scenario, the images often look similar to one another. Are there any image compression libraries I can use to take advantage of the fact that there is a lot of redundancy across images? Ideally I would want lossless compression.
Would it work if I turned the images into a video before transferring them and then turned them back to images on the other side? If this would work, what libraries would you recommend? I need to be able to call this from Java and preferably run it on Linux, but the library does not need to be written in Java. Windows could also be a possibility.
What I would try first:
Start from uncompressed tiffs (otherwise, it will be hard to find similarities).
tar them together (so they are contained within a single file; this can of course be a specific range of images).
Then use a compression algorithm of your choice to see which one yields the best results (on the single file).
Easy enough to try out without much effort. How well it works depends on the source images themselves (and the compression algorithm used).
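The reason "tar then compress" helps can be shown with plain java.util.zip: Deflate (the algorithm behind gzip/zip) finds repeats within a sliding window, so two near-identical files compress far better concatenated than separately. This is only an illustration of the effect; for real batches you'd use tar plus a stronger compressor (xz, zstd) with a much larger window.

```java
import java.util.Random;
import java.util.zip.Deflater;

public class BatchCompressDemo {
    // Deflate a buffer and return only the compressed size.
    public static int deflatedSize(byte[] data) {
        Deflater d = new Deflater(Deflater.BEST_COMPRESSION);
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[data.length + 1024];
        int total = 0;
        while (!d.finished()) total += d.deflate(buf);
        d.end();
        return total;
    }

    public static void main(String[] args) {
        byte[] img1 = new byte[20_000];
        new Random(42).nextBytes(img1);   // stands in for image #1 (incompressible alone)
        byte[] img2 = img1.clone();       // image #2: nearly identical to #1
        img2[100] ^= 0x7f;                // one small difference

        int separate = deflatedSize(img1) + deflatedSize(img2);
        byte[] together = new byte[img1.length + img2.length];
        System.arraycopy(img1, 0, together, 0, img1.length);
        System.arraycopy(img2, 0, together, img1.length, img2.length);
        int combined = deflatedSize(together);

        // combined is roughly half of separate: the second image is encoded
        // mostly as back-references to the first.
        System.out.println("separate=" + separate + " combined=" + combined);
    }
}
```

Note Deflate's window is only 32 KB, so for multi-megabyte TIFFs the cross-image matching mostly disappears; that is why a long-range compressor (e.g. `xz` or `zstd --long`) over the tar is the practical choice.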
Alternative approach if the above does not yield enough results:
1. Make sure you have all uncompressed images.
2. Send over the first image.
3. Do a binary diff (or perhaps a diff of the hexdump) against the next image.
4. Send over the diff file and apply it at the receiving end to reconstruct the image.
5. Repeat steps 3-4 for every image.
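A toy version of the diff/apply step, assuming the images are the same size: record only the positions whose bytes changed. This is just to make the idea concrete; a real deployment would use a proper binary-delta tool such as xdelta or bsdiff, which also handle insertions and deletions.

```java
import java.io.*;

public class BinaryDiffDemo {
    // Patch format: repeated (int offset, byte newValue) pairs.
    // Assumption: base and next have the same length.
    public static byte[] diff(byte[] base, byte[] next) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        for (int i = 0; i < base.length; i++) {
            if (base[i] != next[i]) {
                out.writeInt(i);
                out.writeByte(next[i]);
            }
        }
        return bos.toByteArray();
    }

    // Reconstruct "next" on the receiving end from "base" plus the patch.
    public static byte[] apply(byte[] base, byte[] patch) throws IOException {
        byte[] result = base.clone();
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(patch));
        while (in.available() >= 5) {
            result[in.readInt()] = in.readByte();
        }
        return result;
    }
}
```

For two similar images the patch is a small fraction of the image size, which is the whole point of sending diffs instead of full files.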
I personally don't think you will easily get good (lossless) results by using video compression algorithms (after all, they are specifically tailored to a different purpose).
The application I'm trying to build will have a lot of images displayed (in ImageViews), and I'm not fetching them from a server/online service as it will need to be used offline. I know I can just dump them in the res/drawable directories, but I was wondering if there's any way to optimize this. Is there a way to somehow compress these images (besides making them smaller, they're already as small as I need) or use some other sort of android tool to better store them locally on the device?
I could just be overlooking a well used feature, and if so, it'd be great if someone could point me to that.
Edit: If I were to compress the images somehow, I would need to decompress at runtime or something, and that would take another thread/loading time. I'm not sure how to do that either, so I'm just brainstorming various ways, and I thought someone here would've come across this at some point.
If you haven't already, this is a good read - http://developer.android.com/guide/practices/ui_guidelines/icon_design.html#design-tips
When saving image assets, remove unnecessary metadata
Although the Android SDK tools will automatically compress PNGs when packaging application resources into the application binary, a good practice is to remove unnecessary headers and metadata from your PNG assets. Tools such as OptiPNG or Pngcrush can ensure that this metadata is removed and that your image asset file sizes are optimized.
Beyond any other compression logic, the above is the place to start. Also, when you say "optimize", do you mean optimizing the way images/drawables are loaded in your app, or just the amount of space (on disk) the app will consume?
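If you do go down the "compress at build time, decompress at runtime" route from the question's edit, the decompression side is just java.util.zip, which works the same on Android. A caveat worth stating up front: PNG and JPEG are already compressed, so gzipping them rarely helps; this pays off mainly for raw or weakly-compressed bitmap data. A minimal sketch:

```java
import java.io.*;
import java.util.zip.*;

public class AssetCompression {
    // Compress raw bytes (run this at build/packaging time).
    public static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    // Decompress at runtime; on Android, call this off the main thread.
    public static byte[] gunzip(byte[] packed) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(packed))) {
            return gz.readAllBytes();
        }
    }
}
```

On Android the decompressed bytes would then go to BitmapFactory.decodeByteArray on a background thread, which addresses the "another thread/loading time" concern from the question.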
I'm designing a server-side application that takes an image from a user, processes it, and sends it back over the network. Since the network connection might be quite slow, I'd like to speed things up by starting to process parts of the image while it is still being sent over the network and send parts of the processed image back to the client while other parts are still being processed.
Is this possible, preferably using the javax.imageio classes?
EDIT: I am mostly interested in writing PNG files. Wikipedia says: "IDAT contains the image, which may be split among multiple IDAT chunks. Doing so increases filesize slightly, but makes it possible to generate a PNG in a streaming manner."
This strongly depends on the encoding of the image. Some image formats require the whole file to be available before you can decode it. Others, like GIF and some PNG encodings (as far as I remember), decode to individual blocks which can then be processed.
You most likely need to write custom decoders, which may be quite a bit of work if you are not intimately familiar with the formats, and you need to support several.
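Before writing a custom decoder, it is worth noting what javax.imageio already offers: readers can report decode progress through IIOReadProgressListener (and some readers also deliver partial pixel regions via IIOReadUpdateListener), which may be enough to start processing before the read finishes. A sketch:

```java
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.InputStream;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.event.IIOReadProgressListener;
import javax.imageio.stream.ImageInputStream;

public class ProgressiveReadDemo {
    // Read an image while invoking a callback as decoding progresses.
    // The callback could kick off processing of already-decoded regions.
    public static BufferedImage readWithProgress(InputStream src, Runnable onProgress)
            throws IOException {
        ImageInputStream iis = ImageIO.createImageInputStream(src);
        ImageReader reader = ImageIO.getImageReaders(iis).next();
        reader.setInput(iis);
        reader.addIIOReadProgressListener(new IIOReadProgressListener() {
            public void imageProgress(ImageReader r, float pct) { onProgress.run(); }
            public void imageComplete(ImageReader r) { onProgress.run(); }
            public void imageStarted(ImageReader r, int i) {}
            public void sequenceStarted(ImageReader r, int min) {}
            public void sequenceComplete(ImageReader r) {}
            public void thumbnailStarted(ImageReader r, int i, int t) {}
            public void thumbnailProgress(ImageReader r, float pct) {}
            public void thumbnailComplete(ImageReader r) {}
            public void readAborted(ImageReader r) {}
        });
        return reader.read(0);
    }
}
```

This still reads from a stream that is being filled by the network, so it covers the "process while receiving" half; the "write PNG in multiple IDAT chunks while still processing" half is where ImageIO's writers give you less control and custom code may indeed be needed.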
Perhaps you should work on an upload progress bar instead?
This has been discussed before here. Using Java, I have developed my web services on Tomcat for a media library. I want to add functionality for streaming media while dynamically transcoding it as appropriate for mobile clients. There are a few questions I am pondering:
How exactly do I stream the files (both audio and video)? I am coming across many streaming servers, but I want something done in my own code on Tomcat itself. Do I need to install one more server, i.e., the streaming server, and then redirect streaming requests to it from Tomcat?
Is it really a good idea to transcode dynamically? Static transcoding means we have to replicate the same file in 'N' formats, which is space-consuming and which I don't want. So is there a way out?
Is it possible to stream the data "as it is transcoded"... that is, I don't want to start streaming only when the transcoding has finished (as that introduces latency); rather, I want to stream the transcoded bytes as they are produced. I apologize if this is an absurd requirement... I have no experience with either transcoding or streaming.
Other alternatives like ffmpeg, Xuggler and other technologies mentioned here - are they a better approach for getting the job done ?
I don't want to use any proprietary or paid alternative to achieve this goal, and I also want this to work in production environments. Hope to get some help here...
Thanks a lot !
Red5 is another possible solution. It's open source and is essentially Tomcat with some added features. I don't know how far back in time the split from the Tomcat codebase occurred, but the basics are all there (and the source, so you can patch what's missing).
Xuggler is a lib 'front end' for ffmpeg and plays nicely with Red5. If you intend to do lots of transcoding you'll probably run into this code along the way.
Between these two projects you can change A/V format and stream various media.
Unless you really need to roll your own, I'd recommend an OSS project with good community support.
For your questions:
1.) This is the standard space vs. performance tradeoff. You see the same thing in generating hash tables and other computationally expensive operations. If space is a bigger issue than processor time, then dynamic transcoding is your only way out.
2.) Yes, you can stream during the transcode process. VLC (http://www.videolan.org/vlc/) does this.
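On the Java side, the core of "stream as it is transcoded" is just a copy loop: read from the transcoder's output (for example, the stdout of an ffmpeg Process writing to a pipe) and flush each chunk to the client immediately instead of waiting for the whole file. The streams below are placeholders for the process stdout and the servlet response.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamWhileTranscoding {
    // Pump bytes from the transcoder to the client as they are produced.
    // Returns the total number of bytes forwarded.
    public static long pump(InputStream transcoderOut, OutputStream client)
            throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = transcoderOut.read(buf)) != -1) {
            client.write(buf, 0, n);
            client.flush();   // push each chunk immediately, don't buffer the file
            total += n;
        }
        return total;
    }
}
```

In a servlet this loop runs with `transcoderOut` being `process.getInputStream()` and `client` being `response.getOutputStream()`; the latency then becomes roughly one buffer's worth of transcoding rather than the whole file.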
3.) I'd really look into VLC if I were you.