I've been looking for this for some time now but I couldn't find anything useful. Is there a way to reduce the size of / resize an existing video on Android? I want to send it over the network.
Thanks a lot!
There is nothing built into Android for this. You are welcome to research third-party code that might do this for you. Bear in mind that most Android devices run on low-power CPUs and have relatively slow flash memory, so resizing a video may take a very long time.
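If you do go the third-party route, JavaCV (a Java wrapper around FFmpeg, also mentioned in an answer further down) is one option. A rough, untested sketch of re-encoding a video at a smaller resolution might look like the following; the paths, the 480x320 target size, and the bitrate are placeholders:

    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.FFmpegFrameRecorder;
    import org.bytedeco.javacv.Frame;

    // Hedged sketch: shrink a video with JavaCV/FFmpeg. Paths, target size
    // and bitrate are hypothetical placeholders.
    public class VideoResizer {
        public static void resize(String inPath, String outPath) throws Exception {
            FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(inPath);
            grabber.start();
            FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
                    outPath, 480, 320, grabber.getAudioChannels());
            recorder.setFormat("mp4");
            recorder.setFrameRate(grabber.getFrameRate());
            recorder.setSampleRate(grabber.getSampleRate());
            recorder.setVideoBitrate(500_000); // smaller bitrate for the network
            recorder.start();
            Frame frame;
            while ((frame = grabber.grab()) != null) {
                recorder.record(frame); // frames are rescaled to the output size
            }
            recorder.stop();
            grabber.stop();
        }
    }

Expect this to be slow on a phone, for exactly the CPU and flash-memory reasons above.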
I am trying to run Example One from https://github.com/fyhertz/libstreaming-examples
It uses libstreaming-4.0.
I have forced it to use encodeWithMediaCodecMethod2(). This method uses the createInputSurface() method introduced in Android 4.3. This reduced the latency from 3 seconds to 1 second.
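For context, the Surface-input path that method uses boils down to roughly the following plain MediaCodec calls (a simplified sketch, not libstreaming's exact internals; resolution, bitrate and frame rate are placeholder values):

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;

    // Simplified sketch of the Android 4.3 Surface-input encoder setup.
    Surface createEncoderInputSurface() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 500_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface(); // API 18+
        encoder.start();
        // The camera preview is rendered into inputSurface (e.g. via OpenGL)
        // and encoded frames are drained with encoder.dequeueOutputBuffer(),
        // skipping the color-conversion copy the ByteBuffer path needs.
        return inputSurface;
    }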
I am creating a video chat application (like Skype) and I need the video latency to be much lower than this.
I don't know where to go from here really.
Could anyone offer suggestions on how to get the latency down? Different libraries? Techniques? Maybe the NDK? I have done loads of research but have had very little luck :(
Please help
Thanks
There are a few open source projects:
doubango
ffmpeg (you will need JavaCV, a Java wrapper for the C/C++ libraries)
There is also IMSDroid (an open source 3GPP IMS client for Android based on doubango), and FFmpeg's streaming guide, which discusses latency.
I am an intern at a company, and my 'learning task' is to make an Android application in Java that takes H.264 videos (at first they will be stored on the SD card) and acts as a very simple player with the following features:
1. You can pause/play/fast-forward/rewind the video.
2. When the video is paused at a certain point, you can switch to the same time in a different video (the same frame index, I guess).
How could I do that? Is GStreamer a good way to go? I looked at the sparse tutorials available online, and because of my lack of experience in video processing (I've never worked with video in Android applications) I have a hard time understanding what pipelines are, how JNI fits in, and even how to set up GStreamer for Eclipse. Is there a better way of doing this? What should I learn before starting to work on this program?
Thanks in advance!
All of the features you mention are possible in GStreamer; however, there is a learning curve.
To understand the GStreamer Android tutorials, you must first go through the basic tutorials here: http://docs.gstreamer.com/display/GstSDK/Basic+tutorials
If you feel comfortable with the pipeline architecture, then go ahead and set up your Android environment (which is no easy task by itself). GStreamer is a very powerful framework in which you can do almost anything, if you're willing to make the effort to overcome the learning curve.
So I suggest going ahead with GStreamer only if you have the time and patience; otherwise go for a simpler solution. Unfortunately I'm not familiar with Android, so I cannot suggest one; maybe a quick Google search will help.
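For what it's worth, the specific feature of resuming a second video at the same position is doable with Android's stock MediaPlayer, with no GStreamer at all. A rough sketch (the file path is a placeholder, and note that seekTo() lands on the nearest sync frame, so it is time-accurate rather than frame-accurate):

    import android.media.MediaPlayer;
    import java.io.IOException;

    // Rough sketch: pause one video, then open another at the same timestamp.
    void switchVideoAtSamePosition(MediaPlayer player, String otherVideoPath)
            throws IOException {
        int positionMs = player.getCurrentPosition(); // where the first video stopped
        player.reset();
        player.setDataSource(otherVideoPath);
        player.prepare();          // synchronous; prefer prepareAsync() in real code
        player.seekTo(positionMs); // jump to the same time in the new video
        player.start();
    }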
My application takes a long time to prepare and buffer an audio stream. I have read this question (Why does it take so long for Android's MediaPlayer to prepare some live streams for playback?), but it only confirms that people have experienced this issue; it does not say how to reduce the delay.
I am experiencing this on all versions of Android I have tested, from 2.2 to 4.1.2.
The streams are at a suitable bitrate for mobile and 3G connections. The same stream takes less than a second to start buffering in the equivalent iOS app.
Is there a way to specify the amount of time that should be buffered? I know that the Tune In radio application offers this feature ( https://play.google.com/store/apps/details?id=tunein.player ).
Thanks.
Edit: I've tested again and found that it only happens on devices running Gingerbread and above (>= 2.3). I know that Android changed the underlying media framework from OpenCore to Stagefright. So how can I optimise the media framework? It just seems wrong that the old HTC Wildfire can prepare, stream, and play literally 10x faster than the brand new HTC One X and Nexus 7.
I have struggled with this question for months. Finally I found the solution.
The real problem is in the implementation of the MediaPlayer class, particularly the way MediaPlayer buffers data. That is why the solution is to do your own buffering: save the stream to a temp file and feed that to MediaPlayer.
This tutorial and source code explain exactly how: http://androidstreamingtut.blogspot.nl/2012/08/custom-progressive-audio-streaming-with.html
By adapting this code, it is easy to create a much better streaming player.
Google Developers really screwed up here.
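The gist of the tutorial's approach, assuming a plain progressive HTTP stream, is something like this stripped-down sketch (real code keeps appending to the temp file on a background thread while MediaPlayer reads it; the 256 KB threshold is arbitrary):

    import android.content.Context;
    import android.media.MediaPlayer;
    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Stripped-down sketch: pre-buffer part of the stream into a local file,
    // then hand that file to MediaPlayer.
    void playBuffered(Context context, String streamUrl) throws IOException {
        File temp = File.createTempFile("buffer", ".dat", context.getCacheDir());
        HttpURLConnection conn =
                (HttpURLConnection) new URL(streamUrl).openConnection();
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(temp)) {
            byte[] buf = new byte[8192];
            int read, total = 0;
            while (total < 256 * 1024 && (read = in.read(buf)) != -1) {
                out.write(buf, 0, read); // keep downloading in real code
                total += read;
            }
        }
        MediaPlayer player = new MediaPlayer();
        player.setDataSource(temp.getAbsolutePath());
        player.prepare(); // fast now, since the data is local
        player.start();
    }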
EDIT: This answer is rather old. Nowadays I would recommend not using MediaPlayer and using ExoPlayer instead. It is extensible, stable, and can play many different types of media. You can find it here: https://github.com/google/ExoPlayer/
There really isn't much you can do since the Android MediaPlayer class doesn't provide access to lower level settings such as buffer size. The only alternative would be to make your own player using AudioTrack and a library like FFmpeg to do the decoding.
The one thing I'd recommend is to play around with encoding. For instance, for MP4s, ensure that the MOOV atom is located at the beginning of the file (there are enough questions on S/O about how to do this with ffmpeg, etc.). With MP3s, you can try different codecs or bitrates.
You can, for instance, try a number of audio files you find online, and if you see one that doesn't take a long time to buffer, try to encode your files in the same way.
I am creating an app that requires a sound or sounds to potentially be played every ~25 ms (300 beats per minute with potentially 8 "plays" per beat).
At first I used SoundPool to accomplish this. I have three threads: one updating the SurfaceView animation, one updating the time using System.nanoTime(), and one playing the sounds (MP3s) using SoundPool.
This works, but it seems to use a lot of processor power: any time a background process runs, such as a WiFi rescan or GC, it starts skipping beats here and there, which is unacceptable.
I am looking for an alternative solution. I've looked at mixing and also the JET engine.
The JET engine doesn't seem like a solution, as it only uses MIDI. My app requires high-quality sounds (recordings from actual instruments). (Correct me if I'm wrong about MIDI not being high quality.)
Mixing seems very complicated on Android: it seems you must first decode the sounds to raw PCM (which takes up a lot of memory) and also generate "silence" between sounds. I am not sure this is the most elegant solution, as my app will have a variable speed (BPM) controlled by the user.
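To make the mixing idea concrete: once each sound is decoded to raw 16-bit PCM, mixing is just summing samples with clipping, and "silence" is just zeros, so a variable BPM only changes where each sound's samples start. An illustrative sketch (the names are mine, not from any library):

    // Mix two decoded 16-bit PCM buffers; offsetB (in samples) is where the
    // second sound starts, which is how variable BPM spacing would be set.
    short[] mixPcm(short[] a, short[] b, int offsetB) {
        short[] out = new short[Math.max(a.length, offsetB + b.length)];
        for (int i = 0; i < out.length; i++) {
            int sum = (i < a.length ? a[i] : 0)
                    + (i >= offsetB && i - offsetB < b.length ? b[i - offsetB] : 0);
            // clip to the 16-bit range instead of wrapping around
            out[i] = (short) Math.max(Short.MIN_VALUE,
                    Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }

The mixed buffer would then be written to a single streaming-mode AudioTrack.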
If anyone is experienced in this area, I would GREATLY appreciate any advice.
Thank you
I am looking to create a video training program which records video via webcam, captures the user's screen, and captures sound. The main problem is that I need a cross-platform (Mac and Windows) solution.
I know it's possible to use Flash to record webcam + audio. But it's not possible to record the user's screen via Flash.
So I am wondering if I should use Java (which I believe will work on Mac and Windows). I do not want to develop two separate versions because of the cost involved.
Please guide me as I am new to this.
Thank you.
UPDATE
Hello again,
I had a look at the following sites: www.screencast-o-matic.com and www.screentoaster.com. I see that they have developed Java applets that interact with Windows/Mac to record the screen.
I am wondering how to go about developing something like that and integrating it with Flash (for webcam and audio recording).
Is this a better idea?
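For reference, the screen-grabbing core of such a Java applet is small, since java.awt.Robot works on both Windows and Mac. A minimal sketch that dumps frames as PNGs (turning them into an actual video file would still need an encoding library):

    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    // Minimal sketch: grab ~10 frames per second of the whole screen.
    public class ScreenGrabber {
        public static void main(String[] args) throws Exception {
            Robot robot = new Robot();
            Rectangle screen =
                    new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            for (int i = 0; i < 50; i++) { // ~5 seconds at 10 fps
                BufferedImage frame = robot.createScreenCapture(screen);
                ImageIO.write(frame, "png", new File("frame" + i + ".png"));
                Thread.sleep(100);
            }
        }
    }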
This is not an answer to your question, but I strongly recommend against using video for educational programmes. Our company delivers university courses online, and we learned long ago that video feeds are only effective in particular scenarios. In general, a talking head is a waste of bandwidth. You're much better off putting together a well-designed PowerPoint presentation, recording a voice-over (and editing it!), and then assembling the whole thing as a Flash presentation. This is a non-trivial amount of work, but it provides a much more interesting product for the student.
When to use video:
1) When you are demonstrating something dynamic: mechanics or chemistry, for example.
2) When you are acting out a scenario or case as an illustration: for example, threat de-escalation techniques for high school teachers.
When you solve the screen recording problem, seriously consider whether you need full motion or whether you can get away with stills. Often the motion is distracting, and a still with a good voice-over can be more effective. (Hint: replace the mouse pointer with something HUGE before recording, like Fox did with hockey pucks.)
Try CamStudio. I don't know if it works on Mac, but on Windows it's the best solution I know. It's open source, so you can use its source code if you want to :)
If you're looking to build an application that does all of the recording and screen capture itself, then you might consider using Adobe AIR (essentially, Flash running on the desktop) in combination with Merapi. Merapi is essentially a bridge between Adobe AIR and Java. So for example, for your project, you might use Java to handle the lower-level (but still cross-platform) stuff you can't do natively in AIR, and use Merapi to wire the Java application to your AIR UI.
This is by no means a simple project. Let's get that said and out of the way. There are open source (and cross-platform) options for each element, but nothing (that I know of) will do everything for you.
I think the "cleanest" option would be to use Flash for webcam and audio, as you said, and run a VNC server to send the screen video... The only closed-platform code will be the VNC launching code. That should be pretty simple to maintain!
That raises a problem, because most people are behind NAT firewalls these days, and setting up port forwarding is a pain in the behind. I've used an app called Gitso before, which allows people to connect to me and send their desktop to my screen (for tech support). It's VNC-based, and all it really does is add another layer on top of the VNC connection so that rather than me connecting to them, they connect to me. That makes the whole business of port forwarding a non-issue.
And once you've recorded everything, there's the final issue of syncing it all back together... Might not be so hard.
Well, Camtasia can get this done: it records on-screen activity as well as webcam video and puts them in the same player template. Another screen recorder, DemoCreator, can publish the screen recording as a Flash movie, but cannot record the webcam.