Hi, I am making an app that streams video from Android to PC. I have the streams set up correctly and am able to open the .sdp file containing my RTP stream information in VLC on any PC. I get both the video and audio streams, and it works great.
I am also writing a Java app for the PC that will be able to view this stream using vlcj and perform other operations between it and the Android app. My problem is that, for some reason, this app only functions correctly on my PC in my college dorm room. On the PCs in my computer science lab and at home over my home network, I get "Destination Unreachable (Port Unreachable)" errors when viewing the traffic in Wireshark. Keep in mind this only happens through my vlcj implementation; it works perfectly using the actual VLC client. It also works fine on my home PC, which should have the same setup as the one in my dorm.
I have the following callback in my code for the vlcj MediaPlayer:
@Override
public void buffering(MediaPlayer mediaPlayer, float newCache) {
    // newCache is the buffer fill level as a percentage (0.0 to 100.0)
    System.out.println("Buffering " + newCache);
}
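For context, this is roughly how such a callback gets registered in vlcj; a minimal sketch, assuming the vlcj 2.x/3.x-era API that matches the signature above:

import uk.co.caprica.vlcj.player.MediaPlayer;
import uk.co.caprica.vlcj.player.MediaPlayerEventAdapter;

public class BufferingDebug {
    // Attach a buffering listener to an existing vlcj MediaPlayer.
    static void attach(MediaPlayer mediaPlayer) {
        mediaPlayer.addMediaPlayerEventListener(new MediaPlayerEventAdapter() {
            @Override
            public void buffering(MediaPlayer mp, float newCache) {
                System.out.println("Buffering " + newCache);
            }
        });
    }
}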
On the working computer it prints 0.0 and then counts up to 100, at which point the video appears and plays. On the computers where it doesn't work, it just prints 0.0 and then stops...
I really cannot understand this issue, since vlcj uses the same DLLs as the VLC client, and the packets get through when using the normal VLC client. Could the JRE I'm using be denying UDP packets?
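An ICMP "Port Unreachable" reply usually means nothing on the receiving machine was bound to that UDP port when the packet arrived, which would point at the vlcj process failing to open the port rather than at the JRE filtering packets. A minimal sketch to test whether the JVM by itself can bind and receive on the RTP port (the port number here is an assumption; use the one from your .sdp file):

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpProbe {
    public static void main(String[] args) throws Exception {
        int port = 5004; // assumption: replace with the port from your .sdp file
        try (DatagramSocket socket = new DatagramSocket(port)) {
            socket.setSoTimeout(10000); // give up after 10 seconds
            byte[] buf = new byte[2048];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // throws SocketTimeoutException if nothing arrives
            System.out.println("Received " + packet.getLength()
                    + " bytes from " + packet.getAddress());
        }
    }
}

If this receives packets while vlcj does not, the problem is in how the stream is opened in vlcj rather than in the network or the JRE.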
Update: I just got another computer to work with it... The only thing I can think of that is different about the working computers is that they have dedicated video cards, whereas the non-working ones have integrated graphics. But I don't think that would make a difference.
Related
I am new to Stack Overflow, and I am developing an Android app that can record and retransmit IR codes via audio input, but I don't know how to do it. I have built the hardware (a headphone jack with an IR emitter and an IR receiver connected to it), and I can send IR codes by first packing them into a .wav file and then playing that file using AudioTrack, but I don't know how to make the receiver work. I want the receiver to simply go into recording mode, record every IR code I give it, and then repeat it. I have also checked the receiver via WinLIRC, and it works like a charm; now it's time to implement it in Android.

Currently I am using an Arduino Nano and the IR Scrutinizer software to record IR and produce a .wav file, but I want to do this in Android. There are apps on the Play Store, like AnyMote, that record IR, but they use the built-in receiver of the Android device, whereas my hardware uses the audio input/output. I have also checked all the drivers of ZaZa Remote, but none of them showed support for an audio IR receiver; it can only send IR codes using an audio IR blaster, and that works very well. Please help me get my receiver working.
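For the receive side, one possible approach (a sketch only, not tested against this hardware; the sample rate and threshold are assumptions, and the RECORD_AUDIO permission is required) is to read raw PCM from the microphone input with AudioRecord and timestamp the edges where the signal crosses a threshold, which gives the pulse/space timings of the IR code:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import java.util.ArrayList;
import java.util.List;

public class IrAudioReceiver {
    private static final int SAMPLE_RATE = 44100; // assumption: tune for your receiver
    private static final int THRESHOLD = 8000;    // assumption: edge-detection level

    private volatile boolean running = true;

    public List<Long> record() {
        int bufSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufSize);
        short[] buffer = new short[bufSize / 2];
        List<Long> edges = new ArrayList<>(); // edge timestamps in microseconds
        recorder.startRecording();
        boolean high = false;
        long sampleIndex = 0;
        while (running) {
            int read = recorder.read(buffer, 0, buffer.length);
            for (int i = 0; i < read; i++, sampleIndex++) {
                boolean nowHigh = Math.abs(buffer[i]) > THRESHOLD;
                if (nowHigh != high) {
                    // an edge: the gaps between edges are the pulse/space timings
                    edges.add(sampleIndex * 1000000L / SAMPLE_RATE);
                    high = nowHigh;
                }
            }
        }
        recorder.stop();
        recorder.release();
        // replay later by rendering a new .wav with the same timings
        // and playing it via AudioTrack, as you already do for sending
        return edges;
    }
}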
We made an application that makes it possible to video call between two devices (iOS, Android, and web), using Cordova, OpenTok, Node.js, and the cordova-opentok-plugin. During testing we noticed that the sound on an Android device is rather low; it is hard to hear the other person talk.
We tested the sound from our application and compared it with Google Hangouts and a normal telephone call. From these tests we can see that the audio is already at maximum volume in our application. The audio stream goes through the voice call channel for all of these applications, including our own.
We tested the same device with Skype, which also goes over the call channel, and the sound on Skype is a lot louder than our own application, Google Hangouts, or even a normal telephone call.
So it seems Skype has found a way to boost the audio on Android. Does anyone know how we could implement that kind of boost/amplification on the audio channel?
Thanks in advance.
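One possibility, a sketch assuming API 19+ and that you can get hold of the audio session id of the stream playing the remote party's voice, is the LoudnessEnhancer audio effect, which applies extra gain (in millibels) on top of the stream volume:

import android.media.audiofx.LoudnessEnhancer;

public class CallAudioBooster {
    // Attach a LoudnessEnhancer to the playback session carrying the remote
    // voice. The gain value is an assumption; tune it and listen for clipping.
    public static LoudnessEnhancer boost(int audioSessionId, int gainMillibels) {
        LoudnessEnhancer enhancer = new LoudnessEnhancer(audioSessionId);
        enhancer.setTargetGain(gainMillibels); // e.g. 600 mB = +6 dB
        enhancer.setEnabled(true);
        return enhancer; // keep the reference; call release() when the call ends
    }
}

Whether this is usable depends on whether the plugin exposes the playback session id; Skype may simply be applying a software gain stage to the decoded audio before playback.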
I am trying to capture video from an RTP stream in my Android application, using code from a project on GitHub: https://github.com/niqdev/ipcam-view. However, after I open VLC on my computer, start streaming a video, and connect my Android device to the same network, the video does not display on my device. I don't know what I am doing wrong; any help would be greatly appreciated. Thanks.
This is the error message that I am getting after I run the application
If you need to play short videos, you can use the demo of the VXG player for Android. It is super easy to use but has a 2-minute limitation.
It looks like the example you are working with supports only MJPEG, not RTP streams.
If that limitation is a problem for you, try this example instead: VLCSimple.
It has the newest version of the vlc-sdk, and maybe they have already fixed the RTP deadlock bug.
Or just try making an MJPEG stream using VLC:
DISPLAY=:0 cvlc -vvv --no-audio screen:// --screen-fps 1 --sout "#transcode{vcodec=MJPG,vb=800}:standard{access=http,mux=mpjpeg,dst=:18223/}" --sout-http-mime="multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a"
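On the consuming side, here is a minimal, library-free sketch that pulls frames out of such an MJPEG stream by scanning for the JPEG start/end markers instead of parsing the multipart boundaries (naive, but fine for testing; the host and port are assumptions matching the command above):

import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.net.URL;

public class MjpegGrabber {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://192.168.1.10:18223/"); // assumed host and port
        try (DataInputStream in = new DataInputStream(url.openStream())) {
            while (true) {
                byte[] jpeg = readFrame(in);
                System.out.println("Got frame: " + jpeg.length + " bytes");
                // on Android, decode with BitmapFactory.decodeByteArray(...)
            }
        }
    }

    private static byte[] readFrame(DataInputStream in) throws IOException {
        // skip bytes until the JPEG start-of-image marker 0xFFD8
        int prev = in.readUnsignedByte();
        while (true) {
            int cur = in.readUnsignedByte();
            if (prev == 0xFF && cur == 0xD8) break;
            prev = cur;
        }
        ByteArrayOutputStream frame = new ByteArrayOutputStream();
        frame.write(0xFF);
        frame.write(0xD8);
        // copy bytes until the end-of-image marker 0xFFD9
        prev = 0;
        while (true) {
            int cur = in.readUnsignedByte();
            frame.write(cur);
            if (prev == 0xFF && cur == 0xD9) break;
            prev = cur;
        }
        return frame.toByteArray();
    }
}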
I would like to build an Android app that takes audio data from two microphones, mixes the sound with some audio from memory, and plays the result through headphones. This needs to be done in real time. Could you please refer me to tutorials or references for real-time audio input, mixing, and output in Java with Eclipse?
So far I am able to record sound, save it, and then play it back, but I cannot find any tutorials on interfacing with the sound hardware in real time this way.
Note: one microphone is connected to the 3.5 mm headphone jack of the Android device through a splitter, and the other is connected through a USB port.
Thanks!
There are two issues that I see here:
1) Audio input via USB.
Audio input can be done using Android 3.2+ and libusb, but it is not easy (you will need to get the USB descriptors from libusb, parse them yourself, send the right control transfers to the device, etc.). You can get input latency via USB on the order of 5-10 ms with some phones.
2) Audio output in real time.
This is a perennial problem on Android, and at the moment you are pretty much limited to the Galaxy Nexus if you want to approach real time (using native audio output). However, if you master USB, you may be able to output with less latency as well.
I suppose that if you go to the trouble of getting USB to work, you can use a USB audio device with stereo input. If you connected one mono mic to each of the input channels and then output via USB, you would be very close to your stated goal. You might like to try the "USB Audio Tester" or "usbEffects" apps to see what is currently possible.
In terms of coding the mixing and output, you will probably want one thread reading each separate input source and writing to a queue in small chunks (100-1000 samples at a time). Then have a separate thread reading off the queue(s), mixing, and placing the output onto another queue, and finally a thread (possibly in native code, if not doing output via USB) reading the mixed queue and doing the output.
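A sketch of that pipeline for the two-microphone case, using plain blocking queues and summing with clipping as the mix (the chunk size is an assumption within the 100-1000 sample range suggested above):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TwoSourceMixer {
    static final int CHUNK = 512; // samples per chunk (assumption)

    public static void main(String[] args) {
        BlockingQueue<short[]> micA = new ArrayBlockingQueue<>(16);
        BlockingQueue<short[]> micB = new ArrayBlockingQueue<>(16);
        BlockingQueue<short[]> mixed = new ArrayBlockingQueue<>(16);

        Thread mixer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    short[] a = micA.take();
                    short[] b = micB.take();
                    short[] out = new short[CHUNK];
                    for (int i = 0; i < CHUNK; i++) {
                        int sum = a[i] + b[i]; // mix by summing
                        if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE; // clip
                        if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
                        out[i] = (short) sum;
                    }
                    mixed.put(out);
                }
            } catch (InterruptedException ignored) { }
        });
        mixer.start();
        // Capture threads would put() chunks read from AudioRecord / USB into
        // micA and micB; an output thread take()s from mixed and writes to
        // AudioTrack (or native code) for playback.
    }
}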
The following link has a flavor of dealing with the audio itself: http://code.google.com/p/loopmixer/
I'm working on a web project where a user can share his screen and the output of his sound card with other users. I've come pretty far with the Adobe LCCS service (http://www.adobe.com/devnet/flashplatform/services/collaboration.html), but the screen sharing isn't stable enough to transmit a running video from the user's computer; it stalls every 2 seconds.
It seems the only other way is to use a Java applet. There are several libraries for sharing the screen. I'm looking for a way to capture the screen contents and stream them via RTMP to a server.
I have found a Java applet that captures screenshots of a defined area at a set interval, encodes them with the ScreenVideo codec, and streams them to an RTMP-capable server: http://code.google.com/p/red5-screenshare/
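For reference, the capture step on its own is straightforward with java.awt.Robot; everything after the capture (the ScreenVideo encoding and RTMP publishing) is what red5-screenshare adds on top. A sketch, with the region and interval as assumptions:

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class ScreenGrabber {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Rectangle region = new Rectangle(0, 0, 800, 600); // capture area (assumption)
        while (true) {
            BufferedImage frame = robot.createScreenCapture(region);
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ImageIO.write(frame, "png", buf);
            // here: encode the frame to ScreenVideo and push it to the RTMP server
            Thread.sleep(200); // ~5 fps interval (assumption)
        }
    }
}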