Hello, I'm trying to build a client-server audio streaming app. The server has to be written in C and communicate with the client over TCP sockets. The client will be written in JavaFX (actually TornadoFX, but that doesn't relate to the problem).
My doubts concern the streaming part. The server has several audio files, and I want to be able to start, stop, rewind, and skip to another song from the client app. As far as I know, Java has two classes for playing audio: Clip and SourceDataLine. I don't think sending the whole song before playing is the way to go (Clip is meant for short sounds, not large files), so I guess I should stick with SourceDataLine. My rough idea is to send a fairly small portion of audio each time the client gets close to the end of the last portion it received (e.g. I send 30 seconds, and when the client reaches the 25th second it requests more). Is that right? Is that doable with the technologies mentioned above?
Maybe something like ffmpeg could be useful here
https://github.com/avTranscoder/avTranscoder
But again, I have to communicate over TCP sockets only.
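To make the idea concrete, here is a minimal sketch of the client-side playback loop, assuming the server simply writes raw PCM chunks over the TCP socket; the host, port, and audio format below are placeholders, and the "request more when almost done" logic is not shown:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;
import java.io.InputStream;
import java.net.Socket;

public class StreamingClient {
    public static void main(String[] args) throws Exception {
        // Assumed format: 44.1 kHz, 16-bit, stereo, signed, little-endian PCM.
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        try (Socket socket = new Socket("localhost", 9000);           // placeholder host/port
             SourceDataLine line = AudioSystem.getSourceDataLine(format)) {
            line.open(format);
            line.start();
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[4096];
            int read;
            // Play each chunk as soon as it arrives.
            while ((read = in.read(buffer)) != -1) {
                line.write(buffer, 0, read);
            }
            line.drain();
        }
    }
}

Because line.write() blocks once the line's internal buffer is full, the client never runs far ahead of playback, which is what would make the "request the next 30 seconds" scheme workable.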
Related
I am working on a little project with a very short amount of time, and I'm totally new to developing for Android. I'm experienced in Java but don't know anything about Android!
The project essentially consists of using an Android phone as a "data collection probe". My plan is to have an Arduino connected via USB sending sensor data at regular intervals, and to ship that data off to a server in packets as fast as possible when the network connection is regained. I also want to capture video with the camera and send it on a similar basis: video is streamed into a file, and when the network connection is regained the file is emptied out and its contents sent off. I'm not sure how to implement this video part; any help would be much appreciated.
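For the "empty out the file and send it off" step, here is a rough plain-Java sketch of flushing the buffered video file over a TCP socket and then truncating it; the file path, host, and port are made up, and the connectivity check and the camera/recording side are not shown:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.net.Socket;

public class VideoFlusher {
    // Called whenever the network connection is regained (hypothetical hook).
    static void flushBufferedVideo(String path, String host, int port) throws Exception {
        try (Socket socket = new Socket(host, port);
             FileInputStream file = new FileInputStream(path)) {
            OutputStream out = socket.getOutputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = file.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            out.flush();
        }
        // Truncate the buffer file once its contents have been sent.
        new FileOutputStream(path).close();
    }
}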
I want to develop an audio streaming server in Java for human voice.
I would like to know if someone has already tested some technologies such as an HTTP server (Icecast, an HttpServlet under Jetty), an RTSP server, or WebRTC. I need something faster. I am also thinking of sending the stream over plain UDP and adding some information for the client.
As the audio format I think I'll use Opus, because it is optimized for human voice.
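To make the "plain UDP plus some information for the client" idea concrete, here is a minimal sketch that prefixes each already-encoded Opus frame with a sequence number, so the client can detect loss and reordering; the frame source, client address, and port are assumptions:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class UdpAudioSender {
    private final DatagramSocket socket;
    private int sequence = 0;

    public UdpAudioSender() throws Exception {
        socket = new DatagramSocket();
    }

    // Sends one encoded Opus frame with a 4-byte sequence number header.
    public void send(byte[] opusFrame, InetAddress client, int port) throws Exception {
        ByteBuffer packet = ByteBuffer.allocate(4 + opusFrame.length);
        packet.putInt(sequence++);
        packet.put(opusFrame);
        socket.send(new DatagramPacket(packet.array(), packet.position(), client, port));
    }
}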
Thanks
I checked some technologies such as Kurento, but in the end my choice is:
Encoding with Opus
Sending the stream to clients with a UDP server
I would like to know if you have ever worked on sending a stream with multiple threads, because I don't know how I can share the encoded stream between the threads (see the sketch after the list below).
This is what I want to create:
1 thread that listens for client connections and saves each client's IP + port
1 thread that captures the microphone and encodes the stream
1 thread per client for sending the encoded stream
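One way to share the encoded stream between the capture/encoder thread and the per-client sender threads (a sketch, not tied to any particular Opus binding) is to give each client its own BlockingQueue: the encoder thread offers every encoded frame to all queues, and each sender thread drains its own queue and sends to its client's IP + port.

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class FrameBroadcaster {
    // One queue per connected client; the listener thread adds/removes entries.
    private final List<BlockingQueue<byte[]>> clientQueues = new CopyOnWriteArrayList<>();

    // Called by the listener thread when a new client announces itself.
    public BlockingQueue<byte[]> register() {
        BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>(256);
        clientQueues.add(queue);
        return queue;
    }

    // Called by the capture/encoder thread for every encoded frame.
    public void publish(byte[] encodedFrame) {
        for (BlockingQueue<byte[]> queue : clientQueues) {
            // offer() drops the frame for a slow client instead of blocking the encoder.
            queue.offer(encodedFrame);
        }
    }
}

// Each per-client sender thread then does something like:
//   while (running) { byte[] frame = queue.take(); /* send it over UDP */ }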
So I'm trying to make a server that opens a socket to stream music to one client.
The problem is that I have no idea how to make the server play the music, or how to read the bytes dynamically on the client.
Right now, I've tried to use the Java sound API without any results.
Keep in mind that I need the server, not the client, to play the music, because the server is an audio player that can play/pause etc.
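Here is a minimal sketch of what I mean, assuming "the server plays the music and streams it" amounts to reading the audio file once and writing each buffer both to a local SourceDataLine and to the client's socket; the file name and port are placeholders, and play/pause/seek handling is not shown:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;
import java.io.File;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class PlayAndStreamServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000);                       // placeholder port
             Socket client = server.accept();
             AudioInputStream audio = AudioSystem.getAudioInputStream(new File("song.wav"))) {
            AudioFormat format = audio.getFormat();
            OutputStream toClient = client.getOutputStream();
            try (SourceDataLine line = AudioSystem.getSourceDataLine(format)) {
                line.open(format);
                line.start();
                byte[] buffer = new byte[4096];
                int read;
                while ((read = audio.read(buffer)) != -1) {
                    line.write(buffer, 0, read);     // play locally on the server
                    toClient.write(buffer, 0, read); // and send the same bytes to the client
                }
                line.drain();
            }
        }
    }
}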
Thank you in advance for the help
The scenario:
My friends and I are developing a product with a Bluetooth receiver that connects to a speaker system. Currently we can pair with this unit and stream music to it over a Bluetooth connection. I'd like to develop an app that I can use to control different settings on our unit while the music is streaming. My application needs to be able to do this no matter which app is streaming the music over Bluetooth (native media player, Spotify, Pandora, etc.). I'm thinking this is potentially a permissions issue as opposed to a bandwidth issue...
I need to know:
Is it possible to send control data and an audio stream concurrently to the same Bluetooth receiver? If so, could someone please point me toward a good strategy that would help me accomplish this (e.g. what protocol to use for the control data)? If it is not possible, can someone recommend a better way to control the target device?
I wasn't able to find what I was looking for in the Android Developer docs.
Based on my knowledge from working with Bluetooth, the best method would be to send the control packets in the same packet stream as the audio packets, appending a unique ID so the speaker can recognize and act on the control packets.
Ideally, you would want a connection stream on the speaker built just for control, so that your app just connects to that and sends the control data, which the speaker then interprets.
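A sketch of what "appending a unique ID" could look like in practice, assuming a simple type byte plus length prefix in front of every payload (the type values below are made up):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class PacketFramer {
    public static final byte TYPE_AUDIO = 0x01;   // hypothetical type IDs
    public static final byte TYPE_CONTROL = 0x02;

    // Frames a payload as [type][length][payload] so the receiver can
    // tell control packets apart from audio packets in the same stream.
    public static byte[] frame(byte type, byte[] payload) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeByte(type);
        out.writeInt(payload.length);
        out.write(payload);
        return bytes.toByteArray();
    }
}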
Pseudo code for the speaker would look something like
String connectionURL = "btspp://localhost:" + uuid + ";name=control";
StreamConnectionNotifier notifier =
        (StreamConnectionNotifier) Connector.open(connectionURL);
while (true) {
    // Blocks here until a client tries to connect.
    StreamConnection connection = notifier.acceptAndOpen();
    /* Process control information from connection.openInputStream() */
}
As for pausing music from a third-party app, the easiest way would be to have your app request audio focus to pause playback and release it to resume. Alternatively, you could send a control packet that cancels the Bluetooth audio stream, but that could become quite complicated to resume.
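A rough sketch of the audio focus part on Android; the focus-change listener is up to you, and requestAudioFocus is shown in its pre-API-26 form:

import android.content.Context;
import android.media.AudioManager;

public class PlaybackPauser {
    // Requesting transient audio focus makes well-behaved media apps pause;
    // abandoning it lets them resume.
    static void pauseOtherPlayback(Context context, AudioManager.OnAudioFocusChangeListener listener) {
        AudioManager audioManager =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        audioManager.requestAudioFocus(
                listener,
                AudioManager.STREAM_MUSIC,
                AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);
    }

    static void resumeOtherPlayback(Context context, AudioManager.OnAudioFocusChangeListener listener) {
        AudioManager audioManager =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        audioManager.abandonAudioFocus(listener);
    }
}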
For a project, I need to be able to stream live audio from a Java server to the browser on the client. My first guess was to use RTMP with a Flash player, my second guess to make use of the HTML5 audio tag. But so far, I've failed to find anything useful (like a library), so does anyone have any pointers on how to do this?
Here's the setup: the sound comes in from a VoIP server as a bunch of PCM samples. From there, it has to get to the client; usually only one client listens to a given stream. So I need to be able to send many VoIP streams to several clients, and a simple form of authentication would also be nice (like a token or a secret URL where the stream is located).
So far, I've looked at Red5 (looks to me like one-to-many streaming only) and searched for Java-based RTMP libraries. Any help is gladly appreciated!
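One direction that fits the HTML5 audio tag idea, sketched under the assumption that the PCM can be wrapped on the fly: serve each stream on its own secret URL with the JDK's built-in HttpServer and send a chunked WAV response, the idea being that a WAV header with an oversized data length may be enough for the browser to keep reading a live stream. The port, path, and openPcmSource() are placeholders for the VoIP-side plumbing:

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class LiveAudioHttpServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // One context per stream; the random path doubles as a simple secret URL.
        server.createContext("/stream/3f9c2a", exchange -> {
            exchange.getResponseHeaders().add("Content-Type", "audio/wav");
            // A response length of 0 means chunked / unknown length.
            exchange.sendResponseHeaders(200, 0);
            try (OutputStream out = exchange.getResponseBody()) {
                // Hypothetical source: it should first yield a WAV header
                // (with an oversized data length), then the raw PCM samples.
                InputStream pcmSource = openPcmSource();
                byte[] buffer = new byte[4096];
                int read;
                while ((read = pcmSource.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                    out.flush();
                }
            }
        });
        server.start();
    }

    // Placeholder for wiring up the VoIP PCM source.
    static InputStream openPcmSource() {
        throw new UnsupportedOperationException("connect the VoIP PCM source here");
    }
}

The client page would then just point an <audio> element at that URL.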