How to pass video from an IP camera to a live stream on YouTube? - java

I am trying to send a video captured from an IP Camera (stream from IP Webcam) through vlcj. My stream can be grabbed from http://<phoneIP>:8080/video
How can I send the video through Java to YouTube using the YouTube Live Streaming API?
I have read the documentation for the YouTube Live Streaming API and the YouTube Data API v3, and so far I've managed to upload a video to my channel using the code they provide:
public static void main(String[] args) throws GeneralSecurityException, IOException, GoogleJsonResponseException {
    YouTube youtubeService = getService();

    // Define the Video object, which will be uploaded as the request body.
    Video video = new Video();

    // Add the snippet object property to the Video object.
    VideoSnippet snippet = new VideoSnippet();
    Random rand = new Random();
    snippet.setCategoryId("22");
    snippet.setDescription("Description of uploaded video.");
    snippet.setTitle("Test video upload. " + rand.nextInt());
    video.setSnippet(snippet);

    // Add the status object property to the Video object.
    VideoStatus status = new VideoStatus();
    status.setPrivacyStatus("unlisted");
    video.setStatus(status);

    File mediaFile = new File(FILE_PATH);
    InputStreamContent mediaContent = new InputStreamContent("video/*",
            new BufferedInputStream(new FileInputStream(mediaFile)));
    mediaContent.setLength(mediaFile.length());

    // Define and execute the API request
    YouTube.Videos.Insert request = youtubeService.videos().insert("snippet,status",
            video, mediaContent);
    Video response = request.execute();
    System.out.println(response);
}
But the code they present for creating a live stream doesn't show the part where you actually stream the content.
Thanks!
EDIT 1 25.06.2019/17:00
I found the field named ingestionAddress and filled it in like this:
cdn.setIngestionInfo(new IngestionInfo().setIngestionAddress("http://192.168.0.100:8080/video"));, but nothing shows up in YouTube Studio when I run the app.
After some digging, I found out that a LiveBroadcast is a larger object than a LiveStream and can embed one. So far, I took the code from the LiveBroadcast insert docs, presented below.
public static void main(String[] args)
        throws GeneralSecurityException, IOException, GoogleJsonResponseException {
    YouTube youtubeService = getService();

    // Define the LiveBroadcast object, which will be uploaded as the request body.
    LiveBroadcast liveBroadcast = new LiveBroadcast();
    LiveStream liveStream = new LiveStream();

    // Add the contentDetails object property to the LiveBroadcast object.
    LiveBroadcastContentDetails contentDetails = new LiveBroadcastContentDetails();
    contentDetails.setEnableClosedCaptions(true);
    contentDetails.setEnableContentEncryption(true);
    contentDetails.setEnableDvr(true);
    contentDetails.setEnableEmbed(true);
    contentDetails.setRecordFromStart(true);
    liveBroadcast.setContentDetails(contentDetails);

    // Add the snippet object property to the LiveBroadcast object.
    LiveBroadcastSnippet snippet = new LiveBroadcastSnippet();
    snippet.setScheduledStartTime(new DateTime("2019-06-25T17:00:00+03:00"));
    snippet.setScheduledEndTime(new DateTime("2019-06-25T17:05:00+03:00"));
    snippet.setTitle("Test broadcast");
    liveBroadcast.setSnippet(snippet);

    // Add the status object property to the LiveBroadcast object.
    LiveBroadcastStatus status = new LiveBroadcastStatus();
    status.setPrivacyStatus("unlisted");
    liveBroadcast.setStatus(status);

    // Define and execute the API request
    YouTube.LiveBroadcasts.Insert request = youtubeService.liveBroadcasts()
            .insert("snippet,contentDetails,status", liveBroadcast);
    LiveBroadcast response = request.execute();
    System.out.println(response);
}
After running the code above, I got this result in YouTube Studio:
Now I don't know how to combine the two, i.e. how to attach the LiveStream to the LiveBroadcast so I can stream content from my phone.
Thanks again!
EDIT 2 25.06.2019/17:25
I found a function that can bind a stream to a broadcast, but when I open Live Control Room, I get this:
I still haven't managed to bind them, but I think I am getting closer. Can someone push me in the right direction here?

The LiveStream resource is essentially a collection of metadata that the YouTube API uses to be aware of your stream and to hold information about it.
Part of that information is the CDN ingestion URL that you must send your actual video stream from the camera to (see https://developers.google.com/youtube/v3/live/docs/liveStreams).
You can see an answer with an example of using this here: https://stackoverflow.com/a/29653174/334402
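To make that concrete: a broadcast on its own (what the question's second snippet creates) has no ingestion endpoint. You also insert a liveStreams resource, bind it to the broadcast, and then push the camera feed to the ingestion address YouTube hands back, rather than setting the ingestion address to the camera URL. Below is a minimal, hedged sketch using the YouTube Data API v3 Java client; getService() is the helper from the question, and broadcast stands for the LiveBroadcast returned by the insert call there (named response in that snippet):

YouTube youtubeService = getService();

// 1) Create the LiveStream resource (the CDN/ingestion half).
LiveStream stream = new LiveStream();
LiveStreamSnippet streamSnippet = new LiveStreamSnippet();
streamSnippet.setTitle("IP camera stream");
stream.setSnippet(streamSnippet);

CdnSettings cdn = new CdnSettings();
cdn.setIngestionType("rtmp");   // YouTube assigns the ingestion address; do not point it at the camera
cdn.setResolution("720p");
cdn.setFrameRate("30fps");
stream.setCdn(cdn);

LiveStream createdStream = youtubeService.liveStreams()
        .insert("snippet,cdn", stream)
        .execute();

// 2) Bind the stream to the broadcast created earlier.
youtubeService.liveBroadcasts()
        .bind(broadcast.getId(), "id,contentDetails")
        .setStreamId(createdStream.getId())
        .execute();

// 3) Read where YouTube wants the video pushed, then send the camera feed there
//    with an RTMP-capable tool (ffmpeg, OBS, or vlcj with an RTMP output), e.g.:
//    ffmpeg -i http://<phoneIP>:8080/video -c:v libx264 -f flv <rtmpUrl>
IngestionInfo ingestion = createdStream.getCdn().getIngestionInfo();
String rtmpUrl = ingestion.getIngestionAddress() + "/" + ingestion.getStreamName();
System.out.println("Push the camera feed to: " + rtmpUrl);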

Related

How to precache a list of videos before playing them in a recyclerview of exoplayer?

I am using ExoPlayer in a RecyclerView to render a list of videos. This RecyclerView is rendered in an activity, and I know the list of video URLs before opening the activity.
I want to be able to preload or cache the videos before going to that activity. The videos are usually less than 1 minute, so I am not looking for a smooth-streaming solution. I just want the videos to be in the cache before opening the activity, so that once the RecyclerView opens the videos start playing without any buffering, just like in TikTok.
I found a way to cache already played videos using LocalCacheDataSourceFactory in
MediaSource videoSource =
        new ExtractorMediaSource(Uri.parse(mediaUrl),
                new LocalCacheDataSourceFactory(context, 100 * 1024 * 1024, 5 * 1024 * 1024),
                new DefaultExtractorsFactory(), null, null);
This only lets me cache videos that have already been played, not preload or precache them.
I found this Medium article from the ExoPlayer team, but no other example integration for my specific requirement.
Check out this article, where I have written a step-by-step guide to preloading/precaching videos using ExoPlayer 2.
In your case you would call the caching method recursively, or in a loop, on a background thread.
Video preloading/ precaching using Exoplayer 2 in Android
I have achieved this in my own app. Follow these steps:
Add this video caching dependency:
implementation 'com.danikula:videocache:2.7.1'
Set it up like this (it is advised that you do this in the Application class):
private HttpProxyCacheServer proxy;

public static HttpProxyCacheServer getProxy(Context context) {
    AppController app = (AppController) context.getApplicationContext();
    return app.proxy == null ? (app.proxy = app.newProxy()) : app.proxy;
}

private HttpProxyCacheServer newProxy() {
    return new HttpProxyCacheServer(this);
}
// Note that you need to have a single instance of HttpProxyCacheServer.
// Note the library is traditionally meant to let you cache a video so ExoPlayer does not re-buffer it every single time you scroll to a recycler item.
Then set up your ExoPlayer like this:
a.
HttpProxyCacheServer proxyServer; // maintain a single instance of this; you can init it in the onCreate method:

public ViewRecommendedShortVideosAdapter(Context context) {
    proxyServer = AppController.getProxy(context);
}
b. In your method that plays the ExoPlayer file:

String streamingURL = shortVideo.getStreamingURL();
String proxyURL = proxyServer.getProxyUrl(streamingURL);
// You are now creating the media item from the proxy URL you get by calling getProxyUrl on your proxyServer instance.
MediaItem mediaItem = MediaItem.fromUri(proxyURL);
exoPlayer.setMediaItem(mediaItem);
exoPlayer.prepare();
exoPlayer.setPlayWhenReady(true);
Note: the above still only gives you caching of a video once it has been played, not caching of yet-to-play videos.
To cache yet-to-play videos, do this:
a. You can cache only the next video in the recycler data ArrayList (this helps save users' data costs), or you can cache the whole list (which I don't recommend, because of the data cost to users).
b. To cache the next video, create a method such as prepareNextVideo(); this will contain the logic to cache the next video. Call this method from the original method that plays an ExoPlayer media item (for example right after setting up the first media item and calling exoPlayer.play()).
c. The code in prepareNextVideo():
private void prepareNextVideo(int currentPosition) {
    URL url = null;
    try {
        // You can add an if condition here to check that the next element exists,
        // but the try/catch should handle it too.
        ShortVideo shortVideo = videosArrayList.get(currentPosition + 1);
        // We are using our proxy URL again.
        url = new URL(proxyServer.getProxyUrl(shortVideo.getStreamingURL()));
        InputStream inputStream = url.openStream();
        int bufferSize = 1024;
        byte[] buffer = new byte[bufferSize];
        int length = 0;
        while ((length = inputStream.read(buffer)) != -1) {
            // Nothing to do here; you are just reading the stream so the proxy caches it.
        }
    } catch (Exception e) {
        // Could not fetch the next video.
        e.printStackTrace();
    }
}
Finally, simply call this method to buffer the next video. You can also put it in a loop, run asynchronously, if you want to buffer all the videos.
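As a hypothetical usage sketch (prefetchExecutor and playVideoAt are illustrative names; videosArrayList, proxyServer, exoPlayer and prepareNextVideo come from the snippets above), the prefetch call can be pushed off the main thread with a single-thread executor:

// requires java.util.concurrent.ExecutorService / Executors
private final ExecutorService prefetchExecutor = Executors.newSingleThreadExecutor();

private void playVideoAt(int position) {
    // Play the current item through the cache proxy, as shown above.
    ShortVideo current = videosArrayList.get(position);
    exoPlayer.setMediaItem(MediaItem.fromUri(proxyServer.getProxyUrl(current.getStreamingURL())));
    exoPlayer.prepare();
    exoPlayer.setPlayWhenReady(true);

    // Warm the cache for the next item in the background.
    prefetchExecutor.execute(() -> prepareNextVideo(position));
}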

GCP Speech to text - Java API not working

I have a sample .webm file recorded using MediaRecorder in the Chrome browser. When I use the Google Speech Java client to get a transcription for the video, it returns an empty transcription. Here is what my code looks like:
SpeechSettings settings = null;
Path path = Paths.get("D:\\scrap\\gcp_test.webm");
byte[] content = null;

try {
    content = Files.readAllBytes(path);
    settings = SpeechSettings.newBuilder().setCredentialsProvider(credentialsProvider).build();
} catch (IOException e1) {
    throw new IllegalStateException(e1);
}

try (SpeechClient speech = SpeechClient.create(settings)) {
    // Builds the request for remote FLAC file
    RecognitionConfig config = RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.LINEAR16)
            .setLanguageCode("en-US")
            .setUseEnhanced(true)
            .setModel("video")
            .setEnableAutomaticPunctuation(true)
            .setSampleRateHertz(48000)
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(ByteString.copyFrom(content)).build();
    // RecognitionAudio audio = RecognitionAudio.newBuilder().setUri("gs://xxxx/gcp_test.webm").build();

    // Use blocking call for getting audio transcript
    RecognizeResponse response = speech.recognize(config, audio);
    List<SpeechRecognitionResult> results = response.getResultsList();
    for (SpeechRecognitionResult result : results) {
        SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
        System.out.printf("Transcription: %s%n", alternative.getTranscript());
    }
} catch (Exception e) {
    e.printStackTrace();
    System.err.println(e.getMessage());
}
If I use the same file and visit https://cloud.google.com/speech-to-text/ and upload the file in the demo section, it works fine and shows the transcription. I am clueless about what's going wrong here. I verified the request sent by the demo, and here is what it looks like:
I am sending the exact same set of parameters, but that didn't work. I tried uploading the file to Cloud Storage, but that too gave the same result (no transcription).
After some trial and error (and looking at the JavaScript samples), I was able to solve the issue. The serialized audio should be in FLAC format; I was sending the video file (webm) as-is to Google Cloud. The demo on the site extracts the audio stream using the JavaScript Audio API and then sends the data in base64 format, which is why it works.
Here are the steps I executed to get the output:
Used FFmpeg to extract the audio stream from the webm into FLAC format:
ffmpeg -i sample.webm -vn -acodec flac sample.flac
The extracted file should be made available either via Cloud Storage or sent as a ByteString (a minimal sketch of the ByteString variant follows these steps).
Set the appropriate model while calling the Speech API (for English the video model works, while for French it was command_and_search). I don't have a logical reason for this; I realised it after trial and error with the demo on the Google Cloud site.
I got results with the FLAC-encoded file.
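For reference, here is a minimal sketch of the ByteString variant mentioned above (the file path is illustrative; for clips longer than about a minute, use the Cloud Storage URI with longRunningRecognizeAsync as in the sample further down):

try (SpeechClient speechClient = SpeechClient.create()) {
    byte[] flacBytes = Files.readAllBytes(Paths.get("D:\\scrap\\sample.flac"));
    RecognitionConfig config = RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.FLAC)   // the FLAC header carries the sample rate
            .setLanguageCode("en-US")
            .setModel("video")
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder()
            .setContent(ByteString.copyFrom(flacBytes))
            .build();
    RecognizeResponse response = speechClient.recognize(config, audio);
    for (SpeechRecognitionResult result : response.getResultsList()) {
        System.out.println(result.getAlternativesList().get(0).getTranscript());
    }
}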
Sample code that returns words with timestamps:
public class SpeechToTextSample {
    public static void main(String... args) throws Exception {
        try (SpeechClient speechClient = SpeechClient.create()) {
            String gcsUriFlac = "gs://yourfile.flac";
            RecognitionConfig config =
                    RecognitionConfig.newBuilder()
                            .setEncoding(AudioEncoding.FLAC)
                            .setEnableWordTimeOffsets(true)
                            .setLanguageCode("en-US")
                            .build();
            RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUriFlac).build(); // for large files
            OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
                    speechClient.longRunningRecognizeAsync(config, audio);
            while (!response.isDone()) {
                System.out.println("Waiting for response...");
                Thread.sleep(1000);
            }
            // Performs speech recognition on the audio file
            List<SpeechRecognitionResult> results = response.get().getResultsList();
            for (SpeechRecognitionResult result : results) {
                SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                System.out.printf("Transcription: %s%n", alternative.getTranscript());
                for (WordInfo wordInfo : alternative.getWordsList()) {
                    System.out.println(wordInfo.getWord());
                    System.out.printf(
                            "\t%s.%s sec - %s.%s sec\n",
                            wordInfo.getStartTime().getSeconds(),
                            wordInfo.getStartTime().getNanos() / 100000000,
                            wordInfo.getEndTime().getSeconds(),
                            wordInfo.getEndTime().getNanos() / 100000000);
                }
            }
        }
    }
}
GCP supports different languages; I have used "en-US" for my example. Please refer to the linked documentation for the list of supported languages.

Google Drive resumable upload in v3

I am looking for some help/an example of how to perform a resumable upload to Google Drive using the new v3 REST API in Java.
I know there is a low-level description here: Upload files | Google Drive API. But at the moment I would rather not deal with these low-level requests if there is another, simpler method (like the former MediaHttpUploader, which is now deprecated...).
What I currently do is:
File fileMetadata = new File();
fileMetadata.setName(name);
fileMetadata.setDescription(...);
fileMetadata.setParents(parents);
fileMetadata.setProperties(...);
FileContent mediaContent = new FileContent(..., file);
drive.files().create(fileMetadata, mediaContent).execute();
But for large files this isn't good if the connection is interrupted.
I created an implementation of this recently. It will create a new file in your Drive folder and return its metadata when the task succeeds. While uploading, it will also update the listener with upload progress info. I added comments to make it self-explanatory:
public Task<File> createFile(java.io.File yourFile, MediaHttpUploaderProgressListener uploadListener) {
    return Tasks.call(mExecutor, () -> {
        // Wraps your file content to be uploaded
        FileContent mediaContent = new FileContent("yourFileMimeType", yourFile);

        // Creates the Drive file metadata
        File metadata = new File()
                .setParents(parents)
                .setMimeType(yourFileMimeType)
                .setName(yourFileName);

        // Builds up the upload request
        Drive.Files.Create uploadFile = mDriveService.files().create(metadata, mediaContent);

        // This will handle the resumable upload
        MediaHttpUploader uploader = uploadFile.getMediaHttpUploader();
        // Choose your chunk size and it will automatically divide parts
        uploader.setChunkSize(MediaHttpUploader.MINIMUM_CHUNK_SIZE);
        // According to Google, this enables gzip in future versions (optional)
        uploader.setDisableGZipContent(false);
        // Important: this enables resumable upload
        uploader.setDirectUploadEnabled(false);
        // Listener to be updated
        uploader.setProgressListener(uploadListener);

        return uploadFile.execute();
    });
}
And make your Activity implement MediaHttpUploaderProgressListener so you have real-time updates on the upload progress:
@Override
public void progressChanged(MediaHttpUploader uploader) {
    String sizeTemp = "Uploading"
            + ": "
            + Formatter.formatShortFileSize(this, uploader.getNumBytesUploaded())
            + "/"
            + Formatter.formatShortFileSize(this, totalFileSize);
    runOnUiThread(() -> textView.setText(sizeTemp));
}
For calculating the progress percentage, you simply do:
double percentage = (double) uploader.getNumBytesUploaded() / totalFileSize;
Or use this one:
uploader.getProgress()
It gives you the fraction of bytes that have been uploaded, between 0.0 (0%) and 1.0 (100%). But be sure to have your content length specified, otherwise it will throw an IllegalArgumentException.
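A hedged call-site sketch (TAG and "backup.db" are placeholders; the Activity implements MediaHttpUploaderProgressListener, so it passes itself as the listener and receives progressChanged callbacks):

java.io.File localFile = new java.io.File(getFilesDir(), "backup.db");
createFile(localFile, this)
        .addOnSuccessListener(driveFile ->
                Log.d(TAG, "Uploaded, Drive file id: " + driveFile.getId()))
        .addOnFailureListener(e ->
                Log.e(TAG, "Resumable upload failed", e));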

How to upload a video to YouTube using YouTube API and MediaStreamSource

I'm building a relay service that will pass videos from an external server to YouTube. Currently my code is working as intended, but I need to avoid saving the file locally ahead of time before uploading to YouTube with MediaFileSource. Is there a way to pass an InputStream instead of a file, and use MediaStreamSource instead, to allow piping?
https://developers.google.com/gdata/javadoc/com/google/gdata/data/media/MediaStreamSource
That way I'd be able to pipe the files directly, like so:
YouTubeService youTubeService = new YouTubeService("My-Service", developerKey);
youTubeService.setUserCredentials(user, password);
VideoEntry newEntry = new VideoEntry();
YouTubeMediaGroup mg = newEntry.getOrCreateMediaGroup();
mg.setTitle(new MediaTitle());
mg.getTitle().setPlainTextContent("Song Title");
mg.addCategory(new MediaCategory(YouTubeNamespace.CATEGORY_SCHEME, "Category"));
mg.setKeywords(new MediaKeywords());
mg.getKeywords().addKeyword("Test");
mg.setDescription(new MediaDescription());
mg.getDescription().setPlainTextContent("Song Description");
mg.setPrivate(false);
MediaStreamSource ms = new MediaStreamSource(new URL("http://www.somewebsite.com/video.mp4").openStream(), "video/mp4");
newEntry.setMediaStream(ms);
String uploadUrl = "http://uploads.gdata.youtube.com/feeds/api/users/default/uploads";
VideoEntry createdEntry = youTubeService.insert(new URL(uploadUrl), newEntry);
return createdEntry.getHtmlLink().getHref();
I encountered an error and had to edit my code as follows:
String title = "Test Video";
youTubeService.getRequestFactory().setHeader("Slug", title);
I thought I should mention it because I didn't see this in your snippet. But the upload is taking an insanely long time. Any progress at your end with the upload?

sending video file to browser over websocket

I want to send a video file from a server written in java to a web browser client.
The socket connection works fine and I have no trouble sending text.
The library I'm using for the WebSocket server is this: https://github.com/TooTallNate/Java-WebSocket
This is the code for sending the file:
public void sendFile(WebSocket conn, String path)
{
    try
    {
        File file = new File(path);
        byte[] data = new byte[(int) file.length()];
        DataInputStream stream = new DataInputStream(new FileInputStream(file));
        stream.readFully(data);
        stream.close();
        conn.send(data);
    }
    ..snip catch statements..
Here is my JavaScript code for receiving the file:
function connect()
{
    conn = new WebSocket('ws://localhost:8887');
    conn.onopen = function(){alert("Connection Open");};
    conn.onmessage = function(evt){if(evt.data instanceof Blob){readFile(evt);}else{alert(evt.data);}};
    conn.onclose = function(){alert('connection closed');};
}

function readFile(file_data)
{
    var video = document.getElementById('area');
    video.src = window.URL.createObjectURL(file_data.data);
}
..skip to html element for playing the file..
<video id='area' controls="controls"></video>
I want to be able to receive the file in the browser and play it.
The error I get while trying to send a webm video file to Firefox is:
HTTP "Content-Type" of "application/octet-stream" is not supported. Load of media resource blob:794345a5-4b6d-4585-b92b-3acb51612a6c failed.
Is it possible to receive a video file from a websocket and play it?
Am I implementing something wrong?
The video element requires the right content type; a WebSocket Blob arrives with a generic one (application/octet-stream), and it seems (to me) there is no way to set it server-side or client-side.
Fortunately, Blob has a slice(start, end, contentType) method, so you can re-wrap the Blob with the correct type in readFile before creating the object URL:
var rightBlob = originalBlob.slice(0, originalBlob.size, 'video/webm')
