Relevant code:
YouTubeThumbnailView first_video = (YouTubeThumbnailView) findViewById(R.id.main_video);
first_video.initialize(Config.YOUTUBE_API, new YouTubeThumbnailView.OnInitializedListener() {
    @Override
    public void onInitializationSuccess(YouTubeThumbnailView youTubeThumbnailView, YouTubeThumbnailLoader youTubeThumbnailLoader) {
        final String video = getResources().getString(R.string.principal_funcoes);
        youTubeThumbnailLoader.setVideo(video);
    }

    @Override
    public void onInitializationFailure(YouTubeThumbnailView youTubeThumbnailView, YouTubeInitializationResult error) {
        // error handling omitted in the original snippet
    }
});
The time it takes for youTubeThumbnailLoader.setVideo(String) to produce a thumbnail is absurd.
It takes 30+ seconds for the thumbnail to show up, even on a wired 100 Mbps connection.
It's unreasonable to expect a user to wait more than a couple of seconds for the thumbnail to appear while staring at a completely blank screen.
What can I do to load the video thumbnail any faster, or at least make YouTubeThumbnailView show a loading image while it fetches the thumbnail?
You can use the direct YouTube image URLs to get the thumbnails, and they load much faster. Here are the URLs you can try.
This one only gives you the default-size thumbnail:
https://ytimg.googleusercontent.com/vi/<insert-youtube-video-id-here>/default.jpg
For the high quality version of the thumbnail use a url similar to this:
https://ytimg.googleusercontent.com/vi/<insert-youtube-video-id-here>/hqdefault.jpg
There is also a medium quality version of the thumbnail, using a url similar to the HQ:
https://ytimg.googleusercontent.com/vi/<insert-youtube-video-id-here>/mqdefault.jpg
For the standard definition version of the thumbnail, use a url similar to this:
https://ytimg.googleusercontent.com/vi/<insert-youtube-video-id-here>/sddefault.jpg
For the maximum resolution version of the thumbnail use a url similar to this:
https://ytimg.googleusercontent.com/vi/<insert-youtube-video-id-here>/maxresdefault.jpg
This helped me when developing my YouTube application; I hope it helps you too.
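For example, once you have the video id you can drop one of these URLs into a normal ImageView with an image library such as Picasso, which also gives you a placeholder while the thumbnail downloads. This is only a sketch; the drawable names and the ImageView id below are placeholders:
String videoId = getResources().getString(R.string.principal_funcoes); // your video id string
String thumbUrl = "https://ytimg.googleusercontent.com/vi/" + videoId + "/hqdefault.jpg";
ImageView thumbView = (ImageView) findViewById(R.id.main_video_thumb); // hypothetical ImageView in your layout
Picasso.with(this)
        .load(thumbUrl)
        .placeholder(R.drawable.thumb_loading) // shown while the thumbnail downloads
        .error(R.drawable.thumb_error)         // shown if the download fails
        .into(thumbView);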
I'm struggling with Android MediaCodec and looking for a straightforward way to change the resolution of a video file on Android.
For now I'm trying a single-threaded transcoding method that does all the work step by step so I can understand it well; at a high level it looks as follows:
public void TranscodeVideo()
{
// Extract
MediaTrack[] tracks = ExtractTracks(InputPath);
// Decode
MediaTrack videoTrack = tracks.Where(o => o.IsVideo).FirstOrDefault();
MediaTrack rawVideoTrack = DecodeTrack(videoTrack);
// Edit?
// ResizeVideoTrack(rawVideoTrack);
// Encode
MediaFormat newFormat = MediaHelper.CreateVideoOutputFormat(videoTrack.Format);
MediaTrack encodeVideodTrack = EncodeTrack(rawVideoTrack, newFormat);
// Mux
encodeVideodTrack.Index = videoTrack.Index;
tracks[Array.IndexOf(tracks, videoTrack)] = encodeVideodTrack;
MuxeTracks(OutputPath, tracks);
}
Extraction works fine, returning one track with audio only and one with video only. Muxing works fine, combining those two tracks again. Decoding works, but I don't know how to verify it; the raw frames on the track weigh much more than the originals, so I assume it's right.
Problem
The encoder's input buffer is smaller than the raw frame size, and its size is also tied to the configured encoding format, so I assume I need to resize the frames in some way, but I can't find anything useful. Am I correct on this? Am I missing something? What is the right way to resize raw video frames? Any help? :S
PS
Maybe you will notice that I'm using C# (Xamarin.Android), for more fun. But the underlying API is of course Java.
I'm using ByteBuffers, not Surfaces, because it seems easier. Using Surfaces will be the next step; any advice is welcome.
I know the single-threaded process is highly inefficient, but it keeps things simple. Connecting the decoder's output buffer to the encoder's input buffer will be another later step.
I dug through the PhilLab, Grafika and Bigflake examples, but nothing seems very useful for my case.
I'm avoiding ffmpeg on Android.
Thank you everyone for your time.
Going off the comment above, here is how to implement libVLC.
Add this to your app root's build.gradle
allprojects {
repositories {
...
maven {
url 'https://jitpack.io'
}
}
}
Add this to your dependent app's build.gradle
dependencies {
...
implementation 'com.github.masterwok:libvlc-android-sdk:3.0.13'
}
Here is an example of loading an RTSP stream as an activity
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.camera_stream_layout);
// Get URL
this.rtspUrl = getIntent().getExtras().getString(RTSP_URL);
Log.d(TAG, "Playing back " + rtspUrl);
this.mSurface = findViewById(R.id.camera_surface);
this.holder = this.mSurface.getHolder();
ArrayList<String> options = new ArrayList<>();
options.add("-vvv"); // verbosity
//Add vlc transcoder options here
this.libvlc = new LibVLC(getApplicationContext(), options);
this.holder.setKeepScreenOn(true);
//this.holder.setFixedSize();
// Create media player
this.mMediaPlayer = new MediaPlayer(this.libvlc);
this.mMediaPlayer.setEventListener(this.mPlayerListener);
// Set up video output
final IVLCVout vout = this.mMediaPlayer.getVLCVout();
vout.setVideoView(this.mSurface);
//Set size of video to fit app screen
DisplayMetrics displayMetrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
ViewGroup.LayoutParams videoParams = this.mSurface.getLayoutParams();
videoParams.width = displayMetrics.widthPixels;
videoParams.height = displayMetrics.heightPixels;
vout.setWindowSize(videoParams.width, videoParams.height);
vout.addCallback(this);
vout.attachViews();
final Media m = new Media(this.libvlc, Uri.parse(this.rtspUrl));
//Use this to add transcoder options m.addOption("vlc transcode options here");
this.mMediaPlayer.setMedia(m);
this.mMediaPlayer.play();
}
Here is the documentation of vlc transcoder options
https://wiki.videolan.org/Documentation:Streaming_HowTo_New/
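For instance, building on the commented m.addOption line in the snippet above, a couple of per-media options that are often used with RTSP streams might look like this; the option names come from the VLC wiki, so treat this as a sketch and verify them against your libVLC version:
// Hypothetical per-media options; see the VLC wiki for the full list and exact syntax.
final Media m = new Media(this.libvlc, Uri.parse(this.rtspUrl));
m.addOption(":network-caching=300"); // milliseconds of network buffering
m.addOption(":rtsp-tcp");            // force RTSP over TCP instead of UDP
this.mMediaPlayer.setMedia(m);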
You are right: the encoder's input buffer is smaller because it expects input of the configured dimensions. The encoder, as the name suggests, only encodes.
I read your question as more of a "why" than a "how" question, so I'll only point you to where you'll find the "why"s.
The decoded frame is a YUV image (I suggest quickly skimming the Wikipedia article on YUV), usually NV21 if I'm not mistaken, though it may differ from device to device. To resize it I suggest you use a library, since every plane of the image needs to be scaled down differently, and a library usually takes care of the filtering as well. Check out libYUV. If you are interested in the actual resizing algorithms check out this, and for implementations this.
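For illustration only, a naive nearest-neighbour downscale of an NV21 frame in plain Java could look like the sketch below (no filtering, assumes even dimensions; in practice you would let libYUV do this):
// Sketch: nearest-neighbour resize of an NV21 frame (full-size Y plane + half-size interleaved VU plane).
// Assumes all widths and heights are even; real code should use libYUV for speed and filtering.
public static byte[] resizeNv21(byte[] src, int srcW, int srcH, int dstW, int dstH) {
    byte[] dst = new byte[dstW * dstH * 3 / 2];
    // Y plane: one byte per pixel.
    for (int y = 0; y < dstH; y++) {
        int sy = y * srcH / dstH;
        for (int x = 0; x < dstW; x++) {
            dst[y * dstW + x] = src[sy * srcW + (x * srcW / dstW)];
        }
    }
    // VU plane: one V,U pair per 2x2 block of pixels, so it is scaled at half resolution.
    int srcUv = srcW * srcH;
    int dstUv = dstW * dstH;
    for (int y = 0; y < dstH / 2; y++) {
        int sy = y * (srcH / 2) / (dstH / 2);
        for (int x = 0; x < dstW / 2; x++) {
            int sx = x * (srcW / 2) / (dstW / 2);
            dst[dstUv + y * dstW + 2 * x]     = src[srcUv + sy * srcW + 2 * sx];     // V
            dst[dstUv + y * dstW + 2 * x + 1] = src[srcUv + sy * srcW + 2 * sx + 1]; // U
        }
    }
    return dst;
}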
If you are not required to handle the decoding and encoding with ByteBuffers, I suggest using a Surface, as you already mentioned. It has multiple benefits over decoding to ByteBuffers:
It is more memory efficient, as there is no copy between the native buffer and an app-allocated buffer; the native buffers are simply swapped from and to the Surface.
If you plan to render the frame, be it for resizing or displaying, it can be done by the device's graphics processor. For how to do that, check out Bigflake's DecodeEditEncodeTest.
I hope this answers some of your questions.
EDIT:
I asked a similar question a day before and I wrote this post based on information I gathered since that question -
First Cache Question. I know they are similar but this question is more concise to avoid extra information. I also didn't want to delete that post since it was answered, even though the answer didn't suffice.
I use a RecyclerView to show photos I get from the Google Places API. I know that I am legally not allowed to cache the photos beyond runtime. I didn't know it was happening until I randomly opened my phone's gallery and saw a lot of copies of the photos I get from Google.
My first guess was that my use of Picasso was the issue, so I added code to fix it:
.memoryPolicy(MemoryPolicy.NO_CACHE, MemoryPolicy.NO_STORE)
.networkPolicy(NetworkPolicy.NO_CACHE)
Finding these on Stack Overflow was pretty simple, but it didn't fix anything, except that the photos now seem to be downloaded only once, especially after I delete them. I believe I have eliminated the other possible issues and am outlining the last one in this question.
private Uri getImageUri(Context inContext, Bitmap inImage) {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
inImage.compress(Bitmap.CompressFormat.JPEG, 100, bytes);
String path = MediaStore.Images.Media.insertImage(inContext.getContentResolver(), inImage, "Title", null);
return Uri.parse(path);
}
@Override
public void onBindViewHolder(@NonNull VenueViewHolder venueViewHolder, int i) {
Collections.sort(vIAL, (o1, o2) -> o1.getVenueName().compareToIgnoreCase(o2.getVenueName()));
VenueItem currentItem = vIAL.get(i);
Picasso.with(context)
.load(getImageUri(context, currentItem.getVenueImage()))
.fit()
.memoryPolicy(MemoryPolicy.NO_CACHE, MemoryPolicy.NO_STORE)
.networkPolicy(NetworkPolicy.NO_CACHE)
.into(venueViewHolder.vIV);
venueViewHolder.vTV.setText(currentItem.getVenueName());
Log.i(TAG, "The photo should have been on screen");
}
The getImageUri method is something I found as an answer to another problem I had: I needed a URI for the Picasso library so that I could manipulate the photos before displaying them.
My question is - How do I remove the photos when the app closes?
UPDATE:
I changed my tactic to see what would happen and used Glide:
@Override
public void onBindViewHolder(@NonNull VenueViewHolder venueViewHolder, int i) {
Collections.sort(vIAL, (o1, o2) -> o1.getVenueName().compareToIgnoreCase(o2.getVenueName()));
VenueItem currentItem = vIAL.get(i);
Glide.with(context)
.load(currentItem.getVenueImage())
.into(venueViewHolder.vIV);
venueViewHolder.vTV.setText(currentItem.getVenueName());
}
and it gave a fatal error:
E/JavaBinder: !!! FAILED BINDER TRANSACTION !!! (parcel size = 4344032)
This was one of the errors; it didn't occur the first time I ran this new code, but it got worse the second and third times I ran it.
I changed my code based on an answer I got earlier from @Padmini S, but they used a URL in the load() call and I pass a Bitmap, because for the life of me I can't figure out how to get a URL from the Google Places API instead of the code they provide in
Google Place Photos.
I'm relatively new to coding, so this is me trying to piece together what I need to learn. I'm just out of ideas of what to search for, so I'm asking here:
based on the new information I gathered from the answer, can I replace
MediaStore.Images.Media.insertImage(inContext.getContentResolver(), inImage, "Title", null);
in my code so that my photos don't get saved to my phone's photos?
Final Update
I ended up rewriting a lot of the surrounding code and deleting this code. I made a class to hold the ArrayList for the duration of runtime, which let me remove most of the extra code I had written out of ignorance.
I really don't know whether it'll solve your problem or not, but you are storing the images in the device's local storage here:
private Uri getImageUri(Context inContext, Bitmap inImage) {
String path = MediaStore.Images.Media.insertImage(inContext.getContentResolver(), inImage, "Title", null);
}
I have never used the Google Places API, but APIs will usually return an image URL. You can store that URL in a POJO class and display it directly in the RecyclerView row's ImageView, like I do in the code below:
public void onBindViewHolder(@NonNull ViewHolder viewHolder, final int i) {
viewHolder.description.setText(links.get(i).getTitle());
Glide.with(context).load(links.get(i).getImage_url()).into(viewHolder.image);
}
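As a rough sketch of that idea (the class and field names are made up, and it assumes the API response gives you a photo URL):
// Hypothetical POJO that keeps the photo URL instead of a Bitmap.
public class VenueItem {
    private final String venueName;
    private final String photoUrl;

    public VenueItem(String venueName, String photoUrl) {
        this.venueName = venueName;
        this.photoUrl = photoUrl;
    }

    public String getVenueName() { return venueName; }
    public String getPhotoUrl() { return photoUrl; }
}

// In onBindViewHolder, load the URL directly; nothing is written to the MediaStore this way.
Glide.with(context).load(currentItem.getPhotoUrl()).into(venueViewHolder.vIV);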
I'm making a Netflix-style app in which images are loaded into different categories, say Dog videos (15 images), Cat videos (15 images), etc. All the images are loaded from URLs, and it takes a while for them all to load. Is there anything I could do to speed up the process? Or maybe show an empty container and then fill it as the images load (that would be cool).
This is what I have done:
I have multiple async calls in one Activity (one async call per category):
JSONTask1 dogTask = new JSONTask1();
JSONTask2 catTask = new JSONTask2();
JSONTask3 pigTask = new JSONTask3();
JSONTask4 horseTask = new JSONTask4();
dogTask.execute("");
catTask.execute("");
pigTask.execute("");
horseTask.execute("");
I have all of those in a row in my actual code. Thanks.
I would use the "proxy pattern". Basically, you create a class that contains the minimal information required for the display, in which you have a preview image.
Whenever you load everything, you start by showing the preview content, i.e. a loading GIF for every picture with the title of the movie or whatever. The proxy would have a "loadImage" method that makes an async call, and the photos would load one by one. Also, to make loading easier, make sure the photos are not oversized.
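A rough sketch of such a proxy, using Picasso for the async load (the class name and drawable are just placeholders):
// Hypothetical preview/proxy class: holds only what is needed to draw the row immediately.
public class MoviePreview {
    private final String title;
    private final String imageUrl;

    public MoviePreview(String title, String imageUrl) {
        this.title = title;
        this.imageUrl = imageUrl;
    }

    public String getTitle() { return title; }

    // Show a placeholder right away; the real image arrives asynchronously.
    public void loadImage(ImageView target) {
        Picasso.with(target.getContext())
                .load(imageUrl)
                .placeholder(R.drawable.loading_placeholder)
                .into(target);
    }
}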
You can look at the Picasso answers; with Picasso I suggest this approach:
Picasso.with(getApplicationContext()).load("your url").placeholder(R.drawable.your_place_holder).error(R.drawable.showing_when_error_occured)
.into(imageView, new Callback() {
@Override
public void onSuccess() {
}
@Override
public void onError() {
}
});
Another suggestion from me: convert your thumbnail images to Base64 format on the backend, then retrieve and show the thumbnails first. Then start an async task and swap in the full images when it succeeds.
Like WhatsApp: in WhatsApp the thumbnail images have very low resolution and load super fast. When you tap an image, if you have an internet connection, it loads the actual thumbnail, and when you tap again it loads the larger image.
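A minimal sketch of the thumbnail side, assuming your backend returns the thumbnail as a Base64 string (the getter name is hypothetical):
// Decode the Base64 thumbnail from the backend and show it immediately.
byte[] thumbBytes = Base64.decode(item.getThumbBase64(), Base64.DEFAULT);
Bitmap thumb = BitmapFactory.decodeByteArray(thumbBytes, 0, thumbBytes.length);
imageView.setImageBitmap(thumb);
// Then start an async load (e.g. with Picasso) to replace it with the full-size image.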
Picasso website: http://square.github.io/picasso/
Load them asynchronously with Picasso; you can even show a placeholder image until the real one is loaded.
In a WebView in Android, I first load a webpage and then display only part of it using some JavaScript commands. During the whole process a "loading" message is displayed.
Problem: it takes a lot of time to load even when the internet speed is fast (always >60 seconds). How can I reduce the time?
This is the WebViewClient class that I attach to the WebView (the class contains only one method):
@Override
public void onPageFinished(WebView view, String url){
view.loadUrl("javascript:document.getElementsByClassName('menu')[0].innerHTML=''");
view.loadUrl("javascript:document.getElementsByClassName('gbh')[0].innerHTML=''");
view.loadUrl("javascript:document.getElementsByClassName('input')[0].innerHTML=''");
view.loadUrl("javascript:document.getElementsByClassName('card_title')[0].innerHTML=''");
view.loadUrl("javascript:document.getElementsByClassName('cell_input')[0].innerHTML=''");
view.loadUrl("javascript:document.getElementsByName('pre')[0].innerHTML=''");
view.loadUrl("javascript:document.getElementById('footer').innerHTML=''");
view.loadUrl("javascript:document.getElementsByClassName('foot')[0].innerHTML=''");
}
Sir, you could use the YSlow or Google PageSpeed plugins for your browser to get specific tips on how to improve page load and speed up your page.
By default, a WebView provides no browser-like widgets, does not enable JavaScript, and ignores web page errors.
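So, as a minimal sketch (assuming a WebView with id web_view in your layout), enable JavaScript before loading the page, otherwise the loadUrl("javascript:...") calls in onPageFinished do nothing:
WebView webView = (WebView) findViewById(R.id.web_view); // hypothetical id
webView.getSettings().setJavaScriptEnabled(true);
webView.setWebViewClient(new WebViewClient() {
    @Override
    public void onPageFinished(WebView view, String url) {
        // strip the parts of the page you don't want, as in the question
        view.loadUrl("javascript:document.getElementById('footer').innerHTML=''");
    }
});
webView.loadUrl("https://example.com"); // placeholder URL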
In my application I'd like to show some animations at startup on special occasions like Halloween, Thanksgiving, Christmas, etc.
I couldn't find how to show an animated .gif in a View. The closest solution I found is to build an AnimationDrawable from the GIF frames.
I'm trying to implement that, but I have some questions about how to store (currently I'm using a LAMP server), transfer and retrieve the needed resources from the server to the Android device.
Is downloading the .gif data to the phone and extracting the frames and frame rate there programmatically a good solution, or will it add unnecessary load on the client? If it is appropriate, is there some library or source to guide me in that task?
If I want to handle the GIF on the server and serve it to the client frame by frame, how can I do that? I've thought about making a JSON with the URLs of the images and downloading them, but maybe that's not a good option, since I'd need a lot of HTTP connections and loading could be slower if the network latency is high.
Where can I find the internal structure of a GIF? I have searched on Google but found nothing.
Thanks in advance
Is downloading the .gif data to the phone and extracting the frames and frame rate there programmatically a good solution, or will it add unnecessary load on the client?
If it is appropriate, is there some library or source to guide me in that task?
If you want to extract the GIF file to get the frames and the frame rate, you need a GIF decoder; you can then apply the frames to an AnimationDrawable, which lets you add frames programmatically.
These two links can help you extract the GIF image on Android:
http://www.basic4ppc.com/android/help/gifdecoder.html
http://code.google.com/p/loon-simple/source/browse/trunk/android/LGame-Android-0.2.6S/org/loon/framework/android/game/core/GIFDecoder.java?r=7
Definitely do it on the client side. Have you seen this? Splitting the animation into multiple images and rendering it on the client side with an AnimationDrawable may be the only way to go.
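A minimal sketch of that approach, assuming you already have the decoded frames as Bitmaps and a per-frame duration (both hypothetical here):
// Build an AnimationDrawable from decoded GIF frames and play it in an ImageView.
AnimationDrawable anim = new AnimationDrawable();
for (Bitmap frame : frames) { // frames: List<Bitmap> produced by your GIF decoder
    anim.addFrame(new BitmapDrawable(getResources(), frame), frameDurationMs);
}
anim.setOneShot(false); // loop
imageView.setImageDrawable(anim);
anim.start();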
You will need to extend a View to make it play a .gif.
Consider the following code:
private class GifView extends View {
    Movie movie;
    InputStream in = null;
    long moviestart;
    URL url;

    public GifView(Context context, String downloadUrl) throws IOException {
        super(context);
        // Movie is not drawn on hardware-accelerated canvases, so force software rendering.
        setLayerType(View.LAYER_TYPE_SOFTWARE, null);
        this.url = new URL(downloadUrl);
        URLConnection con = url.openConnection();
        in = con.getInputStream();
        movie = Movie.decodeStream(in);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawColor(Color.WHITE);
        super.onDraw(canvas);
        long now = android.os.SystemClock.uptimeMillis();
        Log.v("tag", "now=" + now);
        if (moviestart == 0) { // first time
            moviestart = now;
        }
        Log.v("tag", "\tmoviestart=" + moviestart);
        int relTime = (int) ((now - moviestart) % movie.duration());
        Log.v("tag", "time=" + relTime + "\tduration=" + movie.duration());
        movie.setTime(relTime);
        movie.draw(canvas, this.getWidth() / 2 - 20, this.getHeight() / 2 - 40);
        this.invalidate();
    }
}
Use this view in your XML and supply the URL.
If network latency is an issue, download the GIF on a separate thread and supply the InputStream to the constructor.
Change the constructor to:
public GifView(Context context,InputStream in) {
super(context);
movie=Movie.decodeStream(in);
}
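A minimal usage sketch of that variant, downloading off the main thread and attaching the view once the stream is ready (the URL and container id are hypothetical, and it assumes this runs inside the Activity that declares GifView):
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            // Download on a background thread to avoid NetworkOnMainThreadException.
            final InputStream in = new URL("https://example.com/halloween.gif").openStream();
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    GifView gifView = new GifView(MyActivity.this, in); // MyActivity is a placeholder name
                    ((ViewGroup) findViewById(R.id.gif_container)).addView(gifView);
                }
            });
        } catch (IOException e) {
            Log.e("GifView", "Failed to download gif", e);
        }
    }
}).start();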