In the YouTube API, I get empty pages after the second page - Java
I am trying to fetch all of YouTube's live videos at a given time through the API. However, after the first two pages, which contain 50 results each, every subsequent page is empty.
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/7Mn5oZUWkgMIU4kfKx7cq0D0nDI\"","items":[{THIS ONE IS OK WITH 50 RESULTS}],"kind":"youtube#searchListResponse","nextPageToken":"CDIQAA","pageInfo":{"resultsPerPage":50,"totalResults":1994},"regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/wp4SIiAGHN1uLiJgvLGoMdQoehs\"","items":[{THIS ONE IS OK TOO WITH 50 RESULTS}],"kind":"youtube#searchListResponse","nextPageToken":"CGQQAA","pageInfo":{"resultsPerPage":50,"totalResults":1994},"prevPageToken":"CDIQAQ","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/3zCHC7x--dhww0e7udfLQn3DB5Y\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CJYBEAA","pageInfo":{"resultsPerPage":50,"totalResults":1988},"prevPageToken":"CGQQAQ","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/pGfYwQfcIDdD05OBzqTVFbdg39E\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CMgBEAA","pageInfo":{"resultsPerPage":50,"totalResults":1986},"prevPageToken":"CJYBEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/qtrckZRTf7czKQNL5nQpXRu7X5A\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CPoBEAA","pageInfo":{"resultsPerPage":50,"totalResults":1985},"prevPageToken":"CMgBEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/gTY6oWopjQcy6Cuf1MGQqYGYfog\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CKwCEAA","pageInfo":{"resultsPerPage":50,"totalResults":1993},"prevPageToken":"CPoBEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/umiznrtjy7vmWkIJ5Hsm8wtU0W0\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CN4CEAA","pageInfo":{"resultsPerPage":50,"totalResults":1985},"prevPageToken":"CKwCEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/thJws1H1pJJPtOkVeq6L6VGabn8\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CJADEAA","pageInfo":{"resultsPerPage":50,"totalResults":1988},"prevPageToken":"CN4CEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/TaDYlGeeLP2H1xanLHVTtmt_DNg\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CMIDEAA","pageInfo":{"resultsPerPage":50,"totalResults":1995},"prevPageToken":"CJADEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/w3wj8nt1NHhxyh-fuWLJ1v5ncvU\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CPQDEAA","pageInfo":{"resultsPerPage":50,"totalResults":1989},"prevPageToken":"CMIDEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/r1hbVDG2ANenKRichmZxj0J2dws\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CKYEEAA","pageInfo":{"resultsPerPage":50,"totalResults":1996},"prevPageToken":"CPQDEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/hdDkPLk2aNKrJLZ8tnMnCGqhUy8\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CNgEEAA","pageInfo":{"resultsPerPage":50,"totalResults":1994},"prevPageToken":"CKYEEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/OjWuZOr5rNl_HasCmAEhF5cAN9E\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CIoFEAA","pageInfo":{"resultsPerPage":50,"totalResults":1995},"prevPageToken":"CNgEEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/g8lxb3uVDwSfamhbbxFlyoVKBTg\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CLwFEAA","pageInfo":{"resultsPerPage":50,"totalResults":1986},"prevPageToken":"CIoFEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/wnTWwe1b-0gy--2ochuizsAjD9o\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CO4FEAA","pageInfo":{"resultsPerPage":50,"totalResults":1993},"prevPageToken":"CLwFEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/N1y4dp3ngAItfwukyLLitFQFDPA\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CKAGEAA","pageInfo":{"resultsPerPage":50,"totalResults":1981},"prevPageToken":"CO4FEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/F8oNrDOjlKMvHjzPIF7cJqfK_zo\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CNIGEAA","pageInfo":{"resultsPerPage":50,"totalResults":1986},"prevPageToken":"CKAGEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/JCWNr1NCzgl0GLh66R-SHeokbW8\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CIQHEAA","pageInfo":{"resultsPerPage":50,"totalResults":1992},"prevPageToken":"CNIGEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/uTbvXolYwtv5Zt4mpLQVHCQsbF4\"","items":[],"kind":"youtube#searchListResponse","nextPageToken":"CLYHEAA","pageInfo":{"resultsPerPage":50,"totalResults":1999},"prevPageToken":"CIQHEAE","regionCode":"FR"}
{"etag":"\"7991kDR-QPaa9r0pePmDjBEa2h8/xxkZZRO_iCVgmWLAsBW4Qk6VdcU\"","items":[],"kind":"youtube#searchListResponse","pageInfo":{"resultsPerPage":50,"totalResults":1995},"prevPageToken":"CLYHEAE","regionCode":"FR"}
Here is the code used:
List<SearchResult> searchItemList = new ArrayList<>();

// build the search request once; only the page token changes between calls
YouTube.Search.List searchListRequest = youTubeService.search().list("snippet");
searchListRequest.setMaxResults(NUMBER_OF_VIDEOS_RETURNED);
searchListRequest.setEventType("live");
searchListRequest.setType("video");
searchListRequest.setVideoCategoryId(GAMING_VIDEO_CATEGORY);
searchListRequest.setOrder("viewCount");
searchListRequest.setSafeSearch("none");
searchListRequest.setPrettyPrint(true);

String nextToken = "";
do {
    if (!nextToken.isEmpty()) {
        searchListRequest.setPageToken(nextToken);
    }
    SearchListResponse searchListResponse = searchListRequest.execute();
    System.out.println(searchListResponse);
    // collect the items of the current page, if any
    if (searchListResponse.getItems().size() > 0) {
        searchItemList.addAll(searchListResponse.getItems());
    }
    // follow the pagination until no next page token is returned
    nextToken = searchListResponse.getNextPageToken();
} while (nextToken != null);
I don't understand why the "items" field is empty from the third request onward. Is there some kind of restriction in the YouTube API?
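For what it's worth, pageInfo.totalResults is documented as an estimate, and search.list generally stops serving real results well before that figure even while it keeps handing out page tokens. A defensive variant of the loop above stops as soon as an empty page comes back, rather than following nextPageToken to the end and spending quota on empty requests (a sketch reusing the names from the code above):

String nextToken = "";
do {
    if (!nextToken.isEmpty()) {
        searchListRequest.setPageToken(nextToken);
    }
    SearchListResponse searchListResponse = searchListRequest.execute();
    List<SearchResult> items = searchListResponse.getItems();
    if (items == null || items.isEmpty()) {
        break; // the API will not fill later pages once one comes back empty
    }
    searchItemList.addAll(items);
    nextToken = searchListResponse.getNextPageToken();
} while (nextToken != null);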
Related
Loading all contacts using the Microsoft Graph API sometimes loses/skips pages
We have an application that loads all contacts stored in an account using the Microsoft Graph API. The initial call we issue is https://graph.microsoft.com/v1.0/users/{userPrincipalName}/contacts?$count=true&$orderBy=displayName%20ASC&$top=100, but we use the Java SDK to do that. We then iterate over all pages and store all loaded contacts in a Set (local cache). We do this every 5 minutes against an account with over 3000 contacts, and sometimes the count of contacts reported by $count does not match the number of contacts we loaded and stored in the local cache. Verifying the numbers manually, we can say that the count was always correct, but there are contacts missing. We use the following code:

public List<Contact> loadContacts() {
    Set<Contact> contacts = new TreeSet<>((contact1, contact2) -> StringUtils.compare(contact1.id, contact2.id));
    List<QueryOption> requestOptions = List.of(
            new QueryOption("$count", true),
            new QueryOption("$orderBy", "displayName ASC"),
            new QueryOption("$top", 100));
    ContactCollectionRequestBuilder pageRequestBuilder = null;
    ContactCollectionRequest pageRequest;
    boolean hasNextPage = true;
    while (hasNextPage) {
        // initialize page request
        if (pageRequestBuilder == null) {
            pageRequestBuilder = graphClient.users(userId).contacts();
            pageRequest = pageRequestBuilder.buildRequest(requestOptions);
        } else {
            pageRequest = pageRequestBuilder.buildRequest();
        }
        // load
        ContactCollectionPage contactsPage = pageRequest.get();
        if (contactsPage == null) {
            throw new IllegalStateException("request returned a null page");
        } else {
            contacts.addAll(contactsPage.getCurrentPage());
        }
        // handle next page
        hasNextPage = contactsPage.getNextPage() != null;
        if (hasNextPage) {
            pageRequestBuilder = contactsPage.getNextPage();
        } else if (contactsPage.getCount() != null && !Objects.equals(contactsPage.getCount(), (long) contacts.size())) {
            throw new IllegalStateException(String.format("loaded %d contacts but response indicated %d contacts", contacts.size(), contactsPage.getCount()));
        } else {
            // done
        }
    }
    log.info("{} contacts loaded using graph API", contacts.size());
    return new ArrayList<>(contacts);
}

Initially, we did not put the loaded contacts in a Set keyed by ID but just in a List. With the List, we very often got more contacts than $count. My idea was that there is some caching going on and some pages get fetched multiple times. Using the Set, we can make sure that we only have unique contacts in our local cache. But using the Set, we sometimes end up with fewer contacts than $count, meaning some pages got skipped, and we hit the condition that throws the IllegalStateException. Currently, we use microsoft-graph 5.8.0 and azure-identity 1.4.2. Have you experienced similar issues, and can you help us solve this problem? Or do you have any idea what could be causing these inconsistent results? Your help is very much appreciated!
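One way to tell duplicated pages apart from skipped ones is to track how many items of each page were actually new to the Set: a page that adds fewer items than it contains indicates re-served (cached) pages, while a final shortfall without any duplicates indicates skipped pages. A small diagnostic sketch that drops into the loop above, using only the types already present in the code (the page counter and the log line are illustrative additions):

int pageNumber = 0; // declared before the while loop

// inside the loop, in place of contacts.addAll(contactsPage.getCurrentPage()):
List<Contact> currentPage = contactsPage.getCurrentPage();
int sizeBefore = contacts.size();
contacts.addAll(currentPage);
int added = contacts.size() - sizeBefore;
pageNumber++;
if (added < currentPage.size()) {
    // some IDs on this page were already cached, i.e. the server re-served them
    log.warn("page {}: {} of {} contacts were duplicates",
            pageNumber, currentPage.size() - added, currentPage.size());
}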
List all things in thing groups in Java
I am fetching the things in a thing group; I have around 500 things in a single thing group. When I use the AWS Java SDK API, I get only 25 things in the result.

ListThingsInThingGroupRequest listThingsInThingGroupRequest = new ListThingsInThingGroupRequest();
listThingsInThingGroupRequest.withThingGroupName(groupName);
ListThingsInThingGroupResult listThingsInThingGroupResult = awsIot.listThingsInThingGroup(listThingsInThingGroupRequest);
List<String> arl = listThingsInThingGroupResult.getThings();
System.out.println("Size of List" + arl.size());

I am getting only 25 things in the ArrayList. Tell me how to get all the things in the thing group.
The request is paginated. Use the next token and a do/while loop like this:

String nextToken = null;
List<String> things = new ArrayList<>();
do {
    ListThingsInThingGroupRequest listThingsInThingGroupRequest = new ListThingsInThingGroupRequest();
    listThingsInThingGroupRequest.withThingGroupName(groupName).withNextToken(nextToken);
    ListThingsInThingGroupResult listThingsInThingGroupResult = awsIot.listThingsInThingGroup(listThingsInThingGroupRequest);
    things.addAll(listThingsInThingGroupResult.getThings());
    nextToken = listThingsInThingGroupResult.getNextToken();
} while (nextToken != null);
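If your SDK version supports it, you can also raise the page size so fewer round trips are needed, e.g.:

listThingsInThingGroupRequest.withThingGroupName(groupName).withNextToken(nextToken).withMaxResults(250);

(The withMaxResults setter and its upper bound are worth verifying against your AWS SDK version's documentation; the service's default page size is what produced the 25 results seen in the question.)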
How do you get the comments of a video using the YouTube Java Client API when given the video ID?
I am looking to write code that takes a video ID as input and retrieves the comments made on the corresponding video. Here's a link to the API docs. I tried this code:

String videoId = "id";
YouTube.Comments.List list2 = youtube.comments().list(Arrays.asList("snippet"));
list2.setId(Arrays.asList(videoId));
list2.setKey(apiKey);
Comment c = list2.execute().getItems().get(0);

but I get an IndexOutOfBoundsException on the last line because getItems is returning an empty List. I set videoId to a valid YouTube video ID (one from which I have already successfully retrieved video data like views, title, etc.), thinking that would work, but clearly I was wrong. Unless I missed something, I can't find anything in the docs for the Video class about getting comment data, so that's why I'm turning to SO for help again.

EDIT: Per stvar's comment, I tried changing the second line of the above code to

YouTube.CommentThreads.List list2 = youtube.commentThreads().list(Arrays.asList("snippet"));

and of course changed the type of c to CommentThread. It is the CommentThreads API I'm supposed to use, right? Either way, this is also returning an empty list...
Here is the complete Java code that retrieves all comments (top-level and replies) of any given video:

List<Comment> get_comment_replies(YouTube youtube, String apiKey, String commentId) {
    YouTube.Comments.List request = youtube.comments()
            .list(Arrays.asList("id", "snippet"))
            .setParentId(commentId)
            .setMaxResults(100)
            .setKey(apiKey);

    List<Comment> replies = new ArrayList<Comment>();
    String pageToken = "";
    do {
        CommentListResponse response = request
                .setPageToken(pageToken)
                .execute();
        replies.addAll(response.getItems());
        pageToken = response.getNextPageToken();
    } while (pageToken != null);

    return replies;
}

List<CommentThread> get_video_comments(YouTube youtube, String apiKey, String videoId) {
    YouTube.CommentThreads.List request = youtube.commentThreads()
            .list(Arrays.asList("id", "snippet", "replies"))
            .setVideoId(videoId)
            .setMaxResults(100)
            .setKey(apiKey);

    List<CommentThread> comments = new ArrayList<CommentThread>();
    String pageToken = "";
    do {
        CommentThreadListResponse response = request
                .setPageToken(pageToken)
                .execute();
        for (CommentThread comment : response.getItems()) {
            // the replies embedded in a thread may be incomplete; fetch them all if so
            CommentThreadReplies replies = comment.getReplies();
            if (replies != null && replies.getComments().size() != comment.getSnippet().getTotalReplyCount())
                replies.setComments(get_comment_replies(youtube, apiKey, comment.getId()));
        }
        comments.addAll(response.getItems());
        pageToken = response.getNextPageToken();
    } while (pageToken != null);

    return comments;
}

You'll have to invoke get_video_comments, passing it the ID of the video you're interested in. The returned list contains all top-level comments of that video; each top-level comment has its replies property containing all the associated comment replies.
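For completeness, a minimal way to wire this up and call it might look as follows (the application name, API key, and video ID are placeholders; the builder call matches the google-api-services-youtube client, but treat the transport/JSON factory choices as assumptions for your setup):

import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.youtube.YouTube;

YouTube youtube = new YouTube.Builder(
        new NetHttpTransport(),
        JacksonFactory.getDefaultInstance(),
        null /* no request initializer needed for key-only access */)
        .setApplicationName("comment-fetcher")
        .build();

List<CommentThread> threads = get_video_comments(youtube, "YOUR_API_KEY", "VIDEO_ID");
System.out.println("fetched " + threads.size() + " top-level comments");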
Size of RSS feed
I'm using ROME to generate a feed from data in my database. In all the samples I found, the Servlet extracts all the data from the database and sends it as a feed. Now, if the database contains thousands of entries, how many entries should I send?

protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    try {
        SyndFeed feed = getFeed(request);
        String feedType = request.getParameter("type");
        feedType = feedType != null ? feedType : defaultType;
        feed.setFeedType(feedType);
        response.setContentType("application/xml; charset=UTF-8");
        SyndFeedOutput output = new SyndFeedOutput();
        output.output(feed, response.getWriter());
    } catch (FeedException ex) {
        String msg = "Could not generate feed";
        log(msg, ex);
        response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, msg);
    }
}

protected SyndFeed getFeed(HttpServletRequest request) {
    // **** Here I query the database for posts, but I don't know how many
    // I should fetch or where I should stop ****
    List<Post> posts = getPosts();

    SyndFeed feed = new SyndFeedImpl();
    feed.setTitle("My feed");
    feed.setLink("http://myurl");
    feed.setDescription("my desc");

    // create the feed entries; each post becomes a feed entry
    List<SyndEntry> entries = new ArrayList<SyndEntry>();
    for (Post post : posts) {
        SyndEntry entry = new SyndEntryImpl();
        SyndContent description;
        String title = post.getTitle();
        String link = post.getLink();
        entry.setTitle(title);
        entry.setLink(link);

        // create the description of the feed entry
        description = new SyndContentImpl();
        description.setType("text/plain");
        description.setValue(post.getDesc());
        entry.setDescription(description);
        entries.add(entry);
    }
    feed.setEntries(entries);
    return feed;
}
There really isn't a single way to do this that all RSS clients will support, but I would recommend checking out RFC 5005 appendix B; you'll at least have a reference to give to clients: https://www.rfc-editor.org/rfc/rfc5005#appendix-B. As long as your default query always shows the latest items (at a page length you define), sorted descending, all clients will appear correct. Clients that need to be able to page can implement this standard.
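If it helps, RFC 5005's paged feeds boil down to adding rel="next"/rel="previous" links to each page of the feed. A rough sketch of what that could look like with ROME's SyndLink (the page parameter, URL pattern, and helper method are illustrative assumptions; the package names match recent ROME versions):

import com.rometools.rome.feed.synd.SyndFeed;
import com.rometools.rome.feed.synd.SyndLink;
import com.rometools.rome.feed.synd.SyndLinkImpl;
import java.util.ArrayList;
import java.util.List;

void addPagingLinks(SyndFeed feed, int page, boolean hasMore) {
    List<SyndLink> links = new ArrayList<>();
    if (hasMore) {
        SyndLink next = new SyndLinkImpl();
        next.setRel("next");
        next.setHref("http://myurl/feed?page=" + (page + 1));
        links.add(next);
    }
    if (page > 0) {
        SyndLink prev = new SyndLinkImpl();
        prev.setRel("previous");
        prev.setHref("http://myurl/feed?page=" + (page - 1));
        links.add(prev);
    }
    feed.setLinks(links);
}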
I suggest a paging system: the client makes a request for page 0 to take 30 items, then a request for page 1 to take the next 30 items. First request: items 0-29; second request: items 30-59. To model this, keep an integer variable called skip that tracks the position to start at, for instance:

int skip = page * numItems; // first request: 0 * 30 (starts at 0), second request: 1 * 30 (starts at 30)

So you skip that many items and take only numItems of them. The client then requests however many feed items it wants at once, as sketched below.
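Applied to the getPosts() call from the question above, the skip/take idea might look like this (getPosts(int, int), postDao, and the query are hypothetical; adapt them to your data access layer):

// hypothetical paged variant of getPosts(); the SQL in the comment is illustrative
List<Post> getPosts(int page, int pageSize) {
    int skip = page * pageSize;
    // e.g. with JDBC/JPA: "SELECT ... ORDER BY published DESC LIMIT ? OFFSET ?"
    return postDao.findLatest(pageSize, skip); // postDao.findLatest is an assumed helper
}

// in getFeed(), read the page from the request and fetch only that slice
int page = request.getParameter("page") != null
        ? Integer.parseInt(request.getParameter("page"))
        : 0;
List<Post> posts = getPosts(page, 30);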
Is there a limit on the number of comments that can be extracted from YouTube?
I am trying to extract comments on some YouTube videos using the youtube-api with Java. Everything is going fine except that I am not able to extract all the comments when a video has a large number of them (it stops somewhere between 950 and 999). I am following a simple method of paging through the CommentFeed of the VideoEntry, getting the comments on each page and storing each comment in an ArrayList before writing them to an XML file. Here is my code for retrieving the comments:

int commentCount = 0;
CommentFeed commentFeed = service.getFeed(new URL(commentUrl), CommentFeed.class);
do {
    // print each comment in the current feed to the console
    for (CommentEntry comment : commentFeed.getEntries()) {
        commentCount++;
        System.out.println("Comment " + commentCount + " plain text content: " + comment.getPlainTextContent());
    }
    // check if there is a next page of comment feeds
    if (commentFeed.getNextLink() != null) {
        commentFeed = service.getFeed(new URL(commentFeed.getNextLink().getHref()), CommentFeed.class);
    } else {
        commentFeed = null;
    }
} while (commentFeed != null);

My question is: is there some limit on the number of comments that I can extract, or am I doing something wrong?
Use/refer to this:

String commentUrl = videoEntry.getComments().getFeedLink().getHref();
CommentFeed commentFeed = service.getFeed(new URL(commentUrl), CommentFeed.class);
for (CommentEntry comment : commentFeed.getEntries()) {
    System.out.println(comment.getPlainTextContent());
}

(source) The maximum number of results per iteration appears to be 50, as mentioned here, and you can use start-index to retrieve multiple result sets, as mentioned here.
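Combining both hints, a sketch of paging the old GData comment feed with max-results and start-index could look like this (the parameter names come from the GData protocol; this assumes commentUrl carries no query string yet, and the 1000-entry ceiling mentioned in the other answer still applies):

int startIndex = 1;
final int pageSize = 50; // the per-request maximum mentioned above
CommentFeed page;
do {
    URL pageUrl = new URL(commentUrl + "?max-results=" + pageSize + "&start-index=" + startIndex);
    page = service.getFeed(pageUrl, CommentFeed.class);
    for (CommentEntry comment : page.getEntries()) {
        System.out.println(comment.getPlainTextContent());
    }
    startIndex += pageSize;
} while (!page.getEntries().isEmpty() && startIndex <= 1000);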
The Google Search API, as well as YouTube comment search, limits results to a maximum of 1000; you can't extract more than 1000 results.