Getting All YouTube Video Ids/Urls From Video Manager - java

I'm using Selenium to log into my Google account and to visit YouTube.
Now, on the video manager, I would like to get all of my video IDs. I tried copying the CSS selector or XPath that Chrome's developer tools give me, but each of them contains the video ID itself, which makes them impossible to use like this:
List<WebElement> allVideoUrls = driver.findElements(By.cssSelector("my-selector-which-gives-all-videos-on-page"));
Note that I have to be logged in to be able to "see" unlisted or private videos as well so that's required.
So far I have a poor implementation which sometimes fails for reasons I don't understand. I first get all the links on the page and keep only the ones that point to a video's edit page. To avoid a StaleElementReferenceException I re-fetch all the links on every iteration of the loop.
public void getVideoInformation()
{
    // Visit the video manager
    driver.get("https://www.youtube.com/my_videos?o=U");
    // Wait until the video list has loaded
    new WebDriverWait(driver, 10).until(ExpectedConditions
            .visibilityOfElementLocated(By
                    .cssSelector("#vm-playlist-video-list-ol")));
    // Collect all links on the page
    List<WebElement> allLinks = driver.findElements(By.tagName("a"));
    HashSet<String> videoLinks = new HashSet<>();
    for (int linksIndex = 0; linksIndex < allLinks.size(); linksIndex++)
    {
        // Re-fetch the links to avoid a StaleElementReferenceException
        String link = driver.findElements(By.tagName("a")).get(linksIndex)
                .getAttribute("href");
        try
        {
            if (link.contains("edit"))
            {
                System.out.println(link);
                // HashSet prevents duplicates
                videoLinks.add(link);
            }
        } catch (Exception error)
        {
            error.printStackTrace();
        }
    }
    // ...
}
I'm fine with having to load each subsequent page as well to get all the videos, but please help me find an efficient and reliable way of getting the video IDs.
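One way to make this more reliable is to ask Selenium only for the edit links in the first place and pull the ID out of each href in a single pass, so there is no re-querying inside the loop. Here is a minimal sketch; the a[href*='video_id='] selector and the video_id= query parameter are assumptions about the video manager's markup that you should verify in the developer tools:
// Sketch: target only the edit links instead of every <a> tag.
// Assumes edit links have an href containing "video_id=",
// e.g. https://www.youtube.com/edit?o=U&video_id=XYZ -- verify in the live DOM.
public Set<String> getVideoIds()
{
    driver.get("https://www.youtube.com/my_videos?o=U");
    new WebDriverWait(driver, 10).until(ExpectedConditions
            .visibilityOfElementLocated(By.cssSelector("#vm-playlist-video-list-ol")));
    Set<String> videoIds = new LinkedHashSet<>();
    // One findElements call and one pass over the results, so no stale references
    for (WebElement link : driver.findElements(By.cssSelector("a[href*='video_id=']")))
    {
        String href = link.getAttribute("href");
        // Extract the value of the video_id query parameter
        Matcher matcher = Pattern.compile("video_id=([\\w-]+)").matcher(href);
        if (matcher.find())
        {
            videoIds.add(matcher.group(1));
        }
    }
    return videoIds;
}
Collecting the hrefs in one pass also avoids the O(n²) cost of calling findElements once per link.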

Related

AngularJs page issue with selecting an element and clicking it

I have a problem with selecting an element and clicking it so that the dropdown opens. Here is what I have tried up till now:
String csspath = "html body.ng-scope f:view form#wdesk.ng-pristine.ng-valid div.container div.ng-scope md-content.md-padding._md md-tabs.ng-isolate-scope.md-dynamic-height md-tabs-content-wrapper._md md-tab-content#tab-content-7._md.ng-scope.md-active.md-no-scroll div.ng-scope.ng-isolate-scope ng-include.ng-scope div.ng-scope accordion div.accordion div.accordion-group.ng-isolate-scope div.accordion-heading a.accordion-toggle.ng-binding span.ng-scope b.ng-binding";
String uxpath = "//html//body//f:view//form//div//div[2]//md-content//md-tabs//md-tabs-content-wrapper//md-tab-content[1]//div//ng-include//div//accordion//div//div[1]//div[1]//a";
String xpath2 = "/html/body/pre/span[202]/a";
String xpath = "/html/body/f:view/form/div/div[2]/md-content/md-tabs/md-tabs-content-wrapper/md-tab-content[1]/div/ng-include/div/accordion/div/div[1]/div[1]/a/span/b";
try {
    element = wait.until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector(csspath)));
    locator = By.cssSelector(csspath);
    driver.findElement(locator).click();
} catch (Exception e) {
    System.out.println("Not found: csspath");
}
try {
    element = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath(xpath)));
    locator = By.xpath(xpath);
    driver.findElement(locator).click();
} catch (Exception e) {
    System.out.println("Not found: xpath");
}
try {
    element = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath(uxpath)));
    locator = By.xpath(uxpath);
    driver.findElement(locator).click();
} catch (Exception e) {
    System.out.println("Not found: uxpath");
}
try {
    element = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath(xpath2)));
    locator = By.xpath(xpath2);
    driver.findElement(locator).click();
} catch (Exception e) {
    System.out.println("Not found: xpath2");
}
However, nothing has worked so far. I want to select the responsibility code and give it values. I would really appreciate any insight you can give me.
Thanks in advance.
The first issue (as already pointed out in the comments) is the absolute selectors you are using. Try to refactor your XPath selectors and make them relative.
The next issue is related to the AngularJS page itself. Consider Protractor, the testing framework for Angular built on top of WebDriverJS; it provides additional WebDriver-like functionality for testing Angular-based websites. Put simply, your code needs extra functionality that knows when Angular elements are available for interaction.
Here is how to port some of the most useful Protractor functions to Java (and Python):
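For example, Protractor's waitForAngular essentially asks Angular to report when there are no outstanding HTTP requests. A minimal Java sketch of that idea, using a JavascriptExecutor to poll Angular's pending-request queue (this assumes the page uses AngularJS 1.x with $http; if your app bootstraps on an element other than document, change the injector lookup accordingly):
// Rough Java port of Protractor's "wait for Angular" idea.
// Assumption: AngularJS 1.x app using $http, bootstrapped on document.
public static void waitForAngular(WebDriver driver) {
    new WebDriverWait(driver, 30).until(webDriver ->
            (Boolean) ((JavascriptExecutor) webDriver).executeScript(
                    "return (window.angular !== undefined) && "
                  + "(angular.element(document).injector() !== undefined) && "
                  + "(angular.element(document).injector().get('$http').pendingRequests.length === 0);"));
}
Call waitForAngular(driver) before each findElement/click, and then a short relative selector such as By.cssSelector("a.accordion-toggle") should be enough.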

Gather all video links from a YouTube user's uploads page

I am trying to pull all the links off a YouTube user's uploads page, but I am only getting the first 30 videos. I want to get all 56 videos, and I would like to keep using jsoup if possible.
Document BeInspired = Jsoup.connect("https://www.youtube.com/channel/UCaKZDEMDdQc8t6GzFj1_TDw/videos").get();
Elements links = BeInspired.select("a[href]");
List<String> videos = new ArrayList<>();
for (Element link : links) {
    String href = link.attr("href");
    // Keep only watch links, without duplicates
    if (href.contains("/watch?v=") && !videos.contains(href)) {
        videos.add(href);
    }
    System.out.println(href);
}
This is the code I am currently using.
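The cut-off at 30 videos happens because jsoup only parses the initial HTML; YouTube loads the rest of the grid with JavaScript as you scroll, so those links never appear in the document jsoup sees. One option that still uses jsoup (as a plain HTTP client) is the YouTube Data API v3: a channel's uploads live in a playlist whose ID is the channel ID with the leading "UC" replaced by "UU". A minimal sketch, assuming you have created an API key (YOUR_API_KEY is a placeholder):
// Sketch: fetch the channel's uploads via the YouTube Data API v3 instead of scraping.
String playlistId = "UUaKZDEMDdQc8t6GzFj1_TDw"; // channel id with UC -> UU
String apiKey = "YOUR_API_KEY";                 // placeholder, create your own key
String url = "https://www.googleapis.com/youtube/v3/playlistItems"
        + "?part=contentDetails&maxResults=50"
        + "&playlistId=" + playlistId
        + "&key=" + apiKey;
// jsoup can fetch non-HTML responses when ignoreContentType is set
String json = Jsoup.connect(url).ignoreContentType(true).execute().body();
System.out.println(json); // parse with any JSON library and read contentDetails.videoId
The response includes a nextPageToken when there are more than 50 uploads; pass it back as pageToken to fetch the rest.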

How to select item in a list from search Result page using Selenium in Java?

I am trying to click on this game to open its page, but every time I get a NullPointerException, whatever locator I use. I also tried selecting from a list, as the link seems to be inside an "li", but that didn't work either.
Could anyone help me with the code to click this item?
Targeted Page Url:
https://staging-kw.games.getmo.com:/game/43321031
Search(testCase);
WebElement resultList = driver.findElement(By.xpath(testData.getParam("ResultList")));
log.info("list located ..");
List<WebElement> results = resultList.findElements(By.tagName(testData.getParam("ResultListItems")));
for (WebElement item : results) {
    String link = item.getAttribute("href");
    // getAttribute returns null when the element has no href, so compare
    // with ==; link.equals(null) would itself throw a NullPointerException
    if (link == null) {
        log.info("LinkIsNull");
        continue;
    }
    if (link.equals(testData.getParam("GameLink")) && !link.isEmpty()) {
        item.click();
    }
}
// clickLink(GameLocator, driver);
}
I got this code to work by just adding this line after the search method:
driver.findElement(By.cssSelector("li.title")).click();
How did I get it? I used Selenium IDE to record the actions, then converted the code to Java/TestNG to get the exact web element selector.
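If the click is still flaky because the results render asynchronously, wrapping it in an explicit wait may help. A minimal sketch, reusing the li.title selector found above:
// Wait until the result is clickable before clicking,
// in case the search results are still being rendered
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.elementToBeClickable(By.cssSelector("li.title"))).click();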

YouTube's response array, sorting programmatically

I'm creating an app where I run a YouTube search. It is currently working OK; I just want to show better results by sorting them by title.
I got a VideoListResponse with 50 items
VideoListResponse videoListResponse = null;
try {
    videoListResponse = mYouTubeDataApi.videos()
            .list(YOUTUBE_VIDEOS_PART)
            .setFields(YOUTUBE_VIDEOS_FIELDS)
            .setKey(ApiKey.YOUTUBE_API_KEY)
            .setId(TextUtils.join(",", videoIds)).execute();
} catch (IOException e) {
    e.printStackTrace();
}
and I want to sort the items by title.
Well, the YouTube API already supports ordering search results by title, so you won't have to do anything on your end:
https://developers.google.com/youtube/v3/docs/search/list#parameters
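For instance, if the video IDs come from a search.list call, its order parameter accepts "title". A sketch using the same client object as in the question (mYouTubeDataApi and ApiKey are from there; the query string is a placeholder):
// Let the API return search results already ordered by title
SearchListResponse searchResponse = mYouTubeDataApi.search()
        .list("snippet")
        .setQ("your search terms")     // placeholder query
        .setOrder("title")             // server-side ordering by title
        .setMaxResults(50L)
        .setKey(ApiKey.YOUTUBE_API_KEY)
        .execute();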
Alternatively, you could retrieve the inner List and sort it yourself:
List<Video> items = videoListResponse.getItems();
items.sort(Comparator.comparing(e -> e.getSnippet().getTitle()));

I want to pull Facebook posts from a public page to a Java application

I am creating an app in Java that uses jsoup to take all the information from a public website and load it into the app for people to read. I tried the same approach with Facebook, but it wasn't working the same way. Does anyone have a good idea of how I should go about this?
Thanks,
Calland
public String scrapeEvents(String... args) throws Exception {
    Document doc = Jsoup.connect("http://www.facebook.com/cedarstreettimes?fref=ts").get();
    Elements elements = doc.select("div._wk");
    // Return type changed from String[] to String to match what is returned
    return elements.toString();
}
Edit: I found this link with information, but I'm a little confused about how to use it to get only the content a specific user posts on their wall: http://developers.facebook.com/docs/getting-started/graphapi/
I had a look at the source of that page -- the thing that is tripping up the parse is that all the real content is wrapped in comments, like this:
<code class="hidden_elem" id="u_0_42"><!-- <div class="fbTimelineSection ...> --></code>
There is JS on the page that lifts that data into the real DOM, but as jsoup doesn't execute JS it stays as comments. So before extracting the content, we need to emulate that JS and "un-hide" those elements. Here's an example to get you started:
String url = "https://www.facebook.com/cedarstreettimes?fref=ts";
String ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.33 (KHTML, like Gecko) Chrome/27.0.1438.7 Safari/537.33";
Document doc = Jsoup.connect(url).userAgent(ua).timeout(10*1000).get();

// move the hidden commented-out html into the DOM proper:
Elements hiddenElements = doc.select("code.hidden_elem");
for (Element hidden : hiddenElements) {
    for (Node child : hidden.childNodesCopy()) {
        if (child instanceof Comment) {
            hidden.append(((Comment) child).getData()); // comment data parsed as html
        }
    }
}

Elements articles = doc.select("div[role=article]");
for (Element article : articles) {
    if (article.select("span.userContent").size() > 0) {
        String text = article.select("span.userContent").text();
        String imgUrl = article.select("div.photo img").attr("abs:src");
        System.out.println(String.format("%s\n%s\n\n", text, imgUrl));
    }
}
That example pulls out the article text and any photo that is associated with it.
(It's possibly better to use the FB API than this method; I wanted to show how you can emulate little bits of JS to make a scrape work properly.)
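If you do go the Graph API route instead, a page's posts are exposed on the /{page-id}/posts edge. A minimal sketch, assuming you have obtained an access token from Facebook's developer console (ACCESS_TOKEN is a placeholder) and using jsoup purely as an HTTP client for the JSON response:
// Read a public page's posts via the Graph API instead of scraping
String endpoint = "https://graph.facebook.com/cedarstreettimes/posts"
        + "?access_token=" + "ACCESS_TOKEN"; // placeholder token
String json = Jsoup.connect(endpoint).ignoreContentType(true).execute().body();
System.out.println(json); // each element of "data" carries a "message" field with the post text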
