I am using Selenium WebDriver. I have the method below to navigate to a page.
public String navigate(String url) {
    driver = new FirefoxDriver();
    driver.get(url);
    return "Success";
}
The above code works fine if the server is up. Sometimes the server might be down, and then the page will not be loaded. How can I return a "failure" string if the page is not loaded?
Thanks!
You can't directly test that a get() failed because the browser always displays a page. You can either check that this page is a known error page, or check that you are not on the expected page.
First solution
It depends on the browser. Chrome displays a special page when it can't load a URL, Firefox another page, and so on. You can test the title of those pages. For example, the Firefox error page title is something like "Page load error" or "Problem loading page". Then all you have to do is something like:
if (driver.getTitle().equals("Problem loading page"))
    return "failure";
Second solution
Check for the absence of an element that is present on every page of your website (for example a logo or a home button). Say the ID of this element is "foo"; you can do something like:
if (driver.findElements(By.id("foo")).isEmpty())
    return "failure";
Dave Haeffner has a good solution for checking status codes using a proxy with the webdriver configuration.
http://elementalselenium.com/tips/17-retrieve-http-status-codes
The examples are in Python, but the API is pretty close between Python and Java. I've not had much difficulty finding the Java-analogous methods for the tips I've implemented myself.
That site has a lot of good information.
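For reference, here is a rough Java sketch of that idea using BrowserMob Proxy (my own example, not code from that site; the class and method names come from the BrowserMob Proxy and Selenium libraries, but check them against the versions you use, and the URL is just a sample page):

import net.lightbody.bmp.BrowserMobProxy;
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.HarEntry;

import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.remote.CapabilityType;

public class StatusCodeCheck {
    public static void main(String[] args) {
        // Start an embedded proxy and wire it into the WebDriver configuration.
        BrowserMobProxy proxy = new BrowserMobProxyServer();
        proxy.start(0);
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);

        FirefoxOptions options = new FirefoxOptions();
        options.setCapability(CapabilityType.PROXY, seleniumProxy);
        WebDriver driver = new FirefoxDriver(options);

        // Record traffic, navigate, then read the status codes from the HAR log.
        proxy.newHar("status-check");
        driver.get("http://the-internet.herokuapp.com/");

        for (HarEntry entry : proxy.getHar().getLog().getEntries()) {
            System.out.println(entry.getResponse().getStatus()
                    + " " + entry.getRequest().getUrl());
        }

        driver.quit();
        proxy.stop();
    }
}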
If using the Page Object Model, leveraging the LoadableComponent class can help in determining whether the page is loaded or not, whether as a result of the server being down or something else.
Here's the link
https://code.google.com/p/selenium/wiki/LoadableComponent
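As a rough sketch, a page object extending LoadableComponent might look like this (the URL and the element ID are placeholders for your own page and "always present" element):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.LoadableComponent;

public class HomePage extends LoadableComponent<HomePage> {

    private final WebDriver driver;

    public HomePage(WebDriver driver) {
        this.driver = driver;
    }

    @Override
    protected void load() {
        // Called by get() when isLoaded() reports the page is not there yet.
        driver.get("http://example.com/");   // placeholder URL
    }

    @Override
    protected void isLoaded() throws Error {
        // Throwing here makes get() fail, e.g. when the server is down.
        if (driver.findElements(By.id("foo")).isEmpty()) {   // placeholder ID
            throw new Error("Home page was not loaded");
        }
    }
}

Calling new HomePage(driver).get() then either returns the loaded page object or propagates the Error raised in isLoaded() (get() calls load() and re-checks once), which you can catch and translate into your "failure" result.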
Related
I am working on an app in Android Studio and am having some trouble web-scraping with JSoup. I have successfully connected to the webpage and returned some basic elements to test the library, but now I cannot actually get the elements I need for my app.
I am trying to get a number of elements with the "data-at" attribute. The weird thing is, a few elements with the "data-at" attribute are returned, but not the ones I am looking for. For whatever reason my code is not extracting all of the elements that share the "data-at" attribute on the web page.
This is the URL of the webpage I am scraping:
https://express.liatoyotaofcolonie.com/inventory?f=dealer.name%3ALia%20Toyota%20of%20Colonie&f=submodel%3ACamry&f=trim%3ALE&f=year%3A2020
The method containing the web-scraping code:
@Override
protected String doInBackground(Void... params) {
    String title = "";
    Document doc;
    Log.d(TAG, queryString.toString());
    try {
        doc = Jsoup.connect(queryString.toString()).get();
        Elements content = doc.select("[data-at]");
        for (Element e : content) {
            Log.d(TAG, e.text());
        }
    } catch (IOException e) {
        Log.e(TAG, e.toString());
    }
    return title;
}
(Screenshots omitted: the results in Logcat, the element I want to retrieve, and one of the elements that is actually being retrieved.)
This is because some of the content, including the elements you are looking for, is created asynchronously and is not present in the initial DOM (JavaScript ;))
When you view the source of the page you will notice there are only 17 data-at occurrences, while running document.querySelectorAll("[data-at]") in the browser returns 29 nodes.
What you are able to get with Jsoup is the static content of the page (the initial DOM). You won't be able to fetch dynamically created content because you do not run the required JS scripts.
In order to overcome this, you will have to either fetch and parse the required resources manually (e.g. trace which AJAX calls the browser makes) or use a headless browser setup. Selenium + headless Chrome should be enough.
The latter option will allow you to scrape ANY possible web application, including SPA apps, which is not possible using plain Jsoup.
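A minimal sketch of the headless-browser route, assuming Selenium and ChromeDriver are available (the URL and selector are the ones from the question; a JavaScript-heavy page may additionally need an explicit wait before reading the source):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class RenderedScrape {
    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless");

        WebDriver driver = new ChromeDriver(options);
        try {
            // Let the browser execute the page's JavaScript first...
            driver.get("https://express.liatoyotaofcolonie.com/inventory?f=dealer.name%3ALia%20Toyota%20of%20Colonie&f=submodel%3ACamry&f=trim%3ALE&f=year%3A2020");

            // ...then hand the rendered DOM to Jsoup for the familiar select() API.
            Document doc = Jsoup.parse(driver.getPageSource());
            for (Element e : doc.select("[data-at]")) {
                System.out.println(e.text());
            }
        } finally {
            driver.quit();
        }
    }
}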
I don't quite know what to do about this, but I'm going to try one more time... The "Problematic Lines" in your code are these:
doc = Jsoup.connect(queryString.toString()).get();
Elements content = doc.select("[data-at]");
It is the queryString that you have requested: the URL points to a page that contains quite a bit of script code. When you load up a browser and click the button (or menu option) that reads "View Source", the HTML you see is not the same exact HTML that is broadcast to and received by JSoup.
If the HTML that is broadcast contains any <SCRIPT TYPE="text/javascript"> ... </SCRIPT> tags (and the URL in your question does), AND those <SCRIPT> tags are involved in the initial loading of the page, then JSoup will not know anything about it. It only parses what it receives; it cannot process any dynamic content.
There are four ways that I know of to get the "post-script-loaded" version of the HTML from a dynamic web page, and I will list them here now. The first is likely the most popular method (in Java) that I have heard about on Stack Overflow:
Selenium: This answer will show how the tool can run JavaScript. These are some Selenium docs. And then this page right here has a great "first class" for using the tool to retrieve post-script-processed HTML. Again, there is no way JSoup can retrieve HTML that is sent to the browser by script (JS/AJAX/Angular/React), since it is just a parser.
Puppeteer: This requires running a language called Node.js. Perhaps calling a simple Node.js program from Java could work, but it would be a "two language" solution. I've never used it. Here is an answer that shows getting, sort of, what you are trying to get: the HTML after the script runs.
WebView: Android Java programmers have a popular class called "WebView" (documented here), which I have recently been told about (yesterday... but it has been out for years), that will execute script in a browser and return the HTML. Here is an answer that shows "JavaScript injection" to retrieve DOM tree elements from a "WebView" instance (which is how I was told it is done).
Splash: My favorite tool, which I don't think anyone has heard of, but it has been the simplest for me. There is an API called the "Splash API". Here is their explanation of a "JavaScript Rendering Service." Since this is the one I have been using, I'll post a code snippet below that shows how Splash can retrieve post-script-processed HTML.
To run the Splash API (only if you have Docker available), you start a Splash server as below. These two lines are typed into a GCP (Google Cloud Platform) shell instance, and the server starts right up without any configuration:
Pull the image:
$ sudo docker pull scrapinghub/splash
Start the container:
$ sudo docker run -it -p 8050:8050 --rm scrapinghub/splash
In your code, just prepend the String to your URL's:
"http://localhost:8050/render.html?url="
So in your code, you would use the following command (instead), and the script would (more likely) load all the HTML Elements that you are not finding:
String SPLASH_URL = "http://localhost:8050/render.html?url=";
doc = Jsoup.connect(SPLASH_URL + queryString.toString()).get();
I'm trying to compile the results of my class from my college's results page (exam.msrit.edu).
The USNs for my class are from 1MS16CS001-100
Is there any way that I could go about writing a scraper program to enter different values in the USN box and gather data?
I am quite new to scraping but have decent enough exposure to Python and Java.
Any advice is much appreciated :)
Not necessarily a scrape, but you can use Selenium WebDriver to browse to the page and input everything for you. Selenium WebDriver can be found here.
Essentially, once it is installed you only need to instantiate the driver and then loop through a list of values inputting them as you go.
from selenium import webdriver

# Sets up the browser. If you want to use Chrome, additional setup is required.
browser = webdriver.Firefox()

for i in range(100):  # loops an arbitrary number of times
    browser.get("http://exam.msrit.edu/")  # HTTP GET request to the page
    elem = browser.find_element_by_id('id')  # this is an HTML id; could also use name, xpath, etc.
    elem.send_keys("your string {}".format(i))  # sends the keys
    elem = browser.find_element_by_id('id')  # id for the search button
    elem.click()  # clicks that element
The documentation on Selenium is pretty good: http://selenium-python.readthedocs.io/navigating.html
This will open an actual browser and take some time to load, so it will not be the quickest way to do it, but it will work. You can even take screenshots.
I have my webpage opened using RFT. In that page, I have a link I want to click.
For that I am using
objMap.ClickTabLink(objMap.document_eBenefitsHome(), "Upload Documentation", "Upload Documentation");
The current page link name is "Upload Documentation"
I know that objMap.document_eBenefitsHome() takes it back to the initial page; what can I use in its place that refers to the currently opened page?
Many thanks in advance.
There are some alternatives that could solve your problem:
Open the Test Object Map; select from the map the object that represents the document document_eBenefitsHome; modify the .url property using a regular expression, so that the URLs of the two pages you cited in your question both match the regex.
Find the document object dynamically using the find method. Once the page containing the link you want to click has fully loaded, try using this code to find the document: find(atDescendant(".class", "Html.HtmlDocument"), false). The false boolean value allows the find method to also search among objects that were not previously recorded.
I wanted to automate some processes on www.imgur.com, and I decided to use the Selenium WebDriver library for Java. I have been able to get much of my code to work, with one hitch: when I access imgur directly, only a white screen shows up, and it will not change upon refresh. Accessing the sign-in page directly yields an SSL error.
System.setProperty("webdriver.chrome.driver","C:\\workspace\\Test\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.get("https://www.imgur.com/signin");
WebElement username = driver.findElement(By.id("username"));
username.sendKeys("username");
WebElement password = driver.findElement(By.id("password"));
String pass = "password";
password.sendKeys(pass);
password.submit();
driver.get("http://www.imgur.com");
I have been able to work around this by using links google searches provide to imgur, but adding more features will require I be able to manage the URL directly.
Thanks in advance!
It's just http://imgur.com/, not http://www.imgur.com. That's why Google's links work, they are linking to the first one - a different url.
The www prefix is not required by any technical policy. Some sites choose to have URLs both with and without the prefix point to the same server; some choose to use only one or the other. It seems imgur is going without the prefix.
Here's a little more info on the www prefix:
http://en.wikipedia.org/wiki/World_Wide_Web#WWW_prefix
I am using HtmlUnit in Jython and am having trouble selecting a pull-down link. The page I am going to has a table with other AJAX links, and I can click on them and move around and it seems okay, but I can't seem to figure out how to click on a pull-down menu that allows for more links on the page (this pull-down affects the AJAX table, so it's not redirecting me or anything).
Here's my code:
selectField1 = page.getElementById("pageNumSelection")
options2 = selectField1.getOptions()
theOption3 = options2[4]
This gets the option I want, and I verify it's right. So I select it:
MoreOnPage = selectField1.setSelectedAttribute(theOption3, True)
and I am stuck here (not sure if selecting it works or not, because I don't get any message), and I'm not sure what to do next. How do I refresh the page to see the larger list? When clicking on links, all you have to do is find the link and then call linkNameVariable.click(), and it works, but I'm not sure how to refresh a pull-down. When I try to use the WebClient to create an XML page based on the select variable, I still get the old page.
To make it a bit easier, I used the HtmlUnit Scripter and got some code that should work, but it's Java and I'm not sure how to port it to Jython. Here it is:
try
{
    page = webClient.getPage( url );
    HtmlSelect selectField1 = (HtmlSelect) page.getElementById("pageNumSelection");
    List<HtmlOption> options2 = selectField1.getOptions();
    HtmlOption theOption3 = null;
    for (HtmlOption option : options2)
    {
        if (option.getText().equals("100"))
        {
            theOption3 = option;
            break;
        }
    }
    selectField1.setSelectedAttribute(theOption3, true);
}
catch (Exception e)
{
    e.printStackTrace();
}
Have a look at HtmlForm's getSelectByName:
HtmlSelect htmlSelect = form.getSelectByName("stuff[1].type");
HtmlOption htmlOption = htmlSelect.getOption(3);
htmlOption.setSelected(true);
Be sure that WebClient.setJavaScriptEnabled is called. The documentation seems to indicate that it is on by default, but I think this is wrong.
Alternatively, you can use WebDriver, which is a framework that supports both HtmlUnit and Selenium. I personally find the syntax easier to deal with than HtmlUnit.
If I understand correctly, the selection of an option in the select box triggers an AJAX calls which, once finished, modifies some part of the page.
The problem here is that since AJAX is, by definition, asynchronous, you can't really know when the call is finished and when you may inspect the page again to find the new content.
HtmlUnit has a class named NicelyResynchronizingAjaxController, an instance of which you can pass to the WebClient's setAjaxController method. As indicated in the javadoc, using this AJAX controller will automatically make the asynchronous calls coming from a direct user interaction synchronous instead of asynchronous. Once the setSelectedAttribute method is called, you'll thus be able to see the changes made to the original page.
The other option is to use WebClient's waitForBackgroundJavaScript method after the selection is done, and inspect the page once the background JavaScript has ended or the timeout has been reached.
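A minimal Java sketch of the first approach, with the second as an extra safeguard (the page URL is a placeholder, the select ID and option index are the ones from the question, and the timeout is arbitrary):

import java.util.List;

import com.gargoylesoftware.htmlunit.NicelyResynchronizingAjaxController;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlOption;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlSelect;

public class SelectPageSize {
    public static void main(String[] args) throws Exception {
        WebClient webClient = new WebClient();

        // Make AJAX calls triggered by direct user interaction run synchronously.
        webClient.setAjaxController(new NicelyResynchronizingAjaxController());

        HtmlPage page = webClient.getPage("http://example.com/your-page"); // placeholder URL

        HtmlSelect select = (HtmlSelect) page.getElementById("pageNumSelection");
        List<HtmlOption> options = select.getOptions();
        select.setSelectedAttribute(options.get(4), true);

        // Alternative/extra safety: wait for any remaining background JavaScript.
        webClient.waitForBackgroundJavaScript(10000);

        // The page object now reflects the AJAX-updated table.
        System.out.println(page.asXml());

        webClient.close(); // closeAllWindows() on older HtmlUnit versions
    }
}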
This isn't really an answer to the question because I've not used HtmlUnit much before, but you might want to look at Selenium, and in particular Selenium RC. With Selenium RC you are able to control the interactions with a page displayed in a native browser (Firefox, for example). It has developer APIs for Java and Python, among others.
I understand that HtmlUnit uses its own JavaScript engine and web-browser rendering engine, and I'm wondering whether that may be a problem.