My application is a service that runs in the background and modifies network traffic. I want to write a Robotium test that requests a web page and checks its content.
Here is my code for requesting the web content:
Intent i = new Intent(Intent.ACTION_VIEW);
i.setData(Uri.parse(url));
Log.d(TAG, "Looking for text ["+expected+"] at address ["+url+"]");
getSolo().getCurrentActivity().startActivity(i);
getSolo().waitForWebElement(By.textContent(expected), timeout, false);
Here are a few things I have tried, with no result:
boolean foundContent = getSolo().waitForText(expected);
foundContent = foundContent || getSolo().getCurrentWebElements().size() > 0;
foundContent = foundContent || getSolo().getWebElement(By.tagName("div"), 0) != null;
It is enough for me if the test can simply answer "yes, the page contains the word" or "no, it does not", but if possible I would also be glad to have access to the whole page source as a String.
This is my first question here, so I hope it's OK.
Thank you in advance for your help!
P.S.: I read that the solo object has a waitForWebElement method returning a boolean (false when the content is not present before the timeout) in 4.0, but in the code (using the Robotium 4.0 jar) and in other references it is declared as void. Is this something they are still working on?
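Since the 4.0 jar declares waitForWebElement as void, one stopgap is to poll for the condition yourself until a boolean-returning overload ships. Below is a minimal generic sketch in plain Java; the Robotium-specific condition shown in the comment (getCurrentWebElements with By.textContent) is an assumption against the 4.x API, so check that it compiles against your jar:

```java
import java.util.function.BooleanSupplier;

public class WaitUtil {

    // Poll a condition until it is true or the timeout expires.
    // With Robotium you might pass, for example:
    //   () -> !solo.getCurrentWebElements(By.textContent(expected)).isEmpty()
    // (assumption: a getCurrentWebElements(By) overload exists in your version)
    public static boolean waitFor(BooleanSupplier condition,
                                  long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true; // condition met before the deadline
            }
            try {
                Thread.sleep(pollMs); // back off before polling again
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }
}
```

On pre-Java-8 Android toolchains you would pass an anonymous implementation of a small interface instead of the lambda; the polling logic is the same.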
I am working on an app in Android Studio and am having some trouble web-scraping with JSoup. I have successfully connected to the webpage and returned some basic elements to test the library, but now I cannot actually get the elements I need for my app.
I am trying to get a number of elements with the "data-at" attribute. The weird thing is, a few elements with the "data-at" attribute are returned, but not the ones I am looking for. For whatever reason my code is not extracting all of the elements that share the "data-at" attribute on the web page.
This is the URL of the webpage I am scraping:
https://express.liatoyotaofcolonie.com/inventory?f=dealer.name%3ALia%20Toyota%20of%20Colonie&f=submodel%3ACamry&f=trim%3ALE&f=year%3A2020
The method containing the web-scraping code:
@Override
protected String doInBackground(Void... params) {
    String title = "";
    Document doc;
    Log.d(TAG, queryString.toString());
    try {
        doc = Jsoup.connect(queryString.toString()).get();
        Elements content = doc.select("[data-at]");
        for (Element e : content) {
            Log.d(TAG, e.text());
        }
    } catch (IOException e) {
        Log.e(TAG, e.toString());
    }
    return title;
}
The results in Logcat
The element I want to retrieve
One of the elements that is actually being retrieved
This is because some of the content - including the elements you are looking for - is created asynchronously and is not present in the initial DOM (JavaScript ;))
When you view the source of the page you will notice that there are only 17 data-at occurrences, while running document.querySelectorAll("[data-at]") returns 29 nodes.
What you are able to get with Jsoup is the static content of the page (the initial DOM). You won't be able to fetch dynamically created content, because Jsoup does not run the required JS scripts.
In order to overcome this, you will have to either fetch and parse the required resources manually (e.g. trace which AJAX calls the browser makes) or use a headless browser setup. Selenium + headless Chrome should be enough.
The latter option will allow you to scrape ANY possible web application, including SPA apps, which is not possible using plain Jsoup.
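To make the headless-browser route concrete, here is a rough sketch of Selenium + headless Chrome feeding the rendered DOM back into Jsoup. It assumes the selenium-java and jsoup dependencies plus a chromedriver binary on your PATH, and you may additionally need an explicit wait if the [data-at] nodes are inserted late:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class DynamicScrape {
    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless"); // no visible browser window
        WebDriver driver = new ChromeDriver(options);
        try {
            driver.get("https://express.liatoyotaofcolonie.com/inventory"
                    + "?f=dealer.name%3ALia%20Toyota%20of%20Colonie"
                    + "&f=submodel%3ACamry&f=trim%3ALE&f=year%3A2020");
            // The rendered DOM, after the page's JavaScript has inserted
            // the dynamic [data-at] nodes.
            String renderedHtml = driver.getPageSource();
            Document doc = Jsoup.parse(renderedHtml);
            for (Element e : doc.select("[data-at]")) {
                System.out.println(e.text());
            }
        } finally {
            driver.quit();
        }
    }
}
```

From there the doc.select("[data-at]") call behaves exactly as in your doInBackground, just against the post-JavaScript DOM instead of the static source.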
I don't quite know what to do about this, but I'm going to try one more time... The "Problematic Lines" in your code are these:
doc = Jsoup.connect(queryString.toString()).get();
Elements content = doc.select("[data-at]");
It is the queryString that you have requested - the URL points to a page that contains quite a bit of script code. When you load the page in a browser and inspect the live DOM in the developer tools, the HTML you see is not the same exact HTML that is broadcast to and received by Jsoup (which is what "View Source" shows).
If the HTML that is broadcast contains any <SCRIPT TYPE="text/javascript"> ... </SCRIPT> in it (and the named URL in your question does), AND those <SCRIPT> tags are involved in the initial loading of the page, then Jsoup will not know anything about it... It only parses what it receives; it cannot process any dynamic content.
There are four ways that I know of to get the "post-script-loaded" version of the HTML from a dynamic web page, and I will list them here, now. The first is likely the most popular method (in Java) that I have heard about on Stack Overflow:
Selenium - This answer shows how the tool can run JavaScript. These are some Selenium docs, and this page right here has a great "first class" for using the tool to retrieve post-script-processed HTML. Again, there is no way Jsoup can retrieve HTML that is generated in the browser by script (JS/AJAX/Angular/React), since it is just a parser.
Puppeteer - This requires Node.js, a JavaScript runtime. Calling a simple Node.js program from Java could work, but it would be a "two language" solution. I've never used it. Here is an answer that shows getting, sort of, what you are trying to get: the HTML after the script has run.
WebView - Android Java programmers have a popular class called "WebView" (documented here), which I was recently told about (yesterday... but it has been out for years), that will execute script in a browser and return the HTML. Here is an answer that shows "JavaScript injection" to retrieve DOM tree elements from a "WebView" instance (which is how I was told it is done).
Splash - My favorite tool, which I don't think many people have heard of, but which has been the simplest for me. There is an API called the "Splash API"; here is their explanation of a "JavaScript rendering service." Since this is the one I have been using, I'll post a code snippet below that shows how Splash can retrieve post-script-processed HTML.
To run the Splash API (only if you can run Docker), you start a Splash server as below. These two lines are typed into a GCP (Google Cloud Platform) shell instance, and the server starts right up without any configuration:
Pull the image:
$ sudo docker pull scrapinghub/splash
Start the container:
$ sudo docker run -it -p 8050:8050 --rm scrapinghub/splash
In your code, just prepend this String to your URLs:
"http://localhost:8050/render.html?url="
So in your code, you would use the following command (instead), and the page's scripts would (more than likely) have loaded all the HTML elements that you are not finding:
String SPLASH_URL = "http://localhost:8050/render.html?url=";
doc = Jsoup.connect(SPLASH_URL + queryString.toString()).get();
I have a service which I need to access from an Android app. The goal is simply to send a number value from a text box and get a result string back.
I know it is possible to do so in VB using a Service Reference, which I have done. Here is the sample code :
Private Async Sub Button_Click(sender As Object, e As RoutedEventArgs)
    Dim service As New ServiceReference2.Service1Client(ServiceReference2.Service1Client.EndpointConfiguration.BasicHttpBinding_IService1)
    Try
        lblReturn.Text = Await service.GetDataAsync(CInt(txtValueSent.Text))
    Catch ex As Exception
        lblReturn.Text = ex.Message
        If ex.InnerException IsNot Nothing Then
            lblReturn.Text = lblReturn.Text + ex.InnerException.Message
        End If
    End Try
End Sub
After some research I can't seem to find any way to get a quick and simple result with Android Studio like it is possible with Visual Studio.
Are there any tools available which work in a similar way?
If not, which process would be recommended to achieve the same result?
Are there any useful links which could help enlighten me?
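There is no built-in Service Reference equivalent in Android Studio, but since basicHttpBinding is plain SOAP 1.1 over HTTP, one dependency-free approach is to POST the envelope yourself with HttpURLConnection. A sketch; the http://tempuri.org/ namespace and the IService1/GetData action below are the WCF project-template defaults, so verify them against your service's WSDL:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class SoapClient {

    // Build the SOAP 1.1 envelope for a WCF basicHttpBinding operation.
    // Namespace and operation names assume the default WCF template
    // (Service1/IService1/GetData) - check your WSDL.
    public static String buildEnvelope(int value) {
        return "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<s:Body>"
             + "<GetData xmlns=\"http://tempuri.org/\">"
             + "<value>" + value + "</value>"
             + "</GetData>"
             + "</s:Body></s:Envelope>";
    }

    // POST the envelope; on Android, run this off the UI thread
    // (e.g. from an AsyncTask or a background thread).
    public static String call(String endpoint, int value) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction",
                "\"http://tempuri.org/IService1/GetData\"");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(buildEnvelope(value).getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = conn.getInputStream();
             Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }
}
```

You would call SoapClient.call("http://yourserver/Service1.svc", 42) from a background thread and parse the GetDataResult element out of the returned XML. If you would rather not hand-roll SOAP, the ksoap2-android library wraps this boilerplate.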
This might be an only-for-me problem. I am using Eclipse with the AWS plugin to try out AWS EC2 instances. The instances are successfully launched through the program, and I get the list of the running instances' IDs.
The problem is that I launched three instances and they are not showing up in the running instances in the browser, although they show in the Eclipse console:
#5 Describe Current Instances
You have 3 Amazon EC2 instances
i-801cc37a running
i-fb1ec101 running
i-bc1cc346 shutting-down
This is the code I am using to create an AWS instance:
final String imageId = "ami-76f0061f";
int maxInstance = 1, minInstance = 1;
RunInstancesRequest rireq = new RunInstancesRequest(imageId, minInstance, maxInstance);
RunInstancesResult rires = ec2.runInstances(rireq);
List<Instance> insres = rires.getReservation().getInstances();
String createdInstanceId = null;
for (Instance ins : insres) {
    createdInstanceId = ins.getInstanceId();
    System.out.println("New instance has been created: " + createdInstanceId);
}
(There was a problem I was facing: the ImageId was given to me by my professor, and it is not present in the list of images on the aws.amazon website for my region. However, only this one works. I got two IDs from the website for my region, but they ended up with an InvalidAMIID.NotFound error. I even tried the solution here. This might be relevant to the instances not showing up in the browser, as the ImageId is not in my region...)
Can anyone please tell me where I am going wrong? Or is this pretty normal? If it is normal, is there any way to make the 'programmatic' instances show up in the browser? If the ImageIds are the problem, can I please know how to fix them? I assure you that the region in Eclipse and on the AWS website are the same: Oregon (us-west-2).
Thank you
Change the region in your console. Based on your screenshot it is set to "Oregon", but unless you explicitly set a region or endpoint on the client, the AWS SDK launches instances in its default region, us-east-1 (N. Virginia). Change the console to "N. Virginia" and you should be all set.
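If you would rather pin the region in code than rely on the console, you can point the client at us-west-2 explicitly before calling runInstances. A short sketch against the AWS SDK for Java v1 (the SDK the Eclipse plugin uses; method names are from that SDK, so treat them as an assumption against your version):

```java
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2Client;

// Credentials resolution omitted; the client picks them up from the
// default provider chain (env vars, ~/.aws/credentials, etc.).
AmazonEC2Client ec2 = new AmazonEC2Client();
ec2.setRegion(Region.getRegion(Regions.US_WEST_2)); // match the console region
// or, equivalently:
// ec2.setEndpoint("ec2.us-west-2.amazonaws.com");
```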
I have encountered a small problem that I need some help with. I wish to open a browser window that displays an HTML page. The HTML file opens fine in 3 different browsers, so the file itself should be correct. The actual problem is that the browser component brings up a "page can't be displayed" error message.
Here is the code that gets the location:
package org.error;

public class BrowserLocation {

    private String test1 = "org\\error\\PatientNumberError.html";

    public BrowserLocation() {
    }

    public String patientNumberAddress() {
        return test1;
    }
}
and here is the code that creates the browser component and points it at the location of the HTML file:
Browser browser = new Browser(container, SWT.NONE);
browser.setForeground(SWTResourceManager.getColor(SWT.COLOR_DARK_BLUE));
browser.setBackground(SWTResourceManager.getColor(SWT.COLOR_WHITE));
browser.setUrl(browserLocation.patientNumberAddress());
browser.setBounds(25, 25, 315, 180);
Would it be possible to find the error of my ways?
setUrl requires a URL, so you need something like:
browser.setUrl(new File(path).toURI().toURL().toString());
Sorry for not getting back to you earlier.
A senior Java programmer I know told me the problem I was having was a case of absolute address versus relative address.
The reason is that if I were just reading and writing a file, I would be able to use a relative address. However, since I'm handing the address to a browser component - and the page could eventually go online (if I had the money) - it needs to be an absolute address.
As I am still learning Java programming, this was a very specific and important lesson to learn. I hope it helps anybody else who has had this issue.
I want to check the following (on IE8):
After clicking on a link, a popup window is launched; then I want to check whether the Flash content inside it has loaded.
For some reason waitForPopUp does not work; it just keeps waiting and times out. But I've worked around that this way:
selenium.waitForCondition("selenium.getAllWindowTitles().length > 1;", "30000");
String windowID = selenium.getAllWindowTitles()[1];
selenium.selectWindow(windowID);
Then I want to check that the Flash content is there before asserting anything on it (the web page is very slow and the popup takes a while to show anything):
selenium.waitForCondition("selenium.isElementPresent(\"//*[@id='flashcontent']\");", "30000");
FlashSelenium flashApp = new FlashSelenium(selenium, "flashClient");
assertTrue(flashApp.PercentLoaded() == 100);
I've tried hundreds of ways to do this, but none works.
I've also tried checking whether a text is present, but nothing; it always times out even when the web page has loaded completely.
For some reason everything works fine if I execute it step by step in the debugger :S
I gave this a little more thought.
There is no way to test from the outside whether the object is truly loaded and the Flash app is ready and initialized.
The only reliable way of letting Selenium know the Flash object is loaded and ready is for the Flash side to use the ExternalInterface API to call a JavaScript function that sets a variable, and then have Selenium test that variable on a timer.
Example:
// in JavaScript
var isFlashLoaded = false;
function cbIsLoaded() {
    isFlashLoaded = true;
}
// in AS3, once the app has finished initializing
ExternalInterface.call("cbIsLoaded");
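On the Selenium side you can then watch that variable; with Selenium RC the timer is built into waitForCondition. A sketch (it assumes isFlashLoaded is a global on the application window, as in the JavaScript above):

```java
// in the Selenium RC test: block until the Flash app has called back,
// or fail after 30 seconds
selenium.waitForCondition(
        "selenium.browserbot.getCurrentWindow().isFlashLoaded === true;",
        "30000");
```

The condition string is evaluated in the RC runner's context, which is why the application window has to be reached through selenium.browserbot.getCurrentWindow().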