Responsive web pattern on the server side - Java

I have been reading about the responsive web pattern and I have successfully implemented it on a test page. However, I see its limitations: the layout is constrained by the order/sequence of the HTML tags. You can set display:none on a lot of content, but that is not nice.
So is there a way on the server side to decide what the HTML response will include based on the kind of device the user is on? I am mainly interested in Scala (Lift) and Java EE solutions.

Using Lift you can identify the userAgent and, if it is mobile, show different HTML than you would to a desktop browser.
There are a few ways to accomplish this: one is from the SiteMap, another is from each snippet.
The mailing list is a good place to ask the specifics of each method.
Update
This is an example using SiteMap from Lift:
def sitemap = SiteMap(
  Menu.i("Home") / "index" >> pickTemplate(),
  Menu.i("First") / "first"
)

// Show the mobile or regular page
def pickTemplate() = {
  // If the browser is Chrome, pick this template
  if (S.request.map(_.isChrome) openOr true) {
    Template(() => Templates("chrome" :: Nil) openOr NodeSeq.Empty)
  } else {
    Template(() => Templates("other" :: Nil) openOr NodeSeq.Empty)
  }
}
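For the Java EE part of the question, the analogous server-side check is usually done in a servlet Filter that inspects the User-Agent header and forwards to a device-specific view. A minimal sketch, where the crude detection logic and the /WEB-INF/... JSP paths are placeholder assumptions, not from the original thread:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

// Hypothetical filter: routes requests to a mobile or desktop view
// based on a very crude User-Agent check. Adjust the detection and
// the view paths to your own application.
public class DeviceFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String ua = ((HttpServletRequest) req).getHeader("User-Agent");
        boolean mobile = ua != null && ua.toLowerCase().contains("mobile");
        // Forward to a device-specific JSP; the paths are placeholders.
        String view = mobile ? "/WEB-INF/mobile/index.jsp" : "/WEB-INF/desktop/index.jsp";
        req.getRequestDispatcher(view).forward(req, res);
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }
}

Map the filter to the pages you want to vary in web.xml (or with @WebFilter); anything not matched falls through to the normal rendering.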

Related

Why is my Jsoup Code not Returning the Correct Elements?

I am working on an app in Android Studio and am having some trouble web-scraping with JSoup. I have successfully connected to the webpage and returned some basic elements to test the library, but now I cannot actually get the elements I need for my app.
I am trying to get a number of elements with the "data-at" attribute. The weird thing is, a few elements with the "data-at" attribute are returned, but not the ones I am looking for. For whatever reason my code is not extracting all of the elements that share the "data-at" attribute on the web page.
This is the URL of the webpage I am scraping:
https://express.liatoyotaofcolonie.com/inventory?f=dealer.name%3ALia%20Toyota%20of%20Colonie&f=submodel%3ACamry&f=trim%3ALE&f=year%3A2020
The method containing the web-scraping code:
@Override
protected String doInBackground(Void... params) {
    String title = "";
    Document doc;
    Log.d(TAG, queryString.toString());
    try {
        doc = Jsoup.connect(queryString.toString()).get();
        Elements content = doc.select("[data-at]");
        for (Element e : content) {
            Log.d(TAG, e.text());
        }
    } catch (IOException e) {
        Log.e(TAG, e.toString());
    }
    return title;
}
(Screenshots omitted: the results in Logcat, the element I want to retrieve, and one of the elements that is actually being retrieved.)
This is because some of the content, including the elements you are looking for, is created asynchronously and is not present in the initial DOM (JavaScript ;)).
When you view the source of the page you will notice there are only 17 data-at occurrences, while running document.querySelectorAll("[data-at]") in the browser returns 29 nodes.
What you are able to get with Jsoup is the static content of the page (the initial DOM). You won't be able to fetch dynamically created content, because you do not run the required JS scripts.
To overcome this, you will have to either fetch and parse the required resources manually (e.g. trace what AJAX calls the browser makes) or use a headless browser setup. Selenium + headless Chrome should be enough.
The latter option will allow you to scrape ANY possible web application, including SPA apps, which is not possible using plain Jsoup.
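A minimal sketch of that Selenium + headless Chrome route in Java, letting Chrome execute the scripts and then handing the rendered HTML to Jsoup. The URL and selector come from the question; the ChromeDriver setup is an assumption about your environment:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class RenderedScrape {
    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless"); // run Chrome without a window
        WebDriver driver = new ChromeDriver(options);
        try {
            driver.get("https://express.liatoyotaofcolonie.com/inventory?f=dealer.name%3ALia%20Toyota%20of%20Colonie&f=submodel%3ACamry&f=trim%3ALE&f=year%3A2020");
            // getPageSource() returns the DOM after the scripts have run,
            // so the dynamically created data-at elements should be present.
            Document doc = Jsoup.parse(driver.getPageSource());
            for (Element e : doc.select("[data-at]")) {
                System.out.println(e.text());
            }
        } finally {
            driver.quit();
        }
    }
}

Depending on timing you may still need an explicit wait (e.g. WebDriverWait) so the AJAX calls have finished before you read the page source.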
I don't quite know what to do about this, but I'm going to try one more time... The "Problematic Lines" in your code are these:
doc = Jsoup.connect(queryString.toString()).get();
Elements content = doc.select("[data-at]");
It is the queryString that you have requested: the URL points to a page that contains quite a bit of script code. When you load up a browser and click the button (or menu option) that reads "View Source", the HTML you see is not the same exact HTML that is sent to and received by Jsoup.
If the HTML that is sent contains any <SCRIPT TYPE="text/javascript"> ... </SCRIPT> tags (and the URL named in your question does), AND those <SCRIPT> tags are involved in the initial loading of the page, then Jsoup will not know anything about it... It only parses what it receives; it cannot process any dynamic content.
There are four ways that I know of to get the "post-script-loaded" version of the HTML from a dynamic web page, and I will list them here now. The first is likely the most popular method (in Java) that I have heard about on Stack Overflow:
Selenium: This answer will show how the tool can run JavaScript. These are some Selenium docs. And then this page right here has a great "first class" for using the tool to retrieve post-script-processed HTML. Again, there is no way Jsoup can retrieve HTML that is sent to the browser by script (JS/AJAX/Angular/React), since it is just a parser.
Puppeteer: This requires running a language called Node.js. Perhaps calling a simple Node.js program from Java could work, but it would be a "two language" solution. I've never used it. Here is an answer that shows getting, sort of, what you are trying to get: the HTML after the script.
WebView: Android Java programmers have a popular class called "WebView" (documented here), which I was recently told about (yesterday... but it has been out for years), that will execute script in a browser and return the HTML. Here is an answer that shows "JavaScript injection" to retrieve DOM tree elements from a "WebView" instance (which is how I was told it is done).
Splash: My favorite tool, which I don't think anyone has heard of, but which has been the simplest for me... There is an API called the "Splash API". Here is their explanation of a "JavaScript Rendering Service". Since this is the one I have been using, I'll post a code snippet below that shows how Splash can retrieve post-script-processed HTML.
To run the Splash API (only if you have Docker available), you start a Splash server as below. These two lines are typed into a GCP (Google Cloud Platform) Shell instance, and the server starts right up without any configuration:
Pull the image:
$ sudo docker pull scrapinghub/splash
Start the container:
$ sudo docker run -it -p 8050:8050 --rm scrapinghub/splash
In your code, just prepend this String to your URLs:
"http://localhost:8050/render.html?url="
So in your code you would use the following command (instead), and the script would more than likely load all the HTML elements that you are not finding:
String SPLASH_URL = "http://localhost:8050/render.html?url=";
doc = Jsoup.connect(SPLASH_URL + queryString.toString()).get();

replacing Result's html

I have a URL which shows me a coupon form based on id:
GET /coupon/:couponId
All the coupon forms are different and submit different POST params to:
POST /saveCoupon/:id
I want to have a convenient way of debugging my coupons and a way of viewing the actual POST params submitted.
I've made a controller on URL POST /outputPOST/saveCoupon/:id which saves nothing, but prints the POST params it receives to the browser.
Now I want to have a URL like GET /changeActionUrl/coupon/:couponId which calls GET /coupon/:couponId and then substitutes the form's action URL POST /saveCoupon/:id with POST /outputPOST/saveCoupon/:id.
In other words I want to do something like:
Result.getHtml().replace("/saveCoupon/","/outputPOST/saveCoupon/");
With this I can easily debug my coupons just by adding "/outputPOST" in the browser.
You could just use a bookmarklet and javascript to replace all of the forms' action attributes. That way your developer can do it with one click instead of changing urls.
Something like this will prefix all form actions on the page with "/outputPOST".
javascript:(function(){var forms=document.getElementsByTagName('FORM');for(i=0;i<forms.length;++i){forms[i].setAttribute('action','/outputPOST'+forms[i].getAttribute('action'));}})();
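If you do want the server-side rewrite described in the question, a minimal sketch in Play's Java API could look like the following. The template name views.html.coupon and the action wiring are assumptions for illustration, not from the original post:

import play.mvc.Controller;
import play.mvc.Result;
import play.twirl.api.Html;

public class DebugCouponController extends Controller {

    // Hypothetical action: renders the normal coupon page, then rewrites
    // the form action so submissions go to the debugging endpoint.
    public Result changeActionUrl(Long couponId) {
        Html original = views.html.coupon.render(couponId); // assumed template
        String patched = original.body().replace("/saveCoupon/", "/outputPOST/saveCoupon/");
        return ok(Html.apply(patched));
    }
}

You would then point GET /changeActionUrl/coupon/:couponId at this action in your routes file.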
I don't understand, at least not everything ;)
In general you can debug every piece of a Play app using a debugger (check the tips for your favorite IDE on how to do that). This will always be better, faster, etc. than modifying code only to check incoming values.
E.g. IntelliJ IDEA 13+ with Play support allows for debugging like a dream!

what is the best way to display a one-time welcome message to website visitors?

I've got a website, and I want to add a welcome message which hovers over a certain part of the page, loads only the first time a visitor logs in, and won't load again (presumably cookies would be used). It says something like "adjust your settings here..".
I don't want it to be an external popup, but something that loads on the page in a certain area defined by me (by pixel reference).
What would be the best language to do this in? Does anyone have any examples of this, or any site-based generators to make it with?
Thanks
Create one more field in the database, lastlogin.
When the user is created, set the lastlogin field to special.
When the user signs in the next time from the login page, update the lastlogin value to regular.
// query to get the value of lastlogin
// add CSS to the elements you want to hover
<element class="<?php if($last_login == 'special') { echo 'specialcss'; } else { echo 'regularcss'; } ?>">
Done in PHP.
As you added the tag, PHP would do this; actually, any language will do.
Generally you have two ways to do this:
Do it on your server.
Do it on the client's computer.
For the first way, you check the cookies and generate the page you want.
For the second way, you need to arrange the page the visitors see with JavaScript.
Way 1 is recommended, because it sends fewer bytes. LOL
Update:
Your server supports PHP, right? And the page, say index.php, has a special area which is different when the visitor logs in for the first time, right?
<?php
if (firstLogin()) {
    genSpecial();
} else {
    genRegular();
}
?>
In the function firstLogin(), you read the cookies and decide.
The other two functions just generate two different parts, i.e. some HTML source code.
To your question: if you need to load some image, do it in genSpecial(). And if you choose the first way, JS is not used to generate the special area; it is only needed if the special area itself requires some JS.
It is possible through JavaScript. Once the user has been shown the settings, store the result in a cookie valid for as long as you want. The next time the user logs in, check whether the cookie is set and proceed accordingly.
Sample code to create cookies:
function setCookie(c_name, value, exdays)
{
    var exdate = new Date();
    exdate.setDate(exdate.getDate() + exdays);
    var c_value = escape(value) + ((exdays == null) ? "" : "; expires=" + exdate.toUTCString());
    document.cookie = c_name + "=" + c_value;
}
Refer to this for more details on how to create and use cookies.

How to deal with file uploading in test automation using selenium or webdriver

I think everybody who uses WebDriver for test automation must be aware of its great advantages for web development.
But there is a huge issue if file uploading is part of your web flow: it stops being test automation. The security restrictions of browsers (around invoking file selection) practically make it impossible to automate those tests.
AFAIK the only option is to have WebDriver click the file upload button, sleep the thread, have the developer/tester manually select the file, and then do the rest of the web flow.
How do you deal with this? Is there a workaround? Because it really can't be done like this; it wouldn't make sense.
This is the only case I know of where browser security restrictions do not apply:
<script language="javascript">
// IE-only: relies on ActiveX, so this will not work in other browsers
window.onload = function() {
    document.all.attachment.focus();
    var WshShell = new ActiveXObject("WScript.Shell");
    WshShell.SendKeys("D:\\MyFile.doc");
};
</script>
WebDriver can handle this quite easily in IE and Firefox. It's a simple case of finding the element and typing into it.
driver = webdriver.Firefox()
element = driver.find_element_by_id("fileUpload")
element.send_keys("myfile.txt")
The above example is in Python, but you get the idea.
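Since the question is Java-centric, here is a rough Java equivalent of the same idea, assuming the same fileUpload element id and a placeholder URL and file path:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class UploadExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/upload"); // placeholder URL
            // Type the file path straight into the <input type="file"> element;
            // no native file-chooser dialog is opened this way.
            WebElement upload = driver.findElement(By.id("fileUpload"));
            upload.sendKeys("/absolute/path/to/myfile.txt"); // assumed path
        } finally {
            driver.quit();
        }
    }
}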
Using AWT Robots is one option if you're using Java, which you are. But it's not a good option: it is not very dependable, and not clean at all. Look here.
I use HttpClient and run a few tests outside of Selenium. That's more dependable and cleaner.
See the code below. You'll need more exception handling and conditionals to make it suit your job.
// Log in first via the standard j_security_check form
HttpClient c = new HttpClient();
String url = "http://" + cargoHost + ":" + cargoPort + contextPath + "/j_security_check";
PostMethod post = new PostMethod(url);
post.setParameter("j_username", username);
post.setParameter("j_password", password);
c.executeMethod(post);

// Then upload the file as a multipart POST
url = "http://" + cargoHost + ":" + cargoPort + contextPath + "/myurl.html";
MultipartPostMethod mPost = new MultipartPostMethod(url);
String fileNameWithPath = this.getClass().getClassLoader().getResource(filename).getPath();
File f1 = new File(fileNameWithPath);
mPost.addParameter(elementName, f1);
mPost.addParameter("action", "upload");
mPost.addParameter("ajax", "true");
c.executeMethod(mPost);
mPost.getResponseBodyAsString();
The suggestion of typing into the text box works only if the textbox is enabled.
Quite a few applications force you to go through the file system's file browser, for obvious reasons.
What do you do then?
I don't think the WebDriver mavens thought of just pressing keys into the keyboard buffer (this used to be a "no brainer" in the earlier days of automation).
===
After several days of little sleep, head banging and hair pulling, I was able to get some of the Robot-based solution suggested here (and elsewhere) working.
The problem I encountered was that the dialog text box, although populated with the correct file path and name, would not respond to the KeyPress/KeyRelease events terminating the file name with VK_ENTER, as in:
private final static int Enter = KeyEvent.VK_ENTER;
keyboard.keyPress(Enter);
keyboard.keyRelease(Enter);
What happens is that the file path and file name are typed in correctly, but the dialog remains open, against my constant hoping and praying that the key emulation would terminate it and be processed by the app under test.
Does anyone know how to get this robot to behave a bit better?
Just thought I'd provide an FYI to the author's original post about using ActiveX: another workaround would be to integrate with desktop GUI automation tools to do the job. For example, google "Selenium AutoIt". For a more cross-platform solution, consider tools like Sikuli over AutoIt.
This is, of course, without considering WebDriver's support for uploads in IE & Firefox via sendKeys, or other browsers where that method doesn't work.
After banging my head on this problem for far too many hours, I wanted to share with the community that Firefox 7.0.1 seems to have an issue with the FirefoxDriver sendKeys() implementation noted above (at least I couldn't get it to work on my Windows 7 x64 box). I haven't found a workaround, but updating to Firefox 8.0.1 seems to have fixed the problem. For those of you wondering, it's also possible to use Selenium RC to solve this problem (though you need to account for all of your target operating systems and the native key presses required to interact with their file selection dialogs). Hopefully the issues I had to work around save other people some time; in summary:
https://gist.github.com/1511360
If you are using a grid, you could share the folder of test files.
This way you could select the upload input field and set its value to \\pc-name\myTestFiles
If you're not, you should use local files on each system.

selecting pulldown in htmlunit

I am using HtmlUnit in Jython and am having trouble selecting a pull-down link. The page I am going to has a table with other AJAX links; I can click on them and move around, and it seems okay, but I can't figure out how to click on a pull-down menu that allows for more links on the page (this pull-down affects the AJAX table, so it's not redirecting me or anything).
Here's my code:
selectField1 = page.getElementById("pageNumSelection")
options2 = selectField1.getOptions()
theOption3 = options2[4]
This gets the option I want; I verify it's right. So I select it:
MoreOnPage = selectField1.setSelectedAttribute(theOption3, True)
and I am stuck here (not sure if selecting it works, because I don't get any message), and I'm not sure what to do next. How do I refresh the page to see the larger list? When clicking on links, all you have to do is find the link, call linkNameVariable.click(), and it works. But I'm not sure how to refresh after a pull-down selection. When I try to use the WebClient to create an XML page based on the select variable, I still get the old page.
To make it a bit easier, I used the HtmlUnit Scripter and got some code that should work, but it's Java and I'm not sure how to port it to Jython. Here it is:
try
{
    page = webClient.getPage(url);
    HtmlSelect selectField1 = (HtmlSelect) page.getElementById("pageNumSelection");
    List<HtmlOption> options2 = selectField1.getOptions();
    HtmlOption theOption3 = null;
    for (HtmlOption option : options2)
    {
        if (option.getText().equals("100"))
        {
            theOption3 = option;
            break;
        }
    }
    selectField1.setSelectedAttribute(theOption3, true);
}
catch (Exception e)
{
    e.printStackTrace();
}
Have a look at HtmlForm's getSelectByName:
HtmlSelect htmlSelect = form.getSelectByName("stuff[1].type");
HtmlOption htmlOption = htmlSelect.getOption(3);
htmlOption.setSelected(true);
Be sure that WebClient.setJavaScriptEnabled is called. The documentation seems to indicate that it is on by default, but I think this is wrong.
Alternatively, you can use WebDriver, which is a framework that supports both HtmlUnit and Selenium. I personally find the syntax easier to deal with than HtmlUnit.
If I understand correctly, the selection of an option in the select box triggers an AJAX call which, once finished, modifies some part of the page.
The problem here is that since AJAX is, by definition, asynchronous, you can't really know when the call is finished and when you may inspect the page again to find the new content.
HtmlUnit has a class named NicelyResynchronizingAjaxController, an instance of which you can pass to the WebClient's setAjaxController method. As indicated in the Javadoc, using this AJAX controller will automatically make the asynchronous calls coming from a direct user interaction synchronous instead of asynchronous. Once the setSelectedAttribute method is called, you'll thus be able to see the changes made to the original page.
The other option is to use WebClient's waitForBackgroundJavaScript method after the selection is done, and inspect the page once the background JavaScript has ended or the timeout has been reached.
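A minimal Java sketch of both options, assuming a reasonably recent HtmlUnit version; the select id and option text come from the thread, while the URL and the timeout value are placeholder assumptions:

import com.gargoylesoftware.htmlunit.NicelyResynchronizingAjaxController;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlOption;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlSelect;

public class SelectExample {
    public static void main(String[] args) throws Exception {
        WebClient webClient = new WebClient();
        // Option 1: make user-triggered AJAX calls synchronous
        webClient.setAjaxController(new NicelyResynchronizingAjaxController());

        HtmlPage page = webClient.getPage("http://example.com/table"); // placeholder URL
        HtmlSelect select = (HtmlSelect) page.getElementById("pageNumSelection");
        HtmlOption option = select.getOptionByText("100");

        // setSelectedAttribute returns the page resulting from the selection
        page = (HtmlPage) select.setSelectedAttribute(option, true);

        // Option 2: alternatively, wait for background JavaScript to finish
        webClient.waitForBackgroundJavaScript(10000); // assumed 10s timeout

        System.out.println(page.asXml());
        webClient.close();
    }
}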
This isn't really an answer to the question because I've not used HtmlUnit much before, but you might want to look at Selenium, and in particular Selenium RC. With Selenium RC you are able to control the interactions with a page displayed in a native browser (Firefox, for example). It has developer APIs for Java and Python, amongst others.
I understand that HtmlUnit uses its own JavaScript and web-browser rendering engine, and I'm wondering whether that may be a problem.
