I found a very nice jQuery polaroid running in a div.
Is it possible to save the result as a single image? What library should I use? I am using Java, JavaScript and Python. Is there a library for those languages?
It's not quite clear how you want to use this. A couple of use cases come to mind:
Occasionally, in your own browser. Use Windows screen capture (Alt-printscreen), paste into a graphics editor (Paint.NET), and crop.
Often, in your own browser. Install a browser plugin like IECapt http://iecapt.sourceforge.net/ or FireShot https://addons.mozilla.org/en-US/firefox/addon/5648/
Programmatically, in your own browser. Use Win32 API calls to get a screenshot, then crop and save (Linux equivalent?) - see the sketch after this list.
For a developer, to get testing screenshots on their own machine or a test machine. Selenium RC http://saucelabs.com/blog/index.php/2009/10/selenium-tip-of-the-week-screenshot/
For a user, in their own browser:
Instrument the javascript to report thumbnail positions back to the server; write server-side code to recreate the image, give the user a 'save image as' button to download it.
use an ActiveX control (Snapsie https://github.com/nirvdrum/SnapsIE or WebThumb http://www.acasystems.com/en/web-thumb-activex/ ) - needs permission to run, IE only
use a Java applet (SnapABug http://www.barklund.org/blog/2009/10/14/how-snapabug-works/ ) - needs permission to run
use nonstandard, insecure JavaScript extensions? https://developer.mozilla.org/en/Drawing_Graphics_with_Canvas#section_9 - it looks like you may be able to grab part of the web page and render it onto a canvas, then use http://www.nihilogic.dk/labs/canvas2image/ to save it as a file.
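For the programmatic screenshot-and-crop case, a portable alternative to raw Win32 calls (which also covers the Linux question) is java.awt.Robot. This is only a minimal sketch: it assumes you can determine the div's on-screen position yourself, and the rectangle coordinates and output file name are placeholders.

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class RegionCapture {
    public static void main(String[] args) throws Exception {
        // Capture only the screen region occupied by the div (coordinates are placeholders)
        Rectangle region = new Rectangle(100, 200, 640, 480);
        BufferedImage crop = new Robot().createScreenCapture(region);
        ImageIO.write(crop, "png", new File("polaroid.png"));
    }
}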
Some related questions:
Take a screenshot of a webpage with JavaScript?
save an image with selenium & firefox
Programmatically get a screenshot of a page
JavaScript code to take a screenshot of a website without using ActiveX
Related
I'm working on a way to detect defacement of my website. The idea is to crawl the whole website and, for each page, take a screenshot or render the page as an image and compare it with the one from the last time the page was checked.
I'm looking for a way to convert a whole web page (HTML, CSS, JS) into an image, like a screenshot, regardless of the language (though I would prefer Java, Python or C#).
I need it to be fast and usable on a server.
I already tried the following in Java:
CSSBox, but the rendering isn't good enough (no JS)
Selenium WebDriver, but it's way too slow (time to open Firefox, display the page, etc.) and not usable without a GUI
I think a solution would be some kind of wrapper around a web rendering engine, but I didn't find anything like that (at least in Java). I've been told PhantomJS would fit this need; is that right?
The perfect result would be to create something like that: http://www.page2images.com/home
Use a browser that you can control via a script or command-line options, like PhantomJS. The documentation contains examples of how to take screenshots of URLs.
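Since you prefer Java, one option is to drive PhantomJS as an external process. This is only a sketch: it assumes the phantomjs binary is on the PATH and uses the rasterize.js example script shipped with PhantomJS; the URL and output file name are placeholders.

import java.io.IOException;

public class PhantomScreenshot {
    public static void main(String[] args) throws IOException, InterruptedException {
        // rasterize.js <url> <output> renders the page and writes it as an image
        ProcessBuilder pb = new ProcessBuilder(
                "phantomjs", "rasterize.js", "http://example.com", "example.png");
        pb.inheritIO(); // forward PhantomJS output to this console
        int exitCode = pb.start().waitFor();
        System.out.println("phantomjs exited with code " + exitCode);
    }
}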
The website you linked offers a REST API that performs exactly this task: is that not a viable option for you?
Selenium is your best bet. Depending on your page content (i.e. JS libraries, etc.) it might take some time, but you could automate this with a script that runs nightly via cron, or by using screen.
It has a rich language of assertions and simulated mouse events, and ways to regression-test and/or monitor the state of a set of pages.
Good luck.
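As a rough illustration, here is a minimal sketch using the Selenium Java WebDriver bindings. It assumes Firefox is available (on a headless server you would run it under something like Xvfb), and the URL and output file name are placeholders.

import java.io.File;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class PageCapture {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/page-to-check");
            // Grab a screenshot of the rendered page
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Files.copy(shot.toPath(), new File("current.png").toPath(),
                    StandardCopyOption.REPLACE_EXISTING);
            // Compare current.png against the previous capture to detect changes
        } finally {
            driver.quit();
        }
    }
}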
With no GUI, it's probably not possible to do something like this.
If you're not too constrained about the GUI and related things, you can use the JavaFX WebView and take a screenshot of the node using the following code:
// The snapshot must be taken on the JavaFX Application Thread
WritableImage image = webView.snapshot(null, null);
// Convert the JavaFX image to a java.awt.image.BufferedImage via javafx.embed.swing.SwingFXUtils
BufferedImage bufferedImage = SwingFXUtils.fromFXImage(image, null);
....
References:
WebView#snapshot
SwingFXUtils#fromFXImage
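For completeness, a minimal sketch of how the pieces fit together, assuming Java 8 / JavaFX 8; the URL and output file name are placeholders, and a real implementation would probably wait a little longer after the load succeeds so the page has fully rendered.

import java.io.File;
import javafx.application.Application;
import javafx.concurrent.Worker;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.Scene;
import javafx.scene.image.WritableImage;
import javafx.scene.web.WebView;
import javafx.stage.Stage;
import javax.imageio.ImageIO;

public class WebViewScreenshot extends Application {
    @Override
    public void start(Stage stage) {
        WebView webView = new WebView();
        stage.setScene(new Scene(webView, 1024, 768));
        stage.show();

        // Take the snapshot once the page has finished loading
        webView.getEngine().getLoadWorker().stateProperty().addListener((obs, oldState, newState) -> {
            if (newState == Worker.State.SUCCEEDED) {
                try {
                    WritableImage image = webView.snapshot(null, null);
                    ImageIO.write(SwingFXUtils.fromFXImage(image, null), "png",
                            new File("page.png")); // placeholder output path
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        webView.getEngine().load("http://example.com"); // placeholder page to capture
    }

    public static void main(String[] args) {
        launch(args);
    }
}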
I need to code a bot that does the following:
Go to a JSP page and search for something by:
1. writing something in a search box
2. clicking the search button (submit button)
3. clicking one of the resulting buttons/links (same JSP page with different output)
4. getting the entire HTML of the new page (same JSP page with different output)
The 4th one can be done with screen scraping, and I do not think I need help with it. But I need some guidance on options 1 to 3. Any links or even just some keywords that will help me Google and learn about it will be appreciated. I plan to do this with Java.
My suggestion is to use Selenium (http://docs.seleniumhq.org/download/).
Install Selenium IDE in your Firefox; it can record what you do on a website, store it as a script, and replay it.
This video (http://www.youtube.com/watch?v=gsHyDIyA3dg) is going to be helpful if you are a beginner.
And if you want to do it in Java, it's easy: just export the scripts from Selenium IDE to JUnit WebDriver code.
Of course, you can also use the Selenium Java WebDriver bindings to write a program that operates on the website directly.
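A minimal sketch of steps 1 to 4 with the Selenium Java WebDriver bindings; the URL, element locators, and link text below are invented and would need to be replaced with the ones from the actual JSP page.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SearchBot {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/search.jsp");            // go to the JSP page
            driver.findElement(By.name("q")).sendKeys("something");  // 1. type into the search box
            driver.findElement(By.name("submit")).click();           // 2. click the search (submit) button
            driver.findElement(By.linkText("Some result")).click();  // 3. click one of the result links
            String html = driver.getPageSource();                    // 4. grab the HTML of the new page
            System.out.println(html);
        } finally {
            driver.quit();
        }
    }
}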
Selenium automates browsers. That's it. What you do with that power is entirely up to you.
The above steps can be done by using Selenium (which is a testing tool with Java bindings).
Even points 1 to 3 are screen scraping - you're figuring out (using either manual or automated means) what's in the page and performing actions on it. You could try exploring the Apache HttpClient for an easy way to send HTTP requests and get responses.
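If you go the HTTP route, here is a small sketch with Apache HttpClient 4.x; the URL and parameter name are invented, and a real search form might require a POST with the exact fields the JSP expects.

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class HttpSearch {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            // Issue the same request the search form would send
            HttpGet get = new HttpGet("http://example.com/search.jsp?query=something");
            try (CloseableHttpResponse response = client.execute(get)) {
                String html = EntityUtils.toString(response.getEntity());
                System.out.println(html); // the raw HTML to scrape
            }
        }
    }
}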
I hope you're doing this for legitimate ends - screen scraping is almost always frowned upon if done without permission.
Language - JAVA
IDE - Eclipse
Tool - Selenium Web Driver
I have a test scenario where clicking on a link opens a new window showing PDF content. The PDF shows a form with Save and Cancel buttons. We don't get any element identifiers using Firebug for elements shown in the new window. How can I write a script to tell the driver to identify fields in the PDF, input something there, and then click the Save button?
Not sure if you can identify fields in a PDF using WebDriver, since in most cases the PDF is embedded in the browser and the fields in the PDF are not HTML components.
One thing you might want to check out is AutoIt scripting, calling the script from your host program, in your case a Java program. I have not tried using AutoIt for PDFs, but I have used it for other purposes and found that it is not difficult to learn and use.
You can call the AutoIt executable from a Java program as follows:
// Launch the compiled AutoIt script from the Java program
Runtime.getRuntime().exec("checkPDF.exe");
Since it is an executable, you might have portability issues if you plan on running your webdriver script on other platforms.
Not going to work with Selenium. PDFs are usually displayed using native desktop applications or browser plugins, and that is nothing Selenium can handle in general. BTW: what do you intend to test in a PDF form? Seems pointless to me (at first sight).
What you CAN do is: take a desktop screenshot using AWT. And you should be able to use native desktop methods to simulate mouse clicks and key presses. But that is, well, not so nice [tm] because you may need to record the click coordinates and re-record them every time the PDF changes.
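For the "simulate mouse clicks and key presses" part, java.awt.Robot is the standard AWT way to do it. A minimal sketch; the coordinates and key below are made up and would have to be recorded for your particular PDF layout.

import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

public class PdfFormPoker {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();

        // Click into a form field at recorded coordinates (placeholders)
        robot.mouseMove(640, 480);
        robot.mousePress(InputEvent.BUTTON1_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_MASK);

        // Type a character into the field
        robot.keyPress(KeyEvent.VK_A);
        robot.keyRelease(KeyEvent.VK_A);

        // A desktop screenshot afterwards (Robot#createScreenCapture) can verify what happened
    }
}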
Maybe related: Interacting with a PDF popup in Selenium
How do I develop a user-defined plugin for a web browser?
It should have these features:
It should be installable in any browser.
It should be executed whenever the browser starts.
It should monitor the web page and access the web page that the browser displays.
It should monitor and access the web page (for example, getting a value from a text box) irrespective of which page the browser displays. (The page can be at any URL, whether Google or any other domain.)
How do I start with this? It would be helpful if there were some sample. Thanks in advance.
For Firefox < 4, write an add-on; for 4 and above, Jetpack will be the way to go. For Chrome, write an extension. Opera: well, wait till 11.5 ships. Safari 5. IE.
Read the documentation for each browser.
Hm...
I hope you tell the user about that.
Right now it reads like you want to deploy something to a PC and monitor all browsers; if you want to do that, you'll have to put some effort into it.
I don't think 1 is possible: you will have to create multiple versions of your plugin in order to work with each browser.
There is not a single example because, as I mentioned, you are going to have to do something different for each one. You will need to determine and target specific browsers. I would suggest starting with one, and once you have it working, move on to the next browser.
Do you mean a plugin (like Flash or a PDF reader) or an extension?
Plugins are native programs, and extensions are normally coded in JavaScript & HTML.
Depending on what you want to do, an extension is powerful enough and the better choice.
There is no browser-independent way to implement plugins. For each browser you must read the interface reference. For example, the reference for Chrome: http://code.google.com/chrome/extensions/getstarted.html
Consider the most excellent wordle tag cloud generator:
http://www.wordle.net/create
Entering text into the "textform" textarea and clicking the Go button starts up the Wordle Java applet on that page. No traffic goes back to the server.
How can I cause this to happen programmatically? No hack too cheap!!
background for this question:
"tag cloud" generators?
If you mean starting it programmatically from a browser page, you can use the same type of JavaScript that that page uses, which calls the function Wordle.t() to start the applet.
If you want to call it from a Java program, you can download the Wordle.class or jar file yourself, and call the functions directly.
I'm the creator of Wordle.
In case anyone finds this page in the future, I thought it would be useful to explain that Wordle invokes its applet by constructing an applet tag with a huge <param> containing a sanitized version of whatever text you pasted in. It is the cheapest hack imaginable.