JWebUnit to test File Upload - java

I am trying to test a file upload field with JWebUnit, but I do not know how to do it. I see that JWebUnit has a dependency on commons-fileupload, so I expect this is possible, but I can find nothing documenting it, so the feature may as well not exist. I have done some extensive searching, and I think I might soon go as far as checking out the JWebUnit code for traces, but I'm still not sure how to get this done. How do I make sure that a file is added to the HTTP POST when the form's submit button is clicked in the test? Thanks.

Okay, so as it turns out, after some searching through the JWebUnit source code I found a test (around line 77 of one of the test files) that basically shows how it works by simply doing it.
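Here is a rough sketch of the kind of test that file contains, reconstructed from memory rather than copied from the JWebUnit sources, so treat the exact method names, the URL, the form name, and the field name as assumptions to verify against your JWebUnit version. The key idea is that you point the file input at a local path, and the HtmlUnit-backed tester attaches the file to the multipart POST when the form is submitted:
import net.sourceforge.jwebunit.junit.WebTestCase;

public class FileUploadTest extends WebTestCase {

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        setBaseUrl("http://localhost:8080/myapp"); // hypothetical application under test
    }

    public void testUploadAttachesFileToPost() {
        beginAt("/upload.html");          // hypothetical page containing the upload form
        setWorkingForm("uploadForm");     // hypothetical form name
        // Assumption: setting the <input type="file"> field to a local path is enough
        // for the HtmlUnit-backed tester to include the file in the multipart POST.
        setTextField("attachment", "src/test/resources/sample.txt");
        submit();
        assertTextPresent("Upload successful"); // whatever your app echoes back
    }
}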

Related

Downloading Files with ChromeDriver

I have a project where I need to download an audio file through ChromeDriver. The behavior here is different from regular Chrome: if I visit the URL in normal Chrome, it automatically starts downloading the file, but if I do the same thing in ChromeDriver, it does not download the file.
I've tried different configurations of the Chrome options/preferences. I've also found options that worked with old versions of Chrome but no longer do.
Here is one of the better resources I found, but it still didn't work, even with their updated blog post:
https://dkage.wordpress.com/2012/03/10/mid-air-trick-make-selenium-download-files/
When I attempt to use his solution, my ChromeDriver abruptly crashes in a very un-Chrome-like way: the window just disappears, with no "Something went wrong" page like you'd normally expect. Java then can't find my session, because it no longer exists.
Has anyone been successful at downloading files through Selenium webdriver in Chrome? If I need to use another browser, I can.
I'm currently using Chrome Canary.
I have the same problem. One solution that might work is to use another library that is able to operate outside of the browser. I found this Stack Overflow post discussing the issue:
https://sqa.stackexchange.com/questions/2197/how-to-download-a-file-using-seleniums-webdriver
It contains this blog post, which gives you some suggestions.
https://blog.codecentric.de/en/2010/07/file-downloads-with-selenium-mission-impossible/
Window automation
The first approach smells like “brute force”: when searching the net for a solution to the problem, you easily end up with suggestions to control the native window with some window automation software like AutoIt. That means you have to prepare AutoIt so that it waits for the browser's download dialog (the point at which Selenium gives up), takes control of the window, saves the file, and closes the window. After that Selenium can continue as usual.
This might eventually work, but I found it to be technical overkill. And as it turned out, there was a much simpler solution to the problem.
Change the browser's default behaviour
The second possibility is to change the default behaviour of the browser. When clicking on a PDF for example, the browser should not open a dialog and ask the user what to do with the file, but rather save it without comments and questions in a predefined directory. To accomplish that, a file download has to be initiated manually, saved to disk and marked as the default behaviour for these file types from now on.
Well, that could work. You “only” have to ensure that all developers, Hudson instances, etc. share the same browser profile. And depending on the number of different file types, that could be some manual work.
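(For Chrome specifically, this kind of profile tweak can usually be done programmatically through ChromeDriver's experimental "prefs" option instead of a hand-edited shared profile. A sketch of what that might look like; the download directory is a placeholder, and the preference keys are Chrome internals that have changed between versions, as the question notes, so treat them as assumptions to verify:
import java.util.HashMap;
import java.util.Map;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class ChromeDownloadSetup {
    public static WebDriver driverWithSilentDownloads() {
        Map<String, Object> prefs = new HashMap<>();
        // Save files without prompting, into a fixed directory (placeholder path).
        prefs.put("download.default_directory", "/tmp/selenium-downloads");
        prefs.put("download.prompt_for_download", false);
        ChromeOptions options = new ChromeOptions();
        options.setExperimentalOption("prefs", prefs);
        return new ChromeDriver(options);
    }
}
This way every developer and CI node gets the same download behaviour without sharing a browser profile.)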
Direct download
Taking a step back, why do we want to download the file with Selenium in the first place? Wouldn't it be much cooler to download the file not with Selenium but with wget? You would solve the second problem along the way. This seems like a good idea, since wget is available not only for Linux but also for Windows.
Problem solved? Not quite: what about files that are not freely accessible? What if I first need to create some state with Selenium in order to access a generated file? The solution seems fine for public files, but it is not applicable in all situations.
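One common way around that last point (not from the blog post, just a sketch) is to let Selenium do the login and set up the state, then hand its session cookies to a plain HTTP client in the same JVM, so the actual download never touches the browser's download dialog. This assumes the file is served over plain HTTP(S) with cookie-based authentication:
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Collectors;
import org.openqa.selenium.WebDriver;

public class DirectDownload {
    /** Fetches fileUrl using the session cookies Selenium has already established. */
    public static void download(WebDriver driver, String fileUrl, Path target) throws Exception {
        // Forward the browser session's cookies so authenticated URLs keep working.
        String cookieHeader = driver.manage().getCookies().stream()
                .map(c -> c.getName() + "=" + c.getValue())
                .collect(Collectors.joining("; "));
        HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
        conn.setRequestProperty("Cookie", cookieHeader);
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        } finally {
            conn.disconnect();
        }
    }
}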

TestNG Overriding Report Generation

At the end of the TestNG run, I have noticed a couple of things happening.
We get the following message displayed on the console (this example shown with failing tests):
53 tests completed, 6 failed, 1 skipped
There were failing tests. See the results at: file:///Users/***/Workspace/***/build/test-results/
And, of course, an HTML report is generated. What I would like to do is add a step to this process that copies the generated HTML reports to a different server on the same network and also publishes a notification in Slack. I think the Slack part is pretty easy, just sending an HTTP request with a JSON body, but where would I put the code to do this? Can I even do this without having to recompile TestNG?
You just have to implement your own reporter: http://testng.org/doc/documentation-main.html#logging-reporters
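To make that concrete, a minimal reporter might look like the sketch below. The generateReport signature comes from TestNG's IReporter interface; everything else (the copy step, the Slack webhook URL, the message text) is a placeholder for your own post-processing:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.testng.IReporter;
import org.testng.ISuite;
import org.testng.xml.XmlSuite;

public class PublishingReporter implements IReporter {

    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectory) {
        // outputDirectory is where TestNG has just written its reports;
        // copy that directory to the other server here (scp, shared mount, HTTP upload, ...).
        System.out.println("Reports written to " + outputDirectory);
        try {
            // Placeholder webhook URL -- use your own Slack incoming-webhook endpoint.
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://hooks.slack.com/services/XXX/YYY/ZZZ").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            String body = "{\"text\":\"TestNG run finished, reports at " + outputDirectory + "\"}";
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            conn.getResponseCode(); // force the request to be sent
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Registering it via the <listeners> element in testng.xml (or with -listener on the command line) means no recompilation of TestNG itself is needed.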
I don't understand your question completely.
"but where would I put the code to do this?"
At the end, I suppose. You can implement your own listener and then do the copying in its onFinish method.
Or
you can do the copying after the TestNG run is complete. How are you running the TestNG tests? That will be important in that case.

Run main method of a class in the test folder

I tried to Google this but got hundreds of unrelated results about testing. I guess I'm missing a crucial keyword to narrow the hits down to something relevant for me.
I have a class in src/test-integration/java which I need to run, since it is a tool for extracting test data from a database. It's basically just a little script in the main method.
However, when I try "Run As > Java Application" in Eclipse it says: Error: Could not find or load main class x.y.z.MyClass
I know it has worked before, but not sure how I got it to work.
Sorry for any missing information, please feel free to ask for more.
Any ideas of what I'm missing?
I added the full path to the Java class in the Java Build Path properties in Eclipse, and I also deselected "Allow output folders for source folders" and selected it once again (I don't know if that did anything, but I include it anyway).

Works fine in NetBeans, Have some errors when built

I am in a very upsetting situation. My program works 100% fine inside NetBeans, but when I build it, it has some issues. In my program there is one interface and 10 implementation classes. The program calls the correct implementation class based on how the user saves the file (e.g. if the user saves it as game.yellow, it will call "YellowImpl.java"; if saved as game.red, then "RedImpl.java", and so on).
But when it is built, everything is called correctly except YellowImpl! That is, if the user saves it as game.red, it calls "RedImpl" correctly, and the same goes for all the other implementations except YellowImpl. When the user saves the file as game.yellow, the program does nothing! This does not happen when it runs inside NetBeans. I tried clean and build too, and it's still not good. What is causing this? Please help!
However, I am unable to provide the code, because there is a lot of it.
PS: I am using some libraries too.
It's difficult to understand exactly what issue you are having from your explanation alone, with no code. However, I assume you are having an issue with implementation naming conventions.
Perhaps the below link can help.
Java Interfaces/Implementation naming convention
I agree with #Rhys: it is hard to understand what happens in your application. Just let me give you one piece of advice: do not think (even for one second) that there is a bug in the Java compiler, the JVM, etc. It is definitely your bug.
How do you find it? I suggest you use remote debugging.
Run your application outside the IDE (NetBeans in your case) with the remote debugger enabled, connect to it with NetBeans, and debug your application. I believe you will find the problem within minutes.
How to enable remote debugging? Add the following long string to your java execution command line:
-Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n
If something happens at the very beginning of your program's execution, use suspend=y instead.
Now connect to this application from NetBeans. It is simple: just configure the debugger to attach to port 8000, matching the configuration of your application.
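For example, the full launch command could look something like this (the jar name is just a placeholder for however you start your built application):
java -Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n -jar yourapp.jar
In NetBeans the counterpart is Debug > Attach Debugger, choosing the SocketAttach connector with host localhost and port 8000.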
That's it. Good luck.
Thanks a lot for the replies, guys. I managed to find the issue, and it was a simple capitalization problem! I have a package called "kolor" and all the implementations are inside it. In my "YellowImpl" class, I had declared the package as "Kolor" (note that the "K" is capital). That was fine inside NetBeans, but outside it wasn't. After fixing this, everything worked. Thanks again for the replies.

Accessibility of test coverage reports for blind people

I am currently helping a member of my team get to grips with our new project and the tools we are using. We use Java as our primary language. A particularity of my colleague is that he is blind. He works primarily with Emacs, and he runs Maven targets in a terminal.
After I'm done implementing, I find it very useful to check my test coverage. I'd like my colleague to be able to check coverage as well. I have two ways of getting this information:
Use IntelliJ's integrated test coverage (it uses EMMA and shows a green, red, or yellow color next to each line). Very convenient, as I can see this information immediately after having run the tests, with no further interaction.
This won't work for my colleague as he can't use IntelliJ, and it would probably not work anyway as there is no textual representation of the coverage info.
Use Cobertura reports. They use the same concept of green/red lines. They are fine for macro information, like overall coverage in a class, but not for checking which lines have not been covered.
Actually, he could dig into the HTML source of the report and find out which lines have the class nbHitsUncovered, but that seems very impractical.
I would really like to show him how to get his coverage data quickly. Does anybody know of a tool that shows coverage without relying on colors? Or do we have to write our own? (by transforming the HTML report, for instance)
I'm a totally blind developer who does my work on Windows with the JAWS for Windows screen reader, so this won't map exactly to the developer you work with. With a little programming, it looks like the Cobertura test results are the easiest to deal with. Based on the following sample XML report, it shouldn't be difficult to throw together a quick Perl script to check for lines with a hit count of 0.
https://raw.github.com/jenkinsci/cobertura-plugin/master/src/test/resources/hudson/plugins/cobertura/coverage-with-data.xml
I was able to find out that line 24 was the only one executed 0 times with a quick find for
hits="0"
Although I was able to find out what line wasn't executed, I had to scroll up quite a bit to figure out which class and method the line was located in. A quick Perl script could eliminate the need to scroll back and report the package, class, and method the line is located in more efficiently.
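In the same spirit as that Perl idea, here is a rough sketch in Java run against a Cobertura coverage.xml; the element and attribute names (method, class, line, hits, number) are taken from the sample report linked above, so double-check them against your own output:
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class UncoveredLines {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // Don't try to fetch the Cobertura DTD from the network while parsing.
        factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
        Document doc = factory.newDocumentBuilder().parse(new File(args[0])); // e.g. coverage.xml

        NodeList methods = doc.getElementsByTagName("method");
        for (int i = 0; i < methods.getLength(); i++) {
            Element method = (Element) methods.item(i);
            // <method> sits inside <methods>, which sits inside <class>.
            Element clazz = (Element) method.getParentNode().getParentNode();
            NodeList lines = method.getElementsByTagName("line");
            for (int j = 0; j < lines.getLength(); j++) {
                Element line = (Element) lines.item(j);
                if ("0".equals(line.getAttribute("hits"))) {
                    System.out.println(clazz.getAttribute("name") + "." + method.getAttribute("name")
                            + " line " + line.getAttribute("number") + " was never executed");
                }
            }
        }
    }
}
The class name attribute in that sample is already fully qualified, so the package is visible in the output without extra work.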
I took a look at a sample Emma HTML report using Google Chrome and it was pretty accessible. I could tell which methods were fully tested and which weren't. Figuring out which lines were executed and which weren't was more difficult. I could tell a method wasn't 100% executed and would then navigate to it in the report. I then had to use the keystroke provided by my screen reader to announce the color of each line of code. I forget the exact color names, but I could tell which lines were and weren't executed since my screen reader listed them as having different colors. This worked but was slow, since I had to manually check each line of any method that wasn't completely executed, because my screen reader can't automatically announce color changes. I'm not sure how your developer would do the equivalent, since I don't know his exact assistive technology setup.
I have had a dig around, Antoine, as I also use SONAR and Cobertura on my projects and am intrigued by your problem. From what I can see, when you tell the Ant task to generate "html" as the output you get all the line information you want, but as you've pointed out it's not an easily parseable format (and possibly subject to change).
With SONAR I tell Cobertura to output "xml", which gives me a file named coverage.xml. Unfortunately it does not include line-by-line data, and I cannot see any Ant task parameters in the Cobertura docs to include it.
It makes sense to me that the file named cobertura.ser contains all of the data you require, but only the HTML report displays it for you. I believe the answer to your question may lie in trying to extract the required serialised data from cobertura.ser.
Looking at the source code I can see the following classes
net.sourceforge.cobertura.reporting.html.HTMLReport
net.sourceforge.cobertura.reporting.xml.XMLReport
What I suspect you could try to do is take a copy of HTMLReport as a base and try writing the same output as XML, which you can then parse for your own purposes (or just add the same method calls used by HTMLReport to XMLReport). I can see the string nbHitsUncovered in HTMLReport, so hopefully you only have one class to write.
I've googled around and can't see anyone having done this, but it looks like a useful enhancement.
How about using a Greasemonkey script that searches for all the lines having the class nbHitsUncovered and adds a list/table with the wanted information to the report?
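Along the same lines, but outside the browser, a small jsoup-based program could scrape a generated class report and print only the uncovered lines. The use of jsoup and the exact markup of the report are assumptions here; the nbHitsUncovered class name is the one mentioned in the question:
import java.io.File;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class UncoveredFromHtml {
    public static void main(String[] args) throws Exception {
        // args[0]: one of the per-class HTML files produced by the Cobertura report.
        Document doc = Jsoup.parse(new File(args[0]), "UTF-8");
        for (Element hit : doc.select(".nbHitsUncovered")) {
            // Walk up to the enclosing table row, which should hold the line number and source text.
            Element row = hit;
            while (row != null && !"tr".equals(row.tagName())) {
                row = row.parent();
            }
            System.out.println(row != null ? row.text() : hit.text());
        }
    }
}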
