After running a Gatling test via Jenkins, the report shows that a certain number of requests in different scenarios are failing. However, when this is expanded, none of the requests show KOs or give any information about which specific requests are failing and why:
The Gatling logs likewise show no sign of any errors:
Should I assume that all requests succeeded, or is this an issue that needs to be corrected?
EDIT:
I should also mention that I have a run with 122k requests which showed a similar issue; however, that run also shows 6 requests which actually failed and produced error messages:
This leads me to believe that the first picture does not show any real errors, but I am not certain about this.
Random shots in the dark, in the absence of a reproducer from you:
you're forcing the virtual user to a failed state with something like Session#markAsFailed, which would cause the group to fail without any failed request.
you've configured some of your requests to be silent
If not, please provide a reproducer.
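For reference, a minimal sketch of both situations, assuming the Gatling Java DSL; the scenario, group, and request names are only illustrative:

    import static io.gatling.javaapi.core.CoreDsl.*;
    import static io.gatling.javaapi.http.HttpDsl.*;

    import io.gatling.javaapi.core.*;
    import io.gatling.javaapi.http.*;

    public class SilentFailureSimulation extends Simulation {

        HttpProtocolBuilder httpProtocol = http.baseUrl("https://example.org");

        ScenarioBuilder scn = scenario("example")
            .group("checkout").on(
                // A silent request is not reported, so its failure never shows up as a KO
                exec(http("silent request").get("/checkout").silent())
                // Marking the session as failed fails the group without any failed request
                .exec(session -> session.markAsFailed())
            );

        {
            setUp(scn.injectOpen(atOnceUsers(1))).protocols(httpProtocol);
        }
    }

Either of these would explain a failed group in the report with no visible KO requests.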
This is indeed very strange. The best way to double-check is to run the same test with debug mode enabled.
Also:
"you're forcing the virtual user to a failed state with something like Session#markAsFailed, which would cause the group to fail without any failed request." Makes a lot of sense.
At the end of the TestNG run, there are a couple of things I notice happening.
We get the following message displayed on the console (this example shown with failing tests):
53 tests completed, 6 failed, 1 skipped
There were failing tests. See the results at: file:///Users/***/Workspace/***/build/test-results/
And, of course, an HTML report is generated. What I would like to do is add a step to this process that copies the generated HTML reports to a different server on the same network and also publishes a notification in Slack. I think the Slack part is pretty easy, just sending an HTTP request with a JSON body, but where would I put the code to do this? Can I even do this without having to recompile TestNG?
You just have to implement your own reporter: http://testng.org/doc/documentation-main.html#logging-reporters
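A bare-bones sketch of such a reporter (the class name is made up; TestNG calls generateReport once everything has run, and the copy/notify logic is shown in the listener sketch further down):

    import java.util.List;
    import org.testng.IReporter;
    import org.testng.ISuite;
    import org.testng.xml.XmlSuite;

    public class PublishingReporter implements IReporter {
        @Override
        public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectory) {
            // outputDirectory points at the generated report files;
            // copy them to the network share and post the Slack notification from here.
        }
    }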
I don't understand your question completely.
" but where would I put the code to do this?"
At the end, I suppose. You can implement your own listener and then do the copying in its onFinish method.
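A minimal sketch of that idea, assuming Java 11+ and a Slack incoming webhook; the report path, share path, and webhook URL are placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import org.testng.ITestContext;
    import org.testng.TestListenerAdapter;

    public class ReportPublisher extends TestListenerAdapter {

        @Override
        public void onFinish(ITestContext context) {
            try {
                // Copy the generated HTML report to a mounted network share (paths are illustrative)
                Path source = Paths.get("build/reports/tests/index.html");
                Path target = Paths.get("/mnt/qa-reports/" + context.getSuite().getName() + ".html");
                Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);

                // Post a summary to Slack via an incoming webhook (URL is a placeholder)
                String payload = String.format("{\"text\":\"%s: %d passed, %d failed, %d skipped\"}",
                        context.getSuite().getName(),
                        context.getPassedTests().size(),
                        context.getFailedTests().size(),
                        context.getSkippedTests().size());
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://hooks.slack.com/services/XXX/YYY/ZZZ"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(payload))
                        .build();
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

You register it with the @Listeners annotation, via <listeners> in testng.xml, or through your build tool, so nothing in TestNG itself needs recompiling.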
Or
you can do the copying at the end, after the TestNG run is complete. How are you running the TestNG tests? That will be important in that case.
I am using Selenium WebDriver to automate the downloading of videos from a few online video converting sites.
Basically, all the user has to do is enter the URL of a YouTube video and the program will run the script to download the videos for you.
Everything runs very smoothly, but the problem is when the website fails to convert the video.
For example, clipconverter.cc sometimes throws an "Unable to get video infos from YouTube" error, but it works when you try again.
I have done some error checking for the case where elements are missing, and the program will stop running the script, but in the example I mentioned above I want to re-run the script instead.
What is a possible way of achieving this? Do I have to re-create the error page and get the elements presented there?
Since you are not using Selenium as your test engine but as a web scraper, IMHO it's actually a matter of your workflow to handle such states. This could be a corner case of defensive programming, but you can still design it to handle such scenarios when/if they happen.
What is a possible way of achieving this? Do I have to re-create the error page and get the elements presented there?
Once you detect such an error message (via Selenium's functionality)
when the website fails to convert the video
you can call the same piece of code that handled the first request, but this time just pass the parameters you already have (video URL, user, etc.). In case you retry and this site still fails, you can ask another one to carry out the download (as a failover scenario).
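A rough retry sketch along those lines, where the converter URL, the element locators, and the maximum number of attempts are all illustrative guesses:

    import java.util.List;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class ConverterDownloader {

        private static final int MAX_ATTEMPTS = 3;

        // Returns true once the conversion page no longer shows the error message
        static boolean submitForConversion(WebDriver driver, String videoUrl) {
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                driver.get("https://www.clipconverter.cc/");
                driver.findElement(By.id("mediaurl")).sendKeys(videoUrl);   // locator is a guess
                driver.findElement(By.id("submiturl")).click();             // locator is a guess

                // In practice you would add an explicit wait here before checking.
                // If the "Unable to get video infos from YouTube" message is absent, we are done.
                List<WebElement> errors = driver.findElements(
                        By.xpath("//*[contains(text(), 'Unable to get video infos')]"));
                if (errors.isEmpty()) {
                    return true; // continue with the normal download steps
                }
            }
            return false; // failover: hand the same videoUrl to another converter site
        }
    }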
For the design, I would use a mixture of:
the Command pattern to take care of the user requests/responses
the Observer pattern to notify me of changes
the State pattern to alter the behavior when the downloading process's internal state changes
I have come to know about TravisCI. It's great for catching syntactical bugs and resolving them, but if that's the only functionality it provides, then I think Travis isn't worth it for testing. My only question is: does TravisCI automatically test the code for exceptions/errors which might occur when the user is using the app? Are there any prerequisites for this?
I'm not sure that I understand your question correctly.
TravisCI is a combination of a builder and a test runner, not a monkey-testing program. You need to write Android unit tests and automated UI tests based on your service logic, and you also need to use an additional app-feedback program if you want to get information (stack traces) about errors that occur for users.
To write a unit test, see the official Android tutorial at the link below.
https://developer.android.com/training/testing.html
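For illustration, a minimal local JVM unit test that Travis could run as part of the Gradle build; the class and method names are made up:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class ParsingTest {

        @Test
        public void parseAmount_returnsExpectedValue() {
            // A plain JUnit test: Travis only runs the tests you have written, e.g. via "./gradlew test"
            assertEquals(42, Integer.parseInt("42"));
        }
    }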
Also, you can run monkey testing from the command line.
http://developer.android.com/tools/help/monkey.html
And, in order to get error feedback, there are many projects and services that do this. The Google Play developer console provides it in the ANR & Crashes menu. Crashlytics presents errors in a more organized manner. Consider ACRA if you are in a security-sensitive company, as it allows you to install it on your own server. When a user hits a crash, stack traces will be collected automatically.
I'm curious: I've never seen any messages marked as Assert in my LogCat. I have read the documentation here and here, but I still didn't get its purpose and how it can be useful. I get that it reports errors, but how is it different from, for example, Log.e()? Can anyone tell me, or point me at some useful article about its purpose, and give a small example? Thanks.
The android.util.Log.ASSERT log level is something you see when you use one of the wtf() logging methods.
It generally means "assertion failure", i.e. a programming error where certain assumptions do not hold and it's best to terminate the program immediately.
The junit.framework.Assert class you linked to is another mechanism for expressing assertions. On failure, an AssertionFailedError will be thrown.
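A small sketch of where you would actually see the ASSERT level in LogCat; the class and the check are made up:

    import android.util.Log;

    public class AccountChecks {

        private static final String TAG = "AccountChecks";

        static void requireAccount(Object account) {
            if (account == null) {
                // Log.wtf() logs at the ASSERT level; unlike Log.e(), it may also
                // terminate the process and report the failure, depending on system settings.
                Log.wtf(TAG, "Account must never be null at this point");
            }
        }
    }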
Maybe this link will help you.
They are disabled in the emulator by default. You will need to add the -shell -prop debug.assert=1 command-line parameters to Additional Emulator Command Line Options in the run configuration you're using to run your app.
I am working on a java ee web application in NetBeans. I am trying to debug the behavior of the application but the behavior I'm seeing is confusing.
I am running the application through NetBeans in Tomcat. Within NetBeans, I select "Debug" from the root of the Project Tree and I can send one request to the application I've written. Breakpoints are hit and I get unique results from the application.
However, every subsequent time I try to send a request to my application, I get the exact same incorrect result (even if I clear the cache in Chrome), and the NetBeans IDE doesn't stop at any of the defined breakpoints. Is this to be expected? Does a servlet get mangled in memory once it has run through the debugger? Do I need to stop and restart/reattach the NetBeans debugger every time I want to debug the application? Is there something I'm doing wrong when using the debugger? Does this indicate a problem with the code I've written in my servlet?
Thanks,
Jason Mazzotta
rjsang's point on the cache might be valid, and is worth investigating.
However, it might also be that something is breaking earlier than you expect, causing you to never even reach the breakpointed lines.
I would suggest:
Look into liberally sprinkling your code with debug logging statements, using a good logging framework such as Log4j with SLF4J (see the sketch after this list)
Throw more breakpoints at the problem - start with the very first line you expect to be hit by your request. And then go even higher/earlier, if possible.
Tail that Tomcat log (catalina.out) - you might spot something catastrophic happening there.
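A tiny sketch of the first suggestion, assuming SLF4J is on the classpath; the servlet class and the log message are illustrative:

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class MyServlet extends HttpServlet {

        private static final Logger log = LoggerFactory.getLogger(MyServlet.class);

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            // Log as early as possible so you can tell whether the request reaches the servlet at all
            log.debug("doGet entered: queryString={}", req.getQueryString());
            // ... existing servlet logic ...
        }
    }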
Good luck.