At the end of the TestNG run, a couple of things happen.
We get the following message on the console (this example is from a run with failing tests):
53 tests completed, 6 failed, 1 skipped
There were failing tests. See the results at: file:///Users/***/Workspace/***/build/test-results/
And, of course, an HTML report is generated. What I would like to do is add a step to this process that copies the generated HTML reports to a different server on the same network and also publishes a notification in Slack. I think the Slack part is pretty easy, just sending an HTTP request with a JSON body, but where would I put the code to do this? Can I even do this without having to recompile TestNG?
You just have to implement your own reporter: http://testng.org/doc/documentation-main.html#logging-reporters
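For reference, a minimal sketch of such a reporter, assuming Java 11+ for java.net.http; the class name and the webhook URL are placeholders, and the JSON body is hand-rolled here:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

import org.testng.IReporter;
import org.testng.ISuite;
import org.testng.xml.XmlSuite;

public class SlackNotifyingReporter implements IReporter {

    // Placeholder: your Slack incoming-webhook URL goes here.
    private static final String WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ";

    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites,
                               String outputDirectory) {
        // TestNG invokes reporters after all suites have run, so the
        // HTML reports already exist under outputDirectory at this point.
        String json = "{\"text\":\"TestNG run finished. Reports: " + outputDirectory + "\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(WEBHOOK_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        try {
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        } catch (Exception e) {
            e.printStackTrace(); // a failed notification should not fail the build
        }
    }
}

Registering it in testng.xml (a listener entry) or with the @Listeners annotation keeps it entirely outside TestNG itself, so no recompile is needed.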
I don't understand your question completely.
" but where would I put the code to do this?"
At the end, I suppose. You can implement your own listener and do the copying in its onFinish method; a sketch follows below.
Or
you can do the copying at the end, after the TestNG run is complete. How are you running the TestNG tests? That will be important in that case.
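A sketch of that onFinish approach using an ISuiteListener; both paths are assumptions for your layout. Note that TestNG writes its default HTML report after the suites finish, so if you need the finished report, the reporter hook shown above is the safer place to copy from:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

import org.testng.ISuite;
import org.testng.ISuiteListener;

public class ReportCopyListener implements ISuiteListener {

    @Override
    public void onFinish(ISuite suite) {
        // Assumed paths: the local results directory and a mounted network share.
        Path source = Paths.get("build/test-results");
        Path target = Paths.get("/mnt/report-server/test-results");
        try (Stream<Path> files = Files.walk(source)) {
            files.forEach(p -> {
                try {
                    Path dest = target.resolve(source.relativize(p).toString());
                    if (Files.isDirectory(p)) {
                        Files.createDirectories(dest);
                    } else {
                        Files.copy(p, dest, StandardCopyOption.REPLACE_EXISTING);
                    }
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}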
After running a Gatling test via Jenkins, the report shows that a certain number of requests in different scenarios are failing. However, when this is expanded, none of the requests show KOs or give any information about which specific requests are failing and why.
The Gatling logs show no sign of any errors either.
Should I assume that all requests succeeded, or is this an issue that needs to be corrected?
EDIT:
I should also mention that I have a run with 122k requests which showed a similar issue; however, that run also shows 6 requests that actually fail and output error messages.
This leads me to believe that the first picture does not show any real errors, but I am not certain about this.
Random shots in the dark, in the absence of a reproducer from you:
- you're forcing the virtual user to a failed state with something like Session#markAsFailed, which would cause the group to fail without any failed request
- you've configured some of your requests to be silent
If not, please provide a reproducer.
This is indeed very strange. The best way to double-check is to run the same test with debug logging enabled (see the config sketch below).
Also:
"you're forcing the virtual user to a failed state with something like Session#markAsFailed that would cause the group to fail without any failed request." Makes a lot of sense.
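For reference, enabling that debug output in Gatling usually means editing conf/logback.xml; a logger along these lines ships commented out in the Gatling 3.x bundles (the exact logger name may differ between versions):

<!-- set to DEBUG to log failed HTTP requests only, or TRACE to log all requests -->
<logger name="io.gatling.http.engine.response" level="DEBUG"/>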
IntelliJ keeps track of test (JUnit) history, which is very nice...
But it registers failing tests as a 'problem', which gets flagged by commit checks. I want to commit non-working tests at the end of the day, so I want to be able to clear the test history.
However, I can't find that option anywhere, nor does 'delete' work on the selected test run.
How do I clear all of the test history?
In Settings, Editor, Inspections, select "Failed line in test" and turn the severity down to "Weak Warning".
A workaround would be to comment out the non-working tests, then run the suite again and commit with only working tests.
Also put a TODO in the comments of the non-working tests, so that you know you have to check them later on. An alternative that keeps the tests compiling is sketched below.
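A sketch of that alternative, assuming JUnit 4 (JUnit 5 uses @Disabled instead): annotate the broken tests so they still compile but are skipped, and the reason string doubles as the TODO:

import org.junit.Ignore;
import org.junit.Test;

public class CheckoutTest {

    @Ignore("TODO: fix and re-enable; failing as of end-of-day commit")
    @Test
    public void appliesDiscountToTotal() {
        // test body unchanged; JUnit reports it as skipped, not failed
    }
}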
I have an automated test that switches frames to a PDF viewer to read its text. I have four tests which basically do the same thing: look for a number, a letter... and finally check that a word does not exist in the PDF. If I run these tests individually, or all the frame-related tests together, everything works well. But if I run the whole test suite (which includes around 500 tests), one of them systematically fails every time, showing me this error:
org.openqa.selenium.NoSuchFrameException: no such frame
I'm using try/catch, Thread.sleep... and all my tests work fine, but I cannot figure out why it's always the same test that fails, and why it works when I run it individually or with just its feature. Just wondering if you guys can show me different reasons why this could happen, so I can improve my skills.
Does your test suite take screenshots when a test fails? If it doesn't, I would encourage you to implement a rule for it (there are numerous examples around the web). A screenshot could shed some light on what is going on.
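A minimal sketch of such a rule for JUnit 4 with Selenium; the driver field and the screenshots directory are assumptions:

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.junit.Rule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public abstract class ScreenshotOnFailure {

    protected WebDriver driver; // assumed to be initialized in a @Before method

    @Rule
    public TestWatcher screenshotRule = new TestWatcher() {
        @Override
        protected void failed(Throwable e, Description description) {
            try {
                File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                Files.createDirectories(Paths.get("screenshots"));
                Files.copy(shot.toPath(),
                        Paths.get("screenshots", description.getMethodName() + ".png"));
            } catch (Exception ex) {
                ex.printStackTrace(); // never let the screenshot hide the real failure
            }
        }
    };
}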
It sounds like a performance issue though. When you run 1 test, there just isn't a lot of load on the system, and the frame is loaded fast enough for the test to locate it.
However, when you add in the whole suite, tests can sometimes run slower and steps could fail as a result.
It's possible that the failing step is actually the previous step. If the frame is supposed to load after clicking something, but the click action took place before the link was fully loaded (so the link was not actually clicked), then the frame won't be there and the test fails.
It wouldn't matter how long the failing step waited, as the previous step was really the one that failed.
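One way to harden both steps is to replace Thread.sleep with explicit waits. A sketch, where the locator, the frame name, and the Selenium 4 WebDriverWait signature are assumptions:

import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class PdfFrameSteps {
    static void openPdfAndSwitchToFrame(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(15));
        // Click only once the link is genuinely clickable, so the click is not lost.
        wait.until(ExpectedConditions.elementToBeClickable(By.id("openPdf"))).click();
        // Switch only once the frame exists; this condition also performs the switch.
        wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt("pdfViewerFrame"));
    }
}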
I've just been playing around for the first time with IntelliJ IDEA Community Edition. It's the first time I have worked with it, so if I'm missing something, please excuse me.
I have a bunch of unit tests which I run; however, when running them in IntelliJ (with the standard setup out of the box), I intermittently get the following error in the console:
03:14:17 Failed to start: 58 passed, 1 not started
I have searched the web but to no avail. If I run just the test that failed, it may or may not print out a similar error:
03:19:54 Failed to start: 0 passed, 1 not started
If I keep trying, eventually it works and tells me that all of my tests have passed.
The icon is not the usual error exclamation mark; it is a different error icon which I do not recognise. The error in the Event Log window appears as red text.
It always appears to happen with only one test, and it is always the same test for any given set of tests. I.e., in a different project the same issue also appears, but for a different test (though it's always the same one within each project or set of tests).
One more thing to note is that this ONLY happens when debugging and not when running, so it may have something to do with connecting the debugger?
It all works perfectly fine with Eclipse.
Any ideas what could be causing this?
The issue for me was Failed to start: 1, passed: 0. I'm using Spring Boot 2.4.0 with JUnit 5 to test a controller class. I just commented out the version tag in the junit-jupiter-engine dependency, and then it worked. Really strange. It might be helpful for someone.
I got the same error. It was something weird sent to System.out that made the IntelliJ IDEA test "not started".
I've created a ticket for IntelliJ IDEA, you can vote for it if you still encounter this problem.
In my case the problem was in the pom.
I had moved from a fully working application to a spring-boot implementation and only imported spring-boot-starter-test as a test dependency.
I solved it by excluding the JUnit part from spring-boot-starter-test and adding the latest JUnit dependency in a separate block.
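For illustration, the shape of that fix in the pom; the JUnit version below is an example, not a recommendation:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>5.8.2</version>
    <scope>test</scope>
</dependency>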
Sometimes a similar error happens with Scala code when you mix ScalaMock's MockFactory with ScalaTest's AsyncFlatSpec.
So be sure to use AsyncMockFactory, like below:
class ExampleSpec extends AsyncFlatSpec with AsyncMockFactory
Looks like this may have been a bug in IntelliJ; it has been raised with them.
I had this problem (in Android Studio, but it's a customised IntelliJ) and the reason was WHERE the cursor was when I ran tests using Ctrl-Shift-F10. Mine was sitting on the parameter-supplier method:
@Parameterized.Parameters
public static Collection data()
Once I moved the cursor into a test method or not inside any method, it worked.
I had the same issue: whatever the number of scenarios, it showed one extra scenario in the NOT STARTED state. I was using a Scenario Outline to run tests and had commented out rows in the Examples tables.
I later found out that commenting out the whole Examples table (which I didn't want to run), rather than commenting each row, resolved the issue.
I had the same issue, which puzzled me a little, in IntelliJ IDEA 2017.2.1. The test case ran without any recognizable errors or irregularities, but in the end JUnit reported the case as not started.
It turned out it was caused by trying to print to a PrintWriter that had already been closed.
In my case I was trying to mock a class that has a public static method. The problem was solved once everything was freed from the static context.
I came across not-started tests when attempting to test code that called System.exit(1). IntelliJ would not start my tests until I removed the exiting behavior, like this:
First, in the production code itself, I replaced every direct call to
System.exit(1)
with
onFailure.run();
where the Runnable is defined as
Runnable onFailure = () -> System.exit(1);
In the test code I then replaced the Runnable with a testable mock Runnable
Runnable mockOnFailure =
() -> {
throw new CustomError(
"Some descriptive message here.");
};
and then I expected that error to be thrown, like so (using AssertJ for nice assertion statements):
import static org.assertj.core.api.Assertions.assertThatExceptionOfType;
assertThatExceptionOfType(CustomError.class).isThrownBy(
() -> {
callingCodeThatCallsOnFailure();
}
);
Now all the tests are started by the IDE as desired.
Feel free to reuse that if it is of help to you. I do not claim any ownership or copyright to any of those lines of code.
I am trying to test a file upload field with JWebUnit, but I do not know how to do it. I see that JWebUnit has a dependency on commons-fileupload, so I expect that this is possible, but I can see nothing documenting it, so the feature may as well not exist. I have done some extensive searching and looking, and I think I might soon go as far as checking out the JWebUnit code for traces, but I'm still not sure how to get this done. How do I make sure that a file is added to the HTTP POST when the form submit button is clicked in the test? Thanks.
Okay, so as it turns out, after some searching through the source code I found a test on line 77 of a test file that basically explains how it works by doing it.
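The pattern that test demonstrates, as far as I can tell, is that a file input is filled like an ordinary text field, with a local file path as the value; JWebUnit then attaches the file to the multipart POST on submit. A sketch using JWebUnit's static API, where the URL, field name, file path, and success message are all assumptions:

import static net.sourceforge.jwebunit.junit.JWebUnit.*;

public class FileUploadTest {

    public void testUpload() {
        setBaseUrl("http://localhost:8080/app"); // assumed application URL
        beginAt("/upload-form");                 // page containing the upload form
        // Point the file input (assumed to be named "file") at a local file;
        // JWebUnit adds it to the multipart request when the form is submitted.
        setTextField("file", "/tmp/sample.txt");
        submit();
        assertTextPresent("Upload successful");  // assumed success message
    }
}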