IntelliJ keeps track of test (JUnit) history, which is very nice...
But it registers failing tests as 'problems', which get flagged by the commit checks. I want to commit non-working tests at the end of the day, so I want to be able to clear the test history.
However, I can't find that option anywhere, nor does 'delete' work on the selected test run.
How do I clear all of the test history?
In Settings | Editor | Inspections, select "Failed line in test" and turn the severity down to "Weak Warning".
A workaround would be to comment out the non-working tests, then run the suite again and commit with only working tests.
Also put a TODO in the comments of the non-working tests, so that you know you have to check them later on.
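For example, a minimal sketch of that workaround (the test names here are made up):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ExampleTest {

    @Test
    public void workingTest() {
        assertEquals(4, 2 + 2); // still runs, so the commit check stays green
    }

    // TODO: fix and re-enable this test later
    // @Test
    // public void nonWorkingTest() {
    //     assertEquals(5, 2 + 2);
    // }
}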
I am running unit tests on Java code using IntelliJ and JUnit. The unit tests were working fine, and they still are . . . until I run in debug mode. Today, when I run in debug mode, all of a sudden they start iterating through Java files that are installed with Java, that I didn't write, and that I don't have permission for, like the following:
This is part of the Java code base that I don't have any control over, and I didn't set any breakpoint here. Yet it pauses there and makes me click through it to get past it. I wouldn't care if this were only a couple of additional clicks, but I have clicked about 50 times and it still keeps going through base Java code that I have no control over and that is not what is throwing any problem or issue.
I tried changing the settings for code coverage, but that didn't seem to do anything. Is there any way to get JUnit to stop only at breakpoints that I myself specified? Any help here would be appreciated. I didn't see a similar question on Stack Overflow, and the material on other sites is all about crafting the unit test itself.
So crazy coder (see above) was correct, but I thought I would add (after painfully trying every other alternative) that you have to go to Run | View Breakpoints and then scroll all the way down in the left-side panel (which you may not notice if you have tons of breakpoints, like I did). At the bottom there are breakpoints for Java exceptions; you need to switch those OFF.
When I'm running my tests on JUnit 5, the test report automatically collapses between one parameterized test and another during execution:
For example, when the 1st device is running, the run report is open like that. When the first test finishes and the second run starts, all these lines collapse and I need to right-click "TestCases" at the top and click the "Expand All" option.
Not sure if this is clear, I wish I could record a gif, but I'd like to know if there is an option so JUnit does not collapse the executions automatically, because sometimes I want to follow the execution in real time and see if any of the tests got an error, and not just at the end of the execution.
JUnit is just sending out "events" to your IDE about tests being started and finished. JUnit has no control over how the IDE will animate this stream of events.
So I recommend that you open a feature request for your IDE vendor.
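For what it's worth, this is roughly what that event stream looks like on the JUnit Platform side (a minimal sketch; the class name is made up and it assumes junit-platform-launcher on the classpath):

import org.junit.platform.engine.TestExecutionResult;
import org.junit.platform.launcher.TestExecutionListener;
import org.junit.platform.launcher.TestIdentifier;

// The IDE registers a listener like this with the Launcher. JUnit only reports
// the events; how they are rendered (or collapsed) is entirely up to the IDE.
public class LoggingListener implements TestExecutionListener {

    @Override
    public void executionStarted(TestIdentifier test) {
        System.out.println("started:  " + test.getDisplayName());
    }

    @Override
    public void executionFinished(TestIdentifier test, TestExecutionResult result) {
        System.out.println("finished: " + test.getDisplayName() + " -> " + result.getStatus());
    }
}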
I have some Pax Exam tests. To execute a test normally, I just run the JUnit class in Eclipse. If I want to step through the code in the Eclipse debugger, I have to make it set the debug options, including the flag that makes it wait for the debugger connection, which is a separate process I have to run. I'm presently having this code check for a "debug" system property to enable this, but that's sort of annoying.
It would be really nice if the @Configuration method could look at a system property or some other condition that will always be true if the code is executing in the debugger, so I could use that as a trigger to enable those flags, instead of a manually set "debug" system property.
I've already tried setting a breakpoint at the top of this method and inspecting all the system properties for something that might be set while in the debugger, but I didn't see anything.
Update:
Just to be clear, I need to point out some details about how Pax Exam tests work, to better explain why I'm looking for a way to improve this process. When the test runs, it forks a Karaf container to run the test in. In order to run a test in the debugger, you have to force the code that runs in the container to set the "suspend=y" flag, which will wait for a debugger connection. You definitely don't want to do that if you're not debugging.
After starting the Pax Exam test, you then have to run another debug configuration to make a remote connection to the Karaf container. Technically, the run configuration for the unit test itself doesn't need to be a debug configuration.
So, the easiest way to make this happen is to have the code that initiates the container check for a "debug" system property (or whatever you want to call it) and, when that is set, to set the debugger port and the "suspend=y" flag. If the property is not set, it doesn't do that.
So, if you're running the test without debug, you have to make sure that system property is not set. If you're debugging, you have to make sure it's set. It's an annoyance to have to edit the run configuration each time you need to go back and forth.
So, what I was intending was to start the unit test run configuration as a debug configuration (even though it doesn't need to be), and for the code that starts the Karaf container to detect that it's being run as a debug configuration, and set the "suspend=y" flag in that case.
I've concluded that there is no way for the code itself to detect this, but I'll detail in my own answer how I get the debugger to help me a bit.
I'm going to self-answer to address my original problem, although it isn't quite the answer to my original question, which the first answer does attempt to address. That answer doesn't help me, however.
My real need was to be able to run my Pax Exam test so that when I first run the unit test, which runs the "server" portion of the Pax Exam test, it will know to provide the correct "-Xdebug" parameters to the server if I'm going to be using the debugger, and NOT to if I'm not using the debugger. I have code that checks for the "debug" system property and uses that to set the correct "-Xdebug" parameters, but I don't want to have to manually add or remove that parameter from the run configuration each time I change how I'm running the test (going between debugging and not debugging).
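That check looks roughly like this (a sketch rather than my exact code; it assumes the Karaf container's KarafDistributionOption.debugConfiguration helper, and the class name and port 5005 are arbitrary choices):

import org.ops4j.pax.exam.Configuration;
import org.ops4j.pax.exam.Option;

import static org.ops4j.pax.exam.OptionUtils.combine;
import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.debugConfiguration;

public class MyKarafTest {

    @Configuration
    public Option[] config() {
        Option[] base = new Option[] {
            // distribution, features, bundles, ...
        };
        // Only make the container wait for a debugger ("suspend=y")
        // when the test was launched with -Ddebug=true.
        if (Boolean.getBoolean("debug")) {
            return combine(base, debugConfiguration("5005", true));
        }
        return base;
    }
}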
So, as far as I can see, the best I can do is make it so that when I run the "server" portion of the unit test in the debugger (which otherwise doesn't actually have to be in the debugger, as it's only the client side that needs it), this will cause the system property I'm checking for to be set, so it will set the correct flags.
I'm not aware of any feature in Eclipse that lets me run particular predefined snippets of code when I start any debugging session (I mean "any", not a "particular" debugging session), but there is something that comes close, even though it's a bit of a hack.
What I did was set a breakpoint at the top of the method that sets the karaf configuration to be started, and I made the breakpoint conditional, with the following expression:
(System.setProperty("debug", "true") != null) && false
This will set the system property I need but then not stop, because the trailing "&& false" makes the whole condition evaluate to false while the setProperty call still runs as a side effect.
Technically, it doesn't even need to be in this method; it just has to be hit before the Karaf options are set.
This stays as a workspace setting, so I don't need to re-add this every time I start Eclipse.
Update:
With the upgrade to Oxygen, this can be slightly simpler, with the new "tracepoints" feature (https://www.eclipse.org/eclipse/news/4.7/jdt.php#toggle-trace-point). Just "Toggle Tracepoint" and set the expression to 'System.setProperty("debug", "true")'.
If you are debugging an Eclipse application, there is a way: you can use Platform.isDebug() to check whether your application is using the debug or run launch configuration.
In your case, i.e. checking in a JUnit test run, I am not aware of how to detect whether we are using a run or debug launch config. But I strongly believe there must be a way to find it out with code, like what you suggested using the System class.
Why can't you pass your own argument/environment variable in the JUnit debug launch config (with -Dname=value in the Arguments tab, or in the Environment tab) and use it in the test code to detect this?
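In other words, something like this (a minimal sketch; the property name "debug" matches the one used elsewhere in this thread):

// Debug launch configuration only: add -Ddebug=true under the Arguments tab.
// The plain run configuration omits it, so the test code can tell them apart.
boolean launchedForDebugging = Boolean.getBoolean("debug");
if (launchedForDebugging) {
    // enable the debugger port and the "suspend=y" flag here
}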
I've just been playing around for the first time with IntelliJ IDEA Community Edition; it's the first time I have worked with it, so if I'm missing something, please excuse me.
I have a bunch of unit tests which I run; however, when running them in IntelliJ (with the standard out-of-the-box setup), I intermittently get the following error in the console:
03:14:17 Failed to start: 58 passed, 1 not started
I have searched the web but to no avail. If I run just the test that failed, it may or may not print out a similar error:
03:19:54 Failed to start: 0 passed, 1 not started
If I keep trying, eventually it works and tells me that all of my tests have passed.
The icon is not the usual error exclamation mark; it is a different error icon, which I do not recognise. The error in the Event Log window appears as red text.
It always appears to happen with only one test, and it is always the same test for any given set of tests. I.e. in a different project the same issue also appears, but for a different test (though it's always the same one within each project or set of tests).
One more thing to note is that this ONLY happens when debugging and not when running, so it may be something to do with connecting the debugger?
It all works perfectly fine with Eclipse.
Any ideas what could be causing this?
The issue for me was "Failed to start: 1, passed: 0". I'm using Spring Boot 2.4.0 with JUnit 5 to test a controller class. I just commented out the version tag in the junit-jupiter-engine dependency (presumably letting Spring Boot's dependency management pick a compatible version), and then it worked. Really strange. It might be helpful for someone.
I got the same error. It was something weird sent to System.out that made IntelliJ IDEA mark the test as "not started".
I've created a ticket for IntelliJ IDEA; you can vote for it if you still encounter this problem.
In my case the problem was in the pom.
I had moved from a fully working application to a Spring Boot implementation and only imported spring-boot-starter-test as a dependency for testing.
I solved it by excluding the junit part from spring-boot-starter-test and adding a junit dependency of the latest version in a separate block.
A similar error sometimes happens with Scala code when you mix ScalaMock's MockFactory with ScalaTest's AsyncFlatSpec.
So, be sure to use AsyncMockFactory, like below.
import org.scalatest.flatspec.AsyncFlatSpec, org.scalamock.scalatest.AsyncMockFactory // ScalaTest 3.1+ paths
class ExampleSpec extends AsyncFlatSpec with AsyncMockFactory
Looks like this may have been a bug in IntelliJ; it has been raised with them.
I had this problem (in Android Studio, but it's a customised IntelliJ) and the reason was WHERE the cursor was when I ran the tests using CTRL-SHIFT-F10. The cursor happened to be inside the parameter-supplying method:
@Parameterized.Parameters
public static Collection data()
Once I moved the cursor into a test method, or not inside any method at all, it worked.
I had the same issue. Whatever the number of scenarios, it was showing 1 extra scenario in the NOT STARTED state. I was using a Scenario Outline to run tests and had commented out rows in the Examples tables.
I later found out that commenting out the whole Examples table (the one I didn't want to run), rather than commenting out each row, resolved the issue.
I had the same issue, and it cracked me up a little in IntelliJ IDEA 2017.2.1. The test case ran without any recognizable errors or irregularities, but in the end JUnit reported the case as not started.
I figured out it was caused by trying to print to a PrintWriter that had already been closed.
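For reference, a closed PrintWriter fails silently, which is why nothing obvious showed up (a minimal sketch):

import java.io.PrintWriter;
import java.io.StringWriter;

public class ClosedWriterDemo {
    public static void main(String[] args) {
        StringWriter buffer = new StringWriter();
        PrintWriter out = new PrintWriter(buffer);
        out.close();
        out.println("lost");                   // no exception is thrown
        System.out.println(out.checkError());  // prints "true": the only hint of trouble
    }
}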
In my case I was trying to mock a class that has a public static method. The problem was solved once everything was freed from the static context.
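The kind of refactoring meant here looks roughly like this (a sketch using Mockito as an example; all names are made up):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Before: a public static method, which plain Mockito cannot mock.
//   class Lookup { public static String resolve(String key) { ... } }

// After: an instance method behind an interface, injected into the caller.
interface Lookup {
    String resolve(String key);
}

class LookupUsageSketch {
    void example() {
        Lookup lookup = mock(Lookup.class);           // now mockable
        when(lookup.resolve("id")).thenReturn("42");
        // pass 'lookup' into the code under test instead of calling a static method
    }
}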
I came across not-started tests when attempting to test code that called System.exit(1). IntelliJ would not start my tests until I removed the exiting behavior, like this:
At first, in the code itself, I replaced all direct calls to
System.exit(1)
with
onFailure.run();
where the default behavior is
Runnable onFailure = () -> System.exit(1);
In the test code I then replaced the Runnable with a testable mock Runnable
Runnable mockOnFailure = () -> {
    throw new CustomError("Some descriptive message here.");
};
and then I expected that Error to be thrown, like so (using AssertJ for nice assertion statements):
import static org.assertj.core.api.Assertions.assertThatExceptionOfType;

assertThatExceptionOfType(CustomError.class).isThrownBy(
    () -> {
        callingCodeThatCallsOnFailure();
    }
);
Now the tests are all being started by the IDE as desired.
Feel free to reuse that if it is of help to you. I do not claim any ownership or copyright to any of those lines of code.
I'm wondering something about debugging and what it means.
I'm currently developing a program that watches a directory; when something changes in the directory, it runs all the tests that it can find in that same directory.
So I tested what happens if you change the tests. I find that when I have 20 tests that should fail and I change one of them to succeed, the program finds and runs all the tests but still reports 20 failed tests. It doesn't use the new version of the test, which is slightly odd.
Now, when I go through the program with a debugger, it does detect the new test!
How come the results change when using a debugger? It is the default debugger of Eclipse.
The program watches the directory using a WatchService and runs and collects the tests using JUnit.
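For context, the core of such a program looks something like this (a minimal sketch using JUnit 4's JUnitCore; the nested test class is a made-up stand-in). Note that runClasses reuses whatever Class objects the running JVM has already loaded, so a recompiled test is not picked up without a fresh classloader, which might be relevant to the behaviour described:

import java.nio.file.*;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class DirectoryTestRunner {

    // Made-up stand-in for the tests discovered in the watched directory.
    public static class MyTests {
        @org.junit.Test
        public void passes() { org.junit.Assert.assertTrue(true); }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Paths.get(args[0]);
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = watcher.take();   // blocks until the directory changes
            key.pollEvents();                // drain the pending events
            Result result = JUnitCore.runClasses(MyTests.class);
            System.out.println(result.getFailureCount() + " of "
                    + result.getRunCount() + " tests failed");
            key.reset();                     // re-arm the key for further events
        }
    }
}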
I shall answer your question as I understand it.
The debugger in Eclipse is designed for working on complex applications, like GUIs or apps like your own.
The debugger allows you to change the code of a program in real time. It is designed so that you can edit an application and see what you're doing with it, i.e. changing a window size, etc.
Each debugger run is a clean build. Try cleaning the builds of your Eclipse project; this may be what is creating the problem, as a normal run may otherwise be reusing stale build output.