We have a task in our Gradle build file called integration which extends Test and is used to run all of our integration tests. Our integration tests take quite some time to run because we have, as you might expect, quite a few. And several of them can run for up to 10 minutes because of some lengthy DB interactions.
Right now, all I can see while they are running is > Building > :integration, and it sits at that point for... a very long time. I'm often not sure whether it's just in the middle of a bunch of long tests or hanging on something it really shouldn't be hanging on.
So I was wondering: is there a way to get them to run verbosely, so I can see which test we are actually on? I just want the command line to spit out something like:
Running test <testName>: Started...Complete (231 sec)
I've looked through the Gradle documentation and haven't seen anything that shows how this might be done. Granted, I am a little new at this, so... does anyone have an idea how this could be done? I would prefer a flag that just does it, but if it takes some scripting I'm willing to learn. Just point me in that direction...
Have a look at this DSL doc. In general, what you need is to make use of the beforeTest and afterTest closures. You can keep each test's start time in a map.
def times = [:]

test {
    beforeTest { descriptor ->
        times[descriptor.name] = System.currentTimeMillis()
        println "$descriptor.name started"
    }
    afterTest { descriptor, result ->
        def elapsed = System.currentTimeMillis() - times[descriptor.name]
        println "$descriptor.name finished in $elapsed ms ($result.resultType)"
    }
}
I'm not sure if this is synchronous; you'll need to try it.
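As a side note, if per-test start/finish output is enough (closer to the "flag" the question asks for), Gradle can do this out of the box via the testLogging block on any Test task. A minimal sketch, assuming your integration task extends Test as described:

integration {
    testLogging {
        // print an event as each test starts, passes, fails, or is skipped
        events "started", "passed", "failed", "skipped"
    }
}

This won't print per-test durations, though; for timing you still want the beforeTest/afterTest approach above.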
I have a pipeline which uses a Flink global window with a custom Trigger based on event time (taken from a timestamp on each arriving element) and an Evictor which cuts unnecessary elements from the window and passes the rest to the ProcessFunction,
something like:
public SingleOutputStreamOperator<Results> processElements(DataStream<Elements> inputStream) {
    return inputStream
        .keyBy(Elements::getId)
        .window(GlobalWindows.create())
        .trigger(new CustomTrigger())
        .evictor(new CustomEvictor())
        .process(new MyWindowProcessFunction())
        .name("Process")
        .uid("process-elements")
        .returns(Results.class);
}
public void executePipelineFlow(StreamExecutionEnvironment env) throws Exception {
    DataStream<Elements> inputStream = getInputStream(env);
    DataStream<Results> processedInput = processElements(inputStream);
    applySink(processedInput);
}
I know I can test MyWindowProcessFunction with a TestHarness, which provides watermark manipulation, but I need to test the whole flow: Trigger + Evictor + ProcessFunction.
I also tried some kind of timed SourceFunction using Thread.sleep(), but my pipeline works in event time, so this won't work if I have 1000 elements in the test stream (the test would take a couple of hours).
My question is: how can I unit-test my whole processElements method?
I can't find any test examples for my case.
Thanks
You might look at how the end-to-end integration tests for the windowing exercise in the Flink training are implemented as an example. This exercise isn't using GlobalWindows or custom triggering, etc, but you can use this overall approach to test any pipeline.
The one thing that's maybe less than ideal about this approach is how it handles watermarking. The applications being tested use the default periodic watermarking strategy, wherein watermarks are generated every 200 ms. Since the tests don't run that long, the only watermark that's actually generated is the one that comes at the end of every job with bounded input. This works, but isn't quite the same as what will happen in production. (Is this why you were thinking of having your test source sleep between events?)
BTW, these tests in the Flink training repo are made slightly more complex than would ordinarily be necessary, because they are used to provide coverage for both the Java and Scala implementations of the exercises and solutions.
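To make that concrete, here is a minimal sketch of such an end-to-end test, assuming Flink 1.12+ (for WatermarkStrategy and executeAndCollect) and JUnit; the pipeline holder class, the Elements constructor and accessors, and the fixture values are all hypothetical stand-ins for your own code:

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.junit.Test;

public class ProcessElementsTest {

    @Test
    public void firesWindowOnFinalWatermark() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        // Bounded test input with explicit event-time timestamps (hypothetical fixtures).
        DataStream<Elements> input = env
            .fromElements(new Elements(1L, 1_000L), new Elements(1L, 2_000L))
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Elements>forMonotonousTimestamps()
                    .withTimestampAssigner((element, recordTs) -> element.getTimestamp()));

        // Run the real Trigger + Evictor + ProcessFunction chain.
        DataStream<Results> output = new MyPipeline().processElements(input);

        // When the bounded input is exhausted, Flink emits a final MAX_WATERMARK,
        // which fires any pending event-time triggers -- no Thread.sleep() needed.
        List<Results> results = new ArrayList<>();
        output.executeAndCollect().forEachRemaining(results::add);

        // assert on results here
    }
}

Note that, as described above, this only exercises the end-of-input watermark, which is a slightly different watermark pattern than production.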
We are seeing frequent timing issues in our nightly UI tests. The tests often fail because events performed by the java.awt.Robot have not completed before the test code tries to verify the results.
We are using code like:
Point p = cb.getLocationOnScreen();
int m = 5;  // margin (in pixels) inside the component to click at
if (cb.getWidth() < 5 || cb.getHeight() < 5) {
    m = 3;  // smaller margin for very small components
}
System.out.println("Click at " + (p.x + m) + "," + (p.y + m));
robot.mouseMove(p.x + m, p.y + m);
robot.mousePress(MouseEvent.BUTTON1_MASK);
robot.mouseRelease(MouseEvent.BUTTON1_MASK);
robot.waitForIdle();
Thread.sleep(100);
// Verify results...
We keep having to bump up the Thread.sleep() to ensure things complete on the event thread (things like clicking a button or typing text), despite the java.awt.Robot.waitForIdle() call.
I found this question (Does java.awt.Robot.waitForIdle() wait for events to be dispatched?) which says to use sun.awt.SunToolkit.realSync(), but that is not an accessible method, and with Java 9 coming I'd rather not add any unnecessary reflection to our tests.
Are there better solutions? Or do people use realSync() or just increase the wait time until tests pass reliably?
UPDATE
I tried using sun.awt.SunToolkit.realSync(), but it is hanging in some tests and never returning. It looks like the EventThread is painting borders and such.
Looks like my only solution is to bump the sleep time up until the test can actually pass reliably. Yuck.
UPDATE 2
I figured out the first hang I had with realSync(). It was a problem in our code, where some paint code called a get method that called a set method which queued up another paint. Repeat forever.
Fixed our code, and realSync() worked for a while, sort of. It still seems to return before it should. No idea why, and I have no workaround.
Also, I've seen realSync() hang and time out on my Linux box running under Java 1.7, but it works under Java 1.8. Very reliable tool here. /s
So, back the to original question. What is a decent way to tell when UI updates are done?
I came to the conclusion that I did need to use SunToolkit.realSync(), and it seems to work correctly for Java 9 as well.
It seems, although I couldn't find any hard evidence, that realSync() waits for all graphics-related threads, while Robot.waitForIdle() and SwingUtilities.invokeLater() only wait for the Java event thread to finish its work.
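For anyone trying the same thing, this is roughly how to call it; a minimal sketch, assuming you can compile against the sun.awt package (on Java 9+ that likely means --add-exports java.desktop/sun.awt=ALL-UNNAMED):

import java.awt.Toolkit;
import sun.awt.SunToolkit;

final class UiSync {
    // Blocks until, as far as realSync() can tell, pending native events
    // and the AWT/Swing event queues have been processed.
    static void waitForUiToSettle() {
        Toolkit toolkit = Toolkit.getDefaultToolkit();
        if (toolkit instanceof SunToolkit) {
            ((SunToolkit) toolkit).realSync();
        }
    }
}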
If someone comes up with a better answer, I'd be willing to accept that instead of my own.
I have a lot of acceptance tests that run with CucumberRunner. When it starts, I have no idea how many scenarios are left to be executed. Is there any way I can print something like this in the terminal:
Scenario(10 of 1000) Call A() will add new object
Scenario(11 of 1000) Call B() will open new window
....
etc
Regards
Try using the QAF Gherkin client. It has a live-reporting feature which not only shows you the total vs. completed test count but also lets you see a detailed report of the completed tests during execution.
I'm currently running some Cucumber tests with Selenium that hinge on preconditions in the system. A given run can include one or more Features. Features check for certain preconditions at the start of the run, for instance whether or not the proper driver.exe file can be found. If some of those preconditions fail, I would like to kill the run completely inside of a catch block and prevent any other Scenario or Feature from being checked, as they will all fail anyway. Is there a function or set of functions to accomplish this?
try {
    // Gonna check for things here
} catch (Exception e) {
    // Something went wrong, kill this thread.
}
I would consider a Before hook in Cucumber. It will be executed before each scenario, which means the check is performed every time. If you need to, set a static flag that you can examine so you can fail fast.
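A rough sketch of that idea; the class name and driver path are hypothetical, the io.cucumber.java package is assumed (older versions use cucumber.api.java instead), and recent Cucumber versions treat a failed JUnit assumption as a skipped scenario, so check how yours reports it:

import io.cucumber.java.Before;
import org.junit.Assume;

public class PreconditionHooks {

    // Cached across scenarios so the check itself only runs once per JVM.
    private static Boolean preconditionsOk;

    @Before
    public void verifyPreconditions() {
        if (preconditionsOk == null) {
            try {
                // e.g. verify the driver executable exists (hypothetical check)
                preconditionsOk = new java.io.File("drivers/chromedriver.exe").exists();
            } catch (Exception e) {
                preconditionsOk = false;
            }
        }
        // Every scenario after a failed check is skipped instead of run to failure.
        Assume.assumeTrue(preconditionsOk);
    }
}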
I have 100 test methods. After each test, I need to perform some actions (data cleanup). Each of these 100 tests has different cleanup actions. These 100 tests are not in one package or class; they are distributed.
How can I achieve this?
Right now, if a test passes, the cleanup happens, since it is part of the test. However, if the test fails, the cleanup doesn't happen. How can I make this work?
Any pointers would help.
If the tests do not have any common cleanup, you can ensure the test gets cleaned up from within the test method using a try/finally block, something like:
try {
    // do test
} finally {
    // do cleanup
}
If there is any common cleanup between the test methods, you could use @AfterMethod to do the cleanup.
In your case, it doesn't sound like there is much common cleanup, so the first approach may work better for you. It might also be worth considering whether you really need 100 different cleanup methods or whether some setup/cleanup can be shared.
@AfterMethod would mean that every class needs this method, so you would have to go and edit each class/method. The same goes for @AfterGroups.
What I would suggest is to implement the IInvokedMethodListener. This would give you beforeInvocation and afterInvocation methods. In the afterInvocation method, implement your cleanup code.
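A minimal sketch of such a listener; how you map each test to its own cleanup is up to you (the lookup here is just a stub):

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class CleanupListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult result) {
        // nothing needed before the test runs
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult result) {
        // Runs after every test method, whether it passed or failed.
        if (method.isTestMethod()) {
            String testName = method.getTestMethod().getMethodName();
            // look up and run the cleanup registered for testName (your code here)
        }
    }
}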
Create a suite file with all of your tests which need this cleanup and specify this listener.
Hope it helps.
It sounds like you may already be using @AfterMethod to clean up after the tests. To make @AfterMethod run even after a failure, you need to use:
@AfterMethod(alwaysRun = true)
You can use groups and run an @AfterGroups method somewhere. There's a @BeforeGroups as well. Setting it up with build tooling is a bit tedious, and there are some interactions with IDEs as well. There's a @BeforeSuite and @AfterSuite as well, I believe.
An alternative could be to use Spring and share the same Spring context in all your tests (the context gets reused that way). You can then do some things when the context is destroyed after your tests.
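A minimal sketch of that idea, assuming a bean registered in the shared test context; Spring's test framework typically closes cached contexts when the JVM shuts down, so the destroy callback runs once after all tests:

import org.springframework.beans.factory.DisposableBean;
import org.springframework.stereotype.Component;

@Component
public class SuiteWideCleanup implements DisposableBean {

    @Override
    public void destroy() {
        // global cleanup here; runs once when the shared context is closed
    }
}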