I want to test something for a while, say 5 seconds, and then pass the test if no assertion has failed in that time. Is this possible with annotations? Can something like @Test(uptime=5000) be used?
Revised answer after question was edited
Fundamentally it feels like you're testing the wrong thing here - it seems very odd for "nothing happening" to be a sign of success.
If you want to prove that your algorithm can run for a certain amount of time without failing, I would actually extract out a single cycle, then write a test of something like:
@Test
public void fineForFiveSeconds() {
    long start = System.nanoTime();
    long end = start + TimeUnit.SECONDS.toNanos(5);
    while (System.nanoTime() < end) {
        test.executeOneIteration();
    }
}
This way you don't have a separate thread which has to kill the working code, etc.
Original answer
This answer was written before the question indicated that timing out was a sign of success, not failure.
I think you just want the timeout attribute in the @Test annotation:
@Test(timeout = 5000)
with documentation:
Optionally specify timeout in milliseconds to cause a test method to fail if it takes longer than that number of milliseconds.
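For completeness, here is a self-contained JUnit 4 sketch of that attribute; MyAlgorithm and runOnce() are placeholders for the code under test, not anything from the question:

import org.junit.Test;

public class AlgorithmTimeoutTest {

    // Fails the test if it takes longer than 5000 ms. Note that this treats the timeout
    // as a failure, which is the opposite of what the edited question asks for.
    @Test(timeout = 5000)
    public void finishesWithinFiveSeconds() {
        new MyAlgorithm().runOnce();
    }
}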
Quick question regarding how to build a visual for a specific condition of a Java counter in Grafana, please.
Currently, I have a small, straightforward piece of Java code.
private String question(MeterRegistry meterRegistry) {
    if (someCondition()) {
        Counter.builder("theCounter").tags("GOOD", "GOOD").register(meterRegistry).increment();
        return "good";
    } else {
        LOGGER.warn("it is failing, we should increment failure");
        Counter.builder("theCounter").tags("FAIL", "FAIL").register(meterRegistry).increment();
        return "fail";
    }
}
As you can see, it is very simple: if the condition is met, increment the GOOD counter; if not, increment the FAIL counter.
I am interested in building a dashboard for the failures only.
When I query my /prometheus endpoint I successfully see:
myCounter_total{FAIL="FAIL",} 7.0
myCounter_total{GOOD="GOOD",} 3.0
Hence, I started using this query:
myCounter_total{_ws_="workspace",_ns_="namespace",_source_="source"}
Unfortunately, this query is giving me the visual for everything, the GOOD and the FAIL. In my example, I see all 10 counters, while I just want to see the 7 failures.
I tried putting
myCounter_total{FAIL="FAIL",_ws_="workspace",_ns_="namespace",_source_="source"}
{{FAIL}}
But no luck.
May I ask what I missed, please?
Create only one counter for this case and give it a label named status, for example. Then, depending on whether the good or the fail condition occurs, increment your counter with the string "GOOD" or "FAIL" as the value of the status label, for example like in this pseudo code:
Counter myCounter =
    Counter.build().name("myCounter").help("This is my counter").labelNames("status").register();

if (someCondition()) {
    myCounter.labels("GOOD").increment();
} else {
    myCounter.labels("FAIL").increment();
}
Now you should be able to see a query output like this:
myCounter_total{status="FAIL"} 7.0
myCounter_total{status="GOOD"} 3.0
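Since the question uses Micrometer's Counter.builder(...) API rather than the Prometheus simpleclient, a rough Micrometer equivalent might look like the following sketch; the meter name, someCondition() and meterRegistry come from the question, everything else is an assumption:

// One logical counter, distinguished by a "status" tag.
if (someCondition()) {
    Counter.builder("myCounter").tag("status", "GOOD").register(meterRegistry).increment();
} else {
    Counter.builder("myCounter").tag("status", "FAIL").register(meterRegistry).increment();
}

With that in place, a query such as myCounter_total{status="FAIL"} should return only the failure series.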
For visualization purposes, if you'd like to see two graphs, one for the good and one for the bad cases, you could use something like the following. This single query on myCounter_total returns its data, and the Legend takes care of separating the values/graphs by their different status label values.
Query: myCounter_total
Legend: {{status}}
I am working on a small game project and want to track time in order to process physics. After looking through different approaches, I first decided to use Java's Instant and Duration classes and have now switched over to Guava's Stopwatch implementation. However, in my snippet, both of those approaches show a big gap at the second call of runtime.elapsed(). That doesn't seem like a big problem in the long run, but why does it happen?
I have tried running the code below as both in focus and as a Thread, in Windows and in Linux (Ubuntu 18.04) and the result stays the same - the exact values differ, but the gap occurs. I am using the IntelliJ IDEA environment with JDK 11.
Snippet from Main:
public static void main(String[] args) {
    MassObject[] planets = {
        new Spaceship(10, 0, 6378000)
    };
    planets[0].run();
}
This is part of my class MassObject extends Thread:
public void run() {
    // I am using StringBuilder to eliminate flushing delays.
    StringBuilder output = new StringBuilder();
    Stopwatch runtime = Stopwatch.createStarted();
    // massObjectList = static List<MassObject>;
    for (MassObject b : massObjectList) {
        if (b != this) calculateGravity(this, b);
    }
    for (int i = 0; i < 10; i++) {
        output.append(runtime.elapsed().getNano()).append("\n");
    }
    System.out.println(output);
}
Stdout:
30700
1807000
1808900
1811600
1812400
1813300
1830200
1833200
1834500
1835500
Thanks for your help.
You're calling Duration.getNano() on the Duration returned by elapsed(), which isn't what you want.
The internal representation of a Duration is a number of seconds plus a nano offset for whatever additional fraction of a whole second there is in the duration. Duration.getNano() returns that nano offset, and should almost never be called unless you're also calling Duration.getSeconds().
The method you probably want to be calling is toNanos(), which converts the whole duration to a number of nanoseconds.
Edit: In this case that doesn't explain what you're seeing because it does appear that the nano offsets being printed are probably all within the same second, but it's still the case that you shouldn't be using getNano().
The actual issue is probably some combination of classloading or extra work that has to happen during the first call, and/or JIT improving performance of future calls (though I don't think looping 10 times is necessarily enough that you'd see much of any change from JIT).
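To make the getNano()/toNanos() distinction concrete, here is a small self-contained sketch (the values are just illustrative):

import java.time.Duration;

public class DurationDemo {
    public static void main(String[] args) {
        Duration d = Duration.ofSeconds(2).plusNanos(345);

        // getNano() only reports the fractional-second part of the duration:
        System.out.println(d.getSeconds()); // 2
        System.out.println(d.getNano());    // 345

        // toNanos() converts the whole duration to nanoseconds:
        System.out.println(d.toNanos());    // 2000000345
    }
}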
Recently, I was writing a plugin in Java and found that retrieving an element (using get()) from a HashMap is very slow the first time. Originally, I wanted to ask a question about that and found this (no answers though). With further experiments, however, I noticed that this phenomenon also happens with ArrayList, and in fact with all methods.
Here is the code:
public class Test {
    public static void main(String[] args) {
        long startTime, stopTime;
        // Method 1
        System.out.println("Test 1:");
        for (int i = 0; i < 20; ++i) {
            startTime = System.nanoTime();
            testMethod1();
            stopTime = System.nanoTime();
            System.out.println((stopTime - startTime) + "ns");
        }
        // Method 2
        System.out.println("Test 2:");
        for (int i = 0; i < 20; ++i) {
            startTime = System.nanoTime();
            testMethod2();
            stopTime = System.nanoTime();
            System.out.println((stopTime - startTime) + "ns");
        }
    }

    public static void testMethod1() {
        // Do nothing
    }

    public static void testMethod2() {
        // Do nothing
    }
}
Snippet: Test Snippet
The output would be like this:
Test 1:
2485ns
505ns
453ns
603ns
362ns
414ns
424ns
488ns
325ns
426ns
618ns
794ns
389ns
686ns
464ns
375ns
354ns
442ns
404ns
450ns
Test 2:
3248ns
700ns
538ns
531ns
351ns
444ns
321ns
424ns
523ns
488ns
487ns
491ns
551ns
497ns
480ns
465ns
477ns
453ns
727ns
504ns
I ran the code a few times and the results are about the same. The first call would be even longer (>8000 ns) on my computer (Windows 8.1, Oracle Java 8u25).
Apparently, the first call is usually slower than the following calls (some calls may be longer in random cases).
Update:
I tried to learn some JMH and wrote a test program:
Code w/ sample output: Code
I don't know whether it's a proper benchmark (if the program has some problems, tell me), but I found that the first warm-up iteration takes more time (I use two warm-up iterations in case the warm-ups affect the results). And I think the first warm-up iteration corresponds to the first call and is slower. So this phenomenon exists, if the test is proper.
So why does it happen?
You're calling System.nanoTime() inside a loop. Those calls are not free, so in addition to the time taken for an empty method you're actually measuring the time it takes to exit from nanoTime call #1 and to enter nanoTime call #2.
To make things worse, you're doing that on Windows, where nanoTime performs worse than on other platforms.
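You can get a rough feel for that overhead with a quick sketch like this (not a rigorous benchmark, just an illustration):

public class NanoTimeOverhead {
    public static void main(String[] args) {
        // Two back-to-back calls: the difference is roughly the cost of leaving one
        // nanoTime call and entering the next, which is baked into every measurement above.
        long t1 = System.nanoTime();
        long t2 = System.nanoTime();
        System.out.println("nanoTime-to-nanoTime gap: " + (t2 - t1) + " ns");
    }
}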
Regarding JMH: I don't think it's much help in this situation. It's designed to measure by averaging many iterations, to avoid dead code elimination, to account for JIT warmup, to avoid ordering dependence, and so on, and as far as I know it simply uses nanoTime under the hood too.
Its design goals pretty much aim for the opposite of what you're trying to measure.
You are measuring something. But that something might be several cache misses, nanoTime call overhead, some JVM internals (class loading? some kind of lazy initialization in the interpreter?), ... probably a combination thereof.
The point is that your measurement can't really be taken at face value. Even if there is a certain cost for calling a method for the first time, the time you're measuring only provides an upper bound for that.
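For reference, a typical JMH benchmark is shaped like the sketch below (this assumes the org.openjdk.jmh dependency is on the classpath and is only meant to illustrate the point): JMH reports an average over many invocations, which is exactly why a one-off first-call cost disappears in its numbers.

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Warmup;

public class EmptyMethodBenchmark {

    @Benchmark
    @Fork(1)
    @Warmup(iterations = 5)
    @Measurement(iterations = 5)
    public void emptyMethod() {
        // Deliberately empty: JMH measures the averaged per-invocation overhead,
        // not the cost of the very first call.
    }
}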
This kind of behaviour is often caused by the compiler or the runtime environment, which starts to optimize the execution after the first iteration. Additionally, class loading can have an effect (I guess this is not the case in your example code, as all classes are loaded by the first loop at the latest).
See this thread for a similar problem.
Please keep in mind this kind of behaviour is often dependent on the environment/OS it's running on.
I saw the following code in this commit for MongoDB's Java Connection driver, and it appears at first to be a joke of some sort. What does the following code do?
if (!((_ok) ? true : (Math.random() > 0.1))) {
    return res;
}
(EDIT: the code has been updated since posting this question)
After inspecting the history of that line, my main conclusion is that there has been some incompetent programming at work.
That line is gratuitously convoluted. The general form
a ? true : b
for boolean a, b is equivalent to the simple
a || b
The surrounding negation and excessive parentheses convolute things further. Keeping De Morgan's laws in mind, it is a trivial observation that this piece of code amounts to
if (!_ok && Math.random() <= 0.1)
    return res;
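Spelling out the simplification step by step:

// !((_ok) ? true : (Math.random() > 0.1))    original condition
// !(_ok || Math.random() > 0.1)              since a ? true : b  is  a || b
// !_ok && !(Math.random() > 0.1)             De Morgan
// !_ok && Math.random() <= 0.1               negate the comparison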
The commit that originally introduced this logic had
if (_ok == true) {
    _logger.log( Level.WARNING , "Server seen down: " + _addr, e );
} else if (Math.random() < 0.1) {
    _logger.log( Level.WARNING , "Server seen down: " + _addr );
}
This is another example of incompetent coding, but notice the reversed logic: here the event is logged if _ok is true, or else in 10% of the remaining cases, whereas the convoluted line quoted above returns in 10% of the cases and logs in 90% of them. So the later commit ruined not only clarity, but correctness itself.
I think that in the code you have posted we can actually see how the author intended to transform the original if-then somewhat literally into its negation, as required for the early-return condition. But then he messed up and inserted an effective "double negative" by reversing the inequality sign.
Coding style issues aside, stochastic logging is quite a dubious practice all by itself, especially since the log entry does not document its own peculiar behavior. The intention is, obviously, to reduce restatements of the same fact: that the server is currently down. The appropriate solution is to log only changes of the server state, not each observation of it, let alone a random selection of 10% of such observations. Yes, that takes just a little bit more effort, so let's see some.
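For example, here is a minimal sketch of state-change logging (the field and method names are made up for this example, not the driver's actual code, and races are ignored for brevity):

private volatile boolean _lastReportedOk = true;

private void reportServerState(boolean ok, Exception e) {
    if (ok == _lastReportedOk) {
        return; // state unchanged, nothing new to log
    }
    _lastReportedOk = ok;
    if (ok) {
        _logger.log(Level.INFO, "Server back up: " + _addr);
    } else {
        _logger.log(Level.WARNING, "Server seen down: " + _addr, e);
    }
}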
I can only hope that all this evidence of incompetence, accumulated from inspecting just three lines of code, does not speak fairly of the project as a whole, and that this piece of work will be cleaned up ASAP.
https://github.com/mongodb/mongo-java-driver/commit/d51b3648a8e1bf1a7b7886b7ceb343064c9e2225#commitcomment-3315694
gareth-rees commented:
Presumably the idea is to log only about 1/10 of the server failures (and so avoid massively spamming the log), without incurring the cost of maintaining a counter or timer. (But surely maintaining a timer would be affordable?)
Add a class member initialized to negative 1:
private int logit = -1;
In the try block, make the test:
if (!ok && (logit = (logit + 1) % 10) == 0) { // log error
This always logs the first error, then every tenth subsequent error. Logical operators "short-circuit", so logit only gets incremented on an actual error.
If you want the first and then every tenth of all errors, regardless of the connection, make logit static instead of an instance member.
As has been noted, this should be made thread-safe:
private synchronized int getLogit() {
    return (logit = (logit + 1) % 10);
}
In the try block, make the test:
if (!ok && getLogit() == 0) { // log error
Note: I don't think throwing out 90% of the errors is a good idea.
I have seen this kind of thing before.
There was a piece of code that could answer certain 'questions' that came from another 'black box' piece of code. In cases where it could not answer them, it would forward them to yet another piece of 'black box' code that was really slow.
So sometimes previously unseen new 'questions' would show up, and they would show up in a batch, like 100 of them in a row.
The programmer was happy with how the program was working, but he wanted some way of improving the software in the future, if possible, as new questions were discovered.
So the solution was to log unknown questions, but as it turned out, there were thousands of different ones. The logs got too big, and there was no benefit in speeding these up, since they had no obvious answers. But every once in a while, a batch of questions would show up that could be answered.
Since the logs were getting too big, and the logging was getting in the way of logging the really important things, he arrived at this solution:
Only log a random 5%; this will clean up the logs while, in the long run, still showing what questions/answers could be added.
So, if an unknown event occurred, in a random amount of these cases, it would be logged.
I think this is similar to what you are seeing here.
I did not like this way of working, so I removed this piece of code and just logged these messages to a different file, so they were all present but not clobbering the general logfile.
So I have worked through the Money example in Kent Beck's book Test-Driven Development: By Example and have been able to get the code to work up until the last test that he writes:
@Test
public void testPlusSameCurrencyReturnsMoney() {
    Expression sum = Money.dollar(1).plus(Money.dollar(1));
    assertTrue(sum instanceof Money);
}
and here is the function that this calls
public Expression plus(Expression addend) {
    return new Sum(this, addend);
}
When I run this, it gives java.lang.AssertionError, so my question is why is it giving this error and how do I fix it?
Lunivore has already answered the question of how to solve the problem, but I think you should re-read the paragraphs just before and after the block of code (and test) if you want to understand more of what Beck was trying to convey.
The last sentence reads "Here is the code we would have to modify to make it work:". That block of code was first entered on page 75 (with the test case). In the end, nothing was changed on page 79. It was just an indication of what we could change if we wanted to keep this test.
"There is no obvious, clean way to check the currency of the argument if and only if it is Money. The experiment fails, we delete the test, and away we go".
He also stated that this test is ugly, and concluded on the following page: "Tried a brief experiment, then discarded it when it didn't work out".
I wrote this just in case you were thinking that all of the examples just work and should be kept.
You're checking that the sum variable is a Money, but returning a Sum in the plus method.
So, unless Sum is a subclass of Money, that assertion will always fail.
To make it pass, you might want to do something like:
public Expression plus(Expression addend) {
    return new Money(...<whatever>...);
}
Of course, Money would then have to be an Expression too.
Or you might want to evaluate the sum to get the money out of it. Or maybe even do sum instanceof Sum instead. It depends on what behavior you're actually trying to achieve.
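For illustration only, here is a minimal sketch of the first option; the class shapes are assumptions for this example, not Beck's exact code:

interface Expression {
}

class Money implements Expression {
    private final int amount;
    private final String currency;

    Money(int amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    static Money dollar(int amount) {
        return new Money(amount, "USD");
    }

    public Expression plus(Expression addend) {
        // Simplification for the sketch: assume the addend is a Money in the same currency,
        // so plus() can return a Money directly and the instanceof assertion passes.
        Money other = (Money) addend;
        return new Money(this.amount + other.amount, this.currency);
    }
}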
By the way, beware the instanceof operator.