I'm trying to test whether methods on a mocked object get called in the expected order. Below is a simplified example:
@Test
public void test() {
    List<String> mockedList = Mockito.mock(List.class);
    for (int i = 0; i < 5; i++) {
        mockedList.add("a");
        mockedList.add("b");
        mockedList.add("c");
    }
    // I want only this to pass.
    InOrder inOrder1 = Mockito.inOrder(mockedList);
    inOrder1.verify(mockedList).add("a");
    inOrder1.verify(mockedList).add("b");
    inOrder1.verify(mockedList).add("c");
    // I want this to fail.
    InOrder inOrder2 = Mockito.inOrder(mockedList);
    inOrder2.verify(mockedList).add("c");
    inOrder2.verify(mockedList).add("b");
    inOrder2.verify(mockedList).add("a");
}
Although the verification order (c -> b -> a) differs from the invocation order (a -> b -> c), this test passes. That is because Mockito only verifies that method2 is called somewhere AFTER method1; it does not verify that it is called immediately after (i.e., with no other call in between). As I'm adding elements multiple times, this is very much possible. Which means Mockito's InOrder passes for b -> a -> c -> a -> c -> b -> c -> b -> a ...
But I want this to fail, and make sure the order is always a -> b -> c -> a -> b -> c -> a -> b -> c ...
Update: the proper way is to verify the order for the same number of iterations (summary of the accepted answer):
for (int i = 0; i < 5; i++) {
    inOrder1.verify(mockedList).add("a");
    inOrder1.verify(mockedList).add("b");
    inOrder1.verify(mockedList).add("c");
}
// fail the test if we missed verifying any other invocations
inOrder1.verifyNoMoreInteractions();
The thing is that you need to add
inOrder.verifyNoMoreInteractions();
With your loop you generate calls like
add(a)
add(b)
add(c)
add(a)
add(b)
add(c)
When you then check
inOrder.verify(mockedList).add("b");
inOrder.verify(mockedList).add("c");
inOrder.verify(mockedList).add("a");
It matches three of the calls; the other calls are simply never checked:

add(a)
add(b)   <-- matched
add(c)   <-- matched
add(a)   <-- matched
add(b)
add(c)
So I think you have two options:
1) verify all calls a, b, c, a, b, c
2) verify that no more interactions happen to your mock
BTW if you change the verification to
inOrder.verify(mockedList).add("c");
inOrder.verify(mockedList).add("b");
inOrder.verify(mockedList).add("a");
it will fail as it does not match the calls :-)
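Combining both options, a sketch of the strict test (assuming JUnit 4 and Mockito on the classpath; the class and test names are invented):

import java.util.List;
import org.junit.Test;
import org.mockito.InOrder;
import org.mockito.Mockito;

public class StrictOrderTest {

    @Test
    @SuppressWarnings("unchecked")
    public void addsInStrictOrder() {
        List<String> mockedList = Mockito.mock(List.class);
        for (int i = 0; i < 5; i++) {
            mockedList.add("a");
            mockedList.add("b");
            mockedList.add("c");
        }

        InOrder inOrder = Mockito.inOrder(mockedList);
        // verify every single invocation, in order, for every iteration...
        for (int i = 0; i < 5; i++) {
            inOrder.verify(mockedList).add("a");
            inOrder.verify(mockedList).add("b");
            inOrder.verify(mockedList).add("c");
        }
        // ...and fail if any invocation was left unverified
        inOrder.verifyNoMoreInteractions();
    }
}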
A non-answer here: you are going down the wrong path (at least for the given example).
Meaning: when you create an "API", you want to achieve "easy to use, hard to mis-use". An API that requires methods to be called in a certain order doesn't achieve that. Thus: feeling the need to check for order programmatically could be an indication that you are doing "the wrong thing". You should rather design an API that "does the right thing" instead of expecting users of your code to do that for you.
Beyond that: when you are testing lists, you absolutely do not want to use mocking in the first place.
You want to make sure that elements get added to a list in a specific order? Then a simple
assertThat(actualList, is(expectedList));
is the one and only thing your test should check for!
Meaning: instead of testing an implementation detail (add() gets called with this and that parameter, in this and that order), you simply check the observable outcome of that operation. You don't care in which order things get added, and maybe reset and updated; you only care about the final result!
Given the comment by the OP: when you have to process certain calls/objects "in order", then you should design an interface that allows you to communicate that intent. You are only testing your intent via unit tests; that is of course a good start, but not sufficient!
Basically, there are two concepts that could work for you:
Sequence numbers: when objects come in sequentially, and order matters, then each object should receive a unique (ideally ascending) sequence number. And then each step that processes elements can simply remember the last sequence number that was processed, and if a lower one comes in, you throw an exception.
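A minimal sketch of the sequence-number idea (class and field names are made up):

// Reject any element whose sequence number is not strictly ascending
class SequencedProcessor {
    private long lastSeen = Long.MIN_VALUE;

    void process(long sequenceNumber, Object payload) {
        if (sequenceNumber <= lastSeen) {
            throw new IllegalStateException(
                    "Out of order: " + sequenceNumber + " after " + lastSeen);
        }
        lastSeen = sequenceNumber;
        // ... actual processing of payload ...
    }
}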
Sequences of "commands". The OP wants to make sure that method calls happen in order. That is simply not a helpful abstraction. Instead: one could create a Command class (that executes "something"), and then create different subclasses for each required activity. And then your processor simply creates a List<Command>. And now testing boils down to: generating such a sequence, and checking that each entry is of a given type.
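A sketch of the command idea (the concrete command classes are invented):

// Commands as objects, so the sequence itself becomes testable data
interface Command {
    void execute();
}

class LoginCommand implements Command {
    @Override public void execute() { /* log the user in */ }
}

class AddToCartCommand implements Command {
    @Override public void execute() { /* add the item to the cart */ }
}

A test then asks the processor for its List<Command> and merely asserts that the first entry is a LoginCommand, the second an AddToCartCommand, and so on.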
Related
I'm reading up about Java streams and discovering new things as I go along. One of the new things I found was the peek() function. Almost everything I've read on peek says it should be used to debug your Streams.
What if I had a Stream<Account> where each Account has username and password fields and login() and loggedIn() methods?
I also have
Consumer<Account> login = account -> account.login();
and
Predicate<Account> loggedIn = account -> account.loggedIn();
Why would this be so bad?
List<Account> accounts; // assume it's been set up

List<Account> loggedInAccount =
        accounts.stream()
                .peek(login)
                .filter(loggedIn)
                .collect(Collectors.toList());
Now as far as I can tell this does exactly what it's intended to do. It:
Takes a list of accounts
Tries to log in to each account
Filters out any accounts which aren't logged in
Collects the logged in accounts into a new list
What is the downside of doing something like this? Any reason I shouldn't proceed? Lastly, if not this solution then what?
The original version of this used the .filter() method as follows:

.filter(account -> {
    account.login();
    return account.loggedIn();
})
The important thing you have to understand is that streams are driven by the terminal operation. The terminal operation determines whether all elements have to be processed or any at all. So collect is an operation that processes each item, whereas findAny may stop processing items once it encountered a matching element.
And count() may not process any elements at all when it can determine the size of the stream without processing the items. Since this is an optimization not made in Java 8, but which will be in Java 9, there might be surprises when you switch to Java 9 and have code relying on count() processing all items. This is also connected to other implementation-dependent details, e.g. even in Java 9, the reference implementation will not be able to predict the size of an infinite stream source combined with limit while there is no fundamental limitation preventing such prediction.
Since peek allows “performing the provided action on each element as elements are consumed from the resulting stream”, it does not mandate processing of elements but will perform the action depending on what the terminal operation needs. This implies that you have to use it with great care if you need a particular processing, e.g. if you want to apply an action on all elements. It works if the terminal operation is guaranteed to process all items, but even then, you must be sure that the next developer doesn't change the terminal operation (and that you don't forget that subtle aspect yourself).
Further, while streams guarantee to maintain the encounter order for a certain combination of operations even for parallel streams, these guarantees do not apply to peek. When collecting into a list, the resulting list will have the right order for ordered parallel streams, but the peek action may get invoked in an arbitrary order and concurrently.
So the most useful thing you can do with peek is to find out whether a stream element has been processed which is exactly what the API documentation says:
This method exists mainly to support debugging, where you want to see the elements as they flow past a certain point in a pipeline
The key takeaway from this:
Don't use the API in an unintended way, even if it accomplishes your immediate goal. That approach may break in the future, and it is also unclear to future maintainers.
There is no harm in breaking this out to multiple operations, as they are distinct operations. There is harm in using the API in an unclear and unintended way, which may have ramifications if this particular behavior is modified in future versions of Java.
Using forEach on this operation would make it clear to the maintainer that there is an intended side effect on each element of accounts, and that you are performing some operation that can mutate it.
It's also more conventional, in the sense that peek is an intermediate operation which doesn't operate on the entire collection until the terminal operation runs, while forEach is indeed a terminal operation. This way, you can make strong arguments about the behavior and the flow of your code instead of wondering whether peek would behave the same as forEach does in this context.
accounts.forEach(a -> a.login());

List<Account> loggedInAccounts = accounts.stream()
        .filter(Account::loggedIn)
        .collect(Collectors.toList());
Perhaps a rule of thumb should be that if you do use peek outside the "debug" scenario, you should only do so if you're sure of what the terminating and intermediate filtering conditions are. For example:
return list.stream().map(foo -> foo.getBar())
           .peek(bar -> bar.publish("HELLO"))
           .collect(Collectors.toList());
seems to be a valid case where you want, in one operation, to transform all Foos to Bars and tell them all hello.
Seems more efficient and elegant than something like:
List<Bar> bars = list.stream().map(foo -> foo.getBar()).collect(Collectors.toList());
bars.forEach(bar -> bar.publish("HELLO"));
return bars;
and you don't end up iterating a collection twice.
A lot of answers made good points, and especially the (accepted) answer by Makoto describes the possible problems in quite some detail. But no one actually showed how it can go wrong:
[1]-> IntStream.range(1, 10).peek(System.out::println).count();
| $6 ==> 9
No output.
[2]-> IntStream.range(1, 10).filter(i -> i%2==0).peek(System.out::println).count();
| $9 ==> 4
Outputs numbers 2, 4, 6, 8.
[3]-> IntStream.range(1, 10).filter(i -> i > 0).peek(System.out::println).count();
| $12 ==> 9
Outputs numbers 1 to 9.
[4]-> IntStream.range(1, 10).map(i -> i * 2).peek(System.out::println).count();
| $16 ==> 9
No output.
[5]-> Stream.of(1, 2, 3, 4, 5, 6, 7, 8, 9).peek(System.out::println).count();
| $23 ==> 9
No output.
[6]-> Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9).stream().peek(System.out::println).count();
| $25 ==> 9
No output.
[7]-> IntStream.range(1, 10).filter(i -> true).peek(System.out::println).count();
| $30 ==> 9
Outputs numbers 1 to 9.
[1]-> List<Integer> list = new ArrayList<>();
| list ==> []
[2]-> Stream.of(1, 5, 2, 7, 3, 9, 8, 4, 6).sorted().peek(list::add).count();
| $7 ==> 9
[3]-> list
| list ==> []
(You get the idea.)
The examples were run in jshell (Java 15.0.2) and mimic the use case of converting data (replace System.out::println with list::add, for example, as also done in some answers) and returning how much data was added. The current observation is that any operation that could filter elements (such as filter or skip) seems to force handling of all remaining elements, but it need not stay that way.
I would say that peek provides the ability to decentralize code that can mutate stream objects, or modify global state (based on them), instead of stuffing everything into a simple or composed function passed to a terminal method.
Now the questions might be: should we mutate stream objects, or change global state based on them, from within functions in functional-style Java programming?

If the answer to either of those questions is yes (or: in some cases yes), then peek() is definitely not only for debugging purposes, for the same reason that forEach() isn't only for debugging purposes.

For me, choosing between forEach() and peek() comes down to the following: do I want the pieces of code that mutate stream objects (or change global state) to be attached to a composable function, or attached directly to the stream?
I think peek() will pair better with the Java 9 methods. For example, takeWhile() may need to decide when to stop iterating based on an already mutated object, so pairing it with forEach() would not have the same effect.
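For example, a sketch of that pairing (Job and its methods are made up for illustration):

// peek mutates each job; takeWhile decides, per element, whether to go on
List<Job> processed = jobs.stream()
        .peek(Job::process)            // side effect: mutates the job
        .takeWhile(Job::succeeded)     // Java 9+: stop at the first failed job
        .collect(Collectors.toList());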
P.S. I have not mentioned map() anywhere because, in the case where we want to mutate objects (or global state) rather than generate new objects, it works exactly like peek().
Although I agree with most answers above, I have one case in which using peek actually seems like the cleanest way to go.
Similar to your use case, suppose you want to filter only on active accounts and then perform a login on these accounts.
accounts.stream()
        .filter(Account::isActive)
        .peek(login)
        .collect(Collectors.toList());
peek helps avoid the redundant workaround below while not having to iterate the collection twice:
accounts.stream()
        .filter(Account::isActive)
        .map(account -> {
            account.login();
            return account;
        })
        .collect(Collectors.toList());
Despite the documentation note for .peek saying the "method exists mainly to support debugging", I think it has general relevance. For one thing, the documentation says "mainly", so it leaves room for other use cases. It has not been deprecated in years, and speculation about its removal is futile IMO.

I would say that in a world where we still have to handle side-effectful methods it has a valid place and utility. There are many valid operations in streams that use side effects. Many have been mentioned in other answers; I'll just add setting a flag on a collection of objects, or registering them with a registry, where the objects are then further processed in the stream. Not to mention creating log messages during stream processing.

I support the idea of having separate actions in separate stream operations, so I avoid pushing everything into a final .forEach. I favor .peek over an equivalent .map with a lambda whose only purpose, besides calling the side-effect method, is to return the passed-in argument. .peek tells me that what goes in also goes out as soon as I encounter this operation, and I don't need to read a lambda to find that out. In that sense it is succinct, expressive, and improves the readability of the code.

Having said that, I agree with all the considerations for using .peek, e.g. being aware of the effect of the terminal operation of the stream it is used in.
The functional solution is to make the account object immutable, so that account.login() must return a new account object. The map operation can then be used for the login instead of peek.
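A minimal sketch of that approach (using a record, so this assumes Java 16+, and an invented Account shape):

// Hypothetical immutable Account: login() returns a new, logged-in copy
record Account(String username, String password, boolean loggedIn) {
    Account login() {
        // ... perform the actual login ...
        return new Account(username, password, true);
    }
}

List<Account> loggedInAccounts = accounts.stream()
        .map(Account::login)           // map fits: a new object comes out
        .filter(Account::loggedIn)     // the record accessor doubles as a predicate
        .collect(Collectors.toList());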
To get rid of warnings, I use the functor tee, named after the Unix tee command:

public static <T> Function<T, T> tee(Consumer<T> after) {
    return arg -> {
        after.accept(arg);
        return arg;
    };
}
You can replace:
.peek(f)
with
.map(tee(f))
It seems like a helper class is needed:
public static class OneBranchOnly<T> {
    public Function<T, T> apply(Predicate<? super T> test,
                                Consumer<? super T> t) {
        return o -> {
            if (test.test(o)) t.accept(o);
            return o;
        };
    }
}
then replace peek with map:

.map(new OneBranchOnly<Account>().apply(
        account -> account.isTestAccount(),
        account -> account.setName("Test Account")))
Result: a collection of accounts in which only the test accounts got renamed (and no separate reference has to be maintained).
So suppose my application does lots of repetitive work, for example checking lots of various Lists for whether they're empty or not. There are two methods by which I can accomplish this functionality (there may be other methods, but since my goal is to understand the difference between the two methods and not the functionality itself, here we go):

Method 1 - Traditional Method
public boolean isEmptyOrNull(List list) {
    return list != null && !list.isEmpty();
}
Method 2 - Lambda Way
Assume we have created a functional interface named Demo with boolean isEmptyOrNull(List list) as its single method.
Demo var = list -> list != null && !list.isEmpty();
So each time I wish to check a list I can either use Method 1 or 2 by using isEmptyOrNull(myList) or var.isEmptyOrNull(myList) respectively.
My question is why I should use Method 1 and not Method 2, and vice versa. Is there some performance aspect, or some other aspect, to why I should prefer one method over the other?
Oof, where to start.
Your idea of what null is, is broken.
isEmptyOrNull is a code smell. You shouldn't have this method.
null is a stand-in value that, necessarily, can mean 'not initialised', because that's built into Java itself: any field that you don't explicitly set will be null. However, it is very common in APIs, even in java.* APIs, that null can also mean 'not found' (such as when you call map.get(someKeyNotInTheMap)), and sometimes also 'irrelevant in this context', such as asking a bootstrapped class for its ClassLoader.
It does not, as a rule, mean 'empty'. That's because there is a perfectly fine non-null value that does a perfect job representing empty. For strings, "" is the empty string, so use that, don't arbitrarily return null instead. For lists, an empty list (as easy to make as List.of()) is what you should be using for empty lists.
Assuming that null semantically means the exact same thing as List.of() is either unnecessary (the source of that list wouldn't be returning null in the first place, making the null check unnecessary) or, worse, will hide errors: you erroneously interpret 'uninitialized' as 'empty', which is a fine way to create a bug whose only symptom is your app silently doing nothing, making it very hard to find. It's much better if a bug loudly announces its presence and points exactly at the place in your code where it lives, which is why you want an exception instead of a 'silently do nothing when something is incorrect' style of bug.
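For illustration, a sketch of returning the empty list rather than null (Item and its methods are made up):

// Never return null to mean "empty"; callers then need no null check
List<String> tagsOf(Item item) {
    if (item.hasNoTags()) {
        return List.of(); // a perfectly fine non-null value representing "empty"
    }
    return item.tags();
}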
Your lambda code does not compile
Unless Demo is a functional interface that has the method boolean isEmptyOrNull(List list);, that is.
The difference
The crucial difference is that a lambda represents a method that you can reference. You can pass the lambda itself around as a parameter.
For example, java.util.TreeSet is an implementation of Set that stores all elements you put inside it in sorted order, by using a tree. It's like building a phonebook: to put "Ms. Bernstein" into the phonebook, you open the book to the middle, check the name there, and if it's 'above' 'Bernstein', look at the middle of the first half. Keep going until you find the place where Bernstein should be inserted; even in a phonebook of a million numbers, this only takes about 20 steps, which is why TreeSet is fast even if you put tons of stuff in there.
The one thing TreeSet needs to do its job is a comparison function: "Given the name 'Maidstone' and 'Bernstein', which one should be listed later in the phone book"? That's all. If you have that function then TreeSet can do its job regardless of the kind of object you store in it.
So let's say you want to make a phone book that first sorts on the length of names, and only then alphabetically.
This requires that you pass the function that decrees which of two names is 'after' the other. Lambdas make this easy:
Comparator<String> decider = (a, b) -> {
    if (a.length() < b.length()) return -1;
    if (a.length() > b.length()) return +1;
    return a.compareTo(b);
};

SortedSet<String> phonebook = new TreeSet<>(decider);
Now try to write this without using lambdas. You won't be able to, as you can't use method names like this. This doesn't work:
public int decider(String a, String b) {
    if (a.length() < b.length()) return -1;
    if (a.length() > b.length()) return +1;
    return a.compareTo(b);
}

public SortedSet<String> makeLengthBook() {
    return new TreeSet<String>(decider);
}
There are many reasons that doesn't work, but from a language-design point of view: because in Java you can have a method named decider and also a local variable named decider. You can write this::decider, which would work - that's just syntactic sugar for (a, b) -> this.decider(a, b) - and you should by all means use that where possible.
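For completeness, a sketch of that working this::decider form:

public int decider(String a, String b) {
    if (a.length() < b.length()) return -1;
    if (a.length() > b.length()) return +1;
    return a.compareTo(b);
}

public SortedSet<String> makeLengthBook() {
    // this::decider is sugar for (a, b) -> this.decider(a, b)
    return new TreeSet<>(this::decider);
}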
There is no advantage in your example. Lambdas are often used with streams, e.g. for mapping or filtering, by defining "ad-hoc functions" (that's what lambdas are).

Example: you have a list of strings named allStrings that you want to filter:
List<String> longStrings = allStrings.stream()
        .filter(s -> s.length() > 5)      // only keep strings longer than 5
        .collect(Collectors.toList());    // collect the stream into a new list
Lambdas were added to introduce functional programming to Java. They can be used as a shorthand for implementing single-method interfaces (functional interfaces). In the example you provided there is not much advantage, but lambdas are useful in scenarios like the one below.
Before lambdas
public interface Calc {
    int doCalc(int a, int b);
}

public class MyClass {
    public static void main(String[] args) {
        Calc x = new Calc() {
            @Override
            public int doCalc(int a, int b) {
                return a + b;
            }
        };
        System.out.println(x.doCalc(2, 3));
    }
}
But with lambdas this could be simplified to
public class MyClass {
    public static void main(String[] args) {
        BiFunction<Integer, Integer, Integer> doCalc = (a, b) -> a + b;
        System.out.println(doCalc.apply(2, 3));
    }
}
This is especially helpful for implementing event listeners (e.g. in Android), where the API provides a lot of interfaces with single methods like onClick. In such cases lambdas reduce the code.
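For instance, a one-line listener with Android's View API (handleClick() is a made-up method):

// OnClickListener is a functional interface, so a lambda fits
button.setOnClickListener(v -> handleClick());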
Also, with Java 8, streams were introduced, and lambdas can be passed to filter/map the stream elements. Streams allow more readable code than a traditional for loop / if-else in most cases.
Simplifying a bit, our system has two parts: "our" part, which in turn uses a lower-level part implemented by another team (in the same codebase). We have a fairly complicated functional-test setup in which we wrap the entry points to the lower level in spy objects. In positive tests we use the real implementation of that level, but we mock calls that should fail with some predefined error.

Now I am trying to add support for more complicated scenarios, where I would like to add an artificial delay to the calls made to the underlying level (on a fake clock, obviously). To do this I would like to define a mock that would (1) call the real implementation and (2) take the resulting Future object and combine it with a custom function that injects the delay accordingly. So ideally I would like to have something like:
doAnswer(invocationOnMock -> {
    result = call real method on mySpy;
    return Futures.combine(result, myFunction);
}).when(mySpy).myMethod();
How can I achieve it?
As for me, the easiest way is just to save a reference to the real object when you initialize your spy object:
Foo realFoo = new Foo();
Foo spyFoo = Mockito.spy(realFoo);
Now you can stub it like this:
doAnswer(invocation -> realFoo.getSome() + "spyMethod").when(spyFoo).getSome();
One more way is to call invocation.callRealMethod():
doAnswer(invocation -> invocation.callRealMethod() + "spyMethod").when(spyFoo).getSome();
But in this case you may need to cast the return value, since invocation.callRealMethod() returns Object.
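Applied to the original question, a sketch (assuming Guava's ListenableFuture; Result, delayInjector, and executor are invented names) could look like:

doAnswer(invocation -> {
    @SuppressWarnings("unchecked")
    ListenableFuture<Result> real =
            (ListenableFuture<Result>) invocation.callRealMethod();
    // wrap the real future so the delay gets injected into its result
    return Futures.transform(real, delayInjector, executor);
}).when(mySpy).myMethod();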
I have started working on a BDD project using JBehave. I need to build a sequence of methods to be called based on the "Given" and "When" steps, and finally execute them in "Then". A sample would be:
Given a user logins as a premium user
When he adds an item to the cart
Then he gets a special discount
For the above scenario, I would have to build the method call sequence based on "Given" / "When" and then execute it in "Then".
E.g.
List<Executable> sequenceList;

@Given
public void execGiven() {
    A a = new A();
    a.call1();
    a.call2();
    B b = new B();
    b.call3();
}

@When
public void execWhen() {
    C c = new C();
    c.call4();
    // ...few more methods
}

@Then
public void execThen() {
    // Add some methods to the list of executables
    D d = new D();
    d.call5();
    // assert that everything was successful
}
The problem I am facing is that the framework we are using (in-built and in use) cannot partially execute the method call sequences in each step of a story; rather, I have to execute them as one whole sequence (from a.call1 to d.call5). Another issue is that I don't want to hardcode the method calls for each step; instead I want to call them based on some config at runtime.
My approach: instead of running these methods (a.call1, a.call2) in each step, add them to a list of methods and execute them in "Then" using reflection. Also, annotate each method with something like @Sequence(step = "Login", sequenceId = 1) so that at runtime I can build the list of calls to be made.
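A minimal sketch of such an annotation (as proposed above; the annotation itself is hypothetical):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Runtime-visible so the call sequence can be assembled via reflection
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Sequence {
    String step();
    int sequenceId();
}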
What would be a good approach, considering that any change to the method call sequence should be as painless as possible? I had a few approaches in mind:
Use annotations to wire the sequence
Use xml to wire the sequence
Use a text file to specify the sequence (almost the same as above)
Is there any better approach to build the graph at runtime and execute the sequence? And what are the drawbacks of each?
Thank You
I'm having some difficulty with how to correctly design and implement two methods.

I have 2 methods, A and B. Method A does 2 things and method B does only 1 thing. One of the things that method A does is the same as what method B does, so it is very reasonable to call method B from within method A.

Now the problem is that both methods need to send exactly one email to the user. When I invoke A, I want to receive 1 email, and when I invoke B, I also want to get 1 email. This means that if I call method B inside A, I will get 2 emails for 1 action (invoking A). To make it even more tempting to simply include B in A, the setup procedure is truly the same for both methods, so instead of redoing the setup in B, I could simply call method B with the setup data from A. But I have no setup data to provide when calling B directly, so in that case I'd need to do the setup anyway.
What is the best approach to solve such a problem? Should I:
Add a parameter to method B saying whether it should send the email or not, and another parameter for the setup data
Keep the two methods separated, because method B is not doing exactly the same thing in the two contexts
... other suggestions?
PS. I'm not sure if stackoverflow is actually suitable for such a question, let me know if there's a better stackexchange platform for this.
Thanks for any ideas
I would suggest you split out the functionality a little further.
Currently you have two methods that might be described something like this:
void MethodA()
{
    DoThing1();
    DoThing2();
    SendEmail();
}

void MethodB()
{
    DoThing1();
    SendEmail();
}
So one fairly simple answer is to extract the actual functional bits out of the two methods into methods of their own and leave behind a shell similar to the above. Each of the DoThingX methods can return whatever it is you need for building the email, etc.
Of course if the DoThingX methods are really small - a couple of lines or so, for instance - then it might not make sense to break them out this way.
I don't know if this is truly on-topic either. That said...
It seems to me that "sending email" is one of the things that is included in the method B work that method A needs done. As such, why not just implement it in method B? Then method A gets it for free when it calls method B.
If the exact contents of the email are different depending on whether method A was called or not, then sure…you can add a parameter to method B to customize the email in some way.
Finally, you're pretty vague on the details. It's not really clear just how much code you're saving by calling method B from method A. If it's significant, then I'm a strong proponent of code sharing like this. But if we're just talking a single statement, well...that seems less worth bothering with; maybe just putting that same single statement in each method is better.
Sorry for the vague answer. GIGO. :)
You could use the return values of the methods as the content of your email:
private String funcA() {
    // foo
    return "Mail from funcA";
}

private String funcB() {
    funcA(); // ignoring the return value here; we later return our own
    // foo
    return "Mail from funcB"; // our own return
}
Then you send the email outside of these functions (just example code):

private void mainFunc() {
    String mail = condition ? funcA() : funcB();
    sendMail(mail);
}
I have two ideas:
create a method C that represents the common functionality of A and B, and invoke C in both A and B with different parameters
invoke B inside A, but extract the mailing functionality into a separate method C
To clarify the first idea:
methodA() {
    firstAction();
    secondAction(parameterFromA);
}

methodB() {
    secondAction(parameterFromB);
}
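And a sketch of the second idea, with the mailing extracted (all names invented):

void methodA() {
    firstAction();
    coreOfB();                 // reuse B's work, without its email
    sendMail("Mail for A");
}

void methodB() {
    coreOfB();
    sendMail("Mail for B");
}

private void coreOfB() { /* B's actual work; no email is sent here */ }

private void sendMail(String content) { /* the one place that sends email */ }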