Optional<?> as parameter - java

I have the following methods:
Optional<Book> findBook(UUID bookId);
void doSomething(Book book, SomeOtherStuff someOtherStuff);
Since it's not recommended to have methods that take an Optional as a parameter, I'm forced to do this:
doSomething(optionalBook.orElse(null), someOtherStuff)
The method parseBook contains a null check.
This seems wrong, but I'm not sure what else I can do.
Example updated.
Thanks

One of the reasons for not using an Optional<T> as an input parameter is that you will likely have to construct it on-the-fly just to pass it to that method.
Even if you now have an Optional<Book> ready to use from findBook, future changes to the code may mean the Optional gets unwrapped and you end up with a naked Book you'll have to re-wrap. That would be annoying, at least.
The solution with unwrapping the Optional when calling the doSomething method is, I think, the best quick fix available.
Ideally, you want to prevent situations like this from occurring, as they cause confusion and could lead to bugs.
Depending on the semantics, the complexity of your doSomething and the importance of the Book parameter you could:
Create a doSomething(Book book, SomeOtherStuff someOtherStuff) and a doSomething(SomeOtherStuff someOtherStuff). This is useful if doSomething can be called standalone, without a Book, from other parts of the code (a sketch of this overload follows after this list).
Split doSomething into three parts: before doing Book things, doing Book things, and "the rest of the processing". This can have various benefits, especially when processing many items. It may be that instead of doing (A, B, C), (A, B, C), ..., (A, B, C) it's better to do (A, A, ..., A), (B, B, ..., B), (C, C, ..., C) or (A, A, ..., A), (B, C), (B, C), ..., (B, C).
Do nothing. It doesn't seem like more than a minor annoyance and making changes will cost time and risk introducing bugs.
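A minimal sketch of the first option (the overload), assuming doSomething can meaningfully run without a Book; the bodies here are only placeholders:
void doSomething(Book book, SomeOtherStuff someOtherStuff) {
    // do the Book-specific work first...
    doSomething(someOtherStuff);
}

void doSomething(SomeOtherStuff someOtherStuff) {
    // ...then the part that does not need a Book
}

// Call site: no orElse(null) needed anymore
if (optionalBook.isPresent()) {
    doSomething(optionalBook.get(), someOtherStuff);
} else {
    doSomething(someOtherStuff);
}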

With the following code you'll only call parseBook when the Optional<Book> isn't empty:
BookInfo info = optionalBook
.map(book -> parseBook(book))
.orElse(null);

Related

Verify mock methods got called in exact order using Mockito.inOrder

I'm trying to test whether the methods on a mocked object get called in the expected order. Below is a simplified example:
@Test
public void test() {
    List<String> mockedList = Mockito.mock(List.class);
    for (int i = 0; i < 5; i++) {
        mockedList.add("a");
        mockedList.add("b");
        mockedList.add("c");
    }
    // I want only this to pass.
    InOrder inOrder1 = Mockito.inOrder(mockedList);
    inOrder1.verify(mockedList).add("a");
    inOrder1.verify(mockedList).add("b");
    inOrder1.verify(mockedList).add("c");
    // I want this to fail.
    InOrder inOrder2 = Mockito.inOrder(mockedList);
    inOrder2.verify(mockedList).add("c");
    inOrder2.verify(mockedList).add("b");
    inOrder2.verify(mockedList).add("a");
}
Although the verification order (c -> b -> a) is different from the invocation order (a -> b -> c), this test passes. This is because Mockito's InOrder only verifies that method2 is called somewhere AFTER method1, not immediately after it (i.e., with no other call in between). As I'm adding elements multiple times, this is very much possible, which means Mockito's InOrder passes for b -> a -> c -> a -> c -> b -> c -> b -> a ...
But I want this to fail, and make sure the order is always a -> b -> c -> a -> b -> c -> a -> b -> c ...
Update: The proper way is to verify the order for the same number of iterations (summary of the accepted answer):
for (int i = 0; i < 5; i++) {
inOrder1.verify(mockedList).add("a");
inOrder1.verify(mockedList).add("b");
inOrder1.verify(mockedList).add("c");
}
// fail the test if we missed verifying any other invocations
inOrder1.verifyNoMoreInteractions();
The thing is that you need to add
inOrder.verifyNoMoreInteractions();
With your loop you generate calls like
add(a)
add(b)
add(c)
add(a)
add(b)
add(c)
When you then check
inOrder.verify(mockedList).add("b");
inOrder.verify(mockedList).add("c");
inOrder.verify(mockedList).add("a");
It matches the calls shown below; the other calls are simply not checked.
add(a)
add(b)   <-- matches verify add("b")
add(c)   <-- matches verify add("c")
add(a)   <-- matches verify add("a")
add(b)
add(c)
So I think you have two options:
1) verify all calls a, b, c, a, b, c
2) verify that no more interactions happen to your mock
BTW if you change the verification to
inOrder.verify(mockedList).add("c");
inOrder.verify(mockedList).add("b");
inOrder.verify(mockedList).add("a");
it will fail as it does not match the calls :-)
A non-answer here: you are going down the wrong path (at least for the given example):
Meaning: when you create an "API", you want to achieve "easy to use, hard to mis-use". An API that requires methods to be called in a certain order doesn't achieve that. Thus: feeling the need to check for order programmatically could be an indication that you are doing "the wrong thing". You should rather design an API that "does the right thing" instead of expecting users of your code to do that for you.
Beyond that: when you are testing lists, you absolutely do not want to use mocking in the first place.
You want to make sure that elements get added to a list in a specific order? Then a simple
assertThat(actualList, is(expectedList));
is the one and only thing your test should check for!
Meaning: instead of testing an implementation detail (add() gets called with this and that parameter, in this and that order), you simply check the observable outcome of that operation. You don't care in which order things get added, and maybe reset and updated; you only care about the final result!
Given the comment by the OP: when you have to process certain calls/objects "in order", then you should design an interface that allows you to communicate that intent. With unit tests you are only testing your intent. That is of course a good start, but not sufficient!
Basically, there are two concepts that could work for you:
Sequence numbers: when objects come in sequentially and order matters, each object should receive a unique (ideally ascending) sequence number. Each step that processes elements can then simply remember the last sequence number that was processed, and if a lower one comes in, you throw an exception (see the sketch after this list).
Sequences of "commands". The OP wants to make sure that method calls happen in order. That is simply not a helpful abstraction. Instead: one could create a Command class (that executes "something"), and then create different subclasses for each required activity. And then your processor simply creates a List<Command>. And now testing boils down to: generating such a sequence, and checking that each entry is of a given type.

Benefits of Functional decomposition and currying

I have a function, calculate(String A, int B), in legacy code:
Double calculate(String A, int B) {
    if (A.equals("something")) { return B * 1.02; }
    if (A.equals("some")) { return B * 1.0; }
    return (double) B;
}
The calculation applied on B depends on the value of A.
In functional style I can break this into:
Function<String, Function<Integer, Double>> strategyA = a -> {
    if (a.equals("something")) return b -> b * 1.02;
    if (a.equals("some")) return b -> b * 1.0;
    return b -> (double) b;
};
Then instead of calling calculate(a,b) I would call
strategyA.apply(a).apply(b)
Is the second style better than the first one? As per my understanding, this involves the Strategy pattern, functional decomposition and currying.
If the second approach is indeed better, how would you convince someone?
In Java, the preferred way of delivering a named piece of code is and remains the method. There is no reason to express a method as a function just to have "more functional style". The reason why function support was added to Java is that you sometimes want to pass a reference to the code to another method or function. In this case, the receiving method defines the required function signature, not the code you're going to encapsulate.
So your calculate method may be referred to as an ObjIntConsumer<String>, if the receiving method only wants to pass pairs of String and int to it without being interested in the result. Otherwise, it may use a custom functional interface expressing the (String,int) → Double signature. Or when you accept boxing, BiFunction<String,Integer,Double> will do.
Currying allows you to reuse existing interfaces like Function to express functions with multiple arguments when no built-in interface exists. But given the resulting generic signature, which will appear in at least one place in the Java code using such a curried function, readability suffers a lot, so defining a new functional interface will in most cases be preferred over currying…
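To make the signature point concrete, here is a small sketch (the class name and the method body are made up, loosely following the question's calculate; the curried version shows the generic signature mentioned above):
import java.util.function.BiFunction;
import java.util.function.Function;

public class SignatureDemo {

    // stand-in for the question's calculate(String, int)
    static double calculate(String a, int b) {
        return a.equals("something") ? b * 1.02 : b;
    }

    public static void main(String[] args) {
        // an existing functional interface fits the (String, int) -> Double shape,
        // at the price of boxing
        BiFunction<String, Integer, Double> asBiFunction = SignatureDemo::calculate;

        // the curried equivalent; the generic signature is already noticeably noisier
        Function<String, Function<Integer, Double>> curried =
                a -> b -> calculate(a, b);

        System.out.println(asBiFunction.apply("something", 100));
        System.out.println(curried.apply("something").apply(100));
    }
}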
For other programming languages that have a different syntax and type inference for functional types (or real function types in the first place, rather than settling on functional interfaces), this will be quite different.
I agree with Holger that in most cases, it does not make sense to write code using functions just for the sake of using functional programming. Functions are just an additional tool that lets you write code such as collection processing in a nicer way.
There is one interesting thing about your example though, which is that you take the String parameter a, perform some computation and then return another function. This can sometimes be useful if the first operation takes a long time:
Function<String, Function<Integer, Double>> f = a -> {
    if (someLongComputation(a)) return b -> b * 1.02;
    if (someOtherLongComputation(a)) return b -> b * 1.0;
    return b -> (double) b;
};
When you then invoke f with the String argument, it will run someLongComputation and someOtherLongComputation and return the desired function:
Function<Integer,Double> fast = f.apply("Some input"); // Slow
Double d1 = fast.apply(123); // Fast!
Double d2 = fast.apply(456); // Fast!
If f was an ordinary method, then calling it twice as f("Some input", 123) and
f("Some input", 456) would be slower, because you'd run the expensive computations twice. Of course, this is something you can handle without functional programming too, but it is one place where returning a function actually fits quite nicely.

Monads with Java 8

In the interests of helping to understand what a monad is, can someone provide an example using Java? Are they possible?
Lambda expressions are possible using Java if you download the pre-release lambda-compatible JDK8 from here http://jdk8.java.net/lambda/
An example of a lambda using this JDK is shown below; can someone provide a comparably simple monad?
public interface TransformService {
int[] transform(List<Integer> inputs);
}
public static void main(String[] args) {
    TransformService transformService = (inputs) -> {
        int[] ints = new int[inputs.size()];
        int i = 0;
        for (Integer element : inputs) {
            ints[i] = element;
            i++;
        }
        return ints;
    };
    List<Integer> inputs = new ArrayList<Integer>(5) {{
        add(10);
        add(10);
    }};
    int[] results = transformService.transform(inputs);
}
Just FYI:
The proposed JDK8 Optional class does satisfy the three Monad laws. Here's a gist demonstrating that.
All it takes to be a Monad is to provide two functions which conform to three laws.
The two functions:
Place a value into monadic context
Haskell's Maybe: return / Just
Scala's Option: Some
Functional Java's Option: Option.some
JDK8's Optional: Optional.of
Apply a function in monadic context
Haskell's Maybe: >>= (aka bind)
Scala's Option: flatMap
Functional Java's Option: flatMap
JDK8's Optional: flatMap
Please see the above gist for a java demonstration of the three laws.
NOTE: One of the key things to understand is the signature of the function to apply in monadic context: it takes the raw value type, and returns the monadic type.
In other words, if you have an instance of Optional<Integer>, the functions you can pass to its flatMap method will have the signature (Integer) -> Optional<U>, where U is a value type which does not have to be Integer, for example String:
Optional<Integer> maybeInteger = Optional.of(1);
// Function that takes Integer and returns Optional<Integer>
Optional<Integer> maybePlusOne = maybeInteger.flatMap(n -> Optional.of(n + 1));
// Function that takes Integer and returns Optional<String>
Optional<String> maybeString = maybePlusOne.flatMap(n -> Optional.of(n.toString()));
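For the laws themselves, here is a small self-contained check using Optional (it only shows the laws holding for these particular non-null values; all names are illustrative):
import java.util.Optional;
import java.util.function.Function;

public class OptionalMonadLaws {
    public static void main(String[] args) {
        Function<Integer, Optional<Integer>> f = n -> Optional.of(n + 1);
        Function<Integer, Optional<String>> g = n -> Optional.of(Integer.toString(n));
        int x = 1;
        Optional<Integer> m = Optional.of(x);

        // left identity: unit(x).flatMap(f) equals f(x)
        System.out.println(Optional.of(x).flatMap(f).equals(f.apply(x)));

        // right identity: m.flatMap(unit) equals m
        System.out.println(m.flatMap(Optional::of).equals(m));

        // associativity: (m.flatMap(f)).flatMap(g) equals m.flatMap(y -> f(y).flatMap(g))
        System.out.println(m.flatMap(f).flatMap(g)
                .equals(m.flatMap(y -> f.apply(y).flatMap(g))));
    }
}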
You don't need any sort of Monad Interface to code this way, or to think this way. In Scala, you don't code to a Monad Interface (unless you are using Scalaz library...). It appears that JDK8 will empower Java folks to use this style of chained monadic computations as well.
Hope this is helpful!
Update: Blogged about this here.
Java 8 will have lambdas; monads are a whole different story. They are hard enough to explain in functional programming (as evidenced by the large number of tutorials on the subject in Haskell and Scala).
Monads are a typical feature of statically typed functional languages. To describe them in OO-speak, you could imagine a Monad interface. Classes that implement Monad would then be called 'monadic', provided that in implementing Monad the implementation obeys what are known as the 'monad laws'. The language then provides some syntactic sugar that makes working with instances of the Monad class interesting.
Now, Iterable in Java has nothing to do with monads, but as an example of a type that the Java compiler treats specially (the foreach syntax that came with Java 5), consider this:
Iterable<Something> things = getThings(..);
for (Something s: things) { /* do something with s */ }
So while we could have used Iterable's Iterator methods (hasNext and company) in an old-style for loop, Java grants us this syntactic sugar as a special case.
So just as classes that implement Iterable and Iterator must obey the Iterator laws (Example: hasNext must return false if there is no next element) to be useful in foreach syntax - there would exist several monadic classes that would be useful with a corresponding do notation (as it is called in Haskell) or Scala's for notation.
So -
What are good examples of monadic classes?
What would syntactic sugar for dealing with them look like?
In Java 8, I don't know - I am aware of the lambda notation but I am not aware of other special syntactic sugar, so I'll have to give you an example in another language.
Monads often serve as container classes (Lists are an example). Java already has java.util.List which is obviously not monadic, but here is Scala's:
val nums = List(1, 2, 3, 4)
val strs = List("hello", "hola")
val result = for { // Iterate both lists, return a resulting list that contains
// pairs of (Int, String) s.t the string size is same as the num.
n <- nums
s <- strs if n == s.length
} yield (n, s)
// result will be List((4, "hola"))
// A list of exactly one element, the pair (4, "hola")
Which is (roughly) syntactic sugar for:
val nums = List(1, 2, 3, 4)
val strs = List("hello", "hola")
val results =
nums.flatMap( n =>
strs.filter(s => s.size == n). // same as the 'if'
map(s => (n, s)) // Same as the 'yield'
)
// flatMap takes a lambda as an argument, as do filter and map
//
This shows a feature of Scala where monads are exploited to provide list comprehensions.
So a List in Scala is a monad, because it obeys Scala's monad laws, which stipulate that all monad implementations must have conforming flatMap, map and filter methods (if you are interested in the laws, the "Monads are Elephants" blog entry has the best description I've found so far). And, as you can see, lambdas (and HoF) are absolutely necessary but not sufficient to make this kind of thing useful in a practical way.
There's a bunch of useful monads besides the container-ish ones as well. They have all kinds of applications. My favorite must be the Option monad in Scala (the Maybe monad in Haskell), which is a wrapper type which brings about null safety: the Scala API page for the Option monad has a very simple example usage: http://www.scala-lang.org/api/current/scala/Option.html
In Haskell, monads are useful in representing IO, as a way of working around the fact that non-monadic Haskell code has indeterminate order of execution.
Having lambdas is a first small step into the functional programming world; monads
require both the monad convention and a large enough set of usable monadic types, as well as syntactic sugar to make working with them fun and useful.
Since Scala is arguably the language closest to Java that also allows (monadic) Functional Programming, do look at this Monad tutorial for Scala if you are (still) interested:
http://james-iry.blogspot.jp/2007/09/monads-are-elephants-part-1.html
A cursory googling shows that there is at least one attempt to do this in Java: https://github.com/RichardWarburton/Monads-in-Java -
Sadly, explaining monads in Java (even with lambdas) is as hard as explaining full-blown Object oriented programming in ANSI C (instead of C++ or Java).
Even though monads can be implemented in Java, any computation involving them is doomed to become a messy mix of generics and curly braces.
I'd say that Java is definitely not the language to use in order to illustrate their working or to study their meaning and essence. For this purpose it is far better to use JavaScript or to pay some extra price and learn Haskell.
Anyway, I'd like to point out that I just implemented a state monad using the new Java 8 lambdas. It's definitely a pet project, but it works on a non-trivial test case.
You may find it presented at my blog, but I'll give you some details here.
A state monad is basically a function from a state to a pair (state,content). You usually give the state a generic type S and the content a generic type A.
Because Java does not have pairs we have to model them using a specific class, let's call it Scp (state-content pair), which in this case will have generic type Scp<S,A> and a constructor new Scp<S,A>(S state,A content). After doing that we can say that the monadic function will have type
java.util.function.Function<S,Scp<S,A>>
which is a @FunctionalInterface. That is to say, its single abstract method can be implemented without naming it, by passing a lambda expression of the right type.
The class StateMonad<S,A> is mainly a wrapper around the function. Its constructor may be invoked e.g. with
new StateMonad<Integer, String>(n -> new Scp<Integer, String>(n + 1, "value"));
The state monad stores the function as an instance variable. It is then necessary to provide a public method to access it and feed it the state. I decided to call it s2scp ("state to state-content pair").
To complete the definition of the monad you have to provide a unit (aka return) and a bind (aka flatMap) method. Personally I prefer to specify unit as static, whereas bind is an instance member.
In the case of the state monad, unit has to be the following:
public static <S, A> StateMonad<S, A> unit(A a) {
    return new StateMonad<S, A>((S s) -> new Scp<S, A>(s, a));
}
while bind (as instance member) is:
public <B> StateMonad<S, B> bind(final Function<A, StateMonad<S, B>> famb) {
    return new StateMonad<S, B>((S s) -> {
        Scp<S, A> currentPair = this.s2scp(s);
        return famb.apply(currentPair.content).s2scp(currentPair.state);
    });
}
You notice that bind must introduce a generic type B, because it is the mechanism that allows the chaining of heterogeneous state monads and gives this and any other monad the remarkable capability to move the computation from type to type.
I'd stop here with the Java code. The complex stuff is in the GitHub project. Compared to previous Java versions, lambdas remove a lot of curly braces, but the syntax is still pretty convoluted.
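For completeness, a hypothetical usage sketch, assuming the Scp(S state, A content) constructor, public state/content fields and the s2scp accessor described above:
// a tiny "counter" program: read the current state as the content, then increment the state
StateMonad<Integer, Integer> read =
        new StateMonad<Integer, Integer>(s -> new Scp<Integer, Integer>(s, s));

StateMonad<Integer, String> program = read.bind(n ->
        new StateMonad<Integer, String>(s -> new Scp<Integer, String>(s + 1, "saw " + n)));

Scp<Integer, String> result = program.s2scp(41);
// result.state is 42, result.content is "saw 41"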
Just as an aside, I'm showing how similar state monad code may be written in other mainstream languages. In the case of Scala, bind (which in that case must be called flatMap) reads like
def flatMap[A, B](famb: A => State[S, B]) = new State[S, B]((s: S) => {
val (ss: S, aa: A) = this.s2scp(s)
famb(aa).s2scp(ss)
})
whereas the bind in JavaScript is my favorite; 100% functional, lean and mean but, of course, typeless:
var bind = function(famb){
return state(function(s) {
var a = this(s);
return famb(a.value)(a.state);
});
};
<shameless>
I am cutting a few corners here, but if you are interested in the details you will find them on my WP blog.</shameless>
Here's the thing about monads which is hard to grasp: monads are a
pattern, not a specific type. Monads are a shape, they are an abstract
interface (not in the Java sense) more than they are a concrete data
structure. As a result, any example-driven tutorial is doomed to
incompleteness and failure.
[...]
The only way to understand monads is to see them for what they are: a mathematical construct.
Monads are not metaphors by Daniel Spiewak
Monads in Java SE 8
List monad
interface Person {
    List<Person> parents();

    default List<Person> greatGrandParents1() {
        List<Person> list = new ArrayList<>();
        for (Person p : parents()) {
            for (Person gp : p.parents()) {
                for (Person ggp : gp.parents()) {
                    list.add(ggp);
                }
            }
        }
        return list;
    }

    // <R> Stream<R> flatMap(Function<? super T, ? extends Stream<? extends R>> mapper)
    // assumes: import static java.util.stream.Collectors.toList;
    default List<Person> greatGrandParents2() {
        return parents().stream()
                .flatMap(p -> p.parents().stream())
                .flatMap(gp -> gp.parents().stream())
                .collect(toList());
    }
}
Maybe monad
interface Person {
String firstName();
String middleName();
String lastName();
default String fullName1() {
String fName = firstName();
if (fName != null) {
String mName = middleName();
if (mName != null) {
String lName = lastName();
if (lName != null) {
return fName + " " + mName + " " + lName;
}
}
}
return null;
}
// <U> Optional<U> flatMap(Function<? super T, Optional<U>> mapper)
default Optional<String> fullName2() {
return Optional.ofNullable(firstName())
.flatMap(fName -> Optional.ofNullable(middleName())
.flatMap(mName -> Optional.ofNullable(lastName())
.flatMap(lName -> Optional.of(fName + " " + mName + " " + lName))));
}
}
Monad is a generic pattern for nested control flow encapsulation.
I.e. a way to create reusable components from nested imperative idioms.
It is important to understand that a monad is not just a generic wrapper class with a flatMap operation. For example, an ArrayList with a flatMap method won't be a monad, because the monad laws prohibit side effects.
Monad is a formalism. It describes the structure, regardless of content or meaning.
People struggle with relating to meaningless (abstract) things.
So they come up with metaphors which are not monads.
See also:
conversation between Erik Meijer and Gilad Bracha.
the only way to understand monads is by writing a bunch of combinator libraries, noticing the resulting duplication, and then discovering for yourself that monads let you factor out this duplication. In discovering this, everyone builds some intuition for what a monad is… but this intuition isn’t the sort of thing that you can communicate to someone else directly – it seems everyone has to go through the same experience of generalizing to monads from some concrete examples of combinator libraries.
Here I found some materials to learn monads.
Hope they are useful for you too.
codecommit
james-iry.blogspot
debasishg.blogspot
This blog post gives a step-by-step example of how you might implement a Monad type (interface) in Java and then use it to define the Maybe monad, as a practical application.
This post explains that there is one monad built into the Java language, emphasising the point that monads are more common than many programmers may think and that coders often inadvertently reinvent them.
Despite all the controversy about whether Optional satisfies the Monad laws or not, I usually like to look at Stream, Optional and CompletableFuture in the same way. In truth, all of them provide a flatMap(), and that is all I care about; it lets me embrace the "tasteful composition of side effects" (citing Erik Meijer). So we may view Stream, Optional and CompletableFuture in a corresponding way.
Regarding monads, I usually simplify them by thinking only of flatMap() (from the "Principles of Reactive Programming" course by Erik Meijer).
A diagram for the "Optional" Monad in Java.
Your task: perform operations on the "Actuals" (left side), transforming elements of type T union null to type U union null using the function in the light blue box. Just one box is shown here, but there may be a chain of light blue boxes (thus proceeding from type U union null to type V union null to type W union null, etc.).
Practically, this will cause you to worry about null values appearing in the function application chain. Ugly!
Solution: wrap your T into an Optional<T> using the light green box functions, moving to the "Optionals" (right side). Here, transform elements of type Optional<T> to type Optional<U> using the red box function. Mirroring the application of functions to the "Actuals", there may be several red box functions to be chained (thus proceeding from type Optional<U> to Optional<V> and then to Optional<W>, etc.). In the end, move back from the "Optionals" to the "Actuals" through one of the dark green box functions.
No more worrying about null values. Implementation-wise, there will always be an Optional<U>, which may or may not be empty. You can chain the calls to the red box functions without null checks.
The key point: The red box functions are not implemented individually and directly. Instead, they are obtained from the blue box functions (whichever have been implemented and are available, generally the light blue ones) by using either the map or the flatMap higher-order functions.
The grey boxes provide additional support functionality.
Simples.
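A small sketch of that picture in code (the variable names are illustrative):
import java.util.Optional;
import java.util.function.Function;

public class BoxesDemo {
    public static void main(String[] args) {
        // "light blue box": a plain T -> U function (here String -> Integer)
        Function<String, Integer> parse = Integer::valueOf;

        // "light green box": wrap the actual value into an Optional
        Optional<String> maybeText = Optional.ofNullable("42");

        // "red box": obtained from the blue box via map (or flatMap for T -> Optional<U> functions)
        Optional<Integer> maybeNumber = maybeText.map(parse);

        // "dark green box": leave the Optionals and come back to an actual value
        int number = maybeNumber.orElse(0);
        System.out.println(number);
    }
}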
I like to think of monads in a slightly more mathematical (but still informal) fashion. After that I will explain the relationship to one of Java 8's monads, CompletableFuture.
First of all, a monad M is a functor. That is, it transforms a type into another type: If X is a type (e.g. String) then we have another type M<X> (e.g. List<String>). Moreover, if we have a transformation/function X -> Y of types, we should get a function M<X> -> M<Y>.
But there is more data to such a monad. We have a so-called unit which is a function X -> M<X> for each type X. In other words, each object of X can be wrapped in a natural way into the monad.
The most characteristic piece of data of a monad, however, is its product: a function M<M<X>> -> M<X> for each type X.
All of these data should satisfy some axioms like functoriality, associativity, unit laws, but I won't go into detail here and it also doesn't matter for practical usage.
We can now deduce another operation for monads, which is often used as an equivalent definition for monads, the binding operation: A value/object in M<X> can be bound with a function X -> M<Y> to yield another value in M<Y>. How do we achieve this? Well, first we apply functoriality to the function to obtain a function M<X> -> M<M<Y>>. Next we apply the monadic product to the target to obtain a function M<X> -> M<Y>. Now we can plug in the value of M<X> to obtain a value in M<Y> as desired. This binding operation is used to chain several monadic operations together.
Now let's come to the CompletableFuture example, i.e. CompletableFuture = M. Think of an object of CompletableFuture<MyData> as some computation that's performed asynchronously and which yields an object of MyData as a result some time in the future. What are the monadic operations here?
functoriality is realized with the method thenApply: first the computation is performed and as soon as the result is available, the function which is given to thenApply is applied to transform the result into another type
the monadic unit is realized with the method completedFuture: as the documentation tells, the resulting computation is already finished and yields the given value at once
the monadic product is not realized by a method, but the binding operation below is equivalent to it (together with functoriality), and its semantic meaning is simply the following: given a computation of type CompletableFuture<CompletableFuture<MyData>>, that computation asynchronously yields another computation in CompletableFuture<MyData>, which in turn yields some value in MyData later on, so performing both computations one after the other yields one computation in total
the resulting binding operation is realized by the method thenCompose
As you see, computations can now be wrapped up in a special context, namely asynchronicity. The general monadic structures enable us to chain such computations in the given context. CompletableFuture is for example used in the Lagom framework to easily construct highly asynchronous request handlers which are transparently backed up by efficient thread pools (instead of handling each request by a dedicated thread).
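A minimal sketch of those three operations on CompletableFuture (the values are illustrative):
import java.util.concurrent.CompletableFuture;

public class CompletableFutureMonadDemo {
    public static void main(String[] args) {
        // unit: wrap a plain value into the asynchronous context
        CompletableFuture<Integer> unit = CompletableFuture.completedFuture(21);

        // functoriality: apply a plain function to the wrapped value (thenApply)
        CompletableFuture<Integer> mapped = unit.thenApply(n -> n * 2);

        // bind: chain a function that itself returns a CompletableFuture (thenCompose)
        CompletableFuture<String> bound =
                mapped.thenCompose(n -> CompletableFuture.supplyAsync(() -> "result = " + n));

        System.out.println(bound.join());
    }
}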
A Haskell monad is an interface which specifies rules for converting a "datatype that is wrapped in another datatype" to another "datatype that is wrapped in another (or the same) datatype"; the conversion steps are specified by a function you define, following a given format.
The function format takes a datatype and returns a "datatype that is wrapped in another datatype". You can specify operations/calculations during the conversion, e.g. multiply or look something up.
It is difficult to understand because of the nested abstraction. It is abstracted so that you can reuse the rules for converting a datatype inside a datatype without custom programming to unwrap the first "datatype that is wrapped in another datatype" before passing the data to your specified function; Optional of some datatype is an example of a "datatype in a datatype".
The specified function is any lambda conforming to the format.
You don't need to fully understand it; you could write your own reusable interface to solve a similar problem. Monads just exist because some mathematicians already hit and resolved that problem, and created monads for you to reuse. But due to their abstraction, they are difficult to learn and reuse in the first place.
In other words: Optional, for example, is a wrapper class, but some data is wrapped and some is not, some functions take a wrapped datatype and some don't, and return types can be wrapped or not. To chain calls to a mixture of functions which may or may not wrap their parameter/return types, you either do your own custom wrapping/unwrapping or reuse the functor / applicative / monad pattern to deal with all those wrapped/unwrapped combinations of chained function calls. Every time you try to pass an Optional to a method that only accepts a plain value and returns an Optional, the steps involved are what a monad does.

Is this a sensible monad for mutable state in Clojure?

I've been experimenting with monads in Clojure and came up with the following code, where a monadic value/state pair is represented by a mutable Clojure deftype object.
Since the object is mutable, an advantage would seem to be that you can write monadic code without needing to construct new result objects all the time.
However, I'm pretty new to monads so would love to know:
Does this construct make sense?
Will it actually work correctly as a monad?
Code below:
(defprotocol PStateStore
(set-store-state [ss v])
(get-store-state [ss])
(set-store-value [ss v])
(get-store-value [ss]))
(deftype StateStore [^{:unsynchronized-mutable true} value
^{:unsynchronized-mutable true} state]
PStateStore
(get-store-state [ss] (.state ss))
(get-store-value [ss] (.value ss))
(set-store-state [ss v] (set! state v))
(set-store-value [ss v] (set! value v))
Object
(toString [ss] (str "value=" (.value ss) ", state=" (.state ss))))
(defn state-store [v s] (StateStore. v s))
;; defmonad/domonad come from the clojure.algo.monads library
(defmonad MStoredState
[m-result (fn [v]
(fn [^StateStore ss]
(do
(set-store-value ss v)
ss)))
m-bind (fn [a f]
(fn [^StateStore ss]
(do
(a ss)
((f (get-store-value ss)) ss))))])
; Usage examples
(def mb
(domonad MStoredState
[a (m-result 1)
b (m-result 5)]
(+ a b)))
(def ssa (state-store 100 101))
(mb ssa)
; => #<StateStore value=6, state=101>
No, it won't work correctly as a monad, because you use mutable state.
Imagine that you have a monadic value m (a value carrying a state), which you call a StateStore. You want to be able to do this:
(let
[a (incr-state m)
b (decr-state m)]
(if some-condition a b))
What I expect is that this computation returns the monadic value m whose state has been either incremented or decremented, according to some-condition. If you use mutable state, it will be both incremented and decremented during evaluation of this code.
One of the good things about monads is that, while they represent effects, they behave as ordinary pure, non-mutable values. You can pass them around and duplicate them (you can expand any let-definition of a monadic value, replacing its name with its definition at every use site). The only place you have to be careful is where you actually chain effects using m-bind. Otherwise, there is no implicit chaining of effects in unrelated parts of the code, as in usual imperative programming. This is what makes reasoning about monads easier and more comfortable in situations where you want to restrict side effects.
Edit
You may have heard of the monadic laws, which are equations that any monad implementation should respect. However, your problem here is not that you break the laws, as the laws don't talk about this. Indeed, the monadic laws are usually stated in the pure language Haskell, and therefore do not consider side effects.
If you want, you could consider it a fourth, unspoken monad law: good monads should respect referential transparency.

Reordering arguments using recursion (pro, cons, alternatives)

I find that I often make a recursive call just to reorder arguments.
For example, here's my solution for endOther from codingbat.com:
Given two strings, return true if either of the strings appears at the very end of the other string, ignoring upper/lower case differences (in other words, the computation should not be "case sensitive"). Note: str.toLowerCase() returns the lowercase version of a string.
public boolean endOther(String a, String b) {
return a.length() < b.length() ? endOther(b, a)
: a.toLowerCase().endsWith(b.toLowerCase());
}
I'm very comfortable with recursion, but I can certainly understand why some would perhaps object to it.
There are two obvious alternatives to this recursion technique:
Swap a and b traditionally
public boolean endOther(String a, String b) {
if (a.length() < b.length()) {
String t = a;
a = b;
b = t;
}
return a.toLowerCase().endsWith(b.toLowerCase());
}
Not convenient in a language like Java that doesn't pass by reference
Lots of code just to do a simple operation
An extra if statement breaks the "flow"
Repeat code
public boolean endOther(String a, String b) {
return (a.length() < b.length())
? b.toLowerCase().endsWith(a.toLowerCase())
: a.toLowerCase().endsWith(b.toLowerCase());
}
Explicit symmetry may be a nice thing (or not?)
Bad idea unless the repeated code is very simple
...though in this case you can get rid of the ternary and just || the two expressions
So my questions are:
Is there a name for these 3 techniques? (Are there more?)
Is there a name for what they achieve? (e.g. "parameter normalization", perhaps?)
Are there official recommendations on which technique to use (when)?
What are other pros/cons that I may have missed?
Another example
To focus the discussion more on the technique rather than the particular codingbat problem, here's another example where I feel that the recursion is much more elegant than a bunch of if-else's, swaps, or repetitive code.
// sorts 3 values and return as array
static int[] sort3(int a, int b, int c) {
return
(a > b) ? sort3(b, a, c) :
(b > c) ? sort3(a, c, b) :
new int[] { a, b, c };
}
Recursion and ternary operators don't bother me as much as they bother some people; I honestly believe the above code is the best pure-Java solution one can possibly write. Feel free to show me otherwise.
Let’s first establish that code duplication is usually a bad idea.
So whatever solution we take, the logic of the method should only be written once, and we need a means of swapping the arguments around that does not interfere with the logic.
I see three general solutions to that:
Your first recursion (either using if or the conditional operator).
swap – which, in Java, is a problem, but might be appropriate in other languages.
Two separate methods (as in @Ha's solution) where one acts as the implementation of the logic and the other as the interface, in this case to sort out the parameters.
I don’t know which of these solutions is objectively the best. However, I have noticed that there are certain algorithms for which (1) is generally accepted as the idiomatic solution, e.g. Euclid’s algorithm for calculating the GCD of two numbers.
I am generally averse to the swap solution (2) since it adds an extra call which doesn’t really do anything in connection with the algorithm. Now, technically this isn’t a problem – I doubt that it would be less efficient than (1) or (3) using any decent compiler. But it adds a mental speed-bump.
Solution (3) strikes me as over-engineered although I cannot think of any criticism except that it’s more text to read. Generally, I don’t like the extra indirection introduced by any method suffixed with “Impl”.
In conclusion, I would probably prefer (1) for most cases although I have in fact used (3) in similar circumstances.
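For reference, a minimal sketch of the Euclid example mentioned above, which is the textbook case of recursing to rearrange/reduce the arguments:
static int gcd(int a, int b) {
    return b == 0 ? a : gcd(b, a % b);
}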
Another +1 for "In any case, my recommendation would be to do as little in each statement as possible. The more things that you do in a single statement, the more confusing it will be for others who need to maintain your code."
Sorry but your code:
// sorts 3 values and return as array
static int[] sort3(int a, int b, int c) {
return
(a > b) ? sort3(b, a, c) :
(b > c) ? sort3(a, c, b) :
new int[] { a, b, c };
}
Perhaps for you it's the best "pure Java code", but for me it's the worst... unreadable code. If we don't have the method name or the comment, we just can't tell at first sight what it's doing...
Hard-to-read code should only be used when high performance is needed (but anyway, many performance problems are due to bad architecture...). If you HAVE TO write such code, the least you can do is provide good javadoc and unit tests... we developers often don't care about the implementation of such a method if we just have to use it rather than rework it... but since a first glance doesn't tell us what it does, we have to trust that it works the way we expect, and we can lose time...
Recursive methods are OK when the method is short, but I think a recursive method should be avoided if the algorithm is complex and there's another way to do it in almost the same computation time... particularly if other people will probably work on this method.
For your example it's OK since it's a short method, but anyway, if you're not concerned about performance you could have used something like this:
// sorts int values
public static Integer[] sort(Integer... intValues) {
    List<Integer> list = new ArrayList<>();
    for (Integer i : intValues) {
        list.add(i);
    }
    Collections.sort(list);
    return list.toArray(new Integer[0]);
}
A simple way to implement your method, easily readable by any Java >= 1.5 developer, that works for 1 to n integers...
Not the fastest, but anyway, if it's just about speed use C++ or asm :)
For this particular example, I wouldn't use anything you suggested.. I would instead write:
public boolean endOther(String a, String b){
String alower=a.toLowerCase();
String blower=b.toLowerCase();
if ( a.length() < b.length() ){
return blower.endsWith(alower);
} else {
return alower.endsWith(blower);
}
}
While the ternary operator does have its place, the if statement is often more intelligible, especially when the operands are fairly complex. In addition, if you repeat code in different branches of an if statement, they will only be evaluated in the branch that is taken (in many programming languages, both operands of the ternary operator are evaluated no matter which branch is selected). While, as you have pointed out, this is not a concern in Java, many programmers have used a variety of languages and might not remember this level of detail, and so it is best to use the ternary operator only with simple operands.
One frequently hears of "recursive" vs. "iterative"/"non-recursive" implementations. I have not heard of any particular names for the various options that you have given.
In any case, my recommendation would be to do as little in each statement as possible. The more things that you do in a single statement, the more confusing it will be for others who need to maintain your code.
In terms of your complaint about repetition... if there are several lines that are being repeated, then it is time to create a "helper" function that does that part. Function composition is there to reduce repetition. Swapping just doesn't make any sense to do, since there is more effort in swapping than in simply repeating... also, if code later in the function uses the parameters, the parameters now mean different things than they used to.
EDIT
My argument vis-a-vis the ternary operator was not a valid one... the vast majority of programming languages use lazy evaluation with the ternary operator (I was thinking of Verilog at the time of writing, which is a hardware description language (HDL) in which both branches are evaluated in parallel). That said, there are valid reasons to avoid using complicated expressions in ternary operators; for example, with an if...else statement, it is possible to set a breakpoint on one of the conditional branches whereas, with the ternary operator, both branches are part of the same statement, so most debuggers won't split on them.
It is slightly better to use another method instead of recursion
public boolean endOther(String a, String b) {
return a.length() < b.length() ? endOtherImpl(b,a):endOtherImpl(a,b);
}
protected boolean endOtherImpl(String longStr,String shortStr)
{
return longStr.toLowerCase().endsWith(shortStr.toLowerCase());
}
