Unbox value from Either<L, R> vavr - java

Background
I have a small function that can return Either<String, Float>. If it succeeds, it returns a Float; otherwise it returns an error String.
My objective is to perform a series of operations in a pipeline and to achieve railway-oriented programming using Either.
Code
import java.util.function.Function;
import io.vavr.control.Either;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

@Test
public void run() {
    Function<Float, Either<String, Float>> either_double = num -> {
        if (num == 4.0f)
            return Either.left("I don't like this number");
        return Either.right(num * 2);
    };
    Function<Float, Float> incr = x -> x + 1.0f;

    Float actual =
        Either.right(2f)
            .map(incr)
            .map(either_double)
            .get();

    Float expected = 6.0f;
    assertEquals(expected, actual);
}
This code performs a series of simple operations: first I create a Right Either holding the value 2f, then I increment it, and I finish by doubling it. The result of these operations should be 6.
Problem
The result of the mathematical operations should be 6.0f, but that's not what I get. Instead I get Right(6.0f).
This is an issue that prevents the code from compiling: I have a value boxed inside the Either monad, but after checking the Either API I didn't find a way to unbox it and get the value as is.
I thought about using getOrElseGet, but even that method returns a Right.
Question
How do I access the real value stored inside the Either Monad?

Use flatMap(either_double) instead of map(either_double).
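For reference, a sketch of the fixed pipeline (same names as in the question): map(either_double) wraps the Either returned by either_double inside another Either, producing an Either<String, Either<String, Float>>, whereas flatMap flattens that extra layer so get() yields the Float itself.
Float actual =
    Either.<String, Float>right(2f)  // explicit type witness so the left type matches either_double
        .map(incr)                   // Right(3.0f)
        .flatMap(either_double)      // Right(6.0f), not Right(Right(6.0f))
        .get();                      // 6.0f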

Related

ConcurrentHashMap throws recursive update exception

Here is my Java code:
static Map<BigInteger, Integer> cache = new ConcurrentHashMap<>();

static Integer minFinder(BigInteger num) {
    if (num.equals(BigInteger.ONE)) {
        return 0;
    }
    if (num.mod(BigInteger.valueOf(2)).equals(BigInteger.ZERO)) {
        // focus on what happens inside this block, since with the given inputs it won't reach the last return
        return 1 + cache.computeIfAbsent(num.divide(BigInteger.valueOf(2)),
                n -> minFinder(n));
    }
    return 1 + Math.min(cache.computeIfAbsent(num.subtract(BigInteger.ONE), n -> minFinder(n)),
            cache.computeIfAbsent(num.add(BigInteger.ONE), n -> minFinder(n)));
}
I am trying to memoize a function that returns the minimum number of actions (division by 2, subtracting one, or adding one) needed to reduce a number to 1.
The problem I'm facing is that with smaller inputs such as:
minFinder(new BigInteger("32"))
it works, but with bigger values like:
minFinder(new BigInteger("64"))
it throws a "Recursive update" exception (an IllegalStateException).
Is there any way to increase the recursion size to prevent this exception, or some other way to solve this?
From the API docs of Map.computeIfAbsent():
The mapping function should not modify this map during computation.
The API docs of ConcurrentHashMap.computeIfAbsent() make that stronger:
The mapping function must not modify this map during computation.
(Emphasis added)
You are violating that by using your minFinder() method as the mapping function. That it nevertheless seems to work for certain inputs is irrelevant. You need to find a different way to achieve what you're after.
Is there any way to increase recursion size to prevent this exception or any other way to solve this?
You could avoid computeIfAbsent() and instead do the same thing the old-school way:
BigInteger halfNum = num.divide(BigInteger.valueOf(2));
Integer cachedValue = cache.get(halfNum);
if (cachedValue == null) {
    cachedValue = minFinder(halfNum);
    cache.put(halfNum, cachedValue);
}
return 1 + cachedValue;
But that's not going to be sufficient if the computation loops. You could perhaps detect that by putting a sentinel value into the map before you recurse, so that you can recognize loops.
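For this particular recursion, a sketch of that approach applied throughout (my untested sketch, reusing the question's names): the map is only written to after the recursive calls have returned, so it is never modified in the middle of a compute.
static Map<BigInteger, Integer> cache = new ConcurrentHashMap<>();

static Integer minFinder(BigInteger num) {
    if (num.equals(BigInteger.ONE)) {
        return 0;
    }
    Integer cached = cache.get(num);
    if (cached != null) {
        return cached;
    }
    BigInteger two = BigInteger.valueOf(2);
    int result;
    if (num.mod(two).equals(BigInteger.ZERO)) {
        result = 1 + minFinder(num.divide(two));
    } else {
        result = 1 + Math.min(minFinder(num.subtract(BigInteger.ONE)),
                minFinder(num.add(BigInteger.ONE)));
    }
    cache.put(num, result);  // safe: we are no longer inside computeIfAbsent()
    return result;
}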

How to collect results after filtering and mapping a parallelStream in Java8?

I want to extract a collection of objects from another collection. The objects to be filtered must be of a specific type (or subtype) and must intersect with a given Shape. I want to do this with parallelStream.
I have the following code:
public class ObjectDetector {
    ...
    public ObjectDetector(final Collection<WorldObject> objects,
                          final BiFunction<Shape, Shape, Boolean> isIntersecting) {
        ...
    }

    public List<ISensor> getSonarObjects(final Shape triangle) {
        return selectIntersecting(triangle, ISensor.class);
    }

    private <T> List<T> selectIntersecting(Shape triangle, Class<T> type) {
        return objects.parallelStream()
                .filter(o -> type.isInstance(o) && isIntersecting.apply(o.getShape(), triangle))
                .map(o -> type.cast(o))
                .collect(Collectors.toList());
    }
}
The problematic part is in the List<T> selectIntersecting(Shape triangle, Class<T> type) method, in which objects is a Collection and isIntersecting is a BiFunction<Shape,Shape,Boolean>.
When I'm using stream() instead of parallelStream(), all my tests are green, so I may assume that the filtering and mapping logic works fine. However, when I try to use parallelStream(), my tests fail unpredictably. The only coherence I was able to observe is that the size() of the returned List<T> is less than or equal to (but of course never greater than) the size I expect.
A failing testcase for example:
int counter = 0;

public BiFunction<Shape, Shape, Boolean> every2 = (a, b) -> {
    counter++;
    return counter % 2 == 0;
};

@Test
public void getEvery2Sonar() {
    assertEquals("base list size must be 8", 8, list.size());
    ObjectDetector detector = new ObjectDetector(list, every2);
    List<ISensor> sonarable = detector.getSonarObjects(triangle);
    assertEquals("number of sonar detectables should be 3", 3, sonarable.size());
}
And the test result is:
Failed tests: getEvery2Sonar(hu.oe.nik.szfmv.environment.ObjectDetectorTest): number of sonar detectables should be 3 expected:<3> but was:<2>
In my understanding - as it is written here - it is possible to collect a parallelStream into a non-concurrent Collection.
I've also tried to find some clues on the Parallelism tutorial page, but I'm still clueless.
Could someone please explain what I am doing wrong?
Your predicate function has side effects - this is going to go badly with parallelStream because the evaluation order across the input stream is non-deterministic, plus you have no locking on your mutable state.
Indeed, the documentation for filter states* that the predicate must be stateless.
I'm not sure what behaviour you're trying to achieve here, so I'm not sure what an appropriate "fix" might be.
* No pun intended.
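As an illustration (my sketch, not part of the original answer): replacing the int counter with a java.util.concurrent.atomic.AtomicInteger would at least make the counting thread-safe, but which elements pass the filter is still non-deterministic under parallelStream(), because the order in which elements reach the predicate is not defined.
AtomicInteger counter = new AtomicInteger();

public BiFunction<Shape, Shape, Boolean> every2 =
    (a, b) -> counter.incrementAndGet() % 2 == 0;  // no lost updates, but evaluation order still varies between runs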

Deduplication for String intern method in ConcurrentHashMap

I watched some code from JavaDays; the author said that this probabilistic approach is very effective for storing Strings, as an analogue of the String.intern() method:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

public class CHMDeduplicator<T> {
    private final int prob;
    private final Map<T, T> map;

    public CHMDeduplicator(double prob) {
        this.prob = (int) (Integer.MIN_VALUE + prob * (1L << 32));
        this.map = new ConcurrentHashMap<>();
    }

    public T dedup(T t) {
        if (ThreadLocalRandom.current().nextInt() > prob) {
            return t;
        }

        T exist = map.putIfAbsent(t, t);
        return (exist == null) ? t : exist;
    }
}
Please, explain me, what is effect of probability in this line:
if (ThreadLocalRandom.current().nextInt() > prob) return t;
This is the original presentation from Java Days: https://shipilev.net/talks/jpoint-April2015-string-catechism.pdf (56th slide).
If you look at the next slide, which has a table of data for different probabilities, or listen to the talk, you will see/hear the rationale: probabilistic deduplicators balance the time spent deduplicating the Strings against the memory savings coming from the deduplication. This allows one to fine-tune the time spent processing Strings, or even to sprinkle low-probability deduplicators around the code, thus amortizing the deduplication costs.
(Source: these are my slides)
The double value passed to the constructor is intended to be a probability in the range 0.0 to 1.0. It is converted to an integer threshold such that the proportion of int values below the threshold is equal to the double value.
The comparison is designed so that dedup() gets past the early return (and actually deduplicates) with a probability equal to the constructor parameter. By using integer math it will be slightly faster than comparing raw double values.
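As a quick sanity check of the arithmetic (my worked example, not from the talk), take prob = 0.5:
// threshold = Integer.MIN_VALUE + 0.5 * 2^32 = -2^31 + 2^31 = 0
int threshold = (int) (Integer.MIN_VALUE + 0.5 * (1L << 32));  // == 0
// nextInt() is uniform over [-2^31, 2^31 - 1]; values > 0 account for
// (2^31 - 1) / 2^32 of the range, i.e. just under 50%, so dedup()
// returns early for about half of all calls and deduplicates the rest.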
The intention of the implementation is that sometimes it won't cache the String, instead just returning it. The reason for doing this is a CPU-versus-memory trade-off: if the memory-saving caching process causes a CPU bottleneck, you can turn up the "do nothing" probability until you find a balance.

Build Spark JavaRDD List from DropResult objects

(What's possible in Scala should be possible in Java, right? But I would take Scala suggestions as well)
I am not trying to iterate over an RDD; instead, I need to build one with n elements from a random/simulator class of a type called DropResult. DropResult can't be cast into anything else.
I thought the Spark "find PI" example had me on the right track but no luck. Here's what I am trying:
On a one-time basis a DropResult is made like this:
// make a single DropResult from pld (PipeLinkageData)
DropResult dropResultSeed = pld.doDrop();
I am trying something like this:
JavaRDD<DropResult> simCountRDD = spark.parallelize(makeRangeList(1, getSimCount())).foreach(pld.doDrop());
I just need to run pld.doDrop() about 10^6 times on the cluster and put the results in a Spark RDD for the next operation, also on the cluster. I can't figure out what kind of function to use on "parallelize" to make this work.
makeRangeList:
private List<Integer> makeRangeList(int lower, int upper) {
    List<Integer> range = IntStream.range(lower, upper).boxed().collect(Collectors.toList());
    return range;
}
(FWIW I was trying to use the Pi example from http://spark.apache.org/examples.html as a model of how to do a for loop to create a JavaRDD)
int count = spark.parallelize(makeRange(1, NUM_SAMPLES)).filter(new Function<Integer, Boolean>() {
    public Boolean call(Integer i) {
        double x = Math.random();
        double y = Math.random();
        return x * x + y * y < 1;
    }
}).count();
System.out.println("Pi is roughly " + 4 * count / NUM_SAMPLES);
Yeah, seems like you should be able to do this pretty easily. Sounds like you just need to parallelize an RDD of 10^6 integers so that you can create 10^6 DropResult objects in an RDD.
If this is the case, I don't think you need to explicitly create a list as above. It seems like you should just be able to use makeRange() the way the Spark Pi example does like this :
JavaRDD<DropResult> simCountRDD = spark.parallelize(makeRange(1, getSimCount()))
    .map(new Function<Integer, DropResult>() {
        public DropResult call(Integer i) {
            return pld.doDrop();
        }
    });
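On Java 8 the anonymous class can likely be collapsed to a lambda, assuming pld is effectively final and serializable (Spark ships the closure to the executors):
JavaRDD<DropResult> simCountRDD =
    spark.parallelize(makeRange(1, getSimCount()))
         .map(i -> pld.doDrop());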

Why isn't it an error?

The following program is a recursive program to find the maximum and minimum of an array. (I think! Please tell me if it is not a valid recursive program. Though there are easier ways to find the maximum and minimum in an array, I'm doing it recursively only as part of an exercise!)
This program works correctly and produces the output as expected.
On the line where I have marked "Doubt here!", I am unable to understand why an error is not given during compilation. The return type is clearly an integer array (as specified in the method definition), but I have not assigned the returned data to any integer array, yet the program still works. I was expecting an error during compilation if I did it this way, but it worked. If someone could help me figure this out, it'd be helpful! :)
import java.io.*;

class MaxMin_Recursive
{
    static int i = 0, max = -999, min = 999;

    public static void main(String[] args) throws IOException
    {
        BufferedReader B = new BufferedReader(new InputStreamReader(System.in));
        int[] inp = new int[6];
        System.out.println("Enter a maximum of 6 numbers..");
        for (int i = 0; i < 6; i++)
            inp[i] = Integer.parseInt(B.readLine());

        int[] numbers_displayed = new int[2];
        numbers_displayed = RecursiveMinMax(inp);
        System.out.println("The maximum of all numbers is " + numbers_displayed[0]);
        System.out.println("The minimum of all numbers is " + numbers_displayed[1]);
    }

    static int[] RecursiveMinMax(int[] inp_arr) // remember to specify that the return type is an integer array
    {
        int[] retArray = new int[2];
        if (i < inp_arr.length)
        {
            if (max < inp_arr[i])
                max = inp_arr[i];
            if (min > inp_arr[i])
                min = inp_arr[i];
            i++;
            RecursiveMinMax(inp_arr); // Doubt here!
        }
        retArray[0] = max;
        retArray[1] = min;
        return retArray;
    }
}
The return type is clearly an integer array (as specified in the method definition), but I have not assigned the returned data to any integer array, yet the program still works.
Yes, because it's simply not an error to ignore the return value of a method. Not as far as the compiler is concerned. It may well represent a bug, but it's a perfectly valid use of the language.
For example:
Console.ReadLine(); // User input ignored!
"text".Substring(10); // Result ignored!
Sometimes I wish this could be flagged as a warning - and indeed ReSharper will give warnings when it can detect that "pure" methods (those without any side effects) are called without using the return value. In particular, calls which cause problems in real life:
Methods on string such as Replace and Substring, where users assume that calling the method alters the existing string
Stream.Read, where users assume that all the data they've requested has been read, when actually they should use the return value to see how many bytes have actually been read
There are times when it's entirely appropriate to ignore the return value of a method, even when it normally isn't for that method. For example:
TValue GetValueOrDefault<TKey, TValue>(Dictionary<TKey, TValue> dictionary, TKey key)
{
    TValue value;
    dictionary.TryGetValue(key, out value);
    return value;
}
Normally when you call TryGetValue you want to know whether the key was found or not - but in this case value will be set to default(TValue) even if the key wasn't found, so we're going to return the same thing anyway.
In Java (as in C and C++) it is perfectly legal to discard the return value of a function. The compiler is not obliged to give any diagnostic.
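A minimal Java analogue of the C# examples above: the question's own RecursiveMinMax(inp_arr); call compiles for exactly this reason, and it only "works" because the method communicates through the static max/min fields rather than through its return value.
String s = "text";
s.toUpperCase();  // legal, but the result is discarded; s itself is unchanged
s.substring(2);   // likewise -- a common source of bugs, since Java Strings are immutable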
