Here is my Java code:
static Map<BigInteger, Integer> cache = new ConcurrentHashMap<>();

static Integer minFinder(BigInteger num) {
    if (num.equals(BigInteger.ONE)) {
        return 0;
    }
    if (num.mod(BigInteger.valueOf(2)).equals(BigInteger.ZERO)) {
        // focus on what's happening inside this block, since with the given inputs it won't reach the last return
        return 1 + cache.computeIfAbsent(num.divide(BigInteger.valueOf(2)),
                n -> minFinder(n));
    }
    return 1 + Math.min(cache.computeIfAbsent(num.subtract(BigInteger.ONE), n -> minFinder(n)),
            cache.computeIfAbsent(num.add(BigInteger.ONE), n -> minFinder(n)));
}
I tried to memoize a function that returns the minimum number of operations (divide by 2, subtract one, or add one) needed to reduce a number to 1.
The problem I'm facing is that when I call it with smaller inputs such as:
minFinder(new BigInteger("32"))
it works, but with bigger values like:
minFinder(new BigInteger("64"))
it throws a "Recursive update" exception.
Is there any way to increase the recursion size to prevent this exception, or is there any other way to solve this?
From the API docs of Map.computeIfAbsent():
The mapping function should not modify this map during computation.
The API docs of ConcurrentHashMap.computeIfAbsent() make that stronger:
The mapping function must not modify this map during computation.
(Emphasis added)
You are violating that by using your minFinder() method as the mapping function. That it seems nevertheless to work for certain inputs is irrelevant. You need to find a different way to achieve what you're after.
Is there any way to increase recursion size to prevent this exception or any other way to solve this?
You could avoid computeIfAbsent() and instead do the same thing the old-school way:
BigInteger halfNum = num.divide(BigInteger.valueOf(2));
Integer cachedValue = cache.get(halfNum);
if (cachedValue == null) {
    cachedValue = minFinder(halfNum);
    cache.put(halfNum, cachedValue);
}
return 1 + cachedValue;
But that's not going to be sufficient if the computation loops. You could perhaps detect that by putting a sentinel value into the map before you recurse, so that you can recognize loops.
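For example, a minimal sketch of that idea - here the cache is keyed by num itself and -1 is used as the "in progress" sentinel; both of those are my own choices rather than anything from the original code:

static Map<BigInteger, Integer> cache = new ConcurrentHashMap<>();

static Integer minFinder(BigInteger num) {
    if (num.equals(BigInteger.ONE)) {
        return 0;
    }
    Integer cached = cache.get(num);
    if (cached != null) {
        if (cached == -1) {
            // we re-entered a computation that is still in progress: a loop
            throw new IllegalStateException("Cycle detected at " + num);
        }
        return cached;
    }
    cache.put(num, -1); // sentinel: "currently being computed"
    int result;
    if (num.mod(BigInteger.valueOf(2)).equals(BigInteger.ZERO)) {
        result = 1 + minFinder(num.divide(BigInteger.valueOf(2)));
    } else {
        result = 1 + Math.min(minFinder(num.subtract(BigInteger.ONE)),
                              minFinder(num.add(BigInteger.ONE)));
    }
    cache.put(num, result);
    return result;
}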
class Note {
    private String text;
    ...
    private int score = 0;
}

class Project {
    ...
    List<Note> notes;
    private int score = 0;
}
The project score is derived from the project's own properties plus the sum of the note scores.
First I update and replace all notes in the project, then I iterate again to sum the note scores.
project.notes(project.notes()
    .stream()
    .map(this::updateNote)
    .collect(Collectors.toList()));

project.score(project.notes()
    .stream()
    .mapToInt(n -> n.score())
    .sum());

private Note updateNote(Note note) {
    note.score(....);
    return note;
}
Somehow I feel this is not right. Is there an elegant solution to avoid looping twice?
Performing a side-effect-ful method in a map operation like this is suspect no matter how you slice this puzzle; it's always going to look a bit weird.
You're abusing map here: you map objects to themselves, but in doing so cause a side effect. Well, in for a penny, in for a pound, I guess - that is the suspect part, but the code works (it's just bad style). Note also that passing the collected list back to project.notes adds nothing; just project.notes().stream().map(this::updateNote).collect(Collectors.toList()), letting the collected list be lost to the ether, already 'works'. The only point of collecting is to force the stream to actually iterate (map doesn't cause iteration; it merely says: when you start iterating, map as-you-go).
So:
project.score(project.notes()
    .stream()
    .map(this::updateNote) // no-op stream-wise; just making the side effect happen
    .mapToInt(Note::score)
    .sum());
is all you need - but it's still a bit stinky. If, instead of updateNote, Note were immutable and there were a calculateScore method, you could do:
project.score(project.notes()
    .stream()
    .mapToInt(this::calculateScore)
    .sum());
Here, calculateScore doesn't change anything about a Note object; it merely calculates the score and returns it, without changing any fields.
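A rough sketch of what that method could look like - the Note accessor and the scoring rule here are made up, since the real ones aren't shown in the question:

// Hypothetical pure scoring method: derives the score from the note's
// contents and returns it, without touching any fields of Note.
private int calculateScore(Note note) {
    String text = note.text();
    return text == null ? 0 : text.length();
}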
EDIT: I forgot a 'stream' in stream, and added a clarification.
You may get rid of looping twice by accumulating the sum in an AtomicInteger, but this would result in replacing the method reference this::updateNote with a lambda:
AtomicInteger sum = new AtomicInteger(0);
project.notes(project.notes()
    .stream()
    .map(note -> {
        Note updated = updateNote(note);
        sum.addAndGet(updated.score());
        return updated;
    })
    .collect(Collectors.toList()));
project.score(sum.intValue());
Hi, I want to count how many times a String is found in an array of Strings, using streams.
What I have thought of so far is this:
Stream<String> stream = Arrays.stream(array);
int counter = (int) stream.filter(c -> c.contains("something")).count();
return counter;
The problem is that most of the time I get a NullPointerException, and I think it is because of .count() when nothing matches inside filter(c -> c.contains("something")).
I came to this conclusion because if I run it without .count(), like stream.filter(c -> c.contains("something")); without returning anything, it won't throw an exception. I'm not sure about it, but that's what I think.
Any ideas on how I can count the times a String appears in an array of Strings using streams?
null is a valid element of an array, so you have to be prepared to handle these. For example:
int counter = (int) stream.filter(c -> c != null && c.contains("something")).count();
The problem is that most of the time I get a NullPointerException and I think it is because of .count() ... I came to this conclusion because if I run it without .count() it won't throw an exception.
The reason you cannot reproduce the NullPointerException without calling count() is that streams are evaluated lazily, i.e. the pipeline is not executed until an eager (terminal) operation, one which triggers the processing of the pipeline, is invoked.
We can conclude that Arrays.stream(array) is not the culprit for the NullPointerException, because it would have blown up regardless of whether you called an eager operation on the stream or not: the parameter to Arrays.stream must be non-null, or it fails with that same error.
Thus we can conclude that the elements inside the array are the culprits in the code you've shown. But then you should ask yourself whether null elements are allowed in the first place. If so, filter them out before performing c.contains("something"); if not, debug at which point in your application nulls were added to the array when they should not have been. Find the bug rather than suppress it.
If nulls are allowed in the first place, then the solution is simple: filter the nulls out before calling .contains:
int counter = (int)stream.filter(Objects::nonNull)
.filter(c -> c.contains("something")) // or single filter with c -> c != null && c.contains("something") as pred
.count();
You have to filter out null values first. Do it either the way #pafauk answered or by filtering separately. That requires the null filter to be applied before the one you already use:
public static void main(String[] args) {
    List<String> chainedChars = new ArrayList<>();
    chainedChars.add("something new"); // match
    chainedChars.add("something else"); // match
    chainedChars.add("anything new");
    chainedChars.add("anything else");
    chainedChars.add("some things will never change");
    chainedChars.add("sometimes");
    chainedChars.add(null);
    chainedChars.add("some kind of thing");
    chainedChars.add("sumthin");
    chainedChars.add("I have something in mind"); // match
    chainedChars.add("handsome thing");

    long somethings = chainedChars.stream()
        .filter(java.util.Objects::nonNull)
        .filter(cc -> cc.contains("something"))
        .count();

    System.out.printf("Found %d somethings", somethings);
}
outputs
Found 3 somethings
while switching the filter lines will result in a NullPointerException.
In my Spring application, I have a Couchbase repository for a document type of QuoteOfTheDay. The document is very basic, just has an id field of type UUID, value field of type String and created date field of type Date.
In my service class, I have a method that returns a random quote of the day. Initially I simply tried the following, which returns an Optional<QuoteOfTheDay>, but it would seem that findAny() pretty much always returns the same element in the stream. There are only about 10 elements at the moment.
public Optional<QuoteOfTheDay> random() {
    return StreamSupport.stream(repository.findAll().spliterator(), false).findAny();
}
Since I wanted something more random, I implemented the following which just returns a QuoteOfTheDay.
public QuoteOfTheDay random() {
    int count = Long.valueOf(repository.count()).intValue();
    if (count > 0) {
        Random r = new Random();
        List<QuoteOfTheDay> quotes = StreamSupport.stream(repository.findAll().spliterator(), false)
            .collect(toList());
        return quotes.get(r.nextInt(count));
    } else {
        throw new IllegalStateException("No quotes found.");
    }
}
I'm just curious how the findAny() method of Stream actually works since it doesn't seem to be random.
Thanks.
The reason behind findAny() is to give a more flexible alternative to findFirst(). If you are not interested in getting a specific element, this gives the implementing stream more flexibility in case it is a parallel stream.
No effort will be made to randomize the element returned, it just doesn't give the same guarantees as findFirst(), and might therefore be faster.
This is what the Javadoc says on the subject:
The behavior of this operation is explicitly nondeterministic; it is free to select any element in the stream. This is to allow for maximal performance in parallel operations; the cost is that multiple invocations on the same source may not return the same result. (If a stable result is desired, use findFirst() instead.)
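You can see the difference on a parallel stream; here is a small illustration (which element findAny() picks is unspecified and may differ between runs):

List<Integer> numbers = IntStream.rangeClosed(1, 1_000)
        .boxed()
        .collect(Collectors.toList());

// findFirst() is stable: this always prints 1, even on a parallel stream.
System.out.println(numbers.parallelStream().findFirst().orElse(-1));

// findAny() is free to return any element; on a parallel stream the result
// often changes from run to run.
System.out.println(numbers.parallelStream().findAny().orElse(-1));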
Don’t collect into a List when all you want is a single item. Just pick one item from the stream. By picking the item via Stream operations you can even handle counts bigger than Integer.MAX_VALUE and don’t need the “interesting” way of hiding the fact that you are casting a long to an int (that Long.valueOf(repository.count()).intValue() thing).
public Optional<QuoteOfTheDay> random() {
    long count = repository.count();
    if (count == 0) return Optional.empty();
    Random r = new Random();
    long randomIndex = count <= Integer.MAX_VALUE ? r.nextInt((int) count) :
        r.longs(1, 0, count).findFirst().orElseThrow(AssertionError::new);
    return StreamSupport.stream(repository.findAll().spliterator(), false)
        .skip(randomIndex).findFirst();
}
I was curious whether, in Java, you could keep iterating a piece of code without using a for or while loop, and if so, what methods could be used to do this?
Look at recursion. A recursive function is a function which calls itself until a base case is reached. An example is the factorial function:
int fact(int n)
{
    int result;
    if (n == 1)
        return 1;
    result = fact(n - 1) * n;
    return result;
}
You could use the Java 8 Streams methods for iterating over the elements of a Collection. Among the methods you can use are filtering methods (get all the elements of a collection that satisfy some conditions), mapping methods (map a Collection of one type to a Collection of another type) and aggregation methods (like computing the sum of all the elements in a Collection, based on some integer member of the Element stored in the collection).
For example, Stream forEach:
List<Element> list = new ArrayList<Element>();
...
list.stream().forEach(element -> System.out.println(element));
Or you can do it without a Stream:
List<Element> list = new ArrayList<Element>();
...
list.forEach(element -> System.out.println(element));
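For instance, filtering, mapping and aggregation can be combined in a single pipeline (a small sketch using strings, since no concrete Element type is given):

List<String> words = Arrays.asList("stream", "filter", "map", "sum");

// total length of all words that start with "s"
int totalLength = words.stream()
        .filter(w -> w.startsWith("s"))
        .mapToInt(String::length)
        .sum();
System.out.println(totalLength); // 9 ("stream" + "sum")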
Another variant of recursion:
public class LoopException extends Exception {
    public LoopException(int i, int max) throws LoopException {
        System.out.println("Loop variable: " + i);
        if (i < max)
            throw new LoopException(i + 1, max);
    }
}
Of course this is just a bit of fun, don't ever do it for real.
Java does not have a usable goto statement (goto is a reserved keyword, but it has no function), so that way is a dead end.
But you could always make a piece of code endlessly iterate using recursion. Old factorial function seems to be the favorite, but since it is not an infinite loop, I will go for this simple function:
int blowMyStack(int a) {
    return blowMyStack(a + 1);
}
There will be many ways to do this using various features of the language. But it always falls to an underlying recursion.
In case you're referring to something like C's goto, the answer is no.
In other cases, you can use recursive functions.
I am looking for a data structure that operates similar to a hash table, but where the table has a size limit. When the number of items in the hash reaches the size limit, a culling function should be called to get rid of the least-retrieved key/value pairs in the table.
Here's some pseudocode of what I'm working on:
class MyClass {
    private Map<Integer, Integer> cache = new HashMap<Integer, Integer>();

    public int myFunc(int n) {
        if (cache.containsKey(n))
            return cache.get(n);
        int next = . . . ; // some complicated math. guaranteed next != n.
        int ret = 1 + myFunc(next);
        cache.put(n, ret);
        return ret;
    }
}
What happens is that there are some values of n for which myFunc() will be called lots of times, but many other values of n which will only be computed once. So the cache could fill up with millions of values that are never needed again. I'd like to have a way for the cache to automatically remove elements that are not frequently retrieved.
This feels like a problem that must be solved already, but I'm not sure what the data structure is that I would use to do it efficiently. Can anyone point me in the right direction?
Update: I knew this had to be an already-solved problem. It's called an LRU cache and is easy to make by extending the LinkedHashMap class. Here is the code that incorporates the solution:
class MyClass {
    private final static int SIZE_LIMIT = 1000;

    private Map<Integer, Integer> cache =
        new LinkedHashMap<Integer, Integer>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > SIZE_LIMIT;
            }
        };

    public int myFunc(int n) {
        if (cache.containsKey(n))
            return cache.get(n);
        int next = . . . ; // some complicated math. guaranteed next != n.
        int ret = 1 + myFunc(next);
        cache.put(n, ret);
        return ret;
    }
}
You are looking for an LRUList/Map. Check out LinkedHashMap:
The removeEldestEntry(Map.Entry) method may be overridden to impose a policy for removing stale mappings automatically when new mappings are added to the map.
Googling "LRU map" and "I'm feeling lucky" gives you this:
http://commons.apache.org/proper/commons-collections//javadocs/api-release/org/apache/commons/collections4/map/LRUMap.html
A Map implementation with a fixed maximum size which removes the least recently used entry if an entry is added when full.
Sounds pretty much spot on :)
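A brief usage sketch, assuming commons-collections4 is on the classpath (the class above, org.apache.commons.collections4.map.LRUMap):

// Keeps at most 1000 entries; adding to a full map evicts the least recently used one.
Map<Integer, Integer> cache = new LRUMap<>(1000);
cache.put(1, 42);
cache.get(1); // a get() counts as a use and refreshes the entry's position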
WeakHashMap will probably not do what you expect it to... read the documentation carefully and make sure you know exactly what you get from weak and strong references.
I would recommend you have a look at java.util.LinkedHashMap and use its removeEldestEntry method to maintain your cache. If your math is very resource intensive, you might want to move entries to the front whenever they are used to ensure that only unused entries fall to the end of the set.
The Adaptive Replacement Cache policy is designed to keep one-time requests from polluting your cache. This may be fancier than you're looking for, but it does directly address your "filling up with values that are never needed again".
Take a look at WeakHashMap
You probably want to implement a Least-Recently Used policy for your map. There's a simple way to do it on top of a LinkedHashMap:
http://www.roseindia.net/java/example/java/util/LRUCacheExample.shtml