I want to build an adaptive mesh refinement scheme in 3D.
The basic principle is the following:
I have a set of cells with unique cell IDs.
I test each cell to see if it needs to be refined.
If refinement is required, I create 8 new child cells and add them to the list of cells to check for refinement.
Otherwise, this is a leaf node and I add it to my list of leaf nodes.
I want to implement it using the ForkJoin framework and Java 8 streams. I read this article, but I don't know how to apply it to my case.
For now, what I came up with is this:
public class ForkJoinAttempt {
private final double[] cellIds;
public ForkJoinAttempt(double[] cellIds) {
this.cellIds = cellIds;
}
public void refineGrid() {
ForkJoinPool pool = ForkJoinPool.commonPool();
double[] result = pool.invoke(new RefineTask(100));
}
private class RefineTask extends RecursiveTask<double[]> {
final double cellId;
private RefineTask(double cellId) {
this.cellId = cellId;
}
@Override
protected double[] compute() {
return ForkJoinTask.invokeAll(createSubtasks())
.stream()
.map(ForkJoinTask::join)
.reduce(new double[0], new Concat());
}
}
private double[] refineCell(double cellId) {
double[] result;
if (checkCell()) {
result = new double[8];
for (int i = 0; i < 8; i++) {
result[i] = Math.random();
}
} else {
result = new double[1];
result[0] = cellId;
}
return result;
}
private Collection<RefineTask> createSubtasks() {
List<RefineTask> dividedTasks = new ArrayList<>();
for (int i = 0; i < cellIds.length; i++) {
dividedTasks.add(new RefineTask(cellIds[i]));
}
return dividedTasks;
}
private class Concat implements BinaryOperator<double[]> {
@Override
public double[] apply(double[] a, double[] b) {
int aLen = a.length;
int bLen = b.length;
@SuppressWarnings("unchecked")
double[] c = (double[]) Array.newInstance(a.getClass().getComponentType(), aLen + bLen);
System.arraycopy(a, 0, c, 0, aLen);
System.arraycopy(b, 0, c, aLen, bLen);
return c;
}
}
public boolean checkCell() {
return Math.random() < 0.5;
}
}
... and I'm stuck here.
This doesn't do much for now, because I never call the refineCell function.
I also might have a performance issue with all the double[] arrays I create, and merging them this way might not be the most efficient approach either.
But first things first: can anyone help me with implementing fork/join in this case?
The expected result of the algorithm is an array of leaf cell IDs (double[])
Edit 1:
Thanks to the comments, I came up with something that works a little better.
Some changes:
I went from arrays to lists. This is not good for the memory footprint, because I'm not able to use Java primitives. But it made the implementation simpler.
The cell IDs are now Long instead of Double.
Ids are not randomly chosen any more:
Root level cells have IDs 1, 2, 3 etc.;
Children of 1 have IDs 10, 11, 12, etc.;
Children of 2 have IDs 20, 21, 22, etc.;
You get the idea...
I refine all cells whose ID is lower than 100
This allows me, for the sake of this example, to check the results more easily.
Here is the new implementation:
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.*;
import java.util.function.BinaryOperator;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;
public class ForkJoinAttempt {
private static final int THRESHOLD = 2;
private List<Long> leafCellIds;
public void refineGrid(List<Long> cellsToProcess) {
leafCellIds = ForkJoinPool.commonPool().invoke(new RefineTask(cellsToProcess));
}
public List<Long> getLeafCellIds() {
return leafCellIds;
}
private class RefineTask extends RecursiveTask<List<Long>> {
private final CopyOnWriteArrayList<Long> cellsToProcess = new CopyOnWriteArrayList<>();
private RefineTask(List<Long> cellsToProcess) {
this.cellsToProcess.addAll(cellsToProcess);
}
@Override
protected List<Long> compute() {
if (cellsToProcess.size() > THRESHOLD) {
System.out.println("Fork/Join");
return ForkJoinTask.invokeAll(createSubTasks())
.stream()
.map(ForkJoinTask::join)
.reduce(new ArrayList<>(), new Concat());
} else {
System.out.println("Direct computation");
List<Long> leafCells = new ArrayList<>();
for (Long cell : cellsToProcess) {
Long result = refineCell(cell);
if (result != null) {
leafCells.add(result);
}
}
return leafCells;
}
}
private Collection<RefineTask> createSubTasks() {
List<RefineTask> dividedTasks = new ArrayList<>();
for (List<Long> list : split(cellsToProcess)) {
dividedTasks.add(new RefineTask(list));
}
return dividedTasks;
}
private Long refineCell(Long cellId) {
if (checkCell(cellId)) {
for (int i = 0; i < 8; i++) {
Long newCell = cellId * 10 + i;
cellsToProcess.add(newCell);
System.out.println("Adding child " + newCell + " to cell " + cellId);
}
return null;
} else {
System.out.println("Leaf node " + cellId);
return cellId;
}
}
private List<List<Long>> split(List<Long> list)
{
int[] index = {0, (list.size() + 1)/2, list.size()};
List<List<Long>> lists = IntStream.rangeClosed(0, 1)
.mapToObj(i -> list.subList(index[i], index[i + 1]))
.collect(Collectors.toList());
return lists;
}
}
private class Concat implements BinaryOperator<List<Long>> {
@Override
public List<Long> apply(List<Long> listOne, List<Long> listTwo) {
return Stream.concat(listOne.stream(), listTwo.stream())
.collect(Collectors.toList());
}
}
public boolean checkCell(Long cellId) {
return cellId < 100;
}
}
And the method testing it:
int initialSize = 4;
List<Long> cellIds = new ArrayList<>(initialSize);
for (int i = 0; i < initialSize; i++) {
cellIds.add(Long.valueOf(i + 1));
}
ForkJoinAttempt test = new ForkJoinAttempt();
test.refineGrid(cellIds);
List<Long> leafCellIds = test.getLeafCellIds();
System.out.println("Leaf nodes: " + leafCellIds.size());
for (Long node : leafCellIds) {
System.out.println(node);
}
The output confirms that it adds 8 children to each root cell. But it does not go further.
I know why, but I don't know how to solve it: even though the refineCell method adds the new cells to the list of cells to process, the createSubTasks method is not called again, so it never sees the cells I have added.
Edit 2:
To state the problem differently, what I'm looking for is a mechanism where a Queue of cells IDs is processed by some RecursiveTasks while others add to the Queue in parallel.
First, let’s start with the Stream based solution
public class Mesh {
public static long[] refineGrid(long[] cellsToProcess) {
return Arrays.stream(cellsToProcess).parallel().flatMap(Mesh::expand).toArray();
}
static LongStream expand(long d) {
return checkCell(d)? LongStream.of(d): generate(d).flatMap(Mesh::expand);
}
private static boolean checkCell(long cellId) {
return cellId > 100;
}
private static LongStream generate(long cellId) {
return LongStream.range(0, 8).map(j -> cellId * 10 + j);
}
}
While the current flatMap implementation has known issues with parallel processing that might apply when the mesh is too unbalanced, the performance for your actual task might be reasonable, so this simple solution is always worth a try before starting to implement something more complicated.
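As a quick sanity check, a hypothetical driver (using the same root cells 1 to 4 as the test in the question, and assuming java.util.Arrays is imported) might look like:
long[] leaves = Mesh.refineGrid(new long[] {1, 2, 3, 4});
System.out.println(leaves.length);           // number of leaf cells
System.out.println(Arrays.toString(leaves)); // every leaf ID is > 100 with this checkCell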
If you really need a custom implementation, e.g. if the workload is unbalanced and the Stream implementation can’t adapt well enough, you can do it like this:
public class MeshTask extends RecursiveTask<long[]> {
public static long[] refineGrid(long[] cellsToProcess) {
return new MeshTask(cellsToProcess, 0, cellsToProcess.length).compute();
}
private final long[] source;
private final int from, to;
private MeshTask(long[] src, int from, int to) {
source = src;
this.from = from;
this.to = to;
}
@Override
protected long[] compute() {
return compute(source, from, to);
}
private static long[] compute(long[] source, int from, int to) {
long[] result = new long[to - from];
ArrayDeque<MeshTask> next = new ArrayDeque<>();
while(getSurplusQueuedTaskCount()<3) {
int mid = (from+to)>>>1;
if(mid == from) break;
MeshTask task = new MeshTask(source, mid, to);
next.push(task);
task.fork();
to = mid;
}
int pos = 0;
for(; from < to; ) {
long value = source[from++];
if(checkCell(value)) result[pos++]=value;
else {
long[] array = generate(value);
array = compute(array, 0, array.length);
result = Arrays.copyOf(result, result.length+array.length-1);
System.arraycopy(array, 0, result, pos, array.length);
pos += array.length;
}
while(from == to && !next.isEmpty()) {
MeshTask task = next.pop();
if(task.tryUnfork()) {
to = task.to;
}
else {
long[] array = task.join();
int newLen = pos+to-from+array.length;
if(newLen != result.length)
result = Arrays.copyOf(result, newLen);
System.arraycopy(array, 0, result, pos, array.length);
pos += array.length;
}
}
}
return result;
}
static boolean checkCell(long cellId) {
return cellId > 1000;
}
static long[] generate(long cellId) {
long[] sub = new long[8];
for(int i = 0; i < sub.length; i++) sub[i] = cellId*10+i;
return sub;
}
}
This implementation calls the compute method of the root task directly to incorporate the caller thread into the computation. The compute method uses getSurplusQueuedTaskCount() to decide whether to split. As its documentation says, the idea is to always have a small surplus, e.g. 3. This ensures that the evaluation can adapt to unbalanced workloads, as idle threads can steal work from other tasks.
The splitting is not done by creating two sub-tasks and waiting for both. Instead, only one task is split off, representing the second half of the pending work, and the current task’s workload is adapted to reflect the first half.
Then, the remaining workload is processed locally. Afterwards, the last pushed subtask is popped and an attempt is made to unfork it. If unforking succeeds, the current workload’s range is extended to cover the subsequent task’s range too, and the local iteration continues.
That way, any surplus task that has not been stolen by another thread is processed in the simplest and most lightweight way, as if it was never forked.
If the task has been picked up by another thread, we have to wait for its completion now and merge the result array.
Note that when waiting for a subtask via join(), the underlying implementation will also check whether unforking and local evaluation are possible, to keep all worker threads busy. However, adjusting our loop variable and directly accumulating the results in our target array is still better than a nested compute invocation that would still need to merge the result arrays.
If a cell is not a leaf, the resulting nodes are processed recursively by the same logic. This again allows for adaptive local and concurrent evaluation, so the execution will adapt to unbalanced workloads, e.g. if a particular cell has a larger subtree or the evaluation of a particular cell takes much longer than others.
It must be emphasized that in all cases, a significant processing workload is needed to draw a benefit from parallel processing. If, as in the example, there is mostly just data copying, the benefit might be much smaller, non-existent, or, in the worst case, the parallel processing may perform worse than sequential processing.
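A corresponding driver for this variant (again hypothetical; note that checkCell here only treats IDs above 1000 as leaves) could be:
long[] leaves = MeshTask.refineGrid(new long[] {1, 2, 3, 4});
System.out.println(leaves.length);           // number of leaf cells
System.out.println(Arrays.toString(leaves)); // every leaf ID is > 1000 with this checkCell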
I'm trying to figure out the best way of finding/automating all the possible permutations for a certain scenario.
I have a program which takes in a set of numbers [X, Y, Z], and each number has a predefined uncertainty. Therefore, I want to run my program against [X, Y, Z], [X+e, Y, Z], [X-e, Y, Z], [X, Y+e, Z], etc. Right now I have built an object which contains all 27 possibilities, and I'm iterating through it in order to provide my program with a new set of inputs (I'll run my program 27 times with different sets of inputs).
As time goes on, I'll need to update my program to take in a bigger set of numbers. So I'm wondering whether there is a better way of calculating all the possible permutations my base set may have.
I'd rather know how to implement this myself instead of using any existing libraries (if there are any). I see this as a learning exercise. Thanks!
Instead of writing down the 3×3×3 = 27 sets of 3 numbers by hand, you can use nested loops. If you have 3 loops, one inside the other, each running 3 times, you get 27 outputs:
double[] numbers = new double[3];
double[] e = {-1e-6, 0, 1e-6};
for (double eX : e) {
for (double eY : e) {
for (double eZ : e) {
double[] newNumbers = {numbers[0] + eX, numbers[1] + eY, numbers[2] + eZ};
// Run your program using "newNumbers". Just as an example:
System.out.println(Arrays.toString(newNumbers));
}
}
}
As for
As time goes on, I'll need to update my program to take in a bigger set of numbers
If the size of the set is going to be small and fixed, you can just add more nested loops. If not, you are going to need more advanced techniques.
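For example, one such technique is recursion over the positions. The sketch below is illustrative only (class and method names are not from the original post); it builds the Cartesian product of the offsets for an arbitrary number of inputs:
import java.util.Arrays;

public class Perturbations {

    static final double[] OFFSETS = {-1e-6, 0, 1e-6};

    // Fills position `index` of `current` with every offset applied to
    // numbers[index], then recurses; emits one combination per full assignment.
    static void perturb(double[] numbers, double[] current, int index) {
        if (index == numbers.length) {
            System.out.println(Arrays.toString(current)); // run your program here instead
            return;
        }
        for (double e : OFFSETS) {
            current[index] = numbers[index] + e;
            perturb(numbers, current, index + 1);
        }
    }

    public static void main(String[] args) {
        double[] numbers = {1.0, 2.0, 3.0};
        perturb(numbers, new double[numbers.length], 0); // prints 3^3 = 27 combinations
    }
}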
Here is a permutation method I found some time ago. It prints the permutations from within the method itself. It only does single-dimension permutations, but you may be able to adapt it to your needs.
public static void generate(int n, int[] a) {
if (n == 1) {
System.out.println(Arrays.toString(a));
} else {
for (int i = 0; i < n - 1; i++) {
generate(n - 1, a);
if ((n & 1) == 0) {
swap(i, n - 1, a);
} else {
swap(0, n - 1, a);
}
}
generate(n - 1, a);
}
}
public static void swap(int a, int b, int[] array) {
int temp = array[a];
array[a] = array[b];
array[b] = temp;
}
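A hypothetical two-line driver for it:
int[] a = {1, 2, 3};
generate(a.length, a); // prints all 3! = 6 orderings of {1, 2, 3}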
I believe the best way to do this is to implement a Spliterator and wrap it in a Stream:
public interface Combinations<T> extends Stream<List<T>> {
public static <T> Stream<List<T>> of(Collection<T> collection) {
SpliteratorSupplier<T> supplier =
new SpliteratorSupplier<T>(collection);
return supplier.stream();
}
...
}
Which solves the general use-case:
Combinations.of(List.of(X, Y, Z)).forEach(t -> process(t));
Implementing the Spliterator is straightforward but tedious and I have written about it here. The key components are a DispatchSpliterator:
private Iterator<Supplier<Spliterator<T>>> spliterators = null;
private Spliterator<T> spliterator = Spliterators.emptySpliterator();
...
protected abstract Iterator<Supplier<Spliterator<T>>> spliterators();
...
@Override
public Spliterator<T> trySplit() {
if (spliterators == null) {
spliterators = Spliterators.iterator(spliterators());
}
return spliterators.hasNext() ? spliterators.next().get() : null;
}
@Override
public boolean tryAdvance(Consumer<? super T> consumer) {
boolean accepted = false;
while (! accepted) {
if (spliterator == null) {
spliterator = trySplit();
}
if (spliterator != null) {
accepted = spliterator.tryAdvance(consumer);
if (! accepted) {
spliterator = null;
}
} else {
break;
}
}
return accepted;
}
A Spliterator for each prefix:
private class ForPrefix extends DispatchSpliterator<List<T>> {
private final int size;
private final List<T> prefix;
private final List<T> remaining;
public ForPrefix(int size, List<T> prefix, List<T> remaining) {
super(binomial(remaining.size(), size),
SpliteratorSupplier.this.characteristics());
this.size = size;
this.prefix = requireNonNull(prefix);
this.remaining = requireNonNull(remaining);
}
@Override
protected Iterator<Supplier<Spliterator<List<T>>>> spliterators() {
List<Supplier<Spliterator<List<T>>>> list = new LinkedList<>();
if (prefix.size() < size) {
for (int i = 0, n = remaining.size(); i < n; i += 1) {
List<T> prefix = new LinkedList<>(this.prefix);
List<T> remaining = new LinkedList<>(this.remaining);
prefix.add(remaining.remove(i));
list.add(() -> new ForPrefix(size, prefix, remaining));
}
} else if (prefix.size() == size) {
list.add(() -> new ForCombination(prefix));
} else {
throw new IllegalStateException();
}
return list.iterator();
}
}
and one for each combination:
private class ForCombination extends DispatchSpliterator<List<T>> {
private final List<T> combination;
public ForCombination(List<T> combination) {
super(1, SpliteratorSupplier.this.characteristics());
this.combination = requireNonNull(combination);
}
@Override
protected Iterator<Supplier<Spliterator<List<T>>>> spliterators() {
Supplier<Spliterator<List<T>>> supplier =
() -> Collections.singleton(combination).spliterator();
return Collections.singleton(supplier).iterator();
}
}
I'm doing something that produces the right result. However, it is wrong from a design POV.
The point of the program is to list the result of all the powers of a number up to and including the user-defined limit.
I have a constructor which accepts the base and the exponent from the Scanner. Then a method, which utilises a for loop to calculate the power for each exponent.
Now, the problem is that I'm printing the result of each loop iteration directly from this method. This defeats the point of private variables, and of the method being void in the first place.
Therefore, I want to define a getter method which returns the result of each power. I can do this just fine for if/switch statements, but I don't know how to do the same for loops. If I assign the result to a variable within the loop and return that variable from the getter, it will only return the output of the final iteration.
Private implementation
package Chapter6Review;
public class Powers {
private int target;
private int power;
public Powers(int target, int power) {
this.target = target;
this.power = power;
}
public void calculatePower() {
for (int i = 0; i <= power; i++) {
System.out.println((int) Math.pow(target, i));
}
}
/*
public int getPower() {
return
}
*/
}
User interface
package Chapter6Review;
import java.util.Scanner;
public class PowersTester {
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
System.out.print("Enter your base: ");
int target = in.nextInt();
System.out.print("Enter your exponent: ");
int power = in.nextInt();
Powers tester = new Powers(target, power);
tester.calculatePower();
}
}
You can simply use a List:
public List<Integer> calculatePower() {
int p;
List<Integer> result = new ArrayList<Integer>();
for (int i = 0; i <= power; i++) {
p = (int) Math.pow(target, i);
result.add(p);
}
return result;
}
Then, in your main method, you can iterate over the list to print the powers like this:
Powers tester = new Powers(target, power);
List<Integer> result = tester.calculatePower();
for (int i = 0; i < result.size(); i++) {
System.out.println(result.get(i));
}
You could store each of the results in a List:
List<Powers> list = new ArrayList<>();
and when you call it add it as well
list.add(new Powers(target, power));
At the end you can iterate over the list like this:
for (Powers power : list) {
// your code
}
You might consider using streams as well
public List<Integer> calculatePower() {
return IntStream
.rangeClosed(0, power) // iterate from 0 till power inclusive
.mapToObj(i -> (int) Math.pow(target, i))
.collect(Collectors.toList()); // get result as list
}
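Assuming that method is added to the Powers class from the question, a hypothetical caller prints one power per line, just like the original loop:
new Powers(2, 5).calculatePower().forEach(System.out::println); // 1, 2, 4, 8, 16, 32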
Thanks for all the answers. Using a list seems to be a good choice.
Since I haven't covered lists yet, I resorted to this solution for now. But I don't like having code that can affect the solution in the main. Ideally, the loop should go in the private implementation.
Main
Powers tester = new Powers(target, power);
for (int i = 0; i <= power; i++) {
tester.calculatePower(i);
System.out.println(tester.getPower());
}
Private implementation
public void calculatePower(int iPower) {
result = (int) Math.pow(target, iPower);
}
public int getPower() {
return result;
}
I was experimenting with this question today, from Project Euler:
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
I thought about it and it can of course be done with for-loops, however I want to use Java 8 as it opens up new options.
However first of all, I do not know how to generate an IntStream that produces such elements, so I still ended up using normal for-loops:
public class Problem4 extends Problem<Integer> {
private final int digitsCount;
private int min;
private int max;
public Problem4(final int digitsCount) {
this.digitsCount = digitsCount;
}
@Override
public void run() {
List<Integer> list = new ArrayList<>();
min = (int)Math.pow(10, digitsCount - 1);
max = min * 10;
for (int i = min; i < max; i++) {
for (int j = min; j < max; j++) {
int sum = i * j;
if (isPalindrome(sum)) {
list.add(sum);
}
}
}
result = list.stream().mapToInt(i -> i).max().getAsInt();
}
private boolean isPalindrome(final int number) {
String numberString = String.valueOf(number);
String reversed = new StringBuilder(numberString).reverse().toString();
return (numberString.equals(reversed));
}
@Override
public String getName() {
return "Problem 4";
}
}
As you can see I might be a bit lazy, but really IntStream::max is a very nice method and I think it is better to use it than to write it yourself.
Here comes the issue though: I need to have a list now to be able to obtain the maximum in this manner, which means I need to store data where I really should not have to.
So, the question now, would it be possible to implement this in Java 8?
for (int i = min; i < max; i++) {
for (int j = min; j < max; j++) {
yield i * j;
}
}
And then, out of that method, create a PrimitiveIterator.OfInt (the unboxed version of Iterator<Integer>), or create an IntStream directly?
Then getting the answer with streamFromYield.filter(this::isPalindrome).max().getAsInt() would be really easy to implement.
Lastly, I know this question has been asked before; however, the last time was quite a while ago, and Java 8 is going to happen very soon, adding the big new concept Stream<T> and the new language construct called lambdas.
So making such code may be very different now than when people were making it for Java 6 or 7.
Well, I think we've gotten carried away using the Streams API from the "outside," using flatMap, optimizing the palindrome-finding algorithm, etc. See answers from Boris the Spider and assylias. However, we've sidestepped the original question of how to write a generator function using something like Python's yield statement. (I think the OP's nested-for example with yield was using Python.)
One of the problems with using flatMap is that parallel splitting can only occur on the outermost stream. The inner streams (returned from flatMap) are processed sequentially. We could try to make the inner streams also parallel, but they'd possibly compete with the outer ones. I suppose nested splitting could work, but I'm not too confident.
One approach is to use the Stream.generate or (like assylias' answer) the Stream.iterate functions. These create infinite streams, though, so an external limit must be supplied to terminate the stream.
It would be nice if we could create a finite but "flattened" stream so that the entire stream of values is subject to splitting. Unfortunately creating a stream is not nearly as convenient as Python's generator functions. It can be done without too much trouble, though. Here's an example that uses the StreamSupport and AbstractSpliterator classes:
class Generator extends Spliterators.AbstractIntSpliterator {
final int min;
final int max;
int i;
int j;
public Generator(int min, int max) {
super((max - min) * (max - min), 0);
this.min = min;
this.max = max;
i = min;
j = min;
}
public boolean tryAdvance(IntConsumer ic) {
if (i == max) {
return false;
}
ic.accept(i * j);
j++;
if (j == max) {
i++;
j = min;
}
return true;
}
}
public static void main(String[] args) {
Generator gen = new Generator(100, 1000);
System.out.println(
StreamSupport.intStream(gen, false)
.filter(i -> isPalindrome(i))
.max()
.getAsInt());
}
Instead of having the iteration variables be on the stack (as in the nested-for with yield approach) we have to make them fields of an object and have the tryAdvance increment them until the iteration is complete. Now, this is the simplest form of a spliterator and it doesn't necessarily parallelize well. With additional work one could implement the trySplit method to do better splitting, which in turn would enable better parallelism.
The forEachRemaining method could be overridden, and it would look almost like the nested-for-loop-with-yield example, calling the IntConsumer instead of yield. Unfortunately tryAdvance is abstract and therefore must be implemented, so it's still necessary to have the iteration variables be fields of an object.
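For illustration, such an override might look like this (a sketch building on the Generator fields above):
@Override
public void forEachRemaining(IntConsumer ic) {
    // Replay all remaining (i, j) pairs in bulk, without per-element tryAdvance bookkeeping.
    for (; i < max; i++, j = min) {
        for (; j < max; j++) {
            ic.accept(i * j);
        }
    }
}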
How about looking at it from another direction:
You want a Stream of [100,1000), and for each element of that Stream you want another Stream of that element multiplied by each of [100, 1000). This is what flatMap is for:
public static void main(final String[] args) throws Exception {
OptionalInt max = IntStream.range(100, 1000).
flatMap((i) -> IntStream.range(i, 1000).map((j) -> i * j)).
unordered().
parallel().
filter((i) -> {
String forward = Integer.toString(i);
String backward = new StringBuilder(forward).reverse().toString();
return forward.equals(backward);
}).
max();
System.out.println(max);
}
Not sure if getting a String and then the reverse is the most efficient way to detect palindromes, off the top of my head this would seem to be faster:
final String asString = Integer.toString(i);
for (int j = 0, k = asString.length() - 1; j < k; j++, k--) {
if (asString.charAt(j) != asString.charAt(k)) {
return false;
}
}
return true;
It gives the same answer but I haven't put it under any rigorous testing... Seems to be about 100ms faster on my machine.
Also not sure this problem is big enough for unordered().parallel() - removing that gives a little boost to speed too.
Was just trying to demonstrate the capabilities of the Stream API.
EDIT
As @Stuart points out in the comments, as multiplication is commutative, we only need IntStream.range(i, 1000) in the sub-stream. This is because once we check a x b we don't need to check b x a. I have updated the answer.
There always have been ways to emulate that overrated yield feature, even without Java 8. Basically it is about storing the state of an execution, i.e. the stack frame(s), which can be done by a thread. A very simple implementation could look like this:
import java.util.Iterator;
import java.util.NoSuchElementException;
public abstract class Yield<E> implements Iterable<E> {
protected interface Flow<T> { void yield(T item); }
private final class State implements Runnable, Iterator<E>, Flow<E> {
private E nextValue;
private boolean finished, value;
public synchronized boolean hasNext() {
while(!(value|finished)) try { wait(); } catch(InterruptedException ex){}
return value;
}
public synchronized E next() {
while(!(value|finished)) try { wait(); } catch(InterruptedException ex){}
if(!value) throw new NoSuchElementException();
final E next = nextValue;
value=false;
notify();
return next;
}
public void remove() { throw new UnsupportedOperationException(); }
public void run() {
try { produce(this); }
finally {
synchronized(this) {
finished=true;
notify();
}
}
}
public synchronized void yield(E next) {
while(value) try { wait(); } catch(InterruptedException ex){}
nextValue=next;
value=true;
notify();
}
}
protected abstract void produce(Flow<E> f);
public Iterator<E> iterator() {
final State state = new State();
new Thread(state).start();
return state;
}
}
Once you have such a helper class, the use case will look straight-forward:
// implement a logic the yield-style
Iterable<Integer> y=new Yield<Integer>() {
protected void produce(Flow<Integer> f) {
for (int i = min; i < max; i++) {
for (int j = min; j < max; j++) {
f.yield(i * j);
}
}
}
};
// use the Iterable, e.g. in a for-each loop
int maxPalindrome=0;
for(int i:y) if(isPalindrome(i) && i>maxPalindrome) maxPalindrome=i;
System.out.println(maxPalindrome);
The previous code didn’t use any Java 8 features. But it will allow using them without the need for any change:
// the Java 8 way
StreamSupport.stream(y.spliterator(), false).filter(i->isPalindrome(i))
.max(Integer::compare).ifPresent(System.out::println);
Note that the Yield support class above is not the most efficient implementation and it doesn’t handle the case if an iteration is not completed but the Iterator abandoned. But it shows that such a logic is indeed possible to implement in Java (while the other answers convincingly show that such a yield logic is not necessary to solve such a problem).
I'll give it a go. Version with a loop then with a stream. Although I start from the other end so it's easier because I can limit(1).
public class Problem0004 {
public static void main(String[] args) {
int maxNumber = 999 * 999;
//with a loop
for (int i = maxNumber; i > 0; i--) {
if (isPalindrome(i) && has3DigitsFactors(i)) {
System.out.println(i);
break;
}
}
//with a stream
IntStream.iterate(maxNumber, i -> i - 1)
.parallel()
.filter(i -> isPalindrome(i) && has3DigitsFactors(i))
.limit(1)
.forEach(System.out::println);
}
private static boolean isPalindrome(int n) {
StringBuilder numbers = new StringBuilder(String.valueOf(n));
return numbers.toString().equals(numbers.reverse().toString());
}
private static boolean has3DigitsFactors(int n) {
for (int i = 999; i > 0; i--) {
if (n % i == 0 && n / i < 1000) {
return true;
}
}
return false;
}
}
I have defined my own compare function for a priority queue; however, the compare function needs information from an array. The problem is that when the values of the array change, the queue's ordering does not reflect the new values. How do I deal with this?
Code example:
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Scanner;
public class Main {
public static final int INF = 100;
public static int[] F = new int[201];
public static void main(String[] args){
PriorityQueue<Integer> Q = new PriorityQueue<Integer>(201,
new Comparator<Integer>(){
public int compare(Integer a, Integer b){
if (F[a] > F[b]) return 1;
if (F[a] == F[b]) return 0;
return -1;
}
});
Arrays.fill(F, INF);
F[0] = 0; F[1] = 1; F[2] = 2;
for (int i = 0; i < 201; i ++) Q.add(i);
System.out.println(Q.peek()); // Prints 0, because F[0] is the smallest
F[0] = 10;
System.out.println(Q.peek()); // Still prints 0 ... OMG
}
}
So, essentially, you are changing your comparison criteria on the fly, and that's just not the functionality that priority queue contracts offer. Note that this might seem to work in some cases (e.g. a heap might sort some of the items when removing or inserting another item), but since you have no guarantees, it's just not a valid approach.
What you could do is, every time you change your array, take all the elements out and put them back in. This is of course very expensive (O(n log n)), so you should probably rework your design to avoid changing the array values at all.
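For example, building on the Q and F from the question (and using java.util.ArrayList, which the question does not yet import), the "take everything out and put it back" fix could be sketched as:
F[0] = 10;
List<Integer> snapshot = new ArrayList<>(Q); // copy the current elements
Q.clear();
Q.addAll(snapshot);           // re-inserting applies the comparator with the new F values
System.out.println(Q.peek()); // now prints 1, since F[1] is the smallest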
Your comparator is only getting called when you modify the queue (that is, when you add your items). After that, the queue has no idea something caused the order to change, which is why it remains the same.
It is quite confusing to have a comparator like this. If you have two values, A and B, and A>B at some point, everybody would expect A to stay bigger than B. I think your usage of a priority queue for this problem is wrong.
Use a custom implementation of PriorityQueue that applies the comparator on peek, not on add:
public class VolatilePriorityQueue <T> extends AbstractQueue <T>
{
private final Comparator <? super T> comparator;
private final List <T> elements = new ArrayList <T> ();
public VolatilePriorityQueue (Comparator <? super T> comparator)
{
this.comparator = comparator;
}
@Override
public boolean offer (T e)
{
return elements.add (e);
}
@Override
public T poll ()
{
if (elements.isEmpty ()) return null;
else return elements.remove (getMinimumIndex ());
}
@Override
public T peek ()
{
if (elements.isEmpty ()) return null;
else return elements.get (getMinimumIndex ());
}
@Override
public Iterator <T> iterator ()
{
return elements.iterator ();
}
@Override
public int size ()
{
return elements.size ();
}
private int getMinimumIndex ()
{
T e = elements.get (0);
int index = 0;
for (int count = elements.size (), i = 1; i < count; i++)
{
T ee = elements.get (i);
if (comparator.compare (e, ee) > 0)
{
e = ee;
index = i;
}
}
return index;
}
}
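For instance, a hypothetical use with the question's F-based comparator then behaves the way the asker expects:
VolatilePriorityQueue<Integer> q =
        new VolatilePriorityQueue<>((a, b) -> Integer.compare(F[a], F[b]));
for (int i = 0; i < 201; i++) q.offer(i);
System.out.println(q.peek()); // 0, while F[0] == 0 is the smallest
F[0] = 10;
System.out.println(q.peek()); // 1, because the comparator is evaluated on peek()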
I'm writing a Java program that searches for and outputs cycles in a graph. I am using an adjacency list for storing my graph, with the lists stored as LinkedLists. My program takes an input formatted with the first line as the number of nodes in the graph and each subsequent line 2 nodes that form an edge e.g.:
3
1 2
2 3
3 1
My problem is that when the inputs get very large (the large graph I am using has 10k nodes and I don't know how many edges; the file is 23 MB of just edges) I get a java.lang.StackOverflowError, but I don't get any errors with small inputs. I'm wondering if it would be better to use another data structure for my adjacency lists, or if there is some method I could use to avoid this error, as I'd rather not just have to change a setting on my local installation of Java (I have to be sure this will run on other computers whose settings I can't control). Below is my code: the Vertex class and then my main class. Thanks for any help you can give!
Vertex.java:
package algorithms311;
import java.util.*;
public class Vertex implements Comparable {
public int id;
public LinkedList adjVert = new LinkedList();
public String color = "white";
public int dTime;
public int fTime;
public int prev;
public Vertex(int idnum) {
id = idnum;
}
public int getId() {
return id;
}
public int compareTo(Object obj) {
Vertex vert = (Vertex) obj;
return id-vert.getId();
}
@Override public String toString() {
return "Vertex # " + id;
}
public void setColor(String newColor) {
color = newColor;
}
public String getColor() {
return color;
}
public void setDTime(int d) {
dTime = d;
}
public void setFTime(int f) {
fTime = f;
}
public int getDTime() {
return dTime;
}
public int getFTime() {
return fTime;
}
public void setPrev(int v) {
prev = v;
}
public int getPrev() {
return prev;
}
public LinkedList getAdjList() {
return adjVert;
}
public void addAdj(int a) { //adds a vertex id to this vertex's adj list
adjVert.add(a);
}
}
CS311.java:
package algorithms311;
import java.util.*;
import java.io.*;
public class CS311 {
public static final String GRAPH= "largegraph1";
public static int time = 0;
public static LinkedList[] DFS(Vertex[] v) {
LinkedList[] l = new LinkedList[2];
l[0] = new LinkedList();
l[1] = new LinkedList(); //initialize the array with blank lists, otherwise we get a nullpointerexception
for(int i = 0; i < v.length; i++) {
v[i].setColor("white");
v[i].setPrev(-1);
}
time = 0;
for(int i = 0; i < v.length; i++) {
if(v[i].getColor().equals("white")) {
l = DFSVisit(v, i, l);
}
}
return l;
}
public static LinkedList[] DFSVisit(Vertex[] v, int i, LinkedList[] l) { //params are a vertex of nodes and the node id you want to DFS from
LinkedList[] VOandBE = new LinkedList[2]; //two lists: visit orders and back edges
VOandBE[0] = l[0]; // l[0] is visit Order, a linked list of ints
VOandBE[1] = l[1]; // l[1] is back Edges, a linked list of arrays[2] of ints
VOandBE[0].add(v[i].getId());
v[i].setColor("gray"); //color[vertex i] <- GRAY
time++; //time <- time+1
v[i].setDTime(time); //d[vertex i] <- time
LinkedList adjList = v[i].getAdjList(); // adjList for the current vertex
for(int j = 0; j < adjList.size(); j++) { //for each v in adj[vertex i]
if(v[(Integer)adjList.get(j)].getColor().equals("gray") && v[i].getPrev() != v[(Integer)adjList.get(j)].getId()) { // if color[v] = gray and Predecessor[u] != v do
int[] edge = new int[2]; //pair of vertices
edge[0] = i; //from u
edge[1] = (Integer)adjList.get(j); //to v
VOandBE[1].add(edge);
}
if(v[(Integer)adjList.get(j)].getColor().equals("white")) { //do if color[v] = WHITE
v[(Integer)adjList.get(j)].setPrev(i); //then "pi"[v] <- vertex i
DFSVisit(v, (Integer)adjList.get(j), VOandBE); //DFS-Visit(v)
}
}
VOandBE[0].add(v[i].getId());
v[i].setColor("black");
time++;
v[i].setFTime(time);
return VOandBE;
}
public static void main(String[] args) {
try {
// --Read First Line of Input File
// --Find Number of Vertices
FileReader file1 = new FileReader("W:\\Documents\\NetBeansProjects\\algorithms311\\src\\algorithms311\\" + GRAPH);
BufferedReader bReaderNumEdges = new BufferedReader(file1);
String numVertS = bReaderNumEdges.readLine();
int numVert = Integer.parseInt(numVertS);
System.out.println(numVert + " vertices");
// --Make Vertices
Vertex vertex[] = new Vertex[numVert];
for(int k = 0; k <= numVert - 1; k++) {
vertex[k] = new Vertex(k);
}
// --Adj Lists
FileReader file2 = new FileReader("W:\\Documents\\NetBeansProjects\\algorithms311\\src\\algorithms311\\" + GRAPH);
BufferedReader bReaderEdges = new BufferedReader(file2);
bReaderEdges.readLine(); //skip first line, that's how many vertices there are
String edge;
while((edge = bReaderEdges.readLine()) != null) {
StringTokenizer ST = new StringTokenizer(edge);
int vArr[] = new int[2];
for(int j = 0; ST.hasMoreTokens(); j++) {
vArr[j] = Integer.parseInt(ST.nextToken());
}
vertex[vArr[0]-1].addAdj(vArr[1]-1);
vertex[vArr[1]-1].addAdj(vArr[0]-1);
}
for(int i = 0; i < vertex.length; i++) {
System.out.println(vertex[i] + ", adj nodes: " + vertex[i].getAdjList());
}
LinkedList[] l = new LinkedList[2];
l = DFS(vertex);
System.out.println("");
System.out.println("Visited Nodes: " + l[0]);
System.out.println("");
System.out.print("Back Edges: ");
for(int i = 0; i < l[1].size(); i++) {
int[] q = (int[])(l[1].get(i));
System.out.println("[" + q[0] + "," + q[1] + "] ");
}
for(int i = 0; i < l[1].size(); i++) { //iterate through the list of back edges
int[] q = (int[])(l[1].get(i)); // q = pair of vertices that make up a back edge
int u = q[0]; // edge (u,v)
int v = q[1];
LinkedList cycle = new LinkedList();
if(l[0].indexOf(u) < l[0].indexOf(v)) { //check if u is before v
for(int z = l[0].indexOf(u); z <= l[0].indexOf(v); z++) { //if it is, look for u first; from u to v
cycle.add(l[0].get(z));
}
}
else if(l[0].indexOf(v) < l[0].indexOf(u)) {
for(int z = l[0].indexOf(v); z <= l[0].indexOf(u); z++) { //if it is, look for u first; from u to v
cycle.add(l[0].get(z));
}
}
System.out.println("");
System.out.println("Cycle detected! : " + cycle);
if((cycle.size() & 1) != 0) {
System.out.println("Cycle is odd, graph is not 2-colorable!");
}
else {
System.out.println("Cycle is even, we're okay!");
}
}
}
catch (IOException e) {
System.out.println("AHHHH");
e.printStackTrace();
}
}
}
The issue is most likely the recursive calls in DFSVisit. If you don't want to go with the 'easy' answer of increasing Java's stack size when you call the JVM, you may want to consider rewriting DFSVisit to use an iterative algorithm instead of a recursive one. While Depth First Search is more easily defined recursively, there are iterative approaches to the algorithm that can be used.
For example: this blog post
The stack is a region in memory that is used for storing execution context and passing parameters. Every time your code invokes a method, a little bit of stack is used, and the stack pointer is increased to point to the next available location. When the method returns, the stack pointer is decreased and the portion of the stack is freed up.
If an application uses recursion heavily, the stack quickly becomes a bottleneck, because if there is no limit to the recursion depth, there is no limit to the amount of stack needed. So you have two options: increase the Java stack (-Xss JVM parameter, and this will only help until you hit the new limit) or change your algorithm so that the recursion depth is not as deep.
I am not sure if you were looking for a generic answer, but from a brief glance at your code it appears that your problem is recursion.
If you're sure your algorithm is correct and the depth of recursive calls you're making isn't accidental, then solutions without changing your algorithm are:
add to the JVM command line e.g. -Xss128m to set a 128 MB stack size (not a good solution in multi-threaded programs as it sets the default stack size for every thread not just the particular thread running your task);
run your task in its own thread, which you can initialise with a stack size specific to just that thread (and set the stack size within the program itself); see my example in the discussion of fixing StackOverflowError, but essentially the stack size is a parameter to the Thread() constructor;
don't use recursive calls at all; instead, mimic the recursive calls using an explicit Stack or Queue object (this arguably gives you a bit more control). A sketch of this last approach follows.
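A minimal sketch of that last option, independent of the question's Vertex class (it assumes a plain adjacency list int[][] adj indexed by vertex id, and uses java.util.ArrayDeque and java.util.Deque):
static void iterativeDfs(int start, int[][] adj, boolean[] visited) {
    Deque<Integer> stack = new ArrayDeque<>();
    stack.push(start);
    while (!stack.isEmpty()) {
        int u = stack.pop();
        if (visited[u]) continue;
        visited[u] = true;
        for (int w : adj[u]) {
            if (!visited[w]) stack.push(w); // the explicit stack grows on the heap, not the call stack
        }
    }
}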