I tried to parallelize merge sort using multi-threading. Here is my code (please forgive me if it is poorly implemented; I did not care about the space complexity of the program). I do get a sorted array. My question is: will this approach actually reduce the time taken to sort a large array? What modifications are needed to make it efficient, and is it of any use at all?
import java.io.IOException;
import java.util.Arrays;
import java.util.Random;
import java.util.Scanner;
public class Merge {
public static int[] inputArray;
public static int[] arr1;
public static int[] arr2;
public static int[] arr3;
public static int t1_status=0;
public static int t2_status=0;
public static void main(String[] args) throws IOException{
System.out.println("Enter the length of the array");
Scanner in =new Scanner(System.in);
int arraySize=in.nextInt();
inputArray = new int[arraySize];
Random rand=new Random();
for(int i=0;i<arraySize;i++)
{
inputArray[i]=rand.nextInt(100);
}
//dividing the original array into two subarrays
arr1=Arrays.copyOfRange(inputArray, 0, inputArray.length/2);
arr2=Arrays.copyOfRange(inputArray, (inputArray.length)/2,inputArray.length);
//printing the original array
System.out.print("The original array is array is ");
for(int h:inputArray)
{
System.out.println(h);
}
Thread t1=new Thread(new Runnable(){
public void run()
{
mergeSort(arr1);
System.out.println("t1 started");
}
});
Thread t2=new Thread(new Runnable(){
public void run()
{
mergeSort(arr2);
System.out.println("t2 started");
}
});
//starting threads
t1.start();
t2.start();
try {
t1.join();
t2.join();
}
catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
if(t1.isAlive())
{
t1_status=1;
}
if(t2.isAlive())
{
t2_status=1;
}
t1.stop();
t2.stop();
arr3=new int[inputArray.length];
merge(arr3,arr1,arr2); //merging arr1 and arr2; at this point both arr1 and arr2 are sorted
System.out.println("The sorted array is ");
for(int m:arr3)
{
System.out.print(m);
System.out.print(" ");
}
System.out.println(" ");
}
static void mergeSort(int[] A)
{
if (A.length > 1)
{
int q = A.length/2;
int[] leftArray = Arrays.copyOfRange(A, 0, q);
int[] rightArray = Arrays.copyOfRange(A,q,A.length);
mergeSort(leftArray);
mergeSort(rightArray);
merge(A,leftArray,rightArray);
}
}
//merge function
static void merge(int[] a, int[] l, int[] r) {
int totElem = l.length + r.length;
int i,li,ri;
i = li = ri = 0;
while ( i < totElem) {
if ((li < l.length) && (ri<r.length)) {
if (l[li] < r[ri]) {
a[i] = l[li];
i++;
li++;
}
else {
a[i] = r[ri];
i++;
ri++;
}
}
else {
if (li >= l.length) {
while (ri < r.length) {
a[i] = r[ri];
i++;
ri++;
}
}
if (ri >= r.length) {
while (li < l.length) {
a[i] = l[li];
li++;
i++;
}
}
}
}
if(t1_status==1){arr1=a;}
else if(t2_status==1){arr2=a;}
else{arr3=a;}
}
}
Yes, it can help, quite a bit, depending on how many cores you have and how big the array is. Spawning threads and coordinating work isn't free, so there is a sweet spot for how many parallel threads are actually useful.
I think you're doing too little, but this is very easy to overdo: since the process is CPU-bound, you want roughly one thread per core.
A fixed thread pool/executor is handy here; a sketch follows below.
Check out some example performance gains at CSE373: Data Structures and Algorithms/MergeSort.
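For instance, a rough sketch of the pool idea, reusing the question's static arr1/arr2/arr3 fields, its serial mergeSort(), and a plain merge(a, left, right) (i.e. the question's merge() without the status bookkeeping):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

static void sortWithPool() throws Exception {
    int cores = Runtime.getRuntime().availableProcessors();
    ExecutorService pool = Executors.newFixedThreadPool(cores);
    Future<?> left  = pool.submit(() -> mergeSort(arr1));   // sort front half
    Future<?> right = pool.submit(() -> mergeSort(arr2));   // sort back half
    left.get();    // wait for both tasks to finish
    right.get();
    pool.shutdown();
    arr3 = new int[arr1.length + arr2.length];
    merge(arr3, arr1, arr2);                                // final merge on the caller
}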
Sorting both halves in separate threads is a good start, but you can exploit parallelism in the merging step, too.
Also, you should recurse and do the subsorts in parallel as well... BUT keep track of the recursion depth, and stop creating new threads once you're already using all your cores. Spawning new threads for those tiny little leaf sorts is a huge overhead.
All together (a sketch follows these steps):
Split into 2 threads.
First, Thread 1 sorts the front half of the source array and Thread 2 sorts the back half. To sort the halves, they either call this function recursively or switch to a serial sort once 2^recursion_depth > number_of_cores; then
Thread 1 does a forward merge of both halves into the front half of the destination, and Thread 2 does a backward merge of both halves into the back half of the destination. They both stop when they reach the midpoint of the destination.
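A minimal sketch of the depth-limited recursion (an illustration only, not the exact scheme above: it spawns one extra thread per level and skips the forward/backward split merge). It is meant to sit inside the question's Merge class, so it reuses java.util.Arrays, the serial mergeSort(), and a plain merge(a, left, right) helper without the status bookkeeping:
static final int CORES = Runtime.getRuntime().availableProcessors();

// call: parallelMergeSort(inputArray, 0);
static void parallelMergeSort(int[] a, int depth) {
    if (a.length < 2) return;
    int mid = a.length / 2;
    int[] left  = Arrays.copyOfRange(a, 0, mid);
    int[] right = Arrays.copyOfRange(a, mid, a.length);
    if ((1 << depth) < CORES) {
        // still have idle cores: sort one half in a new thread, the other here
        Thread t = new Thread(() -> parallelMergeSort(left, depth + 1));
        t.start();
        parallelMergeSort(right, depth + 1);
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    } else {
        // all cores busy: finish serially
        mergeSort(left);
        mergeSort(right);
    }
    merge(a, left, right);
}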
See Arrays.parallelSort() and the Fork/Join framework javadoc.
Small enough arrays are sorted with the regular single-threaded sort, but once the array is large enough (8192 elements, I think), parallelSort will divide and conquer using the default ForkJoinPool (as many threads as there are cores).
Using only 2 threads will at best roughly double your speed, not more.
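By contrast, the library route is a one-liner (a sketch; it sorts a copy just to leave the question's inputArray untouched):
int[] copy = Arrays.copyOf(inputArray, inputArray.length);
Arrays.parallelSort(copy);   // splits the work across the common ForkJoinPool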
FYI, the launcher thread should do some work too, not just sit there joining. It can take over the job of the second thread, for instance; then you only join once.
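Something like this sketch, again reusing the question's fields and helpers (and again assuming a plain merge() without the status bookkeeping):
Thread t1 = new Thread(() -> mergeSort(arr1));
t1.start();
mergeSort(arr2);              // the launcher thread sorts the second half itself
try {
    t1.join();                // only one join needed
} catch (InterruptedException e) {
    e.printStackTrace();
}
arr3 = new int[inputArray.length];
merge(arr3, arr1, arr2);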
Related
I'm trying to write a kind of "merge sort" using threads in Java. Basically, a Monitor class gets an array and creates a Stack which contains arrays of length 1.
The Monitor has 3 synchronized functions:
getSubArray - pops a subarray from the stack; if the stack is empty it waits.
setSubArray - pushes a subarray onto the stack and notifies all threads.
canContinue - checks whether we have finished sorting the array.
Monitor Class:
public class MergeSort {
private Stack<int[]> subArrays;
private int arraySize;
public MergeSort(int[] array)
{
this.subArrays = new Stack<int[]>();
this.arraySize = array.length;
for (int i = 0; i < array.length; i++) {
int[] temp = new int[1];
temp[0] = array[i];
this.subArrays.push(temp);
}
}
public synchronized void setSubArray(int[] newSubArray,String threadName)
{
this.subArrays.push(newSubArray);
System.out.println(threadName+ " pushing new array to Stack, and notifying all threads");
notifyAll();
}
public synchronized boolean canContinue() {
if(this.subArrays.peek()==null)
try {
wait();
} catch(InterruptedException e) {
e.printStackTrace();
}
return this.subArrays.peek() != null && this.subArrays.peek().length < arraySize;
}
public synchronized int[] getSubArray(String threadName)
{
while(this.subArrays.isEmpty()) {
try {
System.out.println(threadName+ ": subarrays is empty.... going to wait");
wait();
System.out.println(threadName+ ": i got notified, continuing getting a subarray");
}
catch (InterruptedException e) {
e.printStackTrace();
}
}
return subArrays.pop();
}
}
Thread Class
public class Sorter extends Thread {
private MergeSort arrayToSort;
private String threadName;
private int[] subArray1;
private int[] subArray2;
public Sorter(MergeSort arrayToSort,String name)
{
this.arrayToSort=arrayToSort;
this.threadName = name;
}
public void run() {
super.run();
while(this.arrayToSort.canContinue())
{
this.subArray1 = this.arrayToSort.getSubArray(this.threadName);
System.out.println(this.threadName+": subArray1 was set successfully");
this.subArray2 = this.arrayToSort.getSubArray(this.threadName);
System.out.println(this.threadName+": Got two sub arrays:\n"+ "\tSubArray1: "+ Arrays.toString(subArray1)+
"\n\tSubArray2: "+ Arrays.toString(subArray2));
int[] subSortedArray=mergeSubArraySorted(this.subArray1,this.subArray2);
System.out.println(this.threadName+ ": Merged two sub arrays: "+ Arrays.toString(subSortedArray));
this.arrayToSort.setSubArray(subSortedArray,this.threadName);
}
System.out.println(this.threadName+": Oh wow, we finished sorting the array :D");
}
public int[] mergeSubArraySorted(int[] subArray1, int[] subArray2)
{
int subArraySortedLength = subArray1.length +subArray2.length;
int[] subArraySorted = new int[subArraySortedLength];
for (int i = 0; i < subArraySorted.length; i++) {
if(i < subArray1.length){
subArraySorted[i] = subArray1[i];
}
else{
subArraySorted[i] = subArray2[i - subArray1.length];
}
}
Arrays.sort(subArraySorted);
return subArraySorted;
}
}
Main
public class Main {
public static void main(String[] args) {
int[] arrayToSort = {4,5,1};
MergeSort merge = new MergeSort(arrayToSort);
Sorter sort1 = new Sorter(merge,"sort1");
Sorter sort2 = new Sorter(merge,"sort2");
sort1.start();
sort2.start();
}
}
The issue I see is that while the 2 threads are running, thread1 is waiting in getSubArray while thread2 does its work and then pushes the sorted result; thread2 then goes back to getSubArray and "steals" the array from thread1. As you can see, in (row 5) Sort1 is in getSubArray waiting for Sort2 to push to the stack; in (rows 6, 7) Sort2 pushes to the stack but then continues and goes straight back into getSubArray, which is synchronized.
the output:
1. sort2: subArray1 was set successfully
2. sort2: Got two sub arrays:
SubArray1: [5]
SubArray2: [4]
3. sort1: subArray1 was set successfully
4. sort2: Merged two sub arrays: [4, 5]
5. sort1: subarrays is empty.... going to wait
6. sort2 pushing new array to Stack, and notifying all threads
7. sort2: subArray1 was set successfully
8. sort1: i got notified, continuing getting a subarray
9. sort1: subarrays is empty.... going to wait
10. sort2: subarrays is empty.... going to wait
I'm playing around with building an ArrayList class that is made thread-safe in a very clumsy way: by just slapping the synchronized keyword on all methods.
import java.util.stream.*;
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class LongArrayListUnsafe {
public static void main(String[] args) {
LongArrayList dal1 = LongArrayList.withElements();
ExecutorService executorService = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
for (int i=0; i<1000; i++) {
executorService.execute(new Runnable() {
public void run() {
for (int i=0; i<10; i++)
dal1.add(i);
}
});}
System.out.println("Using toString(): " + dal1);
for (int i=0; i<dal1.size(); i++)
System.out.println(dal1.get(i));
System.out.println(dal1.size());} }
class LongArrayList {
private long[] items;
private int size;
public LongArrayList() {
reset();
}
synchronized public static LongArrayList withElements(long... initialValues){
LongArrayList list = new LongArrayList();
for (long l : initialValues) list.add( l );
return list;
}
// reset me to initial
synchronized public void reset(){
items = new long[2];
size = 0;
}
// Number of items in the double list
synchronized public int size() {
return size;
}
// Return item number i
synchronized public long get(int i) {
if (0 <= i && i < size)
return items[i];
else
throw new IndexOutOfBoundsException(String.valueOf(i));
}
// Replace item number i, if any, with x
synchronized public long set(int i, long x) {
if (0 <= i && i < size) {
long old = items[i];
items[i] = x;
return old;
} else
throw new IndexOutOfBoundsException(String.valueOf(i));}
// Add item x to end of list
synchronized public LongArrayList add(long x) {
if (size == items.length) {
long[] newItems = new long[items.length * 2];
for (int i=0; i<items.length; i++)
newItems[i] = items[i];
items = newItems;
}
items[size] = x;
size++;
return this;
}
synchronized public String toString() {
return Arrays.stream(items, 0,size)
.mapToObj( Long::toString )
.collect(Collectors.joining(", ", "[", "]"));
}
}
The relevant thing I'm doing is adding a bunch of elements to the list from some tasks. The issue is that when I increase the number of threads that I pass to the fixed thread pool, my code runs in the same time as when I pass only one thread, maybe even slower.
I have three theories on why this is:
It is because of thread overhead, and the tasks I am creating are simply too small; I need to make them bigger before it pays off to use more threads.
It has to do with lock contention: because my class is so clumsily thread-safe, the threads are competing for the lock and somehow slowing everything down.
I'm making a completely obvious mistake in using the thread pool executor.
It is not only that your task is too simple. The key issue is that you marked the add method as synchronized, which means that only a single thread is allowed to be inside that method at a time. No matter how many executors you use, at any single point in time there is only one thread executing this method while the others have to wait. Even if you make the task more complex, that won't change. You need both a more complex task and more fine-grained synchronization of it.
As for lock contention: yes, see above, and of course acquiring and releasing locks costs time.
To answer the questions in the comments:
synchronized synchronizes on the object on which you invoke the method (i.e., dal1, which is shared by all your threads); a sketch of the equivalence is below.
yes, the contention is fairly obvious. You said yourself "just slapping". Nevertheless, for the code you have I would call it adequate. The operation that takes the longest is resizing and copying the array, and during that time you certainly do not want any other thread to modify your array.
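On the first point, a synchronized instance method is just shorthand for a synchronized (this) block, so with a single shared dal1 every task queues on the same monitor. The question's add(), for example, is equivalent to:
public LongArrayList add(long x) {
    synchronized (this) {              // the synchronized-method form locks this same monitor
        if (size == items.length) {
            long[] newItems = new long[items.length * 2];
            for (int i = 0; i < items.length; i++)
                newItems[i] = items[i];
            items = newItems;
        }
        items[size] = x;
        size++;
        return this;
    }
}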
I would like to write a program that counts prime numbers using the Sieve of Eratosthenes. I want to use a semaphore for communication between the threads that do the calculations on the table of numbers.
So far I have written the code below.
public static void main( String[] args ) throws InterruptedException {
System.out.println("Podaj gorny zakres\n");
Scanner scanner = new Scanner(System.in);
Erastotenes erastotenes = new Erastotenes(Integer.parseInt(scanner.nextLine()));
erastotenes.initializeTable();
long start = System.nanoTime();
List<SingleProcess.MyThread> list = new ArrayList<>();
List<Integer> numbers = Dollar.$(2,erastotenes.getMaximumNumber()+1).toList();
for(int i=0;i<2;i++)
{
list.add(new SingleProcess.MyThread(erastotenes,numbers.subList((numbers.size()/2)*i,(numbers.size()/2)*i+numbers.size()/2)));
list.get(list.size()-1).start();
list.get(list.size()-1).join();
}
System.out.println(System.nanoTime() - start);
//System.out.println("Liczba elementów: "+erastotenes.countPrimeElements());
}
Erastotenes class.
public class Erastotenes {
private int upperRange;
private int maximumNumber;
private int table[];
public Erastotenes(int upperRange) {
this.upperRange = upperRange;
this.maximumNumber = (int)(Math.floor(Math.sqrt(upperRange)));
this.table = new int[upperRange+1];
}
public int getMaximumNumber() {
return maximumNumber;
}
public int getUpperRange() {
return upperRange;
}
public void initializeTable()
{
for(int i=1;i<=upperRange;i++) {
table[i] = i;
}
}
public void makeSelectionOfGivenNumber(int number)
{
if (table[number] != 0) {
int multiple;
multiple = number+number;
while (multiple<=upperRange) {
table[multiple] = 0;
multiple += number;
}
}
}
public List<Integer> getList()
{
List<Integer> list = Ints.asList(table);
return list.stream().filter(item->item.intValue()!=0 && item.intValue()!=1).collect(Collectors.toList());
}
}
The class describing single Thread to make calculations with static Semaphore looks like this.
public class SingleProcess {
static Semaphore semaphore = new Semaphore(1);
static class MyThread extends Thread {
Erastotenes erastotenes;
List<Integer> numbers;
MyThread(Erastotenes erastotenes,List<Integer> numbers) {
this.erastotenes = erastotenes;
this.numbers=numbers;
}
public void run() {
for(int number:numbers) {
try {
semaphore.acquire();
//1System.out.println(number + " : got the permit!");
erastotenes.makeSelectionOfGivenNumber(number);
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
semaphore.release();
}
}
}
}
}
I thought that splitting the table of numbers from 2 up to the maximum (the square root, as in the sieve algorithm) in half between these two threads would speed up the calculation, but with upperRange set to 100000000 the difference between the parallel and sequential versions is not big. How else can I approach parallelizing the Sieve of Eratosthenes?
I think your main problem is this:
for(int i=0;i<2;i++)
{
list.add(new SingleProcess.MyThread(erastotenes,numbers.subList((numbers.size()/2)*i,(numbers.size()/2)*i+numbers.size()/2)));
list.get(list.size()-1).start();
list.get(list.size()-1).join();
}
You start a thread and then immediately wait for it to finish; that kills the parallelism entirely. Start all the threads first and wait for them at the end:
for(int i=0;i<2;i++)
{
list.add(new SingleProcess.MyThread(erastotenes,numbers.subList((numbers.size()/2)*i,(numbers.size()/2)*i+numbers.size()/2)));
list.get(list.size()-1).start();
}
for (Thread t : list) {
t.join();
}
But there's also a problem with your semaphore. Each thread blocks all other threads from doing anything while it is working on a number; that means that, again, all parallelism is gone.
You can do away with the semaphore altogether, in my opinion; there's not really any danger in setting the same index to 0 several times, which is all that happens in this "critical section". It isn't critical at all, because nobody reads the array values in question before all threads are finished. A sketch of run() without the semaphore follows.
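A minimal sketch of MyThread.run() with the semaphore dropped, reusing the question's Erastotenes class:
public void run() {
    // Concurrent writes only ever set table entries to 0, so no lock is needed here.
    for (int number : numbers) {
        erastotenes.makeSelectionOfGivenNumber(number);
    }
}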
I'm developing a circular buffer with two threads: a Consumer and a Producer.
I'm using active waiting with Thread.yield.
I know that it is possible to do this with semaphores, but I wanted the buffer without semaphores.
Both threads share a variable: bufferCircular.
While the buffer is not full of useful data, the producer writes data at position p of the array, and while there is useful data left, the consumer reads data at position c of the array. The variable nElem of BufferCircular is the number of values that haven't been read yet.
The program works fine about 9 out of 10 runs. Sometimes, though, it gets stuck in an infinite loop before showing the last element on screen (number 500 of the for loop), or just doesn't show any element at all.
I think it is probably a livelock, but I can't find the mistake.
Shared Variable:
public class BufferCircular {
volatile int[] array;
volatile int p;
volatile int c;
volatile int nElem;
public BufferCircular(int[] array) {
this.array = array;
this.p = 0;
this.c = 0;
this.nElem = 0;
}
public void writeData (int data) {
this.array[p] = data;
this.p = (p + 1) % array.length;
this.nElem++;
}
public int readData() {
int data = array[c];
this.c = (c + 1) % array.length;
this.nElem--;
return data;
}
}
Producer Thread:
public class Producer extends Thread {
BufferCircular buffer;
int bufferTam;
int contData;
public Producer(BufferCircular buff) {
this.buffer = buff;
this.bufferTam = buffer.array.length;
this.contData = 0;
}
public void produceData() {
this.contData++;
this.buffer.writeData(contData);
}
public void run() {
for (int i = 0; i < 500; i++) {
while (this.buffer.nElem == this.bufferTam) {
Thread.yield();
}
this.produceData();
}
}
}
Consumer Thread:
public class Consumer extends Thread {
BufferCircular buffer;
int cont;
public Consumer(BufferCircular buff) {
this.buffer = buff;
this.cont = 0;
}
public void consumeData() {
int data = buffer.readData();
cont++;
System.out.println("data " + cont + ": " + data);
}
public void run() {
for (int i = 0; i < 500; i++) {
while (this.buffer.nElem == 0) {
Thread.yield();
}
this.consumeData();
}
}
}
Main:
public class Main {
public static void main(String[] args) {
Random ran = new Random();
int tamArray = ran.nextInt(21) + 1;
int[] array = new int[tamArray];
BufferCircular buffer = new BufferCircular(array);
Producer producer = new Producer (buffer);
Consumer consumer = new Consumer (buffer);
producer.start();
consumer.start();
try {
producer.join();
consumer.join();
} catch (InterruptedException e) {
System.err.println("Error with Threads");
e.printStackTrace();
}
}
}
Any help will be welcome.
Your problem here is that your BufferCircular methods are prone to race conditions. Take writeData(), for example. It executes in 3 steps, some of which are themselves not atomic:
this.array[p] = data; // 1
this.p = (p + 1) % array.length; // 2 not atomic
this.nElem++; // 3 not atomic
Suppose 2 threads entered writeData() at the same time. At step 1 they both see the same p value and both write array[p], so the data the first thread wrote is lost, because the second thread wrote to the same index right after. Then they execute step 2, and the result is unpredictable, since p may end up incremented by 1 or by 2 (p = (p + 1) % array.length consists of 3 operations, between which the threads can interleave). Then step 3: the ++ operator is also not atomic; it uses several operations behind the scenes (see the sketch below), so nElem may likewise be incremented by 1 or by 2.
So the result is completely unpredictable, which leads to the erratic behaviour of your program.
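For illustration, nElem++ behaves roughly like this read-modify-write sequence, and another thread can run between any two of these steps:
int tmp = nElem;    // read the current value
tmp = tmp + 1;      // compute the new value
nElem = tmp;        // write it back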
The simplest solution is to make readData() and writeData() methods serialized. For this, declare them synchronized:
public synchronized void writeData (int data) { //...
public synchronized int readData() { //...
If you have only one producer thread and one consumer thread, the race condition occurs on the operations involving nElem, since both threads modify it. The solution there is to use an AtomicInteger instead of an int:
final AtomicInteger nElem = new AtomicInteger();
and use its incrementAndGet() and decrementAndGet() methods; a sketch follows.
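A sketch of how writeData()/readData() would change (requires import java.util.concurrent.atomic.AtomicInteger; the busy-wait checks in Producer and Consumer would then compare against nElem.get()):
final AtomicInteger nElem = new AtomicInteger();

public void writeData(int data) {
    this.array[p] = data;
    this.p = (p + 1) % array.length;
    nElem.incrementAndGet();    // atomic replacement for nElem++
}

public int readData() {
    int data = array[c];
    this.c = (c + 1) % array.length;
    nElem.decrementAndGet();    // atomic replacement for nElem--
    return data;
}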
I was experimenting with this question today, from Project Euler:
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
I thought about it, and of course it can be done with for loops; however, I want to use Java 8 as it opens up new options.
First of all, though, I do not know how to generate an IntStream that produces such elements, so I still ended up using normal for loops:
public class Problem4 extends Problem<Integer> {
private final int digitsCount;
private int min;
private int max;
public Problem4(final int digitsCount) {
this.digitsCount = digitsCount;
}
@Override
public void run() {
List<Integer> list = new ArrayList<>();
min = (int)Math.pow(10, digitsCount - 1);
max = min * 10;
for (int i = min; i < max; i++) {
for (int j = min; j < max; j++) {
int sum = i * j;
if (isPalindrome(sum)) {
list.add(sum);
}
}
}
result = list.stream().mapToInt(i -> i).max().getAsInt();
}
private boolean isPalindrome(final int number) {
String numberString = String.valueOf(number);
String reversed = new StringBuilder(numberString).reverse().toString();
return (numberString.equals(reversed));
}
@Override
public String getName() {
return "Problem 4";
}
}
As you can see I might be a bit lazy, but really, IntStream::max is a very nice method and I think it is better to use it than to write that logic yourself.
Here comes the issue though: I need a list to be able to obtain the maximum in this manner, which means I need to store data where I really should not have to.
So, the question: would it be possible to implement something like this in Java 8?
for (int i = min; i < max; i++) {
for (int j = min; j < max; j++) {
yield i * j;
}
}
And then, out of that method, create a PrimitiveIterator.OfInt (the unboxed version of Iterator<Integer>), or create an IntStream directly?
Then getting the answer with streamFromYield.filter(this::isPalindrome).max().getAsInt() would be really easy to implement.
Lastly, I know this question has been asked before, but the last time was quite a while ago, and Java 8 is about to arrive, with the big new Stream<T> concept and the new language construct called lambdas.
So such code may look very different now than when people wrote it for Java 6 or 7.
Well, I think we've gotten carried away using the Streams API from the "outside," using flatMap, optimizing the palindrome-finding algorithm, etc. See answers from Boris the Spider and assylias. However, we've sidestepped the original question of how to write a generator function using something like Python's yield statement. (I think the OP's nested-for example with yield was using Python.)
One of the problems with using flatMap is that parallel splitting can only occur on the outermost stream. The inner streams (returned from flatMap) are processed sequentially. We could try to make the inner streams also parallel, but they'd possibly compete with the outer ones. I suppose nested splitting could work, but I'm not too confident.
One approach is to use the Stream.generate or (like assylias' answer) the Stream.iterate functions. These create infinite streams, though, so an external limit must be supplied to terminate the stream.
It would be nice if we could create a finite but "flattened" stream so that the entire stream of values is subject to splitting. Unfortunately creating a stream is not nearly as convenient as Python's generator functions. It can be done without too much trouble, though. Here's an example that uses the StreamSupport and AbstractSpliterator classes:
class Generator extends Spliterators.AbstractIntSpliterator {
final int min;
final int max;
int i;
int j;
public Generator(int min, int max) {
super((max - min) * (max - min), 0);
this.min = min;
this.max = max;
i = min;
j = min;
}
public boolean tryAdvance(IntConsumer ic) {
if (i == max) {
return false;
}
ic.accept(i * j);
j++;
if (j == max) {
i++;
j = min;
}
return true;
}
}
public static void main(String[] args) {
Generator gen = new Generator(100, 1000);
System.out.println(
StreamSupport.intStream(gen, false)
.filter(i -> isPalindrome(i))
.max()
.getAsInt());
}
Instead of having the iteration variables be on the stack (as in the nested-for with yield approach) we have to make them fields of an object and have the tryAdvance increment them until the iteration is complete. Now, this is the simplest form of a spliterator and it doesn't necessarily parallelize well. With additional work one could implement the trySplit method to do better splitting, which in turn would enable better parallelism.
The forEachRemaining method could be overridden, and it would look almost like the nested-for-loop-with-yield example, calling the IntConsumer instead of yield (a sketch is below). Unfortunately tryAdvance is abstract and must be implemented, so it's still necessary to have the iteration variables be fields of an object.
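A sketch of what that override could look like for the Generator above, pushing every remaining product straight to the consumer much as the nested loops would yield them:
@Override
public void forEachRemaining(IntConsumer ic) {
    while (i < max) {          // resume from wherever tryAdvance left off
        ic.accept(i * j);
        j++;
        if (j == max) {
            i++;
            j = min;
        }
    }
}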
How about looking at it from another direction:
You want a Stream of [100,1000), and for each element of that Stream you want another Stream of that element multiplied by each of [100, 1000). This is what flatMap is for:
public static void main(final String[] args) throws Exception {
OptionalInt max = IntStream.range(100, 1000).
flatMap((i) -> IntStream.range(i, 1000).map((j) -> i * j)).
unordered().
parallel().
filter((i) -> {
String forward = Integer.toString(i);
String backward = new StringBuilder(forward).reverse().toString();
return forward.equals(backward);
}).
max();
System.out.println(max);
}
Not sure if building a String and then reversing it is the most efficient way to detect palindromes; off the top of my head, this would seem to be faster:
final String asString = Integer.toString(i);
for (int j = 0, k = asString.length() - 1; j < k; j++, k--) {
if (asString.charAt(j) != asString.charAt(k)) {
return false;
}
}
return true;
It gives the same answer, but I haven't put it through any rigorous testing... It seems to be about 100ms faster on my machine.
Also, I'm not sure this problem is big enough for unordered().parallel(); removing that gives a little speed boost too.
I was just trying to demonstrate the capabilities of the Stream API.
EDIT
As @Stuart points out in the comments, since multiplication is commutative, we only need IntStream.range(i, 1000) in the sub-stream. This is because once we check a x b we don't need to check b x a. I have updated the answer.
There have always been ways to emulate that overrated yield feature, even without Java 8. Basically, it is about storing the state of an execution, i.e. the stack frame(s), which can be done by a thread. A very simple implementation could look like this:
import java.util.Iterator;
import java.util.NoSuchElementException;
public abstract class Yield<E> implements Iterable<E> {
protected interface Flow<T> { void yield(T item); }
private final class State implements Runnable, Iterator<E>, Flow<E> {
private E nextValue;
private boolean finished, value;
public synchronized boolean hasNext() {
while(!(value|finished)) try { wait(); } catch(InterruptedException ex){}
return value;
}
public synchronized E next() {
while(!(value|finished)) try { wait(); } catch(InterruptedException ex){}
if(!value) throw new NoSuchElementException();
final E next = nextValue;
value=false;
notify();
return next;
}
public void remove() { throw new UnsupportedOperationException(); }
public void run() {
try { produce(this); }
finally {
synchronized(this) {
finished=true;
notify();
}
}
}
public synchronized void yield(E next) {
while(value) try { wait(); } catch(InterruptedException ex){}
nextValue=next;
value=true;
notify();
}
}
protected abstract void produce(Flow<E> f);
public Iterator<E> iterator() {
final State state = new State();
new Thread(state).start();
return state;
}
}
Once you have such a helper class, the use case will look straight-forward:
// implement a logic the yield-style
Iterable<Integer> y=new Yield<Integer>() {
protected void produce(Flow<Integer> f) {
for (int i = min; i < max; i++) {
for (int j = min; j < max; j++) {
f.yield(i * j);
}
}
}
};
// use the Iterable, e.g. in a for-each loop
int maxPalindrome=0;
for(int i:y) if(isPalindrome(i) && i>maxPalindrome) maxPalindrome=i;
System.out.println(maxPalindrome);
The previous code didn’t use any Java 8 features. But it will allow using them without the need for any change:
// the Java 8 way
StreamSupport.stream(y.spliterator(), false).filter(i->isPalindrome(i))
.max(Integer::compare).ifPresent(System.out::println);
Note that the Yield support class above is not the most efficient implementation, and it doesn't handle the case where an iteration is not completed but the Iterator is abandoned. But it shows that such logic is indeed possible to implement in Java (while the other answers convincingly show that such yield logic is not necessary to solve this particular problem).
I'll give it a go: a version with a loop, then one with a stream. I start from the other end, which makes it easier because I can limit(1).
public class Problem0004 {
public static void main(String[] args) {
int maxNumber = 999 * 999;
//with a loop
for (int i = maxNumber; i > 0; i--) {
if (isPalindrome(i) && has3DigitsFactors(i)) {
System.out.println(i);
break;
}
}
//with a stream
IntStream.iterate(maxNumber, i -> i - 1)
.parallel()
.filter(i -> isPalindrome(i) && has3DigitsFactors(i))
.limit(1)
.forEach(System.out::println);
}
private static boolean isPalindrome(int n) {
StringBuilder numbers = new StringBuilder(String.valueOf(n));
return numbers.toString().equals(numbers.reverse().toString());
}
private static boolean has3DigitsFactors(int n) {
for (int i = 999; i > 0; i--) {
if (n % i == 0 && n / i < 1000) {
return true;
}
}
return false;
}
}