I am trying to find the square root of an integer, but when the integer value is very large, for instance 2147395599, the following program throws this exception.
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at com.aakash.BinarySearch.SquareRoot.mySqrt(SquareRoot.java:12)
at com.aakash.BinarySearch.SquareRoot.main(SquareRoot.java:8)
Process finished with exit code 1
Square root Program
package com.aakash.BinarySearch;

import java.util.Arrays;

public class SquareRoot {
    public static void main(String[] args) {
        int ans = mySqrt(2147395599);
        System.out.println(ans);
    }

    public static int mySqrt(int x) {
        int[] arrayUpton = new int[x];
        int start = 0;
        int end = arrayUpton.length - 1;
        int mid = start + (start - end) / 2;
        for (int index = start; index <= end; index++) {
            arrayUpton[index] = index + 1;
        }
        for (int index = start; index < end; index++) {
            if (arrayUpton[index] * arrayUpton[index] == x) {
                return arrayUpton[index];
            } else if (arrayUpton[index] * arrayUpton[index] > x) {
                return arrayUpton[index - 1];
            }
        }
        return 0;
    }
}
You are attempting to allocate an array of nearly 2^31 integers. That will occupy about 8GB, which is (evidently) too large for your JVM's heap. (And it could well be too large for your computer.)
But your real problem is your algorithm.
You don't need to allocate a huge array to calculate integer square roots. Even if you do it by searching all (positive) int values.
Consider this: your code carefully sets each of the array elements to a number that is one greater than the array subscript. And then it retrieves the values from the array to use them. But if you know that arrayUpton[i] contains i + 1 ... you don't need to retrieve it. Just add 1 to i instead of fetching the (same) value from the array.
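For what it's worth, here is a minimal sketch of that simplification: the same linear scan your loops perform, just with the array removed and the candidate value computed directly. (The method name mySqrtLinear is mine, and it squares in long purely as a precaution against int overflow.)

public static int mySqrtLinear(int x) {
    // Same idea as your two loops, but "arrayUpton[index]" is simply index + 1,
    // so we track the candidate value directly instead of allocating anything.
    for (long candidate = 1; candidate <= x; candidate++) {
        long square = candidate * candidate;
        if (square == x) {
            return (int) candidate;       // exact square root
        } else if (square > x) {
            return (int) (candidate - 1); // floor of the square root
        }
    }
    return 0; // reached only when x <= 0
}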
In addition:
Irrespective of the tag, your algorithm isn't implementing a binary search.
I'm not even convinced the algorithm will work.
I suggest you do some Googling to see if you can find a better integer square root algorithm.
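For example, here is a hedged sketch of what a binary-search-based integer square root could look like: it searches the range of candidate answers directly rather than an array, and squares in long so the comparison cannot overflow. (The method name and bounds are mine, not a reference implementation.)

public static int mySqrtBinary(int x) {
    if (x < 2) {
        return x; // sqrt(0) = 0, sqrt(1) = 1
    }
    long low = 1;
    long high = x / 2 + 1; // floor(sqrt(x)) never exceeds x/2 + 1 for x >= 2
    int result = 1;
    while (low <= high) {
        long mid = low + (high - low) / 2;
        long square = mid * mid;
        if (square == x) {
            return (int) mid;        // exact root
        } else if (square < x) {
            result = (int) mid;      // mid is a valid floor candidate; look higher
            low = mid + 1;
        } else {
            high = mid - 1;          // mid is too large; look lower
        }
    }
    return result; // floor of the square root
}

For x = 2147395599 this does roughly 30 comparisons instead of tens of thousands of multiplications, and it allocates nothing.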
I have a CSV file with 500,000 rows of data and 22 columns. This data represents all commercial flights in the USA for one year. I am being tasked with finding the tail number of the plane that flew the most miles in the data set. Column 5 contains the airplane's tail number for each flight. Column 22 contains the total distance traveled.
Please see my extractQ3 method below. First, I created a HashMap for the whole CSV using the createHashMap() method. Then I ran a for loop to identify every unique tail number in the dataset and stored them in a list called tailNumbers. Then, for each unique tail number, I looped through the entire HashMap to calculate the total miles of distance for that tail number.
The code runs fine on smaller datasets, but once the size increased to 500,000 rows the code becomes horribly inefficient and takes an eternity to run. Can anyone provide me with a faster way to do this?
public class FlightData {

    HashMap<String, String[]> dataMap;

    public static void main(String[] args) {
        FlightData map1 = new FlightData();
        map1.dataMap = map1.createHashMap();
        String answer = map1.extractQ3(map1);
    }

    public String extractQ3(FlightData map1) {
        ArrayList<String> tailNumbers = new ArrayList<String>();
        ArrayList<Integer> tailMiles = new ArrayList<Integer>();
        //Filling the Array with all tail numbers
        for (String[] value : map1.dataMap.values()) {
            if (Arrays.asList(tailNumbers).contains(value[4])) {
            } else {
                tailNumbers.add(value[4]);
            }
        }

        for (int i = 0; i < tailNumbers.size(); i++) {
            String tempName = tailNumbers.get(i);
            int miles = 0;
            for (String[] value : map1.dataMap.values()) {
                if (value[4].contentEquals(tempName) && value[19].contentEquals("0")) {
                    miles = miles + Integer.parseInt(value[21]);
                }
            }
            tailMiles.add(miles);
        }

        Integer maxVal = Collections.max(tailMiles);
        Integer maxIdx = tailMiles.indexOf(maxVal);
        String maxPlane = tailNumbers.get(maxIdx);

        return maxPlane;
    }

    public HashMap<String, String[]> createHashMap() {
        File flightFile = new File("flights_small.csv");
        HashMap<String, String[]> flightsMap = new HashMap<String, String[]>();

        try {
            Scanner s = new Scanner(flightFile);
            while (s.hasNextLine()) {
                String info = s.nextLine();
                String[] piecesOfInfo = info.split(",");
                String flightKey = piecesOfInfo[4] + "_" + piecesOfInfo[2] + "_" + piecesOfInfo[11]; //Setting the Key
                String[] values = Arrays.copyOfRange(piecesOfInfo, 0, piecesOfInfo.length);
                flightsMap.put(flightKey, values);
            }
            s.close();
        } catch (FileNotFoundException e) {
            System.out.println("Cannot open: " + flightFile);
        }

        return flightsMap;
    }
}
The answer depends on what you mean by "most efficient", "horribly inefficient" and "takes an eternity". These are subjective terms. The answer may also depend on specific technical factors (speed vs. memory consumption; the number of unique flight keys compared to the number of overall records; etc.).
I would recommend applying some basic streamlining to your code, to start with. See if that gets you a better (acceptable) result. If you need more, then you can consider more advanced improvements.
Whatever you do, take some timings to understand the broad impacts of any changes you make.
Focus on going from "horrible" to "acceptable" - and then worry about more advanced tuning after that (if you still need it).
Consider using a BufferedReader instead of a Scanner. See here. Although the scanner may be just fine for your needs (i.e. if it's not a bottleneck).
Consider using logic within your scanner loop to capture tail numbers and accumulated mileage in one pass of the data. The following is deliberately basic, for clarity and simplicity:
// The string is a tail number.
// The integer holds the accumulated miles flown for that tail number:
Map<String, Integer> planeMileages = new HashMap<>();

if (planeMileages.containsKey(tailNumber)) {
    // add miles to existing total:
    int accumulatedMileage = planeMileages.get(tailNumber) + flightMileage;
    planeMileages.put(tailNumber, accumulatedMileage);
} else {
    // capture new tail number:
    planeMileages.put(tailNumber, flightMileage);
}
After that, once you have completed the scanner loop, you can iterate over your planeMileages to find the largest mileage:
String maxMilesTailNumber = null;
int maxMiles = 0;

for (Map.Entry<String, Integer> entry : planeMileages.entrySet()) {
    int planeMiles = entry.getValue();
    if (planeMiles > maxMiles) {
        maxMilesTailNumber = entry.getKey();
        maxMiles = planeMiles;
    }
}
WARNING - This approach is just for illustration. It will only capture one tail number. There could be multiple planes with the same maximum mileage. You would have to adjust your logic to capture multiple "winners".
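If you do need all the tied winners, one possibility (a sketch only, reusing the planeMileages map from above) is to collect them into a list rather than a single variable:

List<String> maxMilesTailNumbers = new ArrayList<>();
int maxMiles = 0;

for (Map.Entry<String, Integer> entry : planeMileages.entrySet()) {
    int planeMiles = entry.getValue();
    if (planeMiles > maxMiles) {
        maxMiles = planeMiles;
        maxMilesTailNumbers.clear();             // strictly larger maximum found; start over
        maxMilesTailNumbers.add(entry.getKey());
    } else if (planeMiles == maxMiles) {
        maxMilesTailNumbers.add(entry.getKey()); // ties the current maximum
    }
}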
The above approach removes the need for several of your existing data structures and the related processing.
If you still face problems, put in some timers to see which specific areas of your code are slowest - and then you will have more specific tuning opportunities you can focus on.
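A rough sketch of that kind of timing, using nothing more than System.nanoTime() around whichever section you suspect (buildPlaneMileages() here is just a placeholder for the block you are measuring):

long start = System.nanoTime();
Map<String, Integer> planeMileages = buildPlaneMileages(); // the code being measured
long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
System.out.println("Building the mileage map took " + elapsedMillis + " ms");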
I suggest you use the Java 8 Stream API, so that you can take advantage of parallel streams.
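For example, something along these lines (a sketch only, assuming the dataMap from the question and the same columns it uses: value[4] for the tail number, the value[19] equals "0" filter, and value[21] for the distance):

Optional<Map.Entry<String, Integer>> busiestPlane = dataMap.values()
        .parallelStream()
        .filter(value -> value[19].contentEquals("0"))
        .collect(Collectors.groupingBy(
                value -> value[4],
                Collectors.summingInt(value -> Integer.parseInt(value[21]))))
        .entrySet()
        .stream()
        .max(Map.Entry.comparingByValue());

busiestPlane.ifPresent(entry ->
        System.out.println(entry.getKey() + " flew " + entry.getValue() + " miles"));

Whether the parallel version actually beats a plain sequential stream here depends on the data size and hardware, so it is worth measuring both.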
So, I've searched around stackoverflow for a bit, but I can't seem to find an answer to this issue.
My current homework for my CS class involves reading from a file of 5000 random numbers and doing various things with the data, like putting it into an array, seeing how many times a number occurs, and finding what the longest increasing sequence is. I've got all that done just fine.
In addition to this, I am (for myself) adding in a method that will allow me to overwrite the file and create 5000 new random numbers to make sure my code works with multiple different test cases.
The method works for the most part; however, after I call it, the change doesn't seem to take effect until after the rest of the program finishes. If I run the program and tell it to change the numbers, I have to run it again to actually see the changed values. Is there a way to fix this?
Current output, showing the delay before the changed data takes effect:
Not trying to change the data here- control case.
elkshadow5$ ./CompileAndRun.sh
Create a new set of numbers? Y for yes. n
What number are you looking for? 66
66 was found 1 times.
The longest sequence is [606, 3170, 4469, 4801, 5400, 8014]
It is 6 numbers long.
The numbers should change here but they don't.
elkshadow5$ ./CompileAndRun.sh
Create a new set of numbers? Y for yes. y
What number are you looking for? 66
66 was found 1 times.
The longest sequence is [606, 3170, 4469, 4801, 5400, 8014]
It is 6 numbers long.
Now the data shows that it has changed, one run after it should have changed.
elkshadow5$ ./CompileAndRun.sh
Create a new set of numbers? Y for yes. n
What number are you looking for? 1
1 was found 3 times.
The longest sequence is [1155, 1501, 4121, 5383, 6000]
It is 5 numbers long.
My code:
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Scanner;

public class jeftsdHW2 {

    static Scanner input = new Scanner(System.in);

    public static void main(String args[]) throws Exception {
        jeftsdHW2 random = new jeftsdHW2();
        int[] data;
        data = new int[5000];
        random.readDataFromFile(data);
        random.overwriteRandNums();
    }

    public int countingOccurrences(int find, int[] array) {
        int count = 0;
        for (int i : array) {
            if (i == find) {
                count++;
            }
        }
        return count;
    }
    public int[] longestSequence(int[] array) {
        int[] sequence = new int[0]; // placeholder; the real body is omitted here
        return sequence;
    }
    public void overwriteRandNums() throws Exception {
        System.out.print("Create a new set of numbers? Y for yes.\t");
        String answer = input.next();
        char yesOrNo = answer.charAt(0);
        if (yesOrNo == 'Y' || yesOrNo == 'y') {
            writeDataToFile();
        }
    }

    public void readDataFromFile(int[] data) throws Exception {
        try {
            java.io.File infile = new java.io.File("5000RandomNumbers.txt");
            Scanner readFile = new Scanner(infile);
            for (int i = 0; i < data.length; i++) {
                data[i] = readFile.nextInt();
            }
            readFile.close();
        } catch (FileNotFoundException e) {
            System.out.println("Please make sure the file \"5000RandomNumbers.txt\" is in the correct directory before trying to run this.");
            System.out.println("Thank you.");
            System.exit(1);
        }
    }

    public void writeDataToFile() throws Exception {
        int j;
        StringBuilder theNumbers = new StringBuilder();
        try {
            PrintWriter writer = new PrintWriter("5000RandomNumbers.txt", "UTF-8");
            for (int i = 0; i < 5000; i++) {
                if (i > 1 && i % 10 == 0) {
                    theNumbers.append("\n");
                }
                j = (int) (9999 * Math.random());
                if (j < 1000) {
                    theNumbers.append(j + "\t\t");
                } else {
                    theNumbers.append(j + "\t");
                }
            }
            writer.print(theNumbers);
            writer.flush();
            writer.close();
        } catch (IOException e) {
            System.out.println("error");
        }
    }
}
It is possible that the file has not been physically written to the disk; using flush is not enough to guarantee this. From the Java documentation here:
If the intended destination of this stream is an abstraction provided by the underlying operating system, for example a file, then flushing the stream guarantees only that bytes previously written to the stream are passed to the operating system for writing; it does not guarantee that they are actually written to a physical device such as a disk drive.
Because of the HDD's read and write speeds, it is advisable to depend as little as possible on HDD access.
Perhaps storing the random number strings in a list when regenerating, and using that list directly, would be a solution. You could even write the list to disk as well, but that way the implementation does not depend on when the file actually gets written.
EDIT
After the OP posted more of their code it became apparent that my original answer is not related to the problem. Nonetheless it is sound.
The code the OP posted is not enough to show where the file is read after writing. It seems the file is written after it is read, which of course is what is perceived as an error. Reading after writing should produce a program that does what you want.
Id est, this:
random.readDataFromFile(data);
random.overwriteRandNums();
Will not be reflected until the next execution. This:
random.overwriteRandNums();
random.readDataFromFile(data);
Will use the updated file in the current execution.
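Applied to the question's main method, that reordering would look like this (only the call order changes):

public static void main(String args[]) throws Exception {
    jeftsdHW2 random = new jeftsdHW2();
    int[] data;
    data = new int[5000];

    random.overwriteRandNums();    // optionally regenerate the file first...
    random.readDataFromFile(data); // ...then read it, so this run already sees the new numbers
}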
I have a very huge text file (18,000,000 lines, 4 GB), and I want to pick some random lines from it. I wrote the following piece of code to do this, but it is slow.
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.Stream;
public class Main {
    public static void main(String[] args) throws IOException {
        int sampleSize = 3000;
        int fileSize = 18000000;
        int[] linesNumber = new int[sampleSize];
        Random r = new Random();
        for (int i = 0; i < linesNumber.length; i++) {
            linesNumber[i] = r.nextInt(fileSize);
        }
        List<Integer> list = Arrays.stream(linesNumber).boxed().collect(Collectors.toList());
        Collections.sort(list);
        BufferedWriter outputWriter = Files.newBufferedWriter(Paths.get("output.txt"));
        for (int i : list) {
            try (Stream<String> lines = Files.lines(Paths.get("huge_text_file"))) {
                String en = lines.skip(i - 1).findFirst().get();
                outputWriter.write(en + "\n");
                lines.close();
            } catch (Exception e) {
                System.err.println(e);
            }
        }
        outputWriter.close();
    }
}
Is there a more elegant, faster method to do this? Thanks.
There are several things that I find troublesome about your current code.
You are currently loading the entire file into RAM. I don't know much about your sample file, but the one I used crashed my default JVM.
You are skipping the same lines over and over again, more so for the earlier lines - this is horribly inefficient, because the file gets re-read from the start for every sampled line. I would be surprised if you could handle even a 500MB file with that approach.
Here's what I came up with:
public static void main(String[] args) throws IOException {
    int sampleSize = 3000;
    int fileSize = 50000;
    int[] linesNumber = new int[sampleSize];
    Random r = new Random();
    for (int i = 0; i < linesNumber.length; i++) {
        linesNumber[i] = r.nextInt(fileSize);
    }
    List<Integer> list = Arrays.stream(linesNumber).boxed().collect(Collectors.toList());
    Collections.sort(list);
    BufferedWriter outputWriter = Files.newBufferedWriter(Paths.get("localOutput/output.txt"));
    long t1 = System.currentTimeMillis();
    try (BufferedReader reader = new BufferedReader(new FileReader("extremely large file.txt"))) {
        int index = 0;        // keep track of what item we're on in the list
        int currentIndex = 0; // keep track of what line we're on in the input file
        while (index < sampleSize) { // while we still haven't finished the list
            if (currentIndex == list.get(index)) { // if we reach a line
                outputWriter.write(reader.readLine());
                outputWriter.write("\n"); // readLine doesn't include the newline characters
                while (index < sampleSize && list.get(index) <= currentIndex) { // have to put this here in case of duplicates in the list
                    index++;
                }
            } else {
                reader.readLine(); // readLine is dang fast. There may be faster ways to skip a line, but this is still plenty fast.
            }
            currentIndex++;
        }
    } catch (Exception e) {
        System.err.println(e);
    }
    outputWriter.close();
    System.out.println(String.format("Took %d milliseconds", System.currentTimeMillis() - t1));
}
This takes ~87 milliseconds for me on a 4.7GB file, running with a sample size of 30 and a fileSize of 50000, and took ~91 milliseconds when I changed the sample size to 3000. It took 122 milliseconds when I increased the fileSize to 10,000. TL;DR for this paragraph: it scales pretty well, and it scales extremely well with larger sample sizes.
In direct answer to your question "is there a more elegant, faster method to do this?": yes, there is. The faster way is to skip lines yourself, avoid loading the entire file into memory, and keep using buffered readers and writers. Also, I'd avoid trying to roll your own raw array buffers or anything like that - just don't.
Feel free to step through the method I've included if you want to see more of how it works.
My first cut at an approach would be to have a look at random access files in Java, cf. https://docs.oracle.com/javase/tutorial/essential/io/rafs.html. Typically random seeks will be a lot faster than reading the whole file, but you'd then need to read byte by byte to get to the beginning of the next line (for example), then read that line byte by byte up to the next newline, then seek to another random location.
I'm not sure the approach would be more elegant (depends partly on how you code it I guess), but I'd expect it to be faster.
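A minimal sketch of that idea, assuming single-byte-compatible text (RandomAccessFile.readLine does not decode multi-byte characters) and accepting that longer lines are slightly more likely to be picked:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Random;

public class RandomLineSampler {
    public static void main(String[] args) throws IOException {
        int sampleSize = 3000;
        Random r = new Random();
        try (RandomAccessFile raf = new RandomAccessFile("huge_text_file", "r")) {
            long length = raf.length();
            for (int i = 0; i < sampleSize; i++) {
                raf.seek((long) (r.nextDouble() * length)); // jump somewhere random
                raf.readLine();                             // discard the partial line we landed in
                String line = raf.readLine();               // take the next complete line
                if (line != null) {                         // null means we landed in the final line
                    System.out.println(line);
                }
            }
        }
    }
}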
There is no efficient way to seek to a specific line. The only thing I can think of is using a RandomAccessFile, seeking to a random position, and then reading the next 200(?) characters into an array. Then do the line-break finding and form a String.
doc
How do I write a Java program to divide a set of numbers into two sets such that the difference of their sums is minimum?
For example, I have an array containing the integers [5,4,8,2]. I can divide it into two arrays, [8,2] and [5,4]. Assuming that the given set of numbers has a unique solution like in the above example, how do I write a Java program to find that solution? It would be fine even if I can only find out the minimum possible difference.
Let's say my method receives an array as a parameter. The method has to first divide the received array into two arrays, and then add up the integers contained in each. Thereafter, it has to return the difference between the sums, such that the difference is the minimum possible.
P.S. I have had a look around here, but couldn't find any specific solution to this. The most probable solution seemed to be given here: divide an array into two sets with minimal difference. But I couldn't gather from that thread how I can write a Java program to get a definite solution to the problem.
EDIT:
After looking at the comment of @Alexandru Severin, I tried a Java program. It works for one set of numbers [1,3,5,9], but doesn't work for another set [4,3,5,9,11]. Below is the program. Please suggest changes:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FindMinimumDifference {

    public static void main(String[] args) {
        int[] arr = new int[]{4, 3, 5, 9, 11};
        FindMinimumDifference obj = new FindMinimumDifference();
        obj.returnMinDiff(arr);
    }

    private int returnMinDiff(int[] array) {
        int diff = -1;
        Arrays.sort(array);
        List<Integer> list1 = new ArrayList<>();
        List<Integer> list2 = new ArrayList<>();
        int sumOfList1 = 0;
        int sumOfList2 = 0;
        for (int a : array) {
            for (Integer i : list1) {
                sumOfList1 += i;
            }
            for (Integer i : list2) {
                sumOfList2 += i;
            }
            if (sumOfList1 <= sumOfList2) {
                list1.add(a);
            } else {
                list2.add(a);
            }
        }

        List<Integer> list3 = new ArrayList<>(list1);
        List<Integer> list4 = new ArrayList<>(list2);
        Map<Integer, List<Integer>> mapOfProbables = new HashMap<Integer, List<Integer>>();
        int probableValueCount = 0;
        for (int i = 0; i < list1.size(); i++) {
            for (int j = 0; j < list2.size(); j++) {
                if (abs(list1.get(i) - list2.get(j)) <
                        abs(getSumOfEntries(list1) - getSumOfEntries(list2))) {
                    List<Integer> list = new ArrayList<>();
                    list.add(list1.get(i));
                    list.add(list2.get(j));
                    mapOfProbables.put(probableValueCount++, list);
                }
            }
        }

        int minimumDiff = abs(getSumOfEntries(list1) - getSumOfEntries(list2));
        List resultList = new ArrayList<>();
        for (List probableList : mapOfProbables.values()) {
            list3.remove(probableList.get(0));
            list4.remove(probableList.get(1));
            list3.add((Integer) probableList.get(1));
            list4.add((Integer) probableList.get(0));
            if (minimumDiff > abs(getSumOfEntries(list3) - getSumOfEntries(list4))) {
                // valid exchange
                minimumDiff = abs(getSumOfEntries(list3) - getSumOfEntries(list4));
                resultList = probableList;
            }
        }

        System.out.println(minimumDiff);
        if (resultList.size() > 0) {
            list1.remove(resultList.get(0));
            list2.remove(resultList.get(1));
            list1.add((Integer) resultList.get(1));
            list2.add((Integer) resultList.get(0));
        }
        System.out.println(list1 + "" + list2); // the two resulting set of numbers with modified data giving expected result

        return minimumDiff;
    }

    private static int getSumOfEntries(List<Integer> list) {
        int sum = 0;
        for (Integer i : list) {
            sum += i;
        }
        return sum;
    }

    private static int abs(int i) {
        if (i <= 0)
            i = -i;
        return i;
    }
}
First of all, sorting the array and then putting the first member in one group and the second in the other would never work, and here is why:
Given the input[1,2,3,100].
The result would be: [1,3] and [2,100], clearly wrong.
The correct answer should be: [1,2,3] and [100]
You can find many optimization algorithms on google for this problem, but since I assume you're a beginner, I'll try to give you a simple algorithm that you can implement:
sort the array
iterate from highest to lowest value
for each iteration, calculate the sum of each group, then add the element to the group with minimum sum
At the end of the loop you should have two fairly balanced arrays. Example:
Array: [1,5,5,6,7,10,20]
i1: `[20] []`
i2: `[20] [10]`
i3: `[20] [10,7]`
i4: `[20] [10,7,6]`
i5: `[20,5] [10,7,6]`
i6: `[20,5] [10,7,6,5]`
i7: `[20,5,1] [10,7,6,5]`
Where the sums are 26 and 28. As you can see we can further optimize the solution: if we exchange 5 and 6, resulting in [20,6,1] and [10,7,5,5], the sums are equal.
For this step you can:
find all groups of elements (x,y) where x is in group1, y is in group2, and |x-y| < |sum(group1) - sum(group2)|
loop all groups and try exchanging x with y until you get a minimum difference
after each exchange check if the minimum value in the group with the highest sum is higher then the difference of the groups, if so, transfer it to the other group
This algorithm will always return the best solution, and is a whole lot better than a plain greedy approach. However it is not optimal in terms of complexity, speed and memory. If one needs it for very large arrays and the resources are limited, the most optimal algorithm may differ depending on the speed/memory ratio and the accepted error percentage.
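A minimal sketch of just the greedy phase described above (sort descending, then always add to the group whose running sum is currently smaller); the exchange step would run on its output:

// Greedy phase only: returns the two groups; assumes the usual java.util imports.
public static List<List<Integer>> greedyPartition(int[] numbers) {
    Integer[] sorted = Arrays.stream(numbers).boxed().toArray(Integer[]::new);
    Arrays.sort(sorted, Collections.reverseOrder());

    List<Integer> group1 = new ArrayList<>();
    List<Integer> group2 = new ArrayList<>();
    int sum1 = 0;
    int sum2 = 0;
    for (int value : sorted) {
        if (sum1 <= sum2) {   // add to whichever group currently has the smaller sum
            group1.add(value);
            sum1 += value;
        } else {
            group2.add(value);
            sum2 += value;
        }
    }
    return Arrays.asList(group1, group2);
}

For the example array [1,5,5,6,7,10,20] this produces [20,5,1] and [10,7,6,5], matching the i7 step above.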
This is a variation on the Partition Problem https://en.wikipedia.org/wiki/Partition_problem
If you want the optimal solution you have to test every possible combination of output sets. That may be feasible for small sets but is infeasible for large inputs.
One good approximation is the greedy algorithm I present below.
This heuristic works well in practice when the numbers in the set are
of about the same size as its cardinality or less, but it is not
guaranteed to produce the best possible partition.
First you need to put your input in a sortable collection such as a List.
1) Sort the input collection.
2) Create 2 result sets.
3) Iterate over the sorted input. If the index is even put the item in result1 else put the item in result2.
List<Integer> input = new ArrayList<Integer>();
Collections.sort(input);

Set<Integer> result1 = new HashSet<Integer>();
Set<Integer> result2 = new HashSet<Integer>();

for (int i = 0; i < input.size(); i++) {
    if (i % 2 == 0) { // if i is even
        result1.add(input.get(i));
    } else {
        result2.add(input.get(i));
    }
}
I seem to have got the perfect solution for this. The Java program below works perfectly. The only assumption is that the given problem has a unique solution (just one solution). This assumption implies only non-zero numbers. I am putting the program below. I request everyone to tell me if the program could fail for certain scenarios, or if it could be improved/optimized in some way. Credit for the algorithm goes to Mr Alexandru Severin, whose answer appears in this thread.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FindMinimumDifference {

    static List<Integer> list1 = new ArrayList<>();
    static List<Integer> list2 = new ArrayList<>();

    public static void main(String[] args) {
        int[] arr = new int[]{3, -2, 9, 7};
        // tested for these sample data:- [1,5,9,3] ; [4,3,5,9,11] ;
        // [7,5,11,2,13,15,14] ; [3,2,1,7,9,11,13] ;
        // [3,1,0,5,6,9] ; [6,8,10,2,4,0] ; [3,1,5,7,0] ; [4,-1,5,-3,7] ; [3,-2,9,7]
        System.out.println("the minimum possible difference is: " + returnMinDiff(arr));
        System.out.println("the two resulting set of nos. are: " + list1 + " and " + list2);
    }

    private static int returnMinDiff(int[] array) {
        int diff = -1;
        Arrays.sort(array);
        for (int a : array) {
            int sumOfList1 = 0;
            int sumOfList2 = 0;
            for (Integer i : list1) {
                sumOfList1 += i;
            }
            for (Integer i : list2) {
                sumOfList2 += i;
            }
            if (sumOfList1 <= sumOfList2) {
                list1.add(a);
            } else {
                list2.add(a);
            }
        }

        List<Integer> list3 = new ArrayList<>(list1);
        List<Integer> list4 = new ArrayList<>(list2);
        if (list3.size() != list4.size()) { // both list should contain equal no. of entries.
            // If not, add 0 to the list having lesser no. of entries
            if (list3.size() < list4.size()) {
                list3.add(0);
            } else {
                list4.add(0);
            }
        }

        Map<Integer, List<Integer>> mapOfProbables = new HashMap<Integer, List<Integer>>();
        int probableValueCount = 0;
        for (int i = 0; i < list3.size(); i++) {
            for (int j = 0; j < list4.size(); j++) {
                if (abs(list3.get(i) - list4.get(j))
                        < abs(getSumOfEntries(list3) - getSumOfEntries(list4))) {
                    List<Integer> list = new ArrayList<>();
                    list.add(list3.get(i));
                    list.add(list4.get(j));
                    mapOfProbables.put(probableValueCount++, list);
                }
            }
        }

        int minimumDiff = abs(getSumOfEntries(list1) - getSumOfEntries(list2));
        List resultList = new ArrayList<>();
        for (List probableList : mapOfProbables.values()) {
            list3 = new ArrayList<>(list1);
            list4 = new ArrayList<>(list2);
            list3.remove(probableList.get(0));
            list4.remove(probableList.get(1));
            list3.add((Integer) probableList.get(1));
            list4.add((Integer) probableList.get(0));
            if (minimumDiff > abs(getSumOfEntries(list3) - getSumOfEntries(list4))) { // valid exchange
                minimumDiff = abs(getSumOfEntries(list3) - getSumOfEntries(list4));
                resultList = probableList;
            }
        }

        if (resultList.size() > 0) { // forming the two set of nos. whose difference of sum comes out to be minimum
            list1.remove(resultList.get(0));
            list2.remove(resultList.get(1));
            if (!resultList.get(1).equals(0)) // (resultList.get(1).equals(0) && !list1.contains(0))
                list1.add((Integer) resultList.get(1));
            if (!resultList.get(0).equals(0) || (resultList.get(0).equals(0) && list2.contains(0)))
                list2.add((Integer) resultList.get(0));
        }

        return minimumDiff; // returning the minimum possible difference
    }

    private static int getSumOfEntries(List<Integer> list) {
        int sum = 0;
        for (Integer i : list) {
            sum += i;
        }
        return sum;
    }

    private static int abs(int i) {
        if (i <= 0)
            i = -i;
        return i;
    }
}
For this question, assume that we can divide the array into two subarrays such that their sums are equal. (Even when they are not exactly equal, the approach still works.)
So if the sum of the elements in the array is S, your goal is to find a subset with sum S/2. You can write a recursive function for this.
int difference = Integer.MAX_VALUE;
// S is the total sum of the array, computed beforehand.

public void recursiveSum(int[] array, int presentSum, int index, Set<Integer> presentSet) {
    if (index == array.length) {
        if (Math.abs(presentSum - (S / 2)) < difference) {
            difference = Math.abs(presentSum - (S / 2));
            // presentSet is your answer
        }
        return; // always stop once every element has been decided
    }
    recursiveSum(array, presentSum, index + 1, presentSet); // don't consider the present element in the final solution
    presentSet.add(array[index]);
    recursiveSum(array, presentSum + array[index], index + 1, presentSet); // consider the present element in the final solution
    presentSet.remove(array[index]); // backtrack so presentSet matches presentSum again
}
You can also write an equivalent dynamic programming solution for this, running in O(N*S) time where S is the total sum.
I was just demonstrating the idea.
So when you find the subset with sum S/2, you have automatically divided the array into two parts with the same sum (S/2 here).
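For reference, a sketch of that dynamic programming idea for non-negative inputs: reachable[s] records whether some subset sums to exactly s, and the largest reachable s that is at most S/2 gives the minimum difference.

// O(n * S) time, O(S) space, where S is the total sum; assumes non-negative inputs.
public static int minPartitionDifference(int[] array) {
    int total = Arrays.stream(array).sum();
    boolean[] reachable = new boolean[total / 2 + 1];
    reachable[0] = true;                   // the empty subset sums to 0
    for (int value : array) {
        for (int s = total / 2; s >= value; s--) {
            if (reachable[s - value]) {
                reachable[s] = true;       // s is reachable by adding value to a smaller subset
            }
        }
    }
    for (int s = total / 2; s >= 0; s--) {
        if (reachable[s]) {
            return total - 2 * s;          // (total - s) - s
        }
    }
    return total; // not reached in practice, since s = 0 is always reachable
}

For the array [5,4,8,2] from the question this returns 1, corresponding to the split [5,4] and [8,2].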
It seems that you are more interested in the algorithm than the code, so here is my pseudocode:
int A[];                // This contains your elements in sorted (descending) order
List<Integer> a1, a2;   // The two sub-arrays (lists here, so that add() makes sense)
int sum1 = 0, sum2 = 0; // These store the sums of the elements of the 2 subarrays respectively

for (i = 0; i < A.length; i++) {
    // Calculate the absolute difference of the sums for each element, add the element accordingly, and then update that sum
    if (abs(sum1 + A[i] - sum2) <= abs(sum2 + A[i] - sum1)) {
        a1.add(A[i]);
        sum1 += A[i];
    } else {
        a2.add(A[i]);
        sum2 += A[i];
    }
}
This will work for all integers, positive or negative.