Addition of Two Numbers represented by Linked List nodes [closed] - java

I'm preparing for an interview. The question I got is: two numbers are represented by a linked list, where each node contains a single digit. The digits are stored in reverse order, such that the 1's digit is at the head of the list. Write a function that adds the two numbers and returns the sum as a linked list.
The suggested answer adds each digit individually and keeps a "carry" number. For example, if the first digits of the two numbers are "5" and "7", the algorithm records "2" as the first digit of the resulting sum and keeps "1" as a carry to add to the tens digit of the result.
However, my solution is to traverse the two linked lists and translate them into two integers. Then I add the numbers and translate the sum into a new linked list. Wouldn't my solution be more straightforward and efficient?
Thanks!

While your solution may be more straightforward, I don't think it's actually more efficient.
I'm assuming the "correct" algorithm is something like this:
1) Pop the first element of both lists.
2) Add them together (with the carry, if there is one) and make a new node from the ones digit.
3) Compute the carry (divide the sum by 10) and repeat steps 1) and 2), with each successive node pointed to by the previous one.
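For concreteness, here's a minimal Java sketch of that digit-by-digit approach (the Node class and the method name are my own, not from the question):

class Node {
    int val;
    Node next;
    Node(int val) { this.val = val; }
}

static Node addLists(Node a, Node b) {
    Node dummy = new Node(0);  // placeholder head for the result list
    Node tail = dummy;
    int carry = 0;
    while (a != null || b != null || carry != 0) {
        int sum = carry;
        if (a != null) { sum += a.val; a = a.next; }
        if (b != null) { sum += b.val; b = b.next; }
        tail.next = new Node(sum % 10);  // ones digit becomes the new node
        tail = tail.next;
        carry = sum / 10;                // carry into the next digit
    }
    return dummy.next;
}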
The main things I see when comparing your algorithm with that one are:
In terms of memory, you would create two BigIntegers to store intermediate values (I'm assuming you're using BigInteger or some equivalent object to avoid the constraints of an int or long), on top of the final linked list itself. The original algorithm doesn't use more than a couple of ints for its intermediate calculations, so in the long run it actually uses less memory.
You're also suggesting that you do all of your arithmetic on BigIntegers rather than on ints. Although it's possible that BigInteger is optimized to the point where it isn't much slower than primitive operations, I highly doubt that calling BigInteger#add is faster than the + operator on ints.
Also, some more food for thought: suppose you didn't have something handy like BigInteger to store arbitrarily large integers. Then you'd need some way to store them for your algorithm to work at all, and some way to add them, and you'd end up either implementing something like the original algorithm anyway or switching to a completely different representation in a subroutine (yikes).

(Assuming by "integer" you mean int.)
Your solution does not scale beyond numbers that can fit in an int, whereas the original solution is only limited by the amount of available memory.
As far as efficiency is concerned, there is nothing about your solution that would make it more efficient than the original.

Your solution is certainly more straightforward to describe, and in certain situations one could argue that the readability of the resulting code would be preferable when working with large teams.
However, most of the time their suggested answer is a lot more memory-efficient, and probably more CPU-efficient too.
You're suggesting going through the first linked list and storing it as a number (+1 store), going through the second and storing it as a number (+1 store), adding the two numbers and saving the result (+1 store), then converting that number into a linked list and saving it (+1 store).
Their solution involves going through the first and second linked lists while writing to the third, storing only the result (+1 store).
That is +1 store versus your +4 stores. It might not seem like much, but if we were to add n pairs of numbers at the same time (on a distributed system or something), you're looking at 4n stores rather than just n stores, which could be a big deal.

Related

If an array needs to be sorted, would the sorting count as part of the binary search algorithm?

I am trying to understand the speed of the Binary Search algorithm.
I understand it needs to operate on a sorted array.
However, if the array comes in unsorted and I have to perform the sorting myself, wouldn't the sorting be part of the binary search, and thus make its performance slower?
I am confused because I think there is very little chance of using this algorithm if the data does not come in sorted.
And if my code needs to sort it first, why doesn't the sorting count towards the search algorithm?
Sorry if I am confusing.
Thank you for helping.
You can't just point at an algorithm and say: It's got O(n^2) complexity!
That's what people usually say, sure. But that's shorthand. They're omitting things, assuming that the listener or reader will fill in the gaps.
You need to fully describe the exact algorithm, the conditions under which it is applied, and the precise definition of n and any other variable.
Then you can answer that question. The problem you're having here is that the definition of 'what is the performance of binary search' is unclear. If you assume it means X whilst your buddy assumes it means Y, and you then argue about the answers, you're not actually having a constructive debate at all. You're just tilting at windmills; the real problem is that neither of you realised you hadn't agreed on the basics.
Given that there is some confusion here, I'll give you three more or less equally sensible, more fully fleshed-out definitions, along with the actual answer for each. Hint: for one of them, 'binary search' isn't the fastest algorithm!
Given [1] a list that is already sorted, and [2] a single value, write me an algorithm that determines if this value is in the list or not.
The best answer would be: the binary search algorithm, and its complexity would be O(log n).
Given [1] a list that is not sorted, and [2] a single value, write me an algorithm that determines if this value is in the list or not.
The best answer would be: just iterate through the list. Its complexity would be O(n), and binary search is not part of this answer at all.
Given [1] a list that is not sorted, and [2] a list of tests, where each individual test is defined by a single value but they all use the same unsorted input list, write an algorithm that will, for each test, determine whether the value for that test is in the list or not, and then give me the amortized complexity (basically, the complexity of the whole thing divided by the number of tests we ran).
Then the best answer would be: first sort the list, spending O(n log n) time to do so, but we get to amortize that over the test count, and then use binary search for each individual test, adding O(log n) per test. If we call n the size of the input list and t the number of tests we have, this gets us:
O((n log n)/t + log n)
Which is the actual answer to the question, complex as it may look. But if t is large, or even considered effectively infinite, OR we add one more rider to the question:
The list from [1] is given to you in advance and, within reasonable time and memory limits, you may preprocess this data without needing to amortize these costs across your test cases
then that boils down to just O(log n), as the large value of t makes the (n log n)/t term approach zero.
In communicating this to your buddy, given that we don't talk in entire scientific papers, one might then say: "The algorithmic complexity of the binary search algorithm is O(log n)", even if that omits a gigantic chunk of the full story.
You interpreted the question as per the second case (input is unsorted, the input comprises both the list and the value to search for, no multi-test clause). Someone who says 'binary search is O(log n)' is working from either the first or the third definition. You're both right.
NB: The third definition seems unusually complicated, but it matches common scenarios. For example: we have compiled a list of the folks living in town and their phone numbers, and we want to print it in a giant book so that recipients can look up phone numbers. We expect that over the lifetime of a single print run, the 100,000 recipients of this book will each do on average about 50 lookups, for a grand total of 5 million lookups against this single list. That gives you t = 5 million and n = 200,000 (say 200k people live here, half of whom get a phonebook). Plug those numbers in and sorting the phonebook wins by a landslide versus releasing it in arbitrary, unsorted order, even though, yes, you start out 'down' the effort of sorting and won't make up for that loss until enough folks have speedily looked up enough phone numbers.
Yes. If
the data comes in unsorted
you only need to search for one element
...then you would have to first sort the data to use binary search, which would take a total of O(n log n + log n) = O(n log n) time.
But once the data is sorted, you can then binary search on that data as many times as you want. You don't have to sort it again each time.
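For illustration, here's how the sort-once, search-many pattern looks with the standard library (the values here are made up):

import java.util.Arrays;

public class SearchDemo {
    public static void main(String[] args) {
        int[] data = {42, 7, 19, 3, 88};  // unsorted input

        Arrays.sort(data);  // pay the O(n log n) cost once

        // Every subsequent lookup is O(log n); a non-negative
        // return value means the key was found.
        System.out.println(Arrays.binarySearch(data, 19) >= 0);  // true
        System.out.println(Arrays.binarySearch(data, 5) >= 0);   // false
    }
}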

A good data structure for storing and searching integers?

Edit: typos fixed and ambiguity addressed.
I have a list of five-digit integers in a text file. The count is bounded by how many distinct 5-digit integers exist. Regardless of how many there are, the FIRST line in this file tells me how many integers are present, so resizing will never be necessary. Example:
3
11111
22222
33333
There are 4 lines. The first says there are three 5-digit integers in the file. The next three lines hold these integers.
I want to read this file and store the integers (not the first line). I then want to be able to search this data structure A LOT, nothing else. All I want to do, is read the data, put it in the structure, and then be able to determine if there is a specific integer in there. Deletions will never occur. The only things done on this structure will be insertions and searching.
What would you suggest as an appropriate data structure? My initial thought was a binary tree of sorts; however, upon thinking, a HashTable may be the best implementation. Thoughts and help please?
It seems like the requirements you have are
store a bunch of integers,
where insertions are fast,
where lookups are fast, and
where absolutely nothing else matters.
If you are dealing with a "sufficiently small" range of integers, say, up to around 16,000,000 or so, you could just use a bit vector for this. You'd store one bit per number, all initially zero, and then set a bit whenever that number is entered. This gives extremely fast lookups and extremely fast insertion, but it is very memory-intensive and infeasible if the integers can be totally arbitrary. In Java this would be modeled with a BitSet.
If you are dealing with arbitrary integers, a hash table is probably the best option here. With a good hash function you'll get a great distribution across the table slots and very, very fast lookups. You'd want a HashSet for this.
If you absolutely must guarantee worst-case performance at all costs and you're dealing with arbitrary integers, use a balanced BST. The indirection costs in BSTs make them a bit slower than other data structures, but balanced BSTs can guarantee worst-case efficiency that hash tables can't. This would be represented by a TreeSet.
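For illustration, the hash-table option is only a few lines (the class name and sample values are mine):

import java.util.HashSet;
import java.util.Set;

public class HashSetDemo {
    public static void main(String[] args) {
        Set<Integer> seen = new HashSet<>();
        seen.add(11111);                           // expected O(1) insert
        System.out.println(seen.contains(11111));  // expected O(1) lookup: true
        System.out.println(seen.contains(99999));  // false
    }
}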
Given that
All numbers are <= 99,999
You only want to check for existence of a number
You can simply use some form of bitmap.
e.g. create a byte[12500] (that is 100,000 bits, which means 100,000 booleans recording the existence of 0-99,999).
"Inserting" a number N means turning the N-th bit on. Searching for a number N means checking whether the N-th bit is on.
Pseudocode for the insertion logic:
bitmap[number / 8] |= (1 << (number % 8));
and searching:
(bitmap[number / 8] & (1 << (number % 8))) != 0
If you understand the rationale, then there is even better news: Java already has BitSet, which does exactly what I described above.
So code looks like this:
BitSet bitset = new BitSet(100000);  // size is in bits, not bytes
// inserting number
bitset.set(number);
// search if number exists
bitset.get(number); // true if exists
If the number of times each number occurs doesn't matter (as you said, only inserts and existence checks), then there are at most 100,000 possible values. Just create an array of booleans:
boolean[] numbers = new boolean[100000];
This should take only about 100 kilobytes of memory.
Then, to add a number like 11111, 22222, or 33333, do:
numbers[11111]=true;
numbers[22222]=true;
numbers[33333]=true;
To see if a number exists, just do:
int whichNumber = 11111;
boolean numberExists = numbers[whichNumber];
There you are. Easy to read, even easier to maintain.
A Set is the go-to data structure for "find", and here's the tiny amount of code you need to make it happen:
Scanner scanner = new Scanner(new FileInputStream("myfile.txt"));
// limit() consumes the count on the first line before generate() reads any numbers
Set<Integer> numbers = Stream.generate(scanner::nextInt)
        .limit(scanner.nextInt())
        .collect(Collectors.toSet());

Time complexity assignment

I have an assignment in my intro to programming course that I don't understand at all. I've been falling behind because of problems at home. I'm not asking you to do my assignment for me; I'm just hoping for some help for a programming boob like me.
The question is this:
Calculate the time complexity in the average case for searching, adding, and removing in a
- unsorted vector
- sorted vector
- unsorted singly linked list
- sorted singly linked list
- hash table
Let n be the number of elements in the data structure, and present the solution in a table with three rows and five columns.
I'm not sure what this even means. I've read as much as I can about time complexity but I don't understand it; it's so confusing and I don't know where I would even start. Remember, I'm a novice programmer, as dumb as they come. I did really well last semester but had problems at home at the start of this one, so I missed a lot of lectures and the first assignments, and now I'm in over my head.
Maybe if someone could give me the answer and the reasoning behind it for a couple of them, I could understand it and do the others? I have a hard time learning through theory; examples work best.
Time complexity is a formula that describes how the cost of an operation varies with the number of elements. It is usually expressed using "big-O" notation, for example O(1) for constant time, O(n) where cost relates linearly to n, or O(n^2) where cost increases as the square of the size of the input. There can be others involving exponentials or logarithms. Read up on "Big-O notation".
You are being asked to evaluate five different data structures, and provide average cost for three different operations on each data structure (hence the table with three rows and five columns).
Time complexity is an abstract concept that allows us to compare the complexity of various algorithms by looking at how many operations are performed to handle their inputs. To be precise, the exact number of operations isn't important; the bottom line is how the number of operations scales with increasing size of the input.
Generally, the number of inputs is denoted n and the complexity is written O(p(n)), where p(n) is some expression in n. If an algorithm has O(n) complexity, it scales linearly: with every new input, the time needed to run the algorithm increases by the same amount.
If an algorithm has complexity O(n^2), the number of operations grows as the square of the number of inputs. This goes on and on, up to exponentially complex algorithms that are effectively useless for large enough inputs.
What your professor asks of you is to look at the given operations and judge how they are going to scale with the increasing size of the lists you are handling. Basically, you do this by looking at the algorithm and imagining what kinds of loops will be necessary. For example, if the task is to pick the first element, the complexity is O(1), meaning it doesn't depend on the size of the input. However, if you want to find a given element in the list, you need to scan the whole list, and that costs you depending on the list size. Hope this gives you a bit of an idea of how algorithmic complexity works; good luck with your assignment.
Ok, well, there are a few things you have to start with first. Algorithmic complexity has a lot of heavy math behind it, so it is hard for novices to understand, especially if you try to look up Wikipedia definitions or other more formal ones.
A simple definition is that time-complexity is basically a way to measure how much an operation costs to perform. Alternatively, you can also use it to see how long a particular algorithm can take to run.
Complexity is described using what is known as big-O notation. You'll usually end up seeing things like O(1) and O(n). n is usually the number of elements (possibly in a structure) on which the algorithm is operating.
So let's look at a few big-O notations:
O(1): This means that the operation runs in constant time. What this means is that regardless of the number of elements, the operation always runs in constant time. An example is looking at the first element in a non-empty array (arr[0]). This will always run in constant time because you only have to directly look at the very first element in an array.
O(n): This means that the time required for the operation increases linearly with the number of elements. An example is if you have an array of numbers and you want to find the largest number. To do this, you will have to, in the worst case, look at every single number in the array until you find the largest one. Why is that? This is because you can have a case where the largest number is the last number in the array. So you cannot be sure until you have examined every number in the array. This is why the cost of this operation is O(n).
O(n^2): This means that the time required for the operation increases quadratically with the number of elements. This usually means that for each element in the set of elements, you are running through the entire set. So that is n x n, or n^2. A well-known example is the bubble-sort algorithm. In this algorithm you run through the array and swap adjacent elements to ensure that it is sorted according to the order you need. The array is sorted when no more swaps need to be made. So you have multiple passes through the array, which in the worst case is equal to the number of elements in the array.
Now there are certain things in code that you can look at to get a hint to see if the algorithm is O(n) or O(n^2).
Single loops are usually O(n), since it means you are iterating over a set of elements once:
for(int i = 0; i < n; i++) {
...
}
Doubly-nested loops are usually O(n^2), since you are iterating over an entire set of elements for each element in the set:
for(int i = 0; i < n; i++) {
for(int j = 0; j < n; j++) {
...
}
}
Now how does this apply to your homework? I'm not going to give you the answer directly, but I will give you more than enough hints to figure it out :). What I wrote above, describing big-O, should also help you. Your homework asks you to apply runtime analysis to different data structures. Well, certain data structures have certain runtime properties based on how they are set up.
For example, in a linked list, the only way to get to an element in the middle is to start at the first element and follow the next pointer until you find the element you want. Think about that. How many steps would it take to find the element you need? What do you think those steps are related to? Does the number of elements in the list have any bearing on that? How can you represent the cost of this function using big-O notation?
For each data structure that your teacher has asked you about, try to figure out how it is set up and work out manually what each operation (searching, adding, removing) entails. I'm talking about writing the steps out and drawing pictures of the structures on paper! This will help you out immensely! Looking at that, you should have enough information to figure out the number of steps required and how it relates to the number of elements in the set.
Using this approach you should be able to solve your homework. Good luck!

Iterating over Permutations of an Array

I'm working on some Java code for research I'm doing, and need a way to iterate over all permutations of an ArrayList. I've looked over some previous questions asked here, but most were not quite what I want to do, and the ones that were close had answers dealing with strings and example code written in Perl, or, in the case of the one implementation that seemed like it would work, did not actually work.
Ideally I'm looking for tips/code snippets to help me write a function permute(list, i) that, as i goes from 0 to list.size()! - 1, gives me every permutation of my ArrayList.
There is a way of counting from 0 to (n! - 1) that lists off all permutations of a list of n elements. The idea is to write each number in the factorial number system and interpret it as an encoding of which permutation to use. If you're curious about this, I have a C++ implementation of this algorithm. I also once gave a talk about it, in case you'd like some visuals on the topic.
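As a rough Java sketch of that decoding (all names here are mine, not from the linked C++ code; it assumes 0 <= i < n!):

import java.util.ArrayList;
import java.util.List;

public class Permute {
    // Returns the i-th permutation of the list by decoding i
    // in the factorial number system.
    static <T> List<T> permute(List<T> list, long i) {
        List<T> pool = new ArrayList<>(list);  // elements not yet placed
        List<T> result = new ArrayList<>(list.size());
        for (int slot = list.size(); slot > 0; slot--) {
            long f = factorial(slot - 1);
            int pick = (int) (i / f);          // next factorial-base digit
            i %= f;
            result.add(pool.remove(pick));
        }
        return result;
    }

    static long factorial(int n) {
        long f = 1;
        for (int k = 2; k <= n; k++) f *= k;
        return f;
    }

    public static void main(String[] args) {
        List<String> items = List.of("a", "b", "c");
        for (long i = 0; i < 6; i++) {
            System.out.println(permute(items, i));  // all 6 permutations, in order
        }
    }
}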
Hope this helps!
If iterating over all permutations is enough for you, see this answer: Stepping through all permutations one swap at a time.
For a given n the iterator produces all permutations of the numbers 0 to (n-1).
You can simply wrap it in another iterator that converts each permutation of numbers into a permutation of your array elements. (Note that you cannot just replace the int[] inside the iterator with an arbitrary array/list; the algorithm needs to work with numbers.)
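The mapping step inside such a wrapper might look like this (the class and method names are my own invention):

import java.util.ArrayList;
import java.util.List;

class PermutationAdapter {
    // Maps a permutation of the indices 0..n-1, as produced by the
    // linked iterator, onto the actual list elements.
    static <T> List<T> applyPermutation(List<T> items, int[] perm) {
        List<T> result = new ArrayList<>(perm.length);
        for (int index : perm) {
            result.add(items.get(index));
        }
        return result;
    }
}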

What data-structure should I use to create my own "BigInteger" class?

As an optional assignment, I'm thinking about writing my own implementation of the BigInteger class, where I will provide my own methods for addition, subtraction, multiplication, etc.
This will be for arbitrarily long integer numbers, even hundreds of digits long.
While doing the math on these numbers digit by digit isn't hard, what do you think would be the best data structure to represent my "BigInteger"?
At first I was considering using an array, but then I was thinking I could still potentially overflow (run out of array slots) after a large addition or multiplication. Would this be a good case for using a linked list, since I can tack on digits in O(1) time?
Is there some other data structure that would be even better suited than a linked list? Should the type my data structure holds be the smallest possible integer type available to me?
Also, should I be careful about how I store my "carry" variable? Should it, itself, be of my "BigInteger" type?
Check out the book C Interfaces and Implementations by David R. Hanson. It has 2 chapters on the subject, covering the vector structure, word size and many other issues you are likely to encounter.
It's written for C, but most of it is applicable to C++ and/or Java. And if you use C++ it will be a bit simpler because you can use something like std::vector to manage the array allocation for you.
Always use the smallest int type that will do the job you need (bytes). A linked list should work well, since you won't have to worry about overflowing.
If you use binary trees (whose leaves are ints), you get all the advantages of a linked list (unbounded number of digits, etc.) with simpler divide-and-conquer algorithms. In this case you do not have a single base but many, depending on the level at which you're working.
If you do this, you need to use a BigInteger for the carry. You may consider it an advantage of the "linked list of ints" approach that the carry can always be represented as an int (and this is true for any base, not just the base 10 that most answers seem to assume you should use; in any base, the carry is always a single digit).
I might as well say it: it would be a terrible waste to use base 10 when you can use 2^30 or 2^31.
Accessing elements of linked lists is slow. I think arrays are the way to go, with lots of bounds checking and run-time array resizing as needed.
Clarification: Traversing a linked list and traversing an array are both O(n) operations, but traversing a linked list requires dereferencing a pointer at each step. Just because two algorithms have the same complexity doesn't mean they both take the same time to run. The overhead of allocating and deallocating n nodes in a linked list will also be much heavier than the memory management of a single array of size n, even if the array has to be resized a few times.
Wow, there are some… interesting answers here. I'd recommend reading a book rather than try to sort through all this contradictory advice.
That said, C/C++ is also ill-suited to this task. Big-integer arithmetic is a kind of extended-precision math. Most CPUs provide instructions to handle extended-precision math at comparable or the same speed (bits per instruction) as normal math. When you add, say, 2^31 + 2^31 in a 32-bit register, the result wraps to 0, but there is also a special carry flag in the processor's ALU which a program can read and use.
C++ cannot access that flag, and there's no way in C either. You have to use assembler.
Just to satisfy curiosity, you can use the standard Boolean arithmetic to recover carry bits etc. But you will be much better off downloading an existing library.
I would say an array of ints.
An array is indeed a natural fit. I think it is acceptable to throw an OverflowException when you run out of space in memory; the teacher will see attention to detail.
A multiplication roughly doubles the number of digits, and an addition increases it by at most one, so it is easy to create a sufficiently big array to store the result of your operation.
The carry is at most one digit long, even in multiplication (9*9 = 81: digit 1, carry 8). A single int will do.
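To make the sizing and carry points concrete, here is a rough sketch of addition over int arrays holding one digit per slot, least significant digit first (my own code, base 10 for readability; a larger base works the same way):

// Grade-school addition, least significant digit first.
static int[] add(int[] a, int[] b) {
    int n = Math.max(a.length, b.length);
    int[] result = new int[n + 1];  // +1 slot: addition grows the number by at most one digit
    int carry = 0;                  // a plain int is always enough
    for (int i = 0; i < n; i++) {
        int sum = carry;
        if (i < a.length) sum += a[i];
        if (i < b.length) sum += b[i];
        result[i] = sum % 10;
        carry = sum / 10;
    }
    result[n] = carry;              // may be 0; trim if you care
    return result;
}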
std::vector<bool> or std::vector<unsigned int> is probably what you want. You will have to push_back() or resize() on them as you need more space for multiplies, etc. Also, remember to push_back the correct sign bits if you're using two's complement.
I would say a std::vector of char (since it only has to hold 0-9), if you plan to work in BCD.
If not BCD, then use a vector of int (you didn't make it clear).
Much less space overhead than a linked list.
And all the advice says 'use vector unless you have a good reason not to'.
As a rule of thumb, use std::vector instead of std::list, unless you need to insert elements in the middle of the sequence very often. Vectors tend to be faster, since they are stored contiguously and thus benefit from better spatial locality (a major performance factor on modern platforms).
Make sure you use elements that are natural for the platform. If you want to be platform independent, use long. Remember that unless you have some special compiler intrinsics available, you'll need a type at least twice as large to perform multiplication.
I don't understand why you'd want carry to be a big integer. Carry is a single bit for addition and element-sized for multiplication.
Make sure you read Knuth's The Art of Computer Programming; algorithms pertaining to arbitrary-precision arithmetic are described there to a great extent.
