Checking if a string is an interleaving of two other strings - Java

I came across multiple links to solutions of the problem "How to check if a string is an interleaving of two other strings".
Two solutions that work looked particularly interesting to me, but I have a doubt about each of them.
FIRST: I did not get the hashing part where the author says, "A pure recursive solution will cause time limit exceed. We can optimize it by caching the false visited solutions in the visited set. That will short circuit many repeated search path".
SECOND: I did not get the "else condition" on line 18 of the recursive solution. Won't one of the conditions (line 14 and line 16) always be true, since they are inside the else of the line 11 if condition, which is if(s2.charAt(0) != s3.charAt(0) && s1.charAt(0) != s3.charAt(0))?

First
This is actually a space-time tradeoff (computation time can be reduced at the cost of increased memory use). Why does the author say the pure recursive solution is slow (in fact, it has exponential time complexity)? Because the repeated recursion computes the same values again and again.
So what can you do? Store the values you have already computed. Next time you want one of those values again, just look it up in a table. This is called caching; once the values are cached, you can treat every recursive call inside the function as if it ran in O(1) time. The core idea is: don't calculate the same thing twice.
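A minimal sketch of that caching idea, written with indices (i, j) counting how many characters of s1 and s2 have been consumed. This is an illustration of the technique, not the original author's code; as in the quoted comment, only the states that are known to fail are cached:

import java.util.HashSet;
import java.util.Set;

static boolean canInterleave(String s1, String s2, String s3,
                             int i, int j, Set<String> failed) {
    if (i + j == s3.length()) {
        return i == s1.length() && j == s2.length();
    }
    String key = i + "," + j;
    if (failed.contains(key)) {
        return false;                        // this state is already known to be a dead end
    }
    char c = s3.charAt(i + j);
    if (i < s1.length() && s1.charAt(i) == c
            && canInterleave(s1, s2, s3, i + 1, j, failed)) {
        return true;
    }
    if (j < s2.length() && s2.charAt(j) == c
            && canInterleave(s1, s2, s3, i, j + 1, failed)) {
        return true;
    }
    failed.add(key);                         // cache the false result
    return false;
}

A caller would invoke it as canInterleave(s1, s2, s3, 0, 0, new HashSet<>()). There are at most (|s1|+1) * (|s2|+1) distinct (i, j) states and each failing state is explored only once, so the run time drops from exponential to O(|s1| * |s2|).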
Second
Consider the case where s2.charAt(0) == s3.charAt(0) && s1.charAt(0) == s3.charAt(0). When both first characters match s3, neither single-match branch is sufficient on its own, so that case has to be handled separately, which is what the else on line 18 does.
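For concreteness, here is a hedged reconstruction of the kind of branch structure the question refers to (not the original author's exact code). The point is that when both first characters match, picking just one branch can be wrong, so the final else has to try both:

static boolean isInterleave(String s1, String s2, String s3) {
    if (s3.isEmpty()) {
        return s1.isEmpty() && s2.isEmpty();
    }
    if (s1.isEmpty()) {
        return s2.equals(s3);
    }
    if (s2.isEmpty()) {
        return s1.equals(s3);
    }
    char c = s3.charAt(0);
    if (s1.charAt(0) != c && s2.charAt(0) != c) {
        return false;                                               // neither string can supply c
    } else if (s1.charAt(0) == c && s2.charAt(0) != c) {
        return isInterleave(s1.substring(1), s2, s3.substring(1));  // only s1 matches
    } else if (s2.charAt(0) == c && s1.charAt(0) != c) {
        return isInterleave(s1, s2.substring(1), s3.substring(1));  // only s2 matches
    } else {
        // both match: greedily taking from either one can fail, so try both
        return isInterleave(s1.substring(1), s2, s3.substring(1))
            || isInterleave(s1, s2.substring(1), s3.substring(1));
    }
}

It is exactly this both-match branch that the caching from the first answer keeps from exploding into exponential time.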

Related

Time space trade-off

I was asked the following question on a quiz and had NO idea what to ask myself when prompted to design a more efficient segment of code. I mean, I know if-else is time consuming; I was thinking maybe a for loop? I was curious if someone could A. tell me whether there is only one answer and B. walk me through why whatever the solution may be runs so much faster.
It says: Suppose the following segment of code is very time consuming; write a segment that shaves at least 2 minutes off the run time.
if (f(n)%==0)
key = 3*f(n)+4*f(n)+7;
else
key = 6*f(n)*f(n)-33;
"I mean I know if-else are time consuming, I was thinking maybe a for loop" this is not correct. Consider what's happening here that's actually time consuming. Hint: f(n) could be doing many things. But if the code takes a long time to process, your only best bet is that f(n) is the culprit. The only other thing happening here is an if-statement which is fast, and some arithmetic (which computers are pretty darn fast at).
Luckily, you are calculating f(n) for a fixed input n multiple times! Save yourself the trouble by saving the output of this method in a variable then just using the variable. I don't know where you or your teacher got "2 minutes" from, that's arbitrary nonsense in my opinion.
The thing to note is that f(n) gets called 3 times in all cases. If we are assuming that is the bottleneck, then we want to minimize the number of times we call that function.
Note, too, that the result of f(n) is a constant (assuming no external factors). Therefore, you only need to calculate it once.
According to the quiz, the optimized code segment will shave at least two minutes off the run time. You can deduce that the given code segment takes at least two minutes to run.
Regardless of the result of the if statement's condition, you are calling f(n) 3 times in the given code segment.
By calculating f(n) once at the beginning and assigning the value to a variable to be used in subsequent calculations...
Something like this:
result = f(n)
if (result%==0)
key = 3*result+4*result+7;
else
key = 6*result*result-33;
... you will reduce the execution time by (2 x execution time of an f(n) call) - (execution time of declaring a variable and assigning a value to it + (2 x execution time of reading the value from that variable)). The execution time of declaring a variable, assigning a value to it, reading the value back, and the other statements in the given code (like the if statement and the logical and arithmetic operations) is insignificant (probably less than 1 millisecond).
In accordance with the expected result, you can deduce that each call to f(n) takes at least 1 minute to execute; to reiterate, the difference between the execution time of the given code segment and that of the optimized code segment is, as a result, 2 minutes.
If your professor says that you need to shave two minutes from the code, then you can say that the code takes at least two minutes to run. Since f(n) is calculated 3 times in the code, it is a safe bet to say that each f(n) calculation takes around 40 seconds, assuming no caching. Then calculating f(n) once at the beginning and reusing the saved result in place of the remaining calls will save you 40*2 seconds.
Something like this:
result = f(n)
if (result%==0)
key = 3*result+4*result+7;
else
key = 6*result*result-33;
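For reference, a compilable Java version of the same idea (assuming, purely for illustration, that the quiz's condition was meant to be an even/odd test and that f is some expensive method defined elsewhere):

int result = f(n);                     // call the expensive method exactly once
int key;
if (result % 2 == 0) {                 // assumed even/odd test, for illustration only
    key = 3 * result + 4 * result + 7;
} else {
    key = 6 * result * result - 33;
}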

When does it make sense to store the result of a comparison versus recalculating a comparison in terms of speed?

I'd like to have a solid understanding of when (ignoring available memory space) it makes sense to store the result of a comparison instead of recalculating it. What is the tipping point for justifying the time cost incurred by storage? Is it 2, 3, or 4 comparisons? More?
For example, in this particular case, which option (in general) will perform better in terms of speed?
Option 1:
int result = id.compareTo(node.id);
return result > 0 ? 1 : result < 0 ? -1 : 0;
Option 2:
return id.compareTo(node.id) > 0 ? 1 : id.compareTo(node.id) < 0 ? -1 : 0;
I tried to profile the two options myself in order to answer my own question, but I don't have much experience with this sort of performance testing and, as such, would rather get a more definitive answer from someone with either more experience or else a better grasp of the theoretical elements involved.
I know it's not a big deal and that most of the time the difference will be negligible. However, I'm a perfectionist, and I'd really just like to resolve this particular issue so that I can get on with my life, haha.
Additionally, I think the answer is likely to prove enlightening in regards to similar situations I may encounter in the future wherein the difference might very well be significant (such as when the cost of a comparison or memory allocation is either unable to be incurred or else complex enough to cause a real issue concerning performance).
Answers should be relevant to programming with Java and not other languages, please.
I know I've mentioned it a few times already, but PLEASE focus answers ONLY on the SPEED DIFFERENCE! I am well aware that many other factors can and should be taken into account when writing code, but here I want just a straight-forward argument for which is FASTER.
Experience tells me that option 1 should be faster, because you're making just one call to the compare method and storing the result for reuse. A fact that supports this belief is that local variables live on the stack, and making a method call involves a lot more work on the stack than just pushing a value onto it. However, profiling is the best and safest way to compare two implementations.
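If you do want to measure it yourself, a rough harness along the lines below can give a first impression. Note this is only a sketch: naive timings like this are distorted by JIT warm-up and dead-code elimination, so a dedicated benchmarking tool such as JMH is the safer choice, and the id and nodeId values here are made up purely for illustration.

// Rough timing sketch only; run inside main(). Results are heavily affected by JIT warm-up.
String id = "m";
String nodeId = "k";
int sink = 0;

long t1 = System.nanoTime();
for (int i = 0; i < 10_000_000; i++) {
    int result = id.compareTo(nodeId);                    // option 1: one call, stored
    sink += result > 0 ? 1 : result < 0 ? -1 : 0;
}
long t2 = System.nanoTime();
for (int i = 0; i < 10_000_000; i++) {
    sink += id.compareTo(nodeId) > 0 ? 1                  // option 2: up to two calls
          : id.compareTo(nodeId) < 0 ? -1 : 0;
}
long t3 = System.nanoTime();

System.out.println("option 1: " + (t2 - t1) / 1e6 + " ms");
System.out.println("option 2: " + (t3 - t2) / 1e6 + " ms");
System.out.println(sink);   // use the accumulated value so the loops aren't optimized away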
The first thing to realise is that the Java compiler and JVM together may optimise your code however they see fit to get the job done most efficiently (as long as certain rules are followed). Chances are there is no difference in performance, and chances are also that what is actually executed is not what you think it is.
One really important difference, however, is in debugging: if you put a breakpoint on the return statement of the store-in-variable version, you can see what the call returned; otherwise you can't see that in a debugger. Even more handy is to seemingly uselessly store the value to be returned from the method in a variable and then return it, so that you can see what is about to be returned while debugging; otherwise there's no way to see it.
Option 1 cannot be slower than option 2. If the compiler optimizes, both could be equal, but even then option 1 is more readable, more compact, and easier to test.
So there is no argument for option 2.
If you like, you could change it to final int result = ....
Although I expect the compiler is clever enough that the final keyword makes no difference in this case, and the final makes the code a bit less readable.
Option 1 is always the preferred one. Consider a real-world scenario:
A thread evaluates id.compareTo(node.id) > 0 ? 1, and some other thread changes the value of node.id right after that check but before the id.compareTo(node.id) < 0 ? -1 : 0 check runs; the two comparisons may then give inconsistent results.
Performance-wise, option 1 is also faster whenever the comparison itself involves a non-trivial amount of work.
When does it make sense to store the result of a comparison versus recalculating a comparison in terms of speed?
Most of the time, micro-optimizations like option #1 versus option #2 don't make any significant difference. Indeed, it ONLY makes a significant performance difference if:
the comparison is expensive,
the comparison is performed a large number of times, AND
performance matters.
Indeed, the chances are that you have already spent more time and money thinking about this than will be saved over the entire useful lifetime of the application.
Instead of focussing on performance, you should be focussing on making your code readable. Think about the next person who has to read and modify the code, and make it so that he/she is less likely to misread it.
In this case, the first option is more readable than the second one. THAT is why you should use it, not performance reasons. (Though, if anything, the first version is probably faster.)

Debugging of a recursive algorithm

My question is whether there are some smart ways of debugging complicated recursive algorithms.
Assume we have a complicated one (not a simple case where a recursion counter is decreased in each 'nested iteration').
I mean something like recursive traversal of a graph in which cycles are possible.
I need to check that I am not getting an endless loop somewhere, and doing this just with a debugger gives no certain answer (because I am not sure whether the algorithm is in an endless loop or is just processing as it should).
It's hard to explain without a concrete example, but what I need is...
'to check that endless loops don't occur in, let's say, a complicated recursive algorithm'.
You need to form a theory for why you think the algorithm does terminate. Ideally, prove the theory as a mathematical theorem.
You can look for a function of the problem state that decreases on each recursive call. For example, see the following discussion of the Ackermann function, from Wikipedia:
It may not be immediately obvious that the evaluation of A(m, n) always terminates. However, the recursion is bounded because in each recursive application either m decreases, or m remains the same and n decreases. Each time that n reaches zero, m decreases, so m eventually reaches zero as well. (Expressed more technically, in each case the pair (m, n) decreases in the lexicographic order on pairs, which is a well-ordering, just like the ordering of single non-negative integers; this means one cannot go down in the ordering infinitely many times in succession.) However, when m decreases there is no upper bound on how much n can increase — and it will often increase greatly.
That is the type of reasoning you should be thinking of applying to your algorithm.
If you cannot find any way to prove your algorithm terminates, consider looking for a variation whose termination you can prove. It is not always possible to decide whether an arbitrary program terminates or not. The trick is to write algorithms you can prove terminate.
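Applied to the graph-traversal case from the question, one such decreasing quantity is the number of not-yet-visited vertices: if every call either marks a new vertex as visited or returns immediately, the recursion can make at most |V| "productive" calls and must terminate. A minimal sketch (the Node type and its neighbours field are illustrative, not taken from the question):

import java.util.HashSet;
import java.util.List;
import java.util.Set;

class Node {
    List<Node> neighbours;                    // illustrative graph representation
}

class Traversal {
    private final Set<Node> visited = new HashSet<>();

    void visit(Node node) {
        if (!visited.add(node)) {
            return;                           // already seen: cycles cannot cause endless recursion
        }
        // Variant: the number of unvisited vertices strictly decreases on every
        // call that gets past the check above, so such calls happen at most |V| times.
        for (Node next : node.neighbours) {
            visit(next);
        }
    }
}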
The best approach is proving finiteness with pre- and postconditions, variants and invariants. If you can specify a (possibly virtual) quantity that increases on every call and is bounded above (a variant), you have a guarantee.
This is the same as proving that loops are finite. Furthermore, it might make complex algorithms more tractable.
You need to count the depth of recursive calls ... and then throw an exception if the depth of recursive calls reaches a certain threshold.
For example:
void theMethod(Object[] otherParameters, int recursiveCallDepth)
{
    if (recursiveCallDepth > 100) {
        throw new RuntimeException("....");   // recursion depth limit exceeded
    }
    theMethod(otherParameters, recursiveCallDepth + 1);
}
If you want to check for endless loops, write a System.out.println("no its not endless"); on the line right after the call to the recursive function. If the recursion were endless, this statement would never get printed; otherwise you will see the output.
One suggestion is the following:
If you have an endless loop, then in the graph case you will obtain a path whose number of vertices is greater than the total number of vertices in the graph. Assuming the number of vertices in the graph is stored in a global variable (which, I think, is the most common case), you can set a conditional breakpoint at the beginning of the recursion that fires once the depth exceeds the total number of vertices.
Here is a link describing how to set conditional breakpoints for Java in Eclipse.

RLE sequence, setting a value

Say I have an arbitrary RLE sequence. (For those who don't know, RLE compresses an array like [4 4 4 4 4 6 6 1 1] into [(5,4) (2,6) (2,1)]: first comes the length of a run, then the value itself.)
How can I come up with an algorithm to set a value at a given index without decompressing the whole thing? For example, if you do set(0,1), the RLE would become [(1,1) (4,4) (2,6) (2,1)]. (In set, the first argument is the index, the second is the value.)
Also, I've stored this compressed sequence as an ArrayList of Entries; each Entry is one of these pairs, e.g. (1,1), holding an amount and a value.
I'm trying to find an efficient way to do this; right now I can only think of methods that have WAY too many if statements to be considered clean. There are so many possible variations: for example, the given value might split an existing entry, or it might have the same value as an existing entry, etc...
Any help would be much appreciated. I'm working on an algorithm now, here is some of it:
while (i < rleAL.size() && count != index)
{
    indexToStop = 0;
    while (count < index || indexToStop == rleAL.get(i).getAmount())
    {
        count++;
        indexToStop++;
    }
    if (count != index)
    {
        i++;
    }
}
As you can see this is getting increasingly sloppy...
Thanks!
RLE is generally bad at updates, exactly for the reason stated. Changing ArrayList to LinkedList won't help much, as LinkedList is awfully slow in everything but inserts (and even with inserts you must already hold a reference to a specific location using e.g. ListIterator).
Coming back to the original question, though, there's no need to decompress everything. All you need is to find the right place (by summing up the counts), which is linear time (consider a skip list to make it faster), after which you'll have just four cases:
You're in a block, and the block's value is the same as the number you're trying to save.
You're inside a block, and the number is different.
You're at the beginning or the end of a block; the number differs from the block's value but is the same as the neighbouring block's.
You're at the beginning or the end of a block; the number is neither the same as the block's value nor the neighbour's.
The corresponding actions are, obviously:
Do nothing
Change counter in the block, add two blocks
Change counters in two blocks
Change counter in one block, insert a new one
(Note, though, that if you have skip lists, you must update those as well.)
Update: it gets more interesting if the block to update is of length 1, true. Still, it all stays just as trivial: in any case, the changes are limited to a maximum of three blocks.
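To make the splitting case concrete, here is a rough sketch of set(index, value) over a list of entries. The Entry class and its accessors are assumed to look roughly like the ones described in the question, and merging the new run with equal-valued neighbours (cases 3 and 4 above) is left out to keep it short:

import java.util.List;

class Entry {
    private final int amount;
    private final int value;
    Entry(int amount, int value) { this.amount = amount; this.value = value; }
    int getAmount() { return amount; }
    int getValue()  { return value; }
}

class Rle {
    // Sets element number index of the decompressed sequence to value,
    // splitting the containing run if necessary.
    // Assumes 0 <= index < total decompressed length.
    static void set(List<Entry> runs, int index, int value) {
        int i = 0;
        int start = 0;                                // decompressed index where runs.get(i) begins
        while (start + runs.get(i).getAmount() <= index) {
            start += runs.get(i).getAmount();
            i++;
        }
        Entry run = runs.get(i);
        if (run.getValue() == value) {
            return;                                   // case 1: nothing to change
        }
        int before = index - start;                   // elements of the run left of index
        int after = run.getAmount() - before - 1;     // elements of the run right of index
        runs.remove(i);
        if (after > 0) {
            runs.add(i, new Entry(after, run.getValue()));
        }
        runs.add(i, new Entry(1, value));
        if (before > 0) {
            runs.add(i, new Entry(before, run.getValue()));
        }
        // Cases 3 and 4 (merging with an equal-valued neighbour) would go here.
    }
}

With the question's example, calling set on [(5,4) (2,6) (2,1)] with index 0 and value 1 produces [(1,1) (4,4) (2,6) (2,1)], matching the expected output.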

BigO running time on some methods

OK, these are all pretty simple methods, and there are a few of them, so I didn't want to create multiple questions when they are all about the same thing. Big-O is my weakness. I just can't figure out how they come up with these answers. Is there any way you can give me some insight into your thinking when analyzing the running times of some of these methods? How do you break it down? How should I think when I see something like these? (Specifically the second one: I don't get how that's O(1).)
function f1:
loop 3 times
loop n times
Therefore O(3*n) which is effectively O(n).
function f2:
loop 50 times
O(50) is effectively O(1).
We know it will loop 50 times because it will go until n = n - (n / 50) is 0. For this to be true, it must iterate 50 times (n - (n / 50)*50 = 0).
function f3:
loop n times
loop n times
Therefore O(n^2).
function f4:
recurse n times
You know this because worst case is that n = high - low + 1. Disregard the +1.
That means that n = high - low.
To terminate,
arr[hi] * arr[low] > 10
Assume that this doesn't occur until low is incremented to the highest it can go (high).
This means n = high - 0 and we must recurse up to n times.
function f5:
loops ceil(log_2(n)) times
We know this because of the m/=2.
For example, let n=10. log_2(10) = 3.3, the ceiling of which is 4.
10 / 2 = 5
5 / 2 = 2.5
2.5 / 2 = 1.25
1.25 / 2 = 0.75
In total, there are 4 iterations.
You get an n^2 analysis when performing a loop within a loop, such as in the third method.
However, the first method doesn't get an n^2 timing analysis, because the outer loop is defined to run only three times. This makes the timing for the first one 3n, but we don't care about constant factors in Big-O.
The second one introduces an interesting case where, despite having a single loop, the timing analysis is still O(1). This is because the loop runs a fixed number of times regardless of the input size; if you were to chart the time this method takes against n, the line would stay flat.
For the fourth method, you have O(n) timing because your recursive function call passes lo + 1. This is similar to using a for loop and incrementing with lo++/++lo.
The last one has O(log n) timing because you're dividing your variable by two. Just remember that anything that reminds you of a binary search will have log n timing.
There is also another trick to timing analysis. Say you had a loop within a loop, and within each of the two loops you were reading lines from a file or popping elements off a stack. This would actually only be an O(n) method, because a file only has a certain number of lines you can read, and a stack only has a certain number of elements you can pop off.
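To make those patterns concrete, here are some tiny illustrative loops (not the quiz's actual methods) together with the analysis you would give each of them:

// Illustrative only; n is the size of the input.
static int linear(int[] a) {                // O(n): one pass over the input
    int sum = 0;
    for (int x : a) {
        sum += x;
    }
    return sum;
}

static int constant(int[] a) {              // O(1): 50 iterations regardless of a.length
    int sum = 0;
    for (int i = 0; i < 50; i++) {
        sum += i;
    }
    return sum;
}

static int quadratic(int[] a) {             // O(n^2): a loop within a loop
    int count = 0;
    for (int i = 0; i < a.length; i++) {
        for (int j = 0; j < a.length; j++) {
            count++;
        }
    }
    return count;
}

static int logarithmic(int n) {             // O(log n): the counter is halved each iteration
    int steps = 0;
    for (int m = n; m > 0; m /= 2) {
        steps++;
    }
    return steps;
}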
The general idea of big-O notation is this: it gives a rough answer to the question "If you're given a set of N items, and you have to perform some operation repeatedly on these items, how many times will you need to perform this operation?" I say a rough answer, because it (most of the time) doesn't give a precise answer of "5*N+35", but just "N". It's like a ballpark. You don't really care about the precise answer, you just want to know how bad it will get when N gets large. So answers like O(N), O(N*N), O(logN) and O(N!) are typical, because they each represent sort of a "class" of answers, which you can compare to each other. An algorithm with O(N) will perform way better than an algorithm with O(N*N) when N gets large enough, it doesn't matter how lengthy the operation is itself.
So I break it down thus: First identify what the N will be. In the examples above it's pretty obvious - it's the size of the input array, because that determines how many times we will loop. Sometimes it's not so obvious, and sometimes you have multiple input data, so instead of just N you also get M and other letters (and then the answer is something like O(N*M*M)).
Then, when I have my N figured out, I try to identify the loop which depends on N. Actually, these two things often get identified together, as they are pretty much tied together.
And, lastly of course, I have to figure out how many iterations the program will make depending on N. And to make it easier, I don't really try to count them, just try to recognize the typical answers - O(1), O(N), O(N*N), O(logN), O(N!) or perhaps some other power of N. The O(N!) is actually pretty rare, because it's so inefficient, that implementing it would be pointless.
If you get an answer of something like N*N+N+1, then just discard the smaller terms, because, again, when N gets large, the others don't matter anymore. And ignore it if the operation is repeated some fixed number of times: O(5*N) is the same as O(N), because it's the ballpark we're looking for.
Added: As asked in the comments, here are the analyses of the first two methods:
The first one is easy. There are only two loops, the inner one is O(N), and the outer one just repeats that 3 times. So it's still O(N). (Remember - O(3N) = O(N)).
The second one is tricky. I'm not really sure about it. After looking at it for a while, I understood why it loops at most 50 times. Since this is not dependent on N at all, it counts as O(1). However, if you were to pass it, say, an array of only 10 items, all positive, it would go into an infinite loop. That's O(∞), I guess. So which one is it? I don't know...
I don't think there's a formal way of determining the big-O number for an algorithm. It's like the halting problem. In fact, come to think of it, if you could universally determine the big-O for a piece of code, you could also determine whether it ever halts, thus contradicting the halting problem. But those are just my musings.
Typically I just go by... dunno, sort of a "gut feeling". Once you "get" what the Big-O represents, it becomes pretty intuitive. But for complicated algorithms it's not always possible to determine. Take Quicksort for example. On average it's O(N*logN), but depending on the data it can degrade to O(N*N). The questions you'll get on the test though should have clear answers.
The second one is O(1) because big O is a function of the length of the input. That is, if the input size changes from 1 million to 1 billion, the runtime should increase by a factor of 1000 if the function is O(N), and by a factor of 1 million if it's O(n^2). However, the second function runs in about 50 steps regardless of the input length, so it's O(1). Technically it would be O(50), but constants don't matter for big O.
