Time complexity of System.out.println - Java

I've been told different things over my course on algorithms, and was wondering if I could get a definitive answer as to the time complexity of Java's System.out.println() command.
For example, what would the time complexity of the following be, with respect to N?
String stringy = "";
while (stringy.length() < N) {
    System.out.println(stringy);
    stringy += "X";
}
Thanks for helping out the new guy!

The time complexity of this code is O(N*N), because it's a loop that runs N times and prints on each pass. I don't know what you have been told, but the time complexity of printing is no worse than O(N) in Java.
In your code you add an "X" on each line, and therefore your printing will be:
X
XX
XXX
XXXX
XXXXX
XXXXXX
.
.
.
so its complexity is calculated as an arithmetic progression, and we get:
(1 + N) * N / 2 = O(N^2)
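To make the arithmetic series concrete, here is a small sketch (my own addition, not from this answer) that counts the characters handed to println instead of timing them. Note that the first println prints the empty string, so the exact sum is 0 + 1 + ... + (N-1) = N(N-1)/2; the constant offset from (1+N)*N/2 doesn't change the O(N^2) order.
public class PrintCost {
    public static void main(String[] args) {
        int n = 1000;
        long printedChars = 0;
        StringBuilder stringy = new StringBuilder();
        while (stringy.length() < n) {
            printedChars += stringy.length(); // cost of println(stringy), ignoring the newline
            stringy.append("X");
        }
        // 0 + 1 + ... + (n - 1) = n * (n - 1) / 2 characters in total: O(n^2)
        System.out.println(printedChars + " == " + ((long) n * (n - 1) / 2));
    }
}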
To read about how the command works, you can read here or here:
There is a general notion that SOPs are bad in performance. When we
analyze deeply, the sequence of calls are like println -> print ->
write() + newLine(). This sequence flow is an implementation of
Sun/Oracle JDK. Both write() and newLine() contains a synchronized
block. Synchronization has a little overhead, but more than that the
cost of adding characters to the buffer and printing is high.
When we run a performance analysis, running a large number of SOPs and
recording the time, the execution duration increases proportionally.
Performance degrades when we print more than 50 characters or print
more than 50,000 lines.
It all depends on the scenario we use it. Whatever may be the case, do
not use System.out.println for logging to stdout.

I ran a basic Python program to check the time complexity of the print statement in Python for a variable number of characters to be printed. The code goes as follows:
import time

def current_milli_time():
    return round(time.time() * 1000)

# =====================================
startTime1 = current_milli_time()
for i in range(10000):
    print("a", end="")
endTime1 = current_milli_time()
# =====================================
startTime2 = current_milli_time()
for i in range(10000):
    print("ab", end="")
endTime2 = current_milli_time()
# =====================================
startTime3 = current_milli_time()
for i in range(10000):
    print("abc", end="")
endTime3 = current_milli_time()
# =====================================

print("\nTime(ms) for first case: ", endTime1 - startTime1)
print("Time(ms) for second case: ", endTime2 - startTime2)
print("Time(ms) for third case: ", endTime3 - startTime3)
We can see that in the first case we printed only "a", in the second case "ab", and in the third case "abc"; the time taken increased linearly with the number of characters.
Therefore, it can be said that the print statement takes O(lengthOfString) time, and the same reasoning should apply in other languages as well.
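For comparison with the original Java question, here is a rough Java port of the same experiment. This is only a sketch, not a rigorous benchmark: JIT warm-up, I/O buffering, and terminal speed all add noise, so treat the numbers as indicative of the trend only.
public class PrintTiming {
    static long timeBatch(String s, int reps) {
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            System.out.print(s);
        }
        return (System.nanoTime() - start) / 1_000_000; // milliseconds
    }

    public static void main(String[] args) {
        int reps = 10_000;
        long t1 = timeBatch("a", reps);
        long t2 = timeBatch("ab", reps);
        long t3 = timeBatch("abc", reps);
        System.out.println("\nTime(ms) for first case:  " + t1);
        System.out.println("Time(ms) for second case: " + t2);
        System.out.println("Time(ms) for third case:  " + t3);
    }
}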

Time complexity tells you how much more work your algorithm has to do per increment of input size, give or take some constant coefficient.
So an upper-bound complexity of O(2N) is equal to a complexity of O(23587N), because the actual definition, found here:
http://en.wikipedia.org/wiki/Big_O_notation
states that the coefficient can be any number, no matter how large, as long as it is fixed with regard to the size of the input.
Because you are not using N within the loop, you are just adding a char onto a String; the work per iteration is fixed, so the total work is proportional to how many iterations you have -> O(N).
If you had stringy += stringy; instead, it would be O(N^2), because each iteration you are doubling the amount of work you have to do.
**NOTE:
I am assuming System.out.print is an atomic statement, i.e. it prints all the characters as a single action. If it printed each character individually, then it's O(N^2)...

The complexity of this code is O(n^2). The loop iterates N times, and System.out.println must print each character, printing from 0 to N characters per iteration, averaging N/2; dropping the constant, N * N = N^2. In the same manner, appending to the string causes the entire string to be copied (Strings are immutable in Java, so any change means copying the whole string into a new one). This is another linear operation. So you have n * (n/2 + n/2) = n * n, which is still of quadratic order: O(n^2).
String stringy = "";
while (stringy.length() < N) {   // will iterate N times
    System.out.println(stringy); // has to print up to N letters
    stringy += "X";              // has to copy up to N letters into a new string
}

A great answer can be found here:
http://www.quora.com/What-exactly-is-the-time-complexity-for-System-out-println-in-Java-O-1-or-O-N
The main idea is that printing a string is actually copying it to stdout - and we know that copying a string is O(n).
The second part says that you can try printing, a large number of times:
- one character
- a very large string
And you will see the time difference! (If printing were O(1), you wouldn't.)

Time complexity of the System.out.println(stringy); command???
You basically mean the time complexity of the code snippet above. Look, time complexity is not particularly tied to one specific piece of code or language; it basically means how much time the code will theoretically take. This usually depends on two or three things:
size of the input
degree of polynomial (in case of solving polynomial equations)
Now in this part of your code:
String stringy = "";
while (stringy.length() < N) {   // the loop will execute on the order of N times
    System.out.println(stringy); // println executes on the order of N operations too, as it prints each character
    stringy += "X";
}
It will obviously depend on the size of the input, which is of course the length of the string.
First, the while loop executes a little less than N times (because of the condition stringy.length() < N; making it <= would make it run through the full length of the string), which we can say is on the order of N, and printing the string is also done on the order of N, so the overall code has a running time of O(N^2).

Related

How to calculate Big O time complexity for while loops

I am having trouble understanding how while loops affect the Big O time complexity.
For example, how would I calculate the time complexity for the code below?
Since it has a for loop that traverses each element in the array and two while loops inside it, my initial thought was O(n^3) for the time complexity, but I do not think that is right.
HashMap<Integer, Boolean> ht = new HashMap<>();
int max = 0;
for (int j : array) {
    if (ht.getOrDefault(j, false)) continue; // getOrDefault avoids a NullPointerException on missing keys
    int left = j - 1;
    // check if hashtable contains number
    while (ht.containsKey(left)) {
        // do something
        left--;
    }
    int right = j + 1;
    // check if hashtable contains number
    while (ht.containsKey(right)) {
        // do something
        right++;
    }
    int diff = right - left;
    if (max < diff) {
        // do something
    }
}
There is best case, average case, and worst case.
I'm going to have to assume there is something that constrains the two while loops so that neither iterates more than n times, where n is the number of elements in the array.
In the best case, you have O(n). That is because if(ht.get(j)) is always true, the continue path is always taken. Neither while loop is executed.
For the worst case, if(ht.get(j)) is always false, the while loops will be executed. Also, in the worst case, each while loop will have n passes. [1] The net result is 2 * n for both inner loops multiplied by n for the outer loop: (2 * n) * n. That would give you time complexity of O(n^2). [2]
The lookup time could potentially be a factor. A hash table lookup usually runs in constant time: O(1). That's the best case. But, the worst case is O(n). This happens when all entries have the same hash code. If that happens, it could potentially change your worst case to O(n^3).
[1] I suspect the worst case, the number of passes of the first while loop plus the number of passes of the second while loop is actually n or close to it. But, that doesn't change the result.
[2] In Big O, we choose the term that grows the fastest and ignore the coefficients. So, in this example, we drop the 2 in 2 * n * n.
Assuming there are m and n entries in your HashMap and array, respectively.
Since you have n elements for the for loop, the complexity can be written as n * complexity_inside_for.
Inside the for loop, you have two consecutive (not nested) while loops, each contributing a complexity of m as in worst case it'll need to go through all entries in your HashMap. Therefore, complexity_inside_for = m + m = 2m.
So overall, the time complexity is n * 2m. However, as m and n approach infinity, the factor 2 doesn't matter, because it is not a function of m and/or n, and it can be discarded. This gives a big-O time complexity of O(m*n).
For one nested loop, the time complexity works like this: O(n^2).
In each iteration of i, the inner loop is executed n times. The time complexity of a loop is equal to the number of times the innermost statement is executed.
So for your case that would be O(n^2) + O(n).
You can find more explanation here:
Time-complexity

Big O notation (Algorithms)

Hi, I am new to Big O notation and having trouble specifying big O for the following. Could someone kindly explain how to work it out?
int sum = 1;
for (int count = n; count > 0; count /= 2) {
    sum = sum * count;
}
As count is divided by 2 each time, the loop runs Theta(log n) times (going from n down to 0). Hence, it is in O(log n) as well.
In the first line, the code runs once, so its complexity is O(1).
In the second line, a for loop runs while count > 0, and count is divided by 2 on every pass. Note that the loop does terminate: count is an int, so Java uses integer division, and once count reaches 1, count /= 2 yields 0 and the loop exits. The loop therefore runs about log2(n) + 1 times, which is O(log n). (With real-valued division, halving would never reach exactly 0; that is where the "runs forever" intuition comes from, but it does not apply to integer division.) If n <= 0, the loop body never runs at all, and the complexity is O(1).
count gets halved on each iteration. If n = 64:
step1 => count = 64
step2 => count = 32
step3 => count = 16
...
Therefore, its worst-case scenario has O(log n) time complexity.
In this case, however, your loop's body has a constant number of operations; therefore best, worst, and average case scenarios are the same, and:
Ω(log n) = Θ(log n) = O(log n)
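If you want to convince yourself empirically, here is a quick sketch (my addition, not from the answer) that counts the loop's iterations for a few values of n and compares them with log2(n):
public class HalvingLoop {
    public static void main(String[] args) {
        for (int n : new int[] { 16, 64, 1_000, 1_000_000 }) {
            int iterations = 0;
            long sum = 1;
            for (int count = n; count > 0; count /= 2) {
                sum = sum * count; // the product overflows for large n; we only care about the count
                iterations++;
            }
            // the loop runs floor(log2(n)) + 1 times for n >= 1
            System.out.printf("n=%d -> %d iterations, log2(n) = %.1f%n",
                    n, iterations, Math.log(n) / Math.log(2));
        }
    }
}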

How to find the range for the number of basic operations? [duplicate]

I have gone through Google and Stack Overflow search, but nowhere was I able to find a clear and straightforward explanation of how to calculate time complexity.
What do I know already?
Say for code as simple as the one below:
char h = 'y'; // This will be executed 1 time
int abc = 0; // This will be executed 1 time
Say for a loop like the one below:
for (int i = 0; i < N; i++) {
    Console.Write("Hello, World!!");
}
int i = 0; will be executed only once. The time is actually charged to the assignment i = 0, not to the declaration.
i < N; will be executed N + 1 times.
i++ will be executed N times.
So the number of operations required by this loop is {1 + (N + 1) + N} = 2N + 2. (But this still may be wrong, as I am not confident about my understanding.)
OK, so these small basic calculations I think I know, but in most cases I have seen the time complexity as O(N), O(n^2), O(log n), O(n!), and many others.
How to find time complexity of an algorithm
You add up how many machine instructions it will execute as a function of the size of its input, and then simplify the expression to the largest term (when N is very large), discarding any constant factor.
For example, let's see how we simplify 2N + 2 machine instructions and describe this as just O(N).
Why do we remove the two 2s?
We are interested in the performance of the algorithm as N becomes large.
Consider the two terms 2N and 2.
What is the relative influence of these two terms as N becomes large? Suppose N is a million.
Then the first term is 2 million and the second term is only 2.
For this reason, we drop all but the largest terms for large N.
So, now we have gone from 2N + 2 to 2N.
Traditionally, we are only interested in performance up to constant factors.
This means that we don't really care if there is some constant multiple of difference in performance when N is large. The unit of 2N is not well-defined in the first place anyway. So we can multiply or divide by a constant factor to get to the simplest expression.
So 2N becomes just N.
This is an excellent article: Time complexity of algorithm
The answer below is copied from it (in case the excellent link goes bust):
The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:
statement;
Is constant. The running time of the statement will not change in relation to N.
for ( i = 0; i < N; i++ )
    statement;
Is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.
for ( i = 0; i < N; i++ ) {
    for ( j = 0; j < N; j++ )
        statement;
}
Is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases fourfold.
while ( low <= high ) {
    mid = ( low + high ) / 2;
    if ( target < list[mid] )
        high = mid - 1;
    else if ( target > list[mid] )
        low = mid + 1;
    else
        break;
}
Is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.
void quicksort (int list[], int left, int right)
{
    if (left >= right) return; // base case: an empty or single-element range needs no work
    int pivot = partition (list, left, right);
    quicksort(list, left, pivot - 1);
    quicksort(list, pivot + 1, right);
}
Is N * log (N). The running time consists of N loops (iterative or recursive) that are logarithmic, thus the algorithm is a combination of linear and logarithmic.
In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is described as O(<type>) where <type> is the measure. The quicksort algorithm would be described as O(N * log(N)).
Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course. ;)
Taken from here - Introduction to Time Complexity of an Algorithm
1. Introduction
In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
2. Big O notation
The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity.
For example, if the time required by an algorithm on all inputs of size n is at most 5n^3 + 3n, the asymptotic time complexity is O(n^3). More on that later.
A few more examples:
1 = O(n)
n = O(n^2)
log(n) = O(n)
2n + 1 = O(n)
3. O(1) constant time:
An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size.
Examples:
array: accessing any element
fixed-size stack: push and pop methods
fixed-size queue: enqueue and dequeue methods
4. O(n) linear time
An algorithm is said to run in linear time if its time execution is directly proportional to the input size, i.e. time grows linearly as input size increases.
Consider the following examples. Below I am linearly searching for an element, and this has a time complexity of O(n).
int find = 66;
var numbers = new int[] { 33, 435, 36, 37, 43, 45, 66, 656, 2232 };
for (int i = 0; i < numbers.Length; i++)
{
    if (find == numbers[i])
    {
        return;
    }
}
More Examples:
Array: Linear Search, Traversing, Find minimum etc
ArrayList: contains method
Queue: contains method
5. O(log n) logarithmic time:
An algorithm is said to run in logarithmic time if its time execution is proportional to the logarithm of the input size.
Example: Binary Search
Recall the "twenty questions" game - the task is to guess the value of a hidden number in an interval. Each time you make a guess, you are told whether your guess is too high or too low. Twenty questions game implies a strategy that uses your guess number to halve the interval size. This is an example of the general problem-solving method known as binary search.
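As a concrete sketch of that halving strategy, here is a minimal iterative binary search in Java (my own version of the standard algorithm the paragraph describes, not code from the article). Each pass halves the remaining interval, so at most about log2(n) + 1 passes run: O(log n).
public class BinarySearchDemo {
    // returns the index of target in the sorted array, or -1 if absent
    static int binarySearch(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // avoids overflow of (low + high)
            if (sorted[mid] == target) {
                return mid;
            } else if (sorted[mid] < target) {
                low = mid + 1;   // discard the lower half
            } else {
                high = mid - 1;  // discard the upper half
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = { 2, 3, 5, 8, 13, 21, 34 };
        System.out.println(binarySearch(data, 13)); // prints 4
    }
}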
6. O(n^2) quadratic time
An algorithm is said to run in quadratic time if its time execution is proportional to the square of the input size.
Examples:
Bubble Sort
Selection Sort
Insertion Sort
7. Some useful links
Big-O Misconceptions
Determining The Complexity Of Algorithm
Big O Cheat Sheet
Several examples of loops:
O(n): the time complexity of a loop is considered O(n) if the loop variable is incremented/decremented by a constant amount. For example, the following functions have O(n) time complexity.
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}
O(n^c): the time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops have O(n^2) time complexity:
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}
for (int i = n; i > 0; i -= c) {
    for (int j = i + 1; j <= n; j += c) {
        // some O(1) expressions
    }
}
For example, selection sort and insertion sort have O(n^2) time complexity.
O(log n): the time complexity of a loop is considered O(log n) if the loop variable is divided/multiplied by a constant amount.
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
For example, binary search has O(log n) time complexity.
O(log log n): the time complexity of a loop is considered O(log log n) if the loop variable is reduced/increased exponentially by a constant amount.
// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}
// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 0; i = fun(i)) {
    // some O(1) expressions
}
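As a quick sanity check (my own sketch, not from the text above), squaring the loop variable each pass (c = 2 in pow(i, c)) makes the iteration count grow like log2(log2(n)):
public class LogLogLoop {
    public static void main(String[] args) {
        for (long n : new long[] { 16, 256, 65_536, 1_000_000 }) {
            int iterations = 0;
            for (long i = 2; i <= n; i = i * i) { // i = pow(i, 2)
                iterations++;
            }
            System.out.println("n=" + n + " -> " + iterations + " iterations");
        }
    }
}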
One example of time complexity analysis
void fun(int n)
{
    for (int i = 1; i <= n; i++)
    {
        for (int j = 1; j < n; j += i)
        {
            // Some O(1) task
        }
    }
}
Analysis:
For i = 1, the inner loop is executed n times.
For i = 2, the inner loop is executed approximately n/2 times.
For i = 3, the inner loop is executed approximately n/3 times.
For i = 4, the inner loop is executed approximately n/4 times.
…………………………………………………….
For i = n, the inner loop is executed approximately n/n times.
So the total time complexity of the above algorithm is (n + n/2 + n/3 + … + n/n), which becomes n * (1/1 + 1/2 + 1/3 + … + 1/n).
The important thing about the series (1/1 + 1/2 + 1/3 + … + 1/n) is that it is the harmonic series, which sums to approximately log n. So the time complexity of the above code is O(n·log n).
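A small empirical check (my addition) counts the inner-loop iterations and compares them with n times the harmonic number H(n), which grows like log n:
public class HarmonicLoop {
    public static void main(String[] args) {
        int n = 10_000;
        long iterations = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j < n; j += i) {
                iterations++; // the O(1) task
            }
        }
        double harmonic = 0;
        for (int k = 1; k <= n; k++) {
            harmonic += 1.0 / k;
        }
        System.out.println("measured:      " + iterations);
        System.out.println("n * H(n) est.: " + Math.round(n * harmonic)); // same order of growth: O(n log n)
    }
}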
Time complexity with examples
1 - Basic operations (arithmetic, comparisons, accessing array elements, assignment): the running time is always constant, O(1).
Example:
read(x) // O(1)
a = 10; // O(1)
a = 1,000,000,000,000,000,000 // O(1)
2 - If then else statement: Only taking the maximum running time from two or more possible statements.
Example:
age = read(x)                           // (1+1) = 2
if age < 17 then begin                  // 1
    status = "Not allowed!";            // 1
end else begin
    status = "Welcome! Please come in"; // 1
    visitors = visitors + 1;            // 1+1 = 2
end;
So, the complexity of the above pseudocode is T(n) = 2 + 1 + max(1, 1+2) = 6. Thus, its big O is still constant: T(n) = O(1).
3 - Looping (for, while, repeat): Running time for this statement is the number of iterations multiplied by the number of operations inside the loop.
Example:
total = 0;                  // 1
for i = 1 to n do begin     // (1+1)*n = 2n
    total = total + i;      // (1+1)*n = 2n
end;
writeln(total);             // 1
So, its complexity is T(n) = 1+4n+1 = 4n + 2. Thus, T(n) = O(n).
4 - Nested loop (a loop inside a loop): Since there is at least one loop inside the main loop, the running time of this pattern is typically O(n^2) or O(n^3).
Example:
for i = 1 to n do begin         // (1+1)*n = 2n
    for j = 1 to n do begin     // (1+1)*n*n = 2n^2
        x = x + 1;              // (1+1)*n*n = 2n^2
        print(x);               // (n*n) = n^2
    end;
end;
Common running time
There are some common running times when analyzing an algorithm:
O(1) – Constant time
Constant time means the running time is constant, it’s not affected by the input size.
O(n) – Linear time
When an algorithm accepts n input size, it would perform n operations as well.
O(log n) – Logarithmic time
An algorithm with running time O(log n) is slightly faster than O(n). Commonly, such an algorithm divides the problem into subproblems of the same size. Example: the binary search algorithm, the binary conversion algorithm.
O(n log n) – Linearithmic time
This running time is often found in "divide & conquer" algorithms, which divide the problem into subproblems recursively and then merge them in n time. Example: the Merge Sort algorithm.
O(n^2) – Quadratic time
Look at the Bubble Sort algorithm!
O(n^3) – Cubic time
It follows the same principle as O(n^2).
O(2^n) – Exponential time
It is very slow as the input gets larger: if n = 1,000,000, T(n) would be 2^1,000,000. The Brute Force algorithm has this running time.
O(n!) – Factorial time
The slowest!!! Example: Travelling salesman problem (TSP)
It is taken from this article. It is very well explained and you should give it a read.
When you're analyzing code, you have to analyze it line by line, counting every operation / recognizing its time complexity. In the end, you have to sum these up to get the whole picture.
For example, you can have one simple loop with linear complexity, but later in that same program a triple loop with cubic complexity, so your program will have cubic complexity. The order of growth of a function comes into play right here.
Let's look at what are possibilities for time complexity of an algorithm, you can see order of growth I mentioned above:
Constant time has an order of growth 1, for example: a = b + c.
Logarithmic time has an order of growth log N. It usually occurs when you're dividing something in half (binary search, trees, and even loops), or multiplying something in the same way.
Linear. The order of growth is N, for example:
int p = 0;
for (int i = 1; i < N; i++)
    p = p + 2;
Linearithmic. The order of growth is N·log N. It usually occurs in divide-and-conquer algorithms.
Cubic. The order of growth is N^3. A classic example is a triple loop where you check all triplets:
int x = 0;
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        for (int k = 0; k < N; k++)
            x = x + 2;
Exponential. The order of growth is 2^N. It usually occurs when you do exhaustive search, for example, checking all subsets of some set.
Loosely speaking, time complexity is a way of summarising how the number of operations or run-time of an algorithm grows as the input size increases.
Like most things in life, a cocktail party can help us understand.
O(N)
When you arrive at the party, you have to shake everyone's hand (do an operation on every item). As the number of attendees N increases, the time/work it will take you to shake everyone's hand increases as O(N).
Why O(N) and not cN?
There's variation in the amount of time it takes to shake hands with people. You could average this out and capture it in a constant c. But the fundamental operation here --- shaking hands with everyone --- would always be proportional to O(N), no matter what c was. When debating whether we should go to a cocktail party, we're often more interested in the fact that we'll have to meet everyone than in the minute details of what those meetings look like.
O(N^2)
The host of the cocktail party wants you to play a silly game where everyone meets everyone else. Therefore, you must meet N-1 other people and, because the next person has already met you, they must meet N-2 people, and so on. The sum of this series is N^2/2 - N/2. As the number of attendees grows, the N^2 term gets big fast, so we just drop everything else.
O(N^3)
You have to meet everyone else and, during each meeting, you must talk about everyone else in the room.
O(1)
The host wants to announce something. They ding a wineglass and speak loudly. Everyone hears them. It turns out it doesn't matter how many attendees there are, this operation always takes the same amount of time.
O(log N)
The host has laid everyone out at the table in alphabetical order. Where is Dan? You reason that he must be somewhere between Adam and Mandy (certainly not between Mandy and Zach!). Given that, is he between George and Mandy? No. He must be between Adam and Fred, and between Cindy and Fred. And so on... we can efficiently locate Dan by looking at half the set and then half of that set. Ultimately, we look at O(log_2 N) individuals.
O(N log N)
You could find where to sit down at the table using the algorithm above. If a large number of people came to the table, one at a time, and all did this, that would take O(N log N) time. This turns out to be how long it takes to sort any collection of items when they must be compared.
Best/Worst Case
You arrive at the party and need to find Inigo - how long will it take? It depends on when you arrive. If everyone is milling around you've hit the worst-case: it will take O(N) time. However, if everyone is sitting down at the table, it will take only O(log N) time. Or maybe you can leverage the host's wineglass-shouting power and it will take only O(1) time.
Assuming the host is unavailable, we can say that the Inigo-finding algorithm has a lower-bound of O(log N) and an upper-bound of O(N), depending on the state of the party when you arrive.
Space & Communication
The same ideas can be applied to understanding how algorithms use space or communication.
Knuth has written a nice paper about the former entitled "The Complexity of Songs".
Theorem 2: There exist arbitrarily long songs of complexity O(1).
PROOF (due to Casey and the Sunshine Band): Consider the songs S_k defined by (15), but with
V_k = 'That's the way,' U 'I like it,' U
U = 'uh huh, uh huh'
for all k.
For the mathematically-minded people: The master theorem is another useful thing to know when studying complexity.
O(N) is the big O notation used for writing the time complexity of an algorithm. When you add up the number of executions in an algorithm, you'll get an expression such as 2N + 2. In this expression, N is the dominating term (the term having the largest effect on the expression if its value increases or decreases). Now O(N) is the time complexity, while N is the dominating term.
Example
For i = 1 to n
    j = 0;
    while (j <= n)
        j = j + 1;
Here the total number of executions for the inner loop is n + 1 and the total number of executions for the outer loop is n(n+1)/2, so the total number of executions for the whole algorithm is (n + 1) + n(n+1)/2 = (n^2 + 3n + 2)/2.
Here n^2 is the dominating term, so the time complexity for this algorithm is O(n^2).
Other answers concentrate on the big-O-notation and practical examples. I want to answer the question by emphasizing the theoretical view. The explanation below is necessarily lacking in details; an excellent source to learn computational complexity theory is Introduction to the Theory of Computation by Michael Sipser.
Turing Machines
The most widespread model to investigate any question about computation is a Turing machine. A Turing machine has a one dimensional tape consisting of symbols which is used as a memory device. It has a tapehead which is used to write and read from the tape. It has a transition table determining the machine's behaviour, which is a fixed hardware component that is decided when the machine is created. A Turing machine works at discrete time steps doing the following:
It reads the symbol under the tapehead.
Depending on the symbol and its internal state, which can only take finitely many values, it reads three values s, σ, and X from its transition table, where s is an internal state, σ is a symbol, and X is either Right or Left.
It changes its internal state to s.
It changes the symbol it has read to σ.
It moves the tapehead one step according to the direction in X.
Turing machines are powerful models of computation. They can do everything that your digital computer can do. They were introduced before the advent of modern digital computers by the father of theoretical computer science, the mathematician Alan Turing.
Time Complexity
It is hard to define the time complexity of a single problem like "Does white have a winning strategy in chess?", because there is a machine which runs for a single step and gives the correct answer: either the machine which directly says 'No' or the one which directly says 'Yes'. To make the notion work, we instead define the time complexity of a family of problems L, each of which has a size, usually the length of the problem description. Then we take a Turing machine M which correctly solves every problem in that family. When M is given a problem of this family of size n, it solves it in finitely many steps. Let us call f(n) the longest possible time it takes M to solve problems of size n. Then we say that the time complexity of L is O(f(n)), which means that there is a Turing machine which will solve an instance of size n in at most C·f(n) time, where C is a constant independent of n.
Isn't it dependent on the machines? Can digital computers do it faster?
Yes! Some problems can be solved faster by other models of computation; for example, two-tape Turing machines solve some problems faster than those with a single tape. This is why theoreticians prefer to use robust complexity classes such as NL, P, NP, PSPACE, EXPTIME, etc. For example, P is the class of decision problems whose time complexity is O(p(n)), where p is a polynomial. The class P does not change even if you add ten thousand tapes to your Turing machine, or use other types of theoretical models such as random access machines.
A Difference in Theory and Practice
It is usually assumed that the time complexity of integer addition is O(1). This assumption makes sense in practice because computers use a fixed number of bits to store numbers for many applications. There is no reason to assume such a thing in theory, so the time complexity of addition is O(k), where k is the number of bits needed to express the integer.
Finding The Time Complexity of a Class of Problems
The straightforward way to show that the time complexity of a problem is O(f(n)) is to construct a Turing machine which solves it in O(f(n)) time. Creating Turing machines for complex problems is not trivial; one needs some familiarity with them. A transition table for a Turing machine is rarely given explicitly; machines are described at a high level. It becomes easier to see how long a machine will take to halt as one becomes familiar with them.
Showing that a problem is not O(f(n)) time complexity is another story... Even though there are some results like the time hierarchy theorem, there are many open problems here. For example whether problems in NP are in P, i.e. solvable in polynomial time, is one of the seven millennium prize problems in mathematics, whose solver will be awarded 1 million dollars.

Algorithm, Big O notation: Is this function O(n^2) ? or O(n)?

This is code from an algorithms book, Data Structures and Algorithms in Java, 6th Edition, by Michael T. Goodrich, Roberto Tamassia, and Michael H. Goldwasser:
public static String repeat1(char c, int n)
{
    String answer = "";
    for (int j = 0; j < n; j++)
    {
        answer += c;
    }
    return answer;
}
According to the authors, the Big O notation of this algorithm is O(n^2) with reason:
"The command, answer += c, is shorthand for answer = (answer + c). This
command does not cause a new character to be added to the existing String
instance; instead it produces a new String with the desired sequence of
characters, and then it reassigns the variable, answer, to refer to that new
string. In terms of efficiency, the problem with this interpretation is that
the creation of a new string as a result of a concatenation, requires time
that is proportional to the length of the resulting string. The first time
through this loop, the result has length 1, the second time through the loop
the result has length 2, and so on, until we reach the final string of length
n."
However, I do not understand how this code can have O(n^2), as its number of primitive operations per iteration stays the same regardless of the value of n (excluding j < n and j++).
The statement answer += c requires two primitive operations each iteration regardless of the value of n, therefore I think the equation for this function is supposed to be 4n + 3. (Each loop operates j
Or is the sentence, "In terms of efficiency, the problem with this interpretation is that the creation of a new string as a result of a concatenation requires time that is proportional to the length of the resulting string," simply saying that creating a new string as a result of a concatenation requires time proportional to its length, regardless of the number of primitive operations used in the function? So the number of primitive operations does not have a big effect on the running time, because the built-in code behind the concatenating String assignment operator runs in O(n^2) overall.
How can this function be O(n^2)?
Thank you for your support.
During every iteration of the loop, the statement answer += c; must copy each and every character already in the string answer to a new string.
E.g. n = 5, c = '5'
First loop: answer is an empty string, but it must still create a new string. There is one operation to append the first '5', and answer is now "5".
Second loop: answer will now point to a new string, with the first '5' copied over and another '5' appended, to make "55". Not only is a new String created; one character '5' is copied from the previous string and another '5' is appended. Two characters are written.
"n"th loop: answer will now point to a new string, with n - 1 '5' characters copied to a new string, and an additional '5' character appended, to make a string with n 5s in it.
The number of characters copied is 1 + 2 + ... + n = n(n + 1)/2. This is O(n^2).
The efficient way to construct strings like this in a loop in Java is to use a StringBuilder: one mutable object that doesn't need to copy all the existing characters each time a character is appended. Using a StringBuilder has a cost of O(n).
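For reference, a StringBuilder version might look like the sketch below (my own; repeat2 is a hypothetical name, not taken from the book). The total cost is n amortized-constant appends plus one final O(n) copy, i.e. O(n) overall.
public static String repeat2(char c, int n) {
    StringBuilder sb = new StringBuilder(n); // pre-sized, so no buffer regrowth
    for (int j = 0; j < n; j++) {
        sb.append(c); // amortized O(1): no copy of the existing prefix
    }
    return sb.toString(); // one final O(n) copy
}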
Strings are immutable in Java. I believe this terrible code is O(n^2) for that reason and only that reason. It has to construct a new String on each iteration. I'm unsure if String concatenation is truly linearly proportional to the number of characters (it seems like it should be a constant time operation since Strings have a known length). However if you take the author's word for it then iterating n times with each iteration taking a time proportional to n, you get n^2. StringBuilder would give you O(n).
I mostly agree with it being O(n^2) in practice, but consider:
Java is SMART. In many cases it uses StringBuilder instead of string for concatenation under the covers. You can't just assume it's going to copy the underlying array every time (although it almost certainly will in this case).
Java gets SMARTER all the time. There is no reason it couldn't optimize that entire loop based on StringBuilder since it can analyze all your code and figure out that you don't use it as a string inside that loop.
Further optimizations can happen. Strings currently use an array AND a length AND a shared flag (and maybe a start location so that splits wouldn't require copying; I forget, but they changed that split implementation anyway), so appending into an oversized array and then returning a new string that references the same underlying array with a higher end index, without mutating the original string, is altogether possible (by design, they already do things like this to a degree)...
So I think the real question is, is it a great idea to calculate O() based on a particular implementation of a language-level construct?
And although I can't say for sure what the answer to that is, I can say it would be a REALLY BAD idea to optimize on the assumption that it was O(n^2) unless you absolutely needed it--you could take away java's ability to speed up your code later by hand optimizing today.
ps. this is from experience. I had to optimize some java code that was the UI for a spectrum analyzer. I saw all sorts of String+ operations and figured I'd clean them all up with .append(). It saved NO time because Java already optimizes String+ operations that are not in a loop.
The complexity becomes O(n^2) because the string's length increases by one each time, and creating each new string takes time proportional to its length. Also, the outer loop runs n times. So the exact cost is (n * (n+1))/2, which is O(n^2).
For example,
For abcdefg
a // one length string object is created so complexity is 1
ab // similarly complexity is 2
abc // complexity 3 here
abcd // 4 now.
abcde // ans so on.
abcdef
abcdefg
Now you see: the total cost is 1 + 2 + 3 + 4 + ... + n = (n * (n+1))/2. In big O notation, that's O(n^2).
Consider the length of the string to be n. Each time we append an element at the end, the string copy is on the order of n, and we also have the outer loop running on the order of n times, so as a result we get O(n^2).
That is because:
answer += c;
is a String concatenation. In Java, Strings are immutable.
It means the concatenated string is created by making a copy of the original string and appending c to it. So a simple concatenation operation is O(n) for an n-sized String.
In the first iteration, answer's length is 0; in the second iteration it's 1; in the third it's 2; and so on.
So you're doing these operations every time i.e.
1 + 2 + 3 + ... + n = O(n^2)
For string manipulation like this, StringBuilder is the preferred way; it appends a character in amortized O(1) time.

Is the time complexity of an algorithm calculated only based on the number of times a loop executes?

I have a big doubt about calculating time complexity. Is it calculated based on the number of times a loop executes? My question stems from the situation below.
I have a class A, which has a String attribute.
class A {
    String name;
}
Now, I have a list of class A instances. This list has different names in it. I need to check whether the name "Pavan" exists in any of the objects in the list.
Scenario 1:
Here the for loop executes listA.size() times, which can be said to be O(n).
public boolean checkName(List<A> listA, String inputName) {
    for (A a : listA) {
        if (a.name.equals(inputName)) {
            return true;
        }
    }
    return false;
}
Scenario 2:
Here the for loop executes listA.size()/2 + 1 times.
public boolean checkName(List<A> listA, String inputName) {
    int length = listA.size() / 2;
    length = length % 2 == 0 ? length : length + 1;
    for (int i = 0; i < length; i++) {
        // check one element from the front and one from the back per pass
        if (listA.get(i).name.equals(inputName)
                || listA.get(listA.size() - i - 1).name.equals(inputName)) {
            return true;
        }
    }
    return false;
}
I minimized the number of times the for loop executes, but I increased the complexity of the logic.
Can we say this is O(n/2)? If so, can you please explain it to me?
First, note that in Big-O notation there is no such thing as O(n/2), since 1/2 is a constant factor, which is ignored in this notation. The complexity remains O(n). So by modifying your code you haven't changed anything regarding complexity.
In general, estimating the number of times a loop executes with respect to the input size, together with the cost of the operations inside it, is the way to get to the complexity class of the algorithm.
The operation producing the time cost in your method is String.equals, which, looking at its implementation, produces cost by comparing characters.
In your example the input size is not strictly equal to the size of the list. It also depends on how large the strings contained in that list are and how large the inputName is.
So let's say the largest string in the list is m1 characters and the inputName is m2 characters in length. So for your original checkName method the complexity would be O(n*min(m1,m2)) because of String.equals comparing at most all characters of a string.
For most applications the term min(m1,m2) doesn't matter as either one of the compared strings is stored in a fixed size database column for example and therefore this expression is a constant, which is, as said above, ignored.
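To see the string-length term in practice, here is a crude sketch (mine, and deliberately naive: a single unwarmed timing, so only the trend is meaningful). The two strings have equal length and share all but the last character, forcing equals to scan nearly the whole length:
public class EqualsCost {
    public static void main(String[] args) {
        for (int m : new int[] { 1_000, 100_000, 10_000_000 }) {
            String common = "x".repeat(m - 1); // String.repeat requires Java 11+
            String a = common + "a";
            String b = common + "b"; // same length, differs only at the end
            long start = System.nanoTime();
            boolean eq = a.equals(b);
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("m=" + m + " equals=" + eq + " time(us)=" + micros);
        }
    }
}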
No. In a big O expression, all constant values are ignored.
We only care about the growth in n, such as O(n^2) or O(log n).
Time and space complexity are calculated based on the number of operations executed and the number of units of memory used, respectively.
Regarding time complexity: all operations are taken into account and counted. Because it's hard to compare, say, O(2*n^2+5*n+3) with O(3*n^2-3*n+1), equivalence classes are used. That means that for very large values of n, the two previous examples will have roughly similar values (more exactly: they have a similar rate of growth). Therefore, you reduce the expression to its most basic form, saying that the two examples are in the same equivalence class as O(n^2). Similarly, O(n) and O(n/2) are in the same class, and therefore both are in O(n).
Because of this, you can ignore most constant-time operations (such as .size() or .length() on collections, assignments, etc.) as they don't really count in the end. Therefore, you're left with loop operations and sometimes complex computations (that somewhere lower on the stack use loops themselves).
To get a better understanding of the three cases of complexity, try reading articles on the subject, such as: http://discrete.gr/complexity/
Time complexity is a measure of the theoretical time it will take for an operation to be executed.
While normally any improvement in the time required is significant, in time complexity we are interested in the order of magnitude. That means:
If an operation on N objects requires N time intervals, then it has complexity O(N).
If an operation on N objects requires N/2, its complexity is still O(N).
This apparent paradox is explained when you calculate the operation for large N: the /2 part makes no significant difference relative to the N part. If some complexity is O(N^2), then O(N) terms are negligible for large N; that's why we are interested in the order of magnitude.
In other words, any constant is thrown away when calculating complexity.
As for the question:
Is it calculated based on the number of times the loop executes?
Well, it depends on what the loop contains. If only basic operations are executed inside the loop, then yes. To give an example: if you have a loop inside which an eigenanalysis is executed on each run, which has complexity O(N^3), you cannot say that your complexity is simply O(N).
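To illustrate with something runnable (my own example; a naive matrix multiply stands in for the O(N^3) eigenanalysis), the loop below performs O(N) iterations, each doing an O(N^3) body, so the whole thing is O(N^4), not O(N):
public class LoopBodyCost {
    // naive triple-loop multiply: O(N^3) per call
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static void main(String[] args) {
        int n = 50;
        double[][] identity = new double[n][n];
        for (int i = 0; i < n; i++) identity[i][i] = 1.0;
        double[][] p = identity;
        for (int run = 0; run < n; run++) { // N iterations...
            p = multiply(p, identity);      // ...each doing O(N^3) work -> O(N^4) total
        }
        System.out.println("p[0][0] = " + p[0][0]); // stays 1.0 for powers of the identity
    }
}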
The complexity of an algorithm is measured by how its processing time or space requirement responds to the input size. I think you are missing the fact that the notations used to express complexity are asymptotic notations.
As per your question, you have reduced the loop execution count, but not the linear relation to the input size.
