I have written code for both counting sort and quicksort in Java to sort integers. Both work fine for smaller inputs, but when I gave an array on the order of 100,000 elements, the quicksort stopped working whereas the counting sort sorted it correctly. So can I say that it is better to use counting sort over quicksort when the unsorted array is very large? I am using Eclipse IDE Oxygen.3a Release (4.7.3a). Thanks in advance.
Counting sort has better time complexity but worse space complexity, so for very large sets it depends on what matters more to you: memory consumption or CPU consumption.
It should be noted that while counting sort is computationally superior, it only applies to keys drawn from a small range of integers. So even where it is faster, it is not always a valid replacement for quicksort, and it is not accurate to claim that it replaces quicksort as an option entirely.
See the following link for details: http://bigocheatsheet.com/
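For concreteness, here is a minimal counting sort sketch in Java, assuming the keys are non-negative integers with a known maximum (the method name and signature are mine, not from the question):

    // Counting sort: O(n + k) time and O(k) extra space, where k is the
    // largest key value. Only practical when k is not much larger than n.
    static int[] countingSort(int[] a, int maxValue) {
        int[] count = new int[maxValue + 1];
        for (int v : a)
            count[v]++;                        // tally each key
        int[] out = new int[a.length];
        int idx = 0;
        for (int v = 0; v <= maxValue; v++)    // emit keys in ascending order
            while (count[v]-- > 0)
                out[idx++] = v;
        return out;
    }

The int[maxValue + 1] allocation is exactly the space cost the answer refers to: sorting 100,000 values drawn from, say, 0 to 10^9 would need a four-gigabyte count array, whereas quicksort sorts in place.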
I have a question. I'm currently taking a class on Object Oriented Programming with Data Structures, and there was a question on a quiz that I apparently got wrong.
"Which of the following search algorithms is the fastest at sorting a 1-D array which is already mostly sorted?"
The answers listed were: Quick Sort, Insertion Sort, Merge Sort, Selection Sort
I wasn't sure how large the lists were, so I chose Insertion sort because I know that O(n) would be the best case time complexity for a mostly sorted list that wasn't too large. The correct answer for it was Merge sort. I decided to run some tests and, for a mostly sorted list of 500, the results showed that Insertion sort was the fastest. However, I know that Merge sort would win if the list were larger. When I asked why Insertion sort would not also be an answer, I received replies stating that I'm always to assume that the lists in these questions are very large lists and to always assume worst case. No one could provide me any documentation on that answer, so that's why I'm here. When asked a question like this, am I to always assume the worst case and that the list will be large?
I think in this particular case the quiz question is rather ill-defined: what does "mostly sorted" mean? Insertion sort runs in O(n + d) time, where d is the number of inversions, while merge sort is Θ(n log n) even in the best case. So unless "mostly sorted" still leaves on the order of n log n inversions, insertion sort wins and I think you would be correct. The size of the list doesn't really matter.
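For reference, a standard insertion sort sketch in Java; the comment spells out where the O(n + d) bound comes from:

    // Insertion sort: each element shifts left past exactly the elements it
    // forms an inversion with, so total work is O(n + d), where d is the
    // number of inversions. On a mostly sorted array, d is small and the
    // inner loop rarely runs.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];   // shift the larger element right
                j--;
            }
            a[j + 1] = key;
        }
    }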
As there are many sorting algorithms, I wanted to select the proper one for my case.
For example, insertion sort is best for small inputs, whereas merge sort is better suited for large inputs.
I don't know what "small" means here, i.e. 1-100 or 1-1000 or so.
What is the best approach for sorting a list of numbers where the same set of numbers occurs repeatedly? I am planning to store the counts in a hash and then emit the elements accordingly.
Given that the data contains the same values again and again, is going through a hash a better way, or is using some sorting algorithm the best way?
If you add some elements to an already sorted array (list), then you have only a small number of inversions, and in this case insertion sort will work rather fast.
Alternatives are natural merge sort or TimSort (if an implementation is available for your language); these behave well in all cases, including unsorted arrays.
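In Java specifically you get TimSort for free: Arrays.sort(T[]) and Collections.sort use it, and it runs in near-linear time on nearly sorted input. And since the question mentions repeated values, a frequency-map variant is a reasonable alternative; here is a sketch (the TreeMap approach is my suggestion, not from the answer):

    import java.util.Map;
    import java.util.TreeMap;

    // Sort an array with many repeated values by counting occurrences.
    // TreeMap keeps the distinct keys ordered, so one pass over the map
    // rebuilds the sorted array: O(n log m), where m = number of distinct values.
    static void frequencySort(int[] a) {
        Map<Integer, Integer> counts = new TreeMap<>();
        for (int v : a)
            counts.merge(v, 1, Integer::sum);  // tally each distinct value
        int idx = 0;
        for (Map.Entry<Integer, Integer> e : counts.entrySet())
            for (int i = 0; i < e.getValue(); i++)
                a[idx++] = e.getKey();
    }

When m is much smaller than n, this does most of its work in the counting pass, which is the gain the question is hoping for from a hash.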
In many programming languages, the array index begins with 0. Is there a reason why it was designed so?
It seems to me it would have been more convenient if the length of the array were equal to its last index; we could avoid many of the ArrayIndexOutOfBounds exceptions.
I can understand this in a language like C: C is an old language, and its developers may not have thought about the issue and the discomfort. But modern languages like Java had a chance to redefine the design. Why have they chosen to keep it the same?
Is it somehow related to the working of operating systems, or did they actually want to continue with the familiar behaviour and design (even though new programmers face a lot of problems related to this)?
An array index is just a memory offset, so the first element of an array sits at the address the array itself points to; in C terms, *(arr) == *(arr + 0).
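To make the offset view concrete, here is a small Java illustration (mine, not from the answer): when a 2-D grid is stored in a flat array, zero-based indexing lets the offset formula row * cols + col work with no plus-or-minus-one corrections.

    int rows = 3, cols = 4;
    int[] grid = new int[rows * cols];

    // With zero-based indices, the offset of cell (r, c) is simply r * cols + c.
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            grid[r * cols + c] = r * 10 + c;

    // Valid indices run 0 .. length - 1, so grid[grid.length] would throw
    // ArrayIndexOutOfBoundsException.
    System.out.println(grid[grid.length - 1]);  // last element: 23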
Consider the following strings:
he llo
goodbye
hello
= (goodbye)
(he)(llo)
good bye
helium
I'm trying to sort these in such a way that similar words come together. I know that alphanumeric sorting is not an option.
Removing special characters (", -, _, etc.) before comparing certainly helps, but the results won't be as good as I hope for.
NOTE: there might be a few different desired outputs for this, one of which is:
DESIRED OUTPUT:
hello
he llo
(he)(llo)
helium
goodbye
good bye
= (goodbye)
So my question is whether there is a Java package that compares strings by similarity and ultimately sorts them based on it.
I've heard of terms such as n-gram and skip-gram but didn't quite understand them; I'm not even sure whether they would be useful here at all.
UPDATE:
Finding similarities is certainly part of my question, but the main problem is the sorting part.
Here's one possible approach.
Calculate the edit distance (Levenshtein distance) between each pair of strings, then view the strings as a complete graph whose edge weights come from those distances. Choose a threshold for the weights and remove all edges whose weight is too high. Then find the cliques in this graph; if your threshold is fairly low, even finding connected components might be an option.
Note:
Perhaps it would be better to substitute the edit distance with one of the similarity measures in the link that @dognose posted.
Also, note that finding cliques will be very slow if you have a large number of strings.
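A minimal sketch of this idea in Java, using the connected-components variant (the class name, normalization rule, and threshold value are all mine; a real implementation might swap in a library similarity measure):

    import java.util.*;

    public class SimilaritySort {

        // Classic dynamic-programming Levenshtein distance, two rows of memory.
        static int editDistance(String a, String b) {
            int[] prev = new int[b.length() + 1];
            int[] curr = new int[b.length() + 1];
            for (int j = 0; j <= b.length(); j++) prev[j] = j;
            for (int i = 1; i <= a.length(); i++) {
                curr[0] = i;
                for (int j = 1; j <= b.length(); j++) {
                    int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                    curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                       prev[j - 1] + cost);
                }
                int[] tmp = prev; prev = curr; curr = tmp;
            }
            return prev[b.length()];
        }

        // Strip everything except letters and digits, as the question suggests.
        static String normalize(String s) {
            return s.replaceAll("[^A-Za-z0-9]", "").toLowerCase();
        }

        // Union-find with path halving.
        static int find(int[] p, int x) {
            while (p[x] != x) { p[x] = p[p[x]]; x = p[x]; }
            return x;
        }

        public static void main(String[] args) {
            List<String> words = Arrays.asList("he llo", "goodbye", "hello",
                    "= (goodbye)", "(he)(llo)", "good bye", "helium");
            int threshold = 3;  // max edit distance to count as "similar"

            // Connect every pair whose normalized edit distance is within the
            // threshold; the connected components are the similarity groups.
            int n = words.size();
            int[] parent = new int[n];
            for (int i = 0; i < n; i++) parent[i] = i;
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++)
                    if (editDistance(normalize(words.get(i)),
                                     normalize(words.get(j))) <= threshold)
                        parent[find(parent, i)] = find(parent, j);

            // Group by component so similar strings print together.
            Map<Integer, List<String>> groups = new LinkedHashMap<>();
            for (int i = 0; i < n; i++)
                groups.computeIfAbsent(find(parent, i),
                        k -> new ArrayList<>()).add(words.get(i));
            groups.values().forEach(g -> g.forEach(System.out::println));
        }
    }

Note that this still computes O(n^2) pairwise distances, which matches the answer's warning about large numbers of strings.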
This question was asked before, and it looked like, without sorting or multithreading, one simply needs to iterate through all the elements. Is there any other idea? A Java implementation is fine.
If there are no other prerequisites for the input array, then no: you have to iterate through the elements until you find the one you are looking for (thus, in the worst and average case, that is O(n)).
If you have something else (like a heap, a search tree, or an otherwise sorted array), you can of course use smarter and much faster techniques, e.g. binary search.
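A small Java sketch contrasting the two cases (the array contents are made up for illustration):

    import java.util.Arrays;

    public class SearchDemo {

        // Unsorted array: nothing to exploit, so every element may have to be
        // checked -- O(n) in the worst and average case.
        static int linearSearch(int[] a, int key) {
            for (int i = 0; i < a.length; i++)
                if (a[i] == key) return i;
            return -1;
        }

        public static void main(String[] args) {
            int[] data = {42, 7, 19, 3, 88, 19};
            System.out.println(linearSearch(data, 88));           // prints 4

            // Once the array is sorted, binary search finds the key in O(log n).
            int[] sorted = data.clone();
            Arrays.sort(sorted);
            System.out.println(Arrays.binarySearch(sorted, 88));  // prints 5
        }
    }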