Computational complexity of TreeSet operations in Java?

I am trying to clear up some things regarding complexity in some of the operations of TreeSet. On the javadoc it says:
"This implementation provides
guaranteed log(n) time cost for the
basic operations (add, remove and
contains)."
So far so good. My question is what happens on addAll(), removeAll() etc. Here the javadoc for Set says:
"If the specified collection is also a
set, the addAll operation effectively
modifies this set so that its value is
the union of the two sets."
Is it just explaining the logical outcome of the operation or is it giving a hint about the complexity? I mean, if the two sets are represented by e.g. red-black trees it would be better to somehow join the trees than to "add" each element of one to the other.
In any case, is there a way to combine two TreeSets into one with O(log n) complexity?
Thank you in advance. :-)

You could imagine how it would be possible to optimize special cases to O(log n), but the worst case has got to be O(m log n) where m and n are the number of elements in each tree.
Edit:
http://net.pku.edu.cn/~course/cs101/resource/Intro2Algorithm/book6/chap14.htm
This describes a special-case algorithm that can join trees in O(log(m + n)), but note the restriction: all members of S1 must be less than all members of S2. This is what I meant by optimizations for special cases.

Looking at the Java source for TreeSet, it looks like if the passed-in collection is a SortedSet then it uses an O(n) time algorithm. Otherwise it calls super.addAll, which I'm guessing will result in O(n log n).
EDIT - guess I read the code too fast, TreeSet can only use the O(n) algorithm if its backing map is empty
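For reference, the check in question looks roughly like the following. This is a paraphrased sketch of the OpenJDK code, not the exact source; m is the backing map field inside TreeSet:

    public boolean addAll(Collection<? extends E> c) {
        // Linear-time path: only taken when this TreeSet is still empty and the
        // incoming collection is a SortedSet using a compatible comparator.
        if (m.size() == 0 && c.size() > 0
                && c instanceof SortedSet
                && m instanceof TreeMap) {
            // ... build the backing red-black tree directly from the already
            // sorted elements in linear time and return ...
        }
        // Fallback: AbstractCollection.addAll calls add() once per element,
        // i.e. O(n log n) overall for n elements.
        return super.addAll(c);
    }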

According to this blog post:
http://rgrig.blogspot.com/2008/06/java-api-complexity-guarantees.html
it's O(n log n). Because the documentation gives no hints about the complexity, you might want to write your own algorithm if the performance is critical for you.

It is not possible to merge the trees or join the sets the way Disjoint-set data structures do, because you don't know whether the elements in the two trees are disjoint. Since neither data structure has knowledge of the other tree's contents, you have to check whether each element already exists in the other tree before adding it, or at least attempt the add and abort if you find the element on the way.
So, it should be O(M log N)
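Concretely, the element-by-element merge described above looks something like this (larger and smaller stand in for the two existing sets):

    // Copying the larger set is linear when it is already a SortedSet;
    // each of the M adds from the smaller set then costs O(log N).
    TreeSet<Integer> merged = new TreeSet<>(larger);
    for (Integer x : smaller) {
        merged.add(x);   // checks for the element and inserts it if absent
    }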

Related

Is it possible to add/update a sorted list in constant time?

Suppose you are given a list of integers that have already been sorted such as (1,7,13,14,50). It should be noted that the list will contain no duplicates.
Is there some data structure that could store this while allowing me to add any new element (at its proper location) in constant time? add(10) would yield (1,7,10,13,14,50).
Similarly, would I be able to update an element (such as changing 7 to 19) and shift the order accordingly in constant time? change(7,19) yields (1,13,14,19,50).
For a class I need to write a data structure that performs these operations as quickly as possible, but I just wanted to know if constant time could be done and if not, then what would the ideal runtime be?
Inserting in constant time, O(1), would only occur as a best case for any of these data structures. Hash tables generally have the best insertion time, but it might not always be O(1) if there are collisions and separate chaining is used. A hash table is not sorted anyway, so its complexity is irrelevant here.
Binary trees have a good insertion time, and as a bonus, the data is already sorted after inserting a new node. This takes O(log n) time on average, however. The best case for insertion is O(1), when the tree is empty.
Those were just a couple of examples; see here for more info on the complexities of these operations: http://bigocheatsheet.com/
In general? No. Determining where to insert a new element or re-ordering the list after insertion involves performing analysis of the list's contents, which involves reading the elements of the list, which (in general) means iterating over some portion of the length of the list. This (again, in general) is dependent on how many elements are in the list, which by definition is not a constant. Hence, a constant-time sorted insert is simply not possible except in special cases.
A binary tree, e.g. TreeSet, would be adequate. An array with Arrays.binarySearch and System.arraycopy would be fine too, because here we have ints and then do not need the wrapper class Integer.
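A rough sketch of the array approach; Arrays.binarySearch encodes the insertion point of a missing value as -(insertionPoint) - 1:

    import java.util.Arrays;

    // Insert value into a sorted int[] and return the new, larger array.
    static int[] sortedInsert(int[] sorted, int value) {
        int pos = Arrays.binarySearch(sorted, value);
        if (pos < 0) pos = -pos - 1;                  // decode the insertion point
        int[] result = new int[sorted.length + 1];
        System.arraycopy(sorted, 0, result, 0, pos);
        result[pos] = value;
        System.arraycopy(sorted, pos, result, pos + 1, sorted.length - pos);
        return result;                                // O(log n) search + O(n) copy
    }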
For real constant time, O(1), one must pay in space. Use a BitSet. To add 17, simply set bit 17 to true. There are optimized methods to find the next set bit and so on.
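For example, using the list from the question:

    import java.util.BitSet;

    BitSet set = new BitSet();
    for (int i : new int[] {1, 7, 13, 14, 50}) set.set(i);

    set.set(10);                          // add(10) in O(1)
    set.clear(7); set.set(19);            // change(7, 19) as two O(1) bit flips

    // iterate in sorted order via nextSetBit
    for (int i = set.nextSetBit(0); i >= 0; i = set.nextSetBit(i + 1)) {
        System.out.print(i + " ");        // 1 10 13 14 19 50
    }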
But I doubt optimizing is really needed at this spot. File I/O might pay off more.

Sorted data structure with O(logN) insertion that gives insertion point index

My goal is a sorted data structure that can accomplish 2 things:
Fast insertion (at the location according to sort order)
I can quickly segment my data into the sets of everything greater than or less than or equal to an element. I need to know the size of each of these partitions, and I need to be able to "get" these partitions.
Currently, I'm implementing this in java using an ArrayList which provides #2 very easily since I can perform binary search (Collections.binarySearch) and get an insertion index telling me at what point an element would be inserted. Then based on the fact that indices range from 0 to the size of the array, I immediately know how many elements are greater than my element or smaller than my elements, and I can easily get at those elements (as a sublist). However, this doesn't have property #1, and results in too much array copying.
This makes me want to use something like a SkipList or RedBlackTree that could perform the insertions faster, but then I can't figure out how to satisfy property #2 without making it take O(N) time.
Any suggestions would be appreciated. Thanks
EDIT: Thanks for the answers below that reference data structures that perform the insertion in O(log N) time and that can partition quickly as well, but I want to highlight the size() requirement: I need to know the size of these partitions without having to traverse the entire partition (which, according to this, is what TreeSet does). The reasoning is that in my use case I maintain my data in several copies of the data structure, each using a different comparator, and then need to ask "according to which comparator is the set of all things larger than a particular element smallest?". In the ArrayList case this is easy and takes only O(Y log N), where Y is the number of comparators, because I just binary search each of the Y arrays and return the ArrayList with the highest insertion index. It's unclear to me how I could do this with a TreeSet without taking O(YN).
I should also add that an approximate answer for the insertion index would still be valuable even if it couldn't be solved exactly.
Use a common Java TreeSet. Insertion takes O(log N), so #1 of your requirements is done. Here's the quote from the documentation:
This implementation provides guaranteed log(n) time cost for the basic operations (add, remove and contains).
And as it implements the NavigableSet interface, you have #2 of your requirements with the following methods:
tailSet(someElem) returns a Set view starting from someElem till the last element
headSet(someElem) returns a Set view starting from the first element till someElem
subSet(fromElem, toElem) returns a Set view starting from fromElem till toElem
These operations are overloaded with versions that include/exclude the bounds provided.
TreeSet is quite flexible: it allows you to define a Comparator to order the Set in a custom way, or you can also rely on the natural ordering of the elements.
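A quick illustration of those view methods (values chosen arbitrarily):

    import java.util.*;

    NavigableSet<Integer> set = new TreeSet<>(Arrays.asList(1, 7, 13, 14, 50));

    SortedSet<Integer> fromThirteen  = set.tailSet(13);   // [13, 14, 50]
    SortedSet<Integer> belowThirteen = set.headSet(13);   // [1, 7]
    SortedSet<Integer> middle        = set.subSet(7, 50); // [7, 13, 14] (toElement exclusive)

    NavigableSet<Integer> aboveThirteen = set.tailSet(13, false);  // [14, 50], exclusive bound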
EDIT:
As per the requirement that the returned subsets' size() operation not be O(n), I'm afraid there's no out-of-the-box implementation in the Java API.
It is true that the set views returned by TreeSet's range operations implement size() by "jumping" to the first element of the view in O(log n) time and then iterating over the subsequent elements, adding 1 on each iteration, until the end of the subset is reached.
I must say this is quite unfortunate, since it's not always necessary to traverse the returned subset view, and sometimes knowing the size of the subset in advance can be quite useful (as in your use case).
So, in order to fulfil your requirement, you need another structure, or at least, an auxiliary structure. After some research, I suggest you use a Fenwick tree. A Fenwick tree is also known as a Binary Indexed Tree (BIT), and can be either immutable or mutable. The immutable version is implemented with an array, while the mutable version could be implemented with a balanced binary tree, i.e. a Red-Black tree (Java TreeSet is actually implemented as a Red-Black tree). Fenwick trees are mainly used to store frequencies and calculate the sum of all frequencies up to a given element in O(log n) time.
Please refer to this question here on Stack Overflow for a complete introduction to this quite unknown but yet incredibly useful structure. (As the explanation is here in Stack Overflow, I won't copy it here).
Here's another Stack Overflow question asking how to properly initialize a Fenwick tree, and here's actual Java code showing how to implement Fenwick tree's operations. Finally, here's a very good theoretic explanation about the structure and the underlying algorithms being used.
The problem with all the samples on the web is that they use the immutable version of the structure, which is not suitable for you, since you need to interleave queries with adding elements to the structure. However, they are all very useful for fully understanding the structure and the algorithms being used.
My suggestion is that you study Java TreeMap's implementation and see how to modify/extend it so that you can turn it into a Fenwick tree that keeps 1 as a value for every key. This 1 would be each key's frequency. So Fenwick tree's basic operation getSum(someElement) would actually return the size of the subset from first element up to someElement, in O(log n) time.
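To make the idea concrete, here is a minimal array-based Fenwick tree, the "fixed universe" variant mentioned above; it assumes the keys can be mapped to indices 1..n in advance, which is exactly the limitation the tree-backed variant would remove:

    // Minimal Binary Indexed Tree storing a frequency per index (1-based).
    class FenwickTree {
        private final int[] tree;

        FenwickTree(int n) {
            tree = new int[n + 1];
        }

        // add delta to the frequency at index i (delta = 1 when "inserting" a key)
        void update(int i, int delta) {
            for (; i < tree.length; i += i & -i)
                tree[i] += delta;
        }

        // sum of frequencies for indices 1..i == number of inserted keys <= i
        int getSum(int i) {
            int sum = 0;
            for (; i > 0; i -= i & -i)
                sum += tree[i];
            return sum;
        }
    }

With one update(key, 1) per inserted element, getSum(x) gives the size of the partition of elements up to x in O(log n).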
So the challenge is to implement a balanced tree (a descendant of Java's Red-Black TreeMap, actually) that implements all the Fenwick tree operations you need. I believe you'd be done with getSum(someElement), but maybe you could also extend the returned subtree range views so that they all refer to getSum(someElement) when implementing the size() operation for range views.
Hope this helps, at least I hope it's a good place to start. Please, let me know if you need clarifications, as well as examples.
If you don't need duplicate elements (or if you can make the elements look distinct), I'd use a java.util.TreeSet. It meets your stated requirements.
O(log n) sorted insertion due to binary tree structure
O(log n) segmentation time using in-place subsets
Unfortunately, the O(log n) segmentation time is effectively slowed to O(n) by your requirement to always know the size of the segment, due to the reason in the answer you linked. The in-place subsets don't know their size until you ask them, and then they count. The counted size is stored, but if the backing set is changed in any way, the subset has to count again.
I think the best data structure for this problem would be a B-Tree with a dense index. Such a B-Tree is built from:
- inner nodes containing only pointers to child nodes
- leaves containing pointers to paged arrays
- a number of equal-sized-arrays (pages)
Unfortunately there are few generic implementations of a B-Tree in Java, probably because so many variations exist.
The cost of insertion would be
O(log(n)) to find the position
O(p) to insert a new element into a page (where p is the constant page size)
Maybe this data structure also covers your segmentation problem. If not: The cost of extracting would be
O(log(n)) to find the borders
O(e) to copy the extract (where e is the size of the extract)
One easy way to get what you want involves augmenting your favourite binary search tree data structure (red-black trees, AVL trees, etc...) with left and right subtree sizes at each node --- call them L-size and R-size.
Assume that updating these fields in your tree data structures can be done efficiently (say constant time). Then here is what you get:
Insertion, deletion, and all the regular binary search tree operations remain as efficient as your choice of data structure --- O(log n) for red-black trees.
Given a key x, you can get the number of elements in your tree that have keys less than x in O(log n) time, by descending down the tree to find the appropriate location for x, summing up the L-sizes (plus one for the actual node you're traversing) each time you "go right". The "greater than" case is symmetrical.
Given a key x, you can get the sorted list x_L of elements that are less than x in O(log n + |x_L|) time by, again, descending down the tree to find the appropriate location for x, and each time you go right you tag the node you just traversed, appending it to a list h_L. Then doing in-order traversals of each of the nodes in h_L (in order of addition to h_L) will give you x_L (sorted). The "greater than" case is symmetrical.
Finally, for my answer to work, I need to guarantee you that we can maintain these L- and R-sizes efficiently for your choice of specific tree data structure. I'll consider the case of red-black trees.
Note that maintaining L-sizes and R-sizes is done in constant time for vanilla binary search trees (when you add a node starting from the root, just add one to L-sizes if the node should go in the left subtree, or one to the R-sizes if it goes in the right subtree). Now the additional balancing procedures of red-black trees only alter the tree structure through local rotations of nodes --- see Wikipedia's depiction of rotations in red-black trees. It's easy to see that the post-rotation L-size and R-size of P and Q can be recalculated from the L-sizes and R-sizes of A,B,C. This only adds a constant amount of work to the red-black tree's operations.
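A sketch of the "count the elements less than x" query on a node type that caches its left-subtree size (names are illustrative; rebalancing is omitted for brevity):

    class Node {
        int key;
        int lSize;                 // number of nodes in the left subtree (the "L-size")
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // insert a key, maintaining lSize on the way down (no rebalancing shown)
    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.key) {
            root.lSize++;
            root.left = insert(root.left, key);
        } else {
            root.right = insert(root.right, key);
        }
        return root;
    }

    // number of keys strictly less than x, in O(height) time
    static int countLess(Node root, int x) {
        int count = 0;
        for (Node cur = root; cur != null; ) {
            if (x <= cur.key) {
                cur = cur.left;
            } else {
                count += cur.lSize + 1;   // the left subtree and this node are all < x
                cur = cur.right;          // "going right": sum the L-size plus one
            }
        }
        return count;
    }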

Java: Inserting into LinkedList efficiently

I am optimizing an implementation of a sorted LinkedList.
To insert an element I traverse the list and compare each element until I have the correct index, and then break loop and insert.
I would like to know if there is any other way that I can insert the element at the same time as traversing the list to reduce the insert from O(n + (n capped at size()/2)) to O(n).
A ListIterator is almost what I'm after because of its add() method, but unfortunately in the case where there are elements in the list equal to the insert, the insert has to be placed after them in the list. To implement this, ListIterator would need a peek(), which it doesn't have.
edit: I have my answer, but will add this anyway since a lot of people haven't understood correctly:
I am searching for an insertion point AND inserting, which combined is higher than O(n)
You may consider a skip list, which is implemented using multiple linked lists at varying granularity. E.g. the linked list at level 0 contains all items, level 1 only links to every 2nd item on average, level 2 to only every 4th item on average, etc.... Searching starts from the top level and gradually descends to lower levels until it finds an exact match. This logic is similar to a binary search. Thus search and insertion is an O(log n) operation.
A concrete example in the Java class library is ConcurrentSkipListSet (although it may not be directly usable for you here).
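If duplicates are not an issue, the library class can be used directly, e.g.:

    import java.util.concurrent.ConcurrentSkipListSet;

    ConcurrentSkipListSet<Integer> set = new ConcurrentSkipListSet<>();
    set.add(7);                          // expected O(log n) per add, kept sorted
    set.add(1);
    set.add(13);
    System.out.println(set);             // [1, 7, 13]
    System.out.println(set.higher(7));   // 13 - least element strictly greater than 7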
I'd favor Péter Török's suggestion, but I'd still like to add something for the iterator approach:
Note that ListIterator provides a previous() method to iterate through the list backwards. Thus first iterate until you find the first element that is greater and then go to the previous element and call add(...). If you hit the end, i.e. all elements are smaller, then just call add(...) without going back.
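That approach would look roughly like this; equal elements are skipped over, so the new value ends up after them as required:

    import java.util.LinkedList;
    import java.util.ListIterator;

    static void sortedInsert(LinkedList<Integer> list, int value) {
        ListIterator<Integer> it = list.listIterator();
        while (it.hasNext()) {
            if (it.next() > value) {   // first element strictly greater than value
                it.previous();         // step back so add() inserts before it
                break;
            }
        }
        it.add(value);                 // single pass: find the position and insert together
    }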
I have my answer, but will add this anyway since a lot of people haven't understood correctly: I am searching for an insertion point AND inserting, which combined is higher than O(n).
You require a collection of (possibly) non-unique elements that can be iterated in an order given by an ordering function. This can be achieved in a variety of ways. (In the following I use "total insertion cost" to mean the cost of inserting a number (N) of elements into an initially empty data structure.)
A singly or doubly linked list offers O(N^2) total insertion cost (whether or not you combine the steps of finding the position and doing the insertion!), and O(N) iteration cost.
A TreeSet offers O(NlogN) total insertion cost and O(N) iteration cost. But has the restriction of no duplicates.
A tree-based multiset (e.g. TreeMultiset) has the same complexity as a TreeSet, but allows duplicates.
A skip-list data structure also has the same complexity as the previous two.
Clearly, the complexity measures say that a data structure that uses a linked list performs the worst as N gets large. For this particular group of requirements, a well-implemented tree-based multiset is probably the best, assuming there is only one thread accessing the collection. If the collection is heavily used by many threads (and it is a set), then a ConcurrentSkipListSet is probably better.
You also seem to have a misconception about how "big O" measures combine. If I have one step of an algorithm that is O(N) and a second step that is also O(N), then the two steps combined are STILL O(N) .... not "more than O(N)". You can derive this from the definition of "big O". (I won't bore you with the details, but the Math is simple.)

Collections.Sort performance on subsequent sorts?

I'm using Collections.sort with a custom comparator class. I've heard that this has O(N log N) runtime complexity. I'm curious to know what happens on subsequent sorts when the collection hasn't changed.
For example, let's say I have an ArrayList of Eggs, each of which has an approximate size field (which my comparator sorts by). If I insert ten eggs into the array list and sort it, I can expect it to take O(N log N) time.
If I sort it again, without adding, removing, or changing any elements, will it still take N log N time?
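For reference, the scenario described looks roughly like this (the Egg class and its size field are made up just to mirror the question):

    import java.util.*;

    class Egg {
        final double approximateSize;   // the field the comparator sorts by
        Egg(double approximateSize) { this.approximateSize = approximateSize; }
    }

    List<Egg> eggs = new ArrayList<>();
    eggs.add(new Egg(3.2)); eggs.add(new Egg(1.5)); eggs.add(new Egg(2.8));

    Comparator<Egg> bySize = Comparator.comparingDouble(e -> e.approximateSize);
    Collections.sort(eggs, bySize);   // first sort: O(n log n)
    Collections.sort(eggs, bySize);   // second sort, list unchanged and already in order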
The Javadoc says 'the merge is omitted if the highest element in the low sublist is less than the lowest element in the high sublist'. That appears to mean nothing happens so it should be quicker.
You could always test it.
I have not analysed the code in the current Sun Java library. However, the Javadoc states that a merge sort is used. Most merge sorts yield O(n) performance on an already-sorted collection, although this is not stated in the documentation. My personal experience has shown really good performance on sorted or nearly sorted lists.
Per the Javadoc, Collections.sort uses a merge sort algorithm.
You can see how it does, for yourself, here -> http://www.sorting-algorithms.com/
To expand on EJP's answer: if the documentation indicates that the merge pass is the step skipped, then in this best case the list is still broken into lg N levels of subproblems; the saving comes from skipping the linear merge work at each level.

Why is it better to convert a HashSet to a TreeSet than to work directly with a TreeSet?

In many places on the web, including Sun's website, the following sentence appears:
"It is generally faster to perform actions on a HashSet and then convert the HashSet to a TreeSet."
Well, I'm a little bit confused. It's correct that adding an element to a HashSet is O(1) and adding an object to a TreeSet (red-black tree) is O(log n), but when I convert the HashSet to the TreeSet I need to sort my data, which is O(n log n). So why is it faster to work with a HashSet and then convert it to a TreeSet? I know that if you remove elements or check for existing elements there is a difference between hash and tree, but I don't think that is the factor Sun refers to (at least I hope so, since it looks like a very small thing). Another thing is that the hashCode method may not be good, and then adding elements to the hash will not be O(1), or the hashCode method can be complicated. So generally I don't understand the sentence. Can anyone help me?
It depends on how many operations happen in the hash table before you copy the elements to the sorted tree structure. If all you do is insert n distinct elements to the hash table, then no, it will not be faster to do so then copy them to the tree :)
A hashed set of items can be converted to a sorted tree by either: using a regular sort then building the tree from that, or inserting the items into the tree one at a time. The former means an extra copy/traversal; the latter means extra overhead to maintain a balanced tree (although if you iterate a hash table, you get the items in effectively random order, which means you could probably avoid most rebalancing).
Hash tables are indeed typically faster than search trees for the operations that are well supported (insert/modify/delete), but it's definitely not worth doing what Sun recommends until you actually measure the performance of your whole application and can expect a valuable overall speedup from what will likely be a slight improvement.
Hash tables do have an even larger advantage over sorted trees when the key comparison is expensive (as with strings), because for large sets, fewer items will have a hash collision than a search tree is deep, and because it's possible to cache the hash code for keys already in the set, skipping the expensive comparison for (probably) all but the matching result.
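For reference, the two strategies being compared come down to something like this:

    import java.util.*;

    // Strategy 1: do the bulk of the work against a HashSet, sort once at the end.
    Set<String> hash = new HashSet<>();
    hash.add("pear"); hash.add("apple"); hash.add("plum");   // expected O(1) per add
    SortedSet<String> sortedCopy = new TreeSet<>(hash);      // one O(n log n) conversion

    // Strategy 2: keep a TreeSet throughout.
    SortedSet<String> tree = new TreeSet<>();
    tree.add("pear"); tree.add("apple"); tree.add("plum");   // O(log n) per add

Both end up with the same sorted contents; which one is faster depends on how much other work (lookups, removals, duplicate filtering) happens before the sorted view is actually needed.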
