I have a Java 8 application with an arbitrary Map<T, String>, where T extends Comparable<T>.
The easiest example uses integers:
Map<Integer,String> numbers = new HashMap<>(4);
numbers.put(10,"value10");
numbers.put(20,"value20");
numbers.put(30,"value30");
numbers.put(40,"value40");
I want to search this Map for the key that is closest to an arbitrary input value, rounding up, unless no greater key exists, in which case it rounds down. For instance:
input -5 returns key 10 (round up to the smallest key that is larger than the input)
input 8 returns key 10 (round up to the smallest key that is larger than the input)
input 10 returns key 10 (exact match)
input 11 returns key 20 (round up to the smallest key that is larger than the input)
input 40 returns key 40 (exact match)
input 100 returns key 40 (round down; no key exists that is greater than 100)
I have a working implementation that naively loops over all keys, does all the necessary comparisons, and returns the best matching key based on these criteria. My application needs to check the same Map for different values often, so this naive lookup can become a bottleneck. As demonstrated in this other question, I believe a sorted TreeMap might significantly reduce the lookup time, but this class is a bit too complex for me to understand without some guidance.
Which methods of TreeMap can I use to implement this lookup?
How would the algorithm below be simplified by taking advantage of the fact that a TreeMap is sorted?
If not TreeMap, is another data structure more suited for this?
Here is the naive (but working) implementation. The Collection is actually the Map's keySet:
private T getBestMatchingKeyForValue(Collection<T> keys, T value)
{
    T bestMatchingKeySoFar = null;
    for (T keyToCheck : keys)
    {
        if (bestMatchingKeySoFar == null)
        {
            bestMatchingKeySoFar = keyToCheck;
        }
        else
        {
            int valueComparedToBestMatching = value.compareTo(bestMatchingKeySoFar);
            int valueComparedToKeyToCheck = value.compareTo(keyToCheck);
            int keyToCheckComparedToBestMatching = keyToCheck.compareTo(bestMatchingKeySoFar);
            int signValueComparedToBestMatching = Integer.signum(valueComparedToBestMatching);
            int signValueComparedToKeyToCheck = Integer.signum(valueComparedToKeyToCheck);
            int signKeyToCheckComparedToBestMatching = Integer.signum(keyToCheckComparedToBestMatching);
            if (signValueComparedToBestMatching == signValueComparedToKeyToCheck)
            {
                if (signValueComparedToBestMatching == signKeyToCheckComparedToBestMatching)
                {
                    bestMatchingKeySoFar = keyToCheck;
                }
            }
            else if (valueComparedToKeyToCheck == 0)
            {
                bestMatchingKeySoFar = keyToCheck;
            }
            else if (valueComparedToBestMatching != 0)
            {
                if ((this.preferUpperBound && keyToCheckComparedToBestMatching > 0)
                    || (!this.preferUpperBound && keyToCheckComparedToBestMatching < 0))
                {
                    bestMatchingKeySoFar = keyToCheck;
                }
            }
        }
    }
    return bestMatchingKeySoFar;
}
TreeMap's floor and ceiling methods do exactly what you're looking for:
TreeMap<K, V> map = ...
K search = ...
K closest = map.ceilingKey(search);
if (closest == null) {
    closest = map.floorKey(search);
}
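Applied to the map from the question, a minimal sketch (the class and method names here are mine) might look like this:

import java.util.TreeMap;

public class ClosestKeyDemo {

    // Round up to the smallest key >= search; if none exists, round down.
    static <T extends Comparable<T>> T getBestMatchingKey(TreeMap<T, ?> map, T search) {
        T closest = map.ceilingKey(search); // smallest key >= search, or null
        return closest != null ? closest : map.floorKey(search); // largest key <= search
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> numbers = new TreeMap<>();
        numbers.put(10, "value10");
        numbers.put(20, "value20");
        numbers.put(30, "value30");
        numbers.put(40, "value40");
        System.out.println(getBestMatchingKey(numbers, -5));  // 10
        System.out.println(getBestMatchingKey(numbers, 11));  // 20
        System.out.println(getBestMatchingKey(numbers, 100)); // 40
    }
}

Both ceilingKey and floorKey are O(log n) per call, compared with the O(n) scan of the naive loop.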
I am trying to find the integer that appears an odd number of times, but somehow the tests on qualified.io are not returning true. Maybe there is something wrong with my logic?
The problem is that in an array [5,1,1,5,2,2,5] the number 5 appears 3 times, therefore the answer is 5. The method signature wants me to use List<>. So my code is below.
public static List<Integer> findOdd(List<Integer> integers) {
    int temp = integers.size();
    if (integers.size() % 2 == 0) {
        // Do something here.
    }
    return integers;
}
I need to understand a couple of things. What is the best way to check all the elements inside the integers list, iterating over them to see if a similar element is present, and if so, return that element?
If you are allowed to use Java 8, you can use streams and collectors for this:
Map<Integer, Long> collect = list.stream()
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
Given a list of integers, this code will generate a map where the key is the actual number and the value is the number of repetitions.
You then just have to iterate through the map and find what you are interested in.
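For instance, that last step could be a second stream over the map's entries. A sketch, assuming you want every value that occurs an odd number of times:

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class OddOccurrences {

    public static List<Integer> findOdd(List<Integer> integers) {
        Map<Integer, Long> counts = integers.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
        return counts.entrySet().stream()
                .filter(e -> e.getValue() % 2 != 0) // keep only odd counts
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(findOdd(Arrays.asList(5, 1, 1, 5, 2, 2, 5))); // [5]
    }
}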
You want to set up a data structure that will let you count every integer that appears in the list. Then iterate through your list and do the counting. When you're done, check your data structure for all integers that occur an odd number of times and add them to your list to return.
Something like:
public static List<Integer> findOdd(List<Integer> integers) {
    Map<Integer, MutableInt> occurrences = new HashMap<>(); // Count occurrences of each integer
    for (Integer i : integers) {
        if (occurrences.containsKey(i)) {
            occurrences.get(i).increment();
        } else {
            occurrences.put(i, new MutableInt(1));
        }
    }
    List<Integer> answer = new ArrayList<>();
    for (Integer i : occurrences.keySet()) {
        if ((occurrences.get(i).intValue() % 2) == 1) { // It's odd
            answer.add(i);
        }
    }
    return answer;
}
MutableInt is an Apache Commons class. You can do it with plain Integers, but you have to replace the value each time.
If you've encountered streams before you can change the second half of the answer above (the odd number check) to something like:
return occurrences.entrySet().stream()
        .filter(e -> e.getValue().intValue() % 2 == 1)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
Note: I haven't compiled any of this myself so you may need to tweak it a bit.
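If you would rather avoid the Apache Commons dependency, the counting loop can also be written with plain Integers and Map.merge from Java 8. A sketch of just that part:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

static Map<Integer, Integer> countOccurrences(List<Integer> integers) {
    Map<Integer, Integer> occurrences = new HashMap<>();
    for (Integer i : integers) {
        occurrences.merge(i, 1, Integer::sum); // insert 1, or add 1 to the existing count
    }
    return occurrences;
}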
int findOdd(int[] nums) {
    Map<Integer, Boolean> evenNumbers = new HashMap<>();
    for (int num : nums) { // int[] has no forEach, so use a plain loop
        Boolean status = evenNumbers.get(num);
        if (status == null) {
            evenNumbers.put(num, false);
        } else {
            evenNumbers.put(num, !status);
        }
    }
    // The map now holds true for all values with an even number of occurrences
    for (Map.Entry<Integer, Boolean> entry : evenNumbers.entrySet()) {
        if (!entry.getValue()) {
            return entry.getKey();
        }
    }
    throw new IllegalArgumentException("no value occurs an odd number of times");
}
You could use the reduce method of IntStream (in the java.util.stream package).
Example:
stream(ints).reduce(0, (x, y) -> x ^ y);
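A complete sketch of that approach (note that the XOR trick only works when exactly one value occurs an odd number of times, as in the example array):

import java.util.stream.IntStream;

public class FindOddXor {

    // XOR of all elements: paired values cancel out, leaving the odd-count value.
    static int findOdd(int[] nums) {
        return IntStream.of(nums).reduce(0, (x, y) -> x ^ y);
    }

    public static void main(String[] args) {
        System.out.println(findOdd(new int[]{5, 1, 1, 5, 2, 2, 5})); // 5
    }
}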
I have been given an assignment to upgrade an existing program.
Figure out how to recode the qualifying exam problem using a Map for each terminal line, on the
assumption that the size of the problem is dominated by the number of input lines, not the 500
terminal lines
The program takes in a text file containing lines of the form number, name, where the number is a PC number and the name is the user who logged on. For each PC, the program returns the user who logged on the most. Here is the existing code:
public class LineUsageData {
    SinglyLinkedList<Usage> singly = new SinglyLinkedList<Usage>();

    // function to add a user to the linked list or to increment count by 1
    public void addObservation(Usage usage) {
        for (int i = 0; i < singly.size(); ++i) {
            if (usage.getName().equals(singly.get(i).getName())) {
                singly.get(i).incrementCount(1);
                return;
            }
        }
        singly.add(usage);
    }

    // returns the user with the most connections to the PC
    public String getMaxUsage() {
        int tempHigh = 0;
        int high = 0;
        String userAndCount = "";
        for (int i = 0; i < singly.size(); ++i) { // goes through list and keeps highest
            tempHigh = singly.get(i).getCount();
            if (tempHigh > high) {
                high = tempHigh;
                userAndCount = singly.get(i).getName() + " " + singly.get(i).getCount();
            }
        }
        return userAndCount;
    }
}
I am having trouble on the theoretical side. We can use a HashMap or a TreeMap. I am trying to think through how I would form a map that holds the list of users for each PC. I can reuse the Usage object, which holds the name and the count for a user; I am not supposed to alter that object, though.
When checking whether a Usage is present in the list you perform a linear search each time (O(N)). If you replace your list with a Map<String, Usage>, you'll be able to search for a name in sublinear time: TreeMap has O(log N) time for search and update, while HashMap has amortized O(1) (constant) time.
So the most effective data structure in this case is HashMap.
import java.util.*;

public class LineUsageData {
    Map<String, Usage> map = new HashMap<String, Usage>();

    // function to add a user to the map or to increment count by 1
    public void addObservation(Usage usage) {
        Usage existentUsage = map.get(usage.getName());
        if (existentUsage == null) {
            map.put(usage.getName(), usage);
        } else {
            existentUsage.incrementCount(1);
        }
    }

    // returns the user with the most connections to the PC
    public String getMaxUsage() {
        Usage maxUsage = null;
        for (Usage usage : map.values()) {
            if (maxUsage == null || usage.getCount() > maxUsage.getCount()) {
                maxUsage = usage;
            }
        }
        return maxUsage == null ? null : maxUsage.getName() + " " + maxUsage.getCount();
    }

    // alternative version that uses Collections.max
    public String getMaxUsageAlt() {
        Usage maxUsage = map.isEmpty() ? null :
                Collections.max(map.values(), new Comparator<Usage>() {
                    @Override
                    public int compare(Usage o1, Usage o2) {
                        return o1.getCount() - o2.getCount();
                    }
                });
        return maxUsage == null ? null : maxUsage.getName() + " " + maxUsage.getCount();
    }
}
A Map can also be iterated in time proportional to its size, so you can use the same procedure to find the maximum element. I gave you two options: the manual approach, or the Collections.max utility method.
In simple words: you use a LinkedList (singly or doubly linked) when you have a list of items that you usually plan to traverse, and a Map implementation when you have dictionary-like entries, where a key corresponds to a value and you plan to access values by their keys.
In order to convert your SinglyLinkedList to a HashMap or TreeMap, you need to find out which property of your items will serve as the key (it must be a property with unique values).
Assuming you are using the name property from your Usage class, you can do this
(a simple example):
// You could also use a TreeMap, depending on your needs.
Map<String, Usage> usageMap = new HashMap<String, Usage>();

// Iterate through your SinglyLinkedList.
for (Usage usage : singly) {
    // Add all items to the Map
    usageMap.put(usage.getName(), usage);
}

// Access a value using its name as the key of the Map.
Usage accessedUsage = usageMap.get("AUsageName");
Also note that:
Map<String, Usage> usageMap = new HashMap<>();
is valid, thanks to the diamond operator.
I solved this offline and didn't get a chance to see some of the answers, which looked to be very helpful. Sorry about that, Nick and Aivean, and thanks for the responses. Here is the code I ended up writing to get this to work.
public class LineUsageData {
    Map<Integer, Usage> map = new HashMap<Integer, Usage>();
    int hash = 0;

    public void addObservation(Usage usage) {
        hash = usage.getName().hashCode();
        System.out.println(hash);
        while ((map.get(hash)) != null) {
            if (map.get(hash).getName().equals(usage.name)) {
                map.get(hash).count++;
                return;
            } else {
                hash++;
            }
        }
        map.put(hash, usage);
    }

    public String getMaxUsage() {
        String str = "";
        int tempHigh = 0;
        int high = 0;
        // for loop
        for (Integer key : map.keySet()) {
            tempHigh = map.get(key).getCount();
            if (tempHigh > high) {
                high = tempHigh;
                str = map.get(key).getName() + " " + map.get(key).getCount();
            }
        }
        return str;
    }
}
I need some advice in terms of performance. I've got a Map<DateTime, BigDecimal>. And I need something like the following method:
Map<DateTime, BigDecimal> map; // about 50 entries. Btw: which implementation should I choose?

BigDecimal findNextSmaller(DateTime input) {
    DateTime tmp = null;
    for (DateTime d : map.keySet()) {
        if (tmp == null && d.isBefore(input)) {
            tmp = d;
        }
        if (d.isBefore(input) && d.isAfter(tmp)) {
            tmp = d;
        }
    }
    return map.get(tmp);
}
So basically I just iterate over the keySet of my Map and try to find the key that is the next smallest compared to the input.
This method will be called about 1,000,000 times in a row:
BigDecimal sum = BigDecimal.ZERO;
List<Item> items; // about 1,000,000 items
for (Item i : items) {
    sum = sum.add(findNextSmaller(i.getDateTime()));
}
Now I'm looking for a way to make things faster.
My first thought was to build an ordered list out of the Map's keySet, so on average I would only have to iterate over half of the DateTimes, and then just do a map.get(dateTimeFromOrderedList) to get the matching value.
But is that all I can do about it?
You can use a TreeMap, which has a built-in method for that:
TreeMap<DateTime, BigDecimal> map = new TreeMap<>();
// populate the map

BigDecimal findNextSmaller(DateTime input) {
    return map.floorEntry(input).getValue(); // add exception checking as required
}
Note: you may want floorEntry or lowerEntry depending on whether you want (resp.) <= or <.
Have a look at NavigableMap. This seems to be exactly what you need.
As you are searching for the DateTime closest to and strictly less than the input, I would choose lowerEntry(key) for the lookup (floorEntry would also match an exactly equal key). But make sure that you handle nulls correctly: there may not be a key in the map that is strictly smaller than the input! If you pass a null reference to BigDecimal.add, a NullPointerException will be thrown.
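A sketch of that lookup with the null check in place, assuming Joda-Time's DateTime as in the question:

import java.math.BigDecimal;
import java.util.Map;
import java.util.TreeMap;
import org.joda.time.DateTime;

class NextSmallerLookup {

    private final TreeMap<DateTime, BigDecimal> map = new TreeMap<>();

    BigDecimal findNextSmaller(DateTime input) {
        // lowerEntry: greatest key strictly less than input (floorEntry would allow ==).
        Map.Entry<DateTime, BigDecimal> entry = map.lowerEntry(input);
        if (entry == null) {
            throw new IllegalArgumentException("no key smaller than " + input);
        }
        return entry.getValue();
    }
}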
I am working on an assignment where I have to implement my own HashMap. In the assignment text it is described as an Array of Lists, and whenever you want to add an element, the place it ends up in the Array is determined by its hashCode. In my case the keys are positions from a spreadsheet, so I have taken columnNumber + rowNumber, converted that to a String and then to an int, and used that as the hashCode, and I insert the element at that place in the Array. It is inserted in the form of a Node(key, value), where the key is the position of the cell and the value is the value of the cell.
But I must say I do not understand why we need an Array of Lists: if we end up with a list with more than one element, will that not increase the lookup time quite considerably? Should it not rather be an Array of Nodes?
Also I have found this implementation of a HashMap in Java:
public class HashEntry {
    private int key;
    private int value;

    HashEntry(int key, int value) {
        this.key = key;
        this.value = value;
    }

    public int getKey() {
        return key;
    }

    public int getValue() {
        return value;
    }
}

public class HashMap {
    private final static int TABLE_SIZE = 128;

    HashEntry[] table;

    HashMap() {
        table = new HashEntry[TABLE_SIZE];
        for (int i = 0; i < TABLE_SIZE; i++)
            table[i] = null;
    }

    public int get(int key) {
        int hash = (key % TABLE_SIZE);
        while (table[hash] != null && table[hash].getKey() != key)
            hash = (hash + 1) % TABLE_SIZE;
        if (table[hash] == null)
            return -1;
        else
            return table[hash].getValue();
    }

    public void put(int key, int value) {
        int hash = (key % TABLE_SIZE);
        while (table[hash] != null && table[hash].getKey() != key)
            hash = (hash + 1) % TABLE_SIZE;
        table[hash] = new HashEntry(key, value);
    }
}
So is it correct that the put method looks first at table[hash], and if that is not empty and does not hold the key passed to put, it moves on to table[(hash + 1) % TABLE_SIZE]? But if it is the same key, it simply overwrites the value? And is it because the get and put methods use the same way of looking up a place in the Array that, given the same key, they end up at the same place?
I know these questions might be a bit basic, but I have spent quite some time trying to get this sorted out, so any help would be much appreciated!
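For reference, here is a short trace of that put/get behavior with hypothetical keys: 5 and 133 both map to slot 5 when TABLE_SIZE is 128.

HashMap map = new HashMap(); // the open-addressing class shown above
map.put(5, 50);    // 5 % 128 == 5   -> stored at table[5]
map.put(133, 70);  // 133 % 128 == 5 -> table[5] holds key 5, so probe on to table[6]
map.put(5, 99);    // finds key 5 at table[5] and overwrites the value
System.out.println(map.get(133)); // probes table[5], then table[6]; prints 70
System.out.println(map.get(5));   // prints 99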
Edit
So now I have tried implementing the HashMap myself via a Node class, which just constructs a node with a key and a corresponding value; it also has a getHashCode method, where I simply concatenate the two values.
I have also constructed a SinglyLinkedList (part of a previous assignment), which I use as the bucket.
And my Hash function is simply hashCode % hashMap.length.
Here is my own implementation, so what do you think of it?
package spreadsheet;

public class HashTableMap {

    private SinglyLinkedListMap[] hashArray;
    private int size;

    public HashTableMap() {
        hashArray = new SinglyLinkedListMap[64];
        size = 0;
    }

    public void insert(final Position key, final Expression value) {
        Node node = new Node(key, value);
        int hashNumber = node.getHashCode() % hashArray.length;
        SinglyLinkedListMap bucket = new SinglyLinkedListMap();
        bucket.insert(key, value);
        if (hashArray[hashNumber] == null) {
            hashArray[hashNumber] = bucket;
            size++;
        }
        if (hashArray[hashNumber] != null) {
            SinglyLinkedListMap bucket2 = hashArray[hashNumber];
            bucket2.insert(key, value);
            hashArray[hashNumber] = bucket2;
            size++;
        }
        if (hashArray.length == size) {
            SinglyLinkedListMap[] newhashArray = new SinglyLinkedListMap[size * 2];
            for (int i = 0; i < size; i++) {
                newhashArray[i] = hashArray[i];
            }
            hashArray = newhashArray;
        }
    }

    public Expression lookUp(final Position key) {
        Node node = new Node(key, null);
        int hashNumber = node.getHashCode() % hashArray.length;
        SinglyLinkedListMap foundBucket = hashArray[hashNumber];
        return foundBucket.lookUp(key);
    }
}
The lookup time should be around O(1), so I would like to know whether that is the case, and if not, how I can improve it in that regard.
You have to have some plan to deal with hash collisions, in which two distinct keys fall in the same bucket, the same element of your array.
One of the simplest solutions is to keep a list of entries for each bucket.
If you have a good hashing algorithm, and make sure the number of buckets is bigger than the number of elements, you should end up with most buckets having zero or one items, so the list search should not take long. If the lists are getting too long it is time to rehash with more buckets to spread the data out.
It really depends on how good your hashCode method is. Let's say you tried to make it as bad as possible: you made hashCode return 1 every time. If that were the case, you'd have an array of lists, but only 1 element of the array would have any data in it. That element would just grow to have a huge list in it.
If you did that, you'd have a really inefficient hashmap. But, if your hashcode were a little better, it'd distribute the objects into many different array elements and as a result it'd be much more efficient.
The most ideal case (which often isn't achievable) is to have a hashcode method that returns a unique number no matter what object you put into it. If you could do that, you wouldn't ever need an array of lists. You could just use an array. But since your hashcode isn't "perfect" it's possible for two different objects to have the same hashcode. You need to be able to handle that scenario by putting them in a list at the same array element.
But, if your hashcode method was "pretty good" and rarely had collisions, you rarely would have more than 1 element in the list.
The Lists are often referred to as buckets and are a way of dealing with collisions. When two data elements have the same hash code mod TABLE_SIZE, they collide, but both must be stored.
A worse kind of collision is two different data points having the same key; this is disallowed in hash tables, and one will overwrite the other. If you just add row to column, then (2,1) and (1,2) will both have a key of 3, which means they cannot be stored in the same hash table. If you concatenate the strings together without a separator, the problem is with (12,1) versus (1,21): both have the key "121". With a separator (such as a comma) all the keys will be distinct.
Distinct keys can land in the same bucket if their hash codes are the same mod TABLE_SIZE. The lists are one way to store both values in the same bucket.
class SpreadSheetPosition {
    int column;
    int row;

    @Override
    public int hashCode() {
        return column + row;
    }
}

class HashMap {
    private List[] buckets = new List[N];

    public void put(Object key, Object value) {
        int keyHashCode = key.hashCode();
        int bucketIndex = keyHashCode % N;
        ...
    }
}
Compare having N lists with having just one list/array. To search a single list one may have to traverse the entire list; with an array of lists, each individual list is shorter, possibly even down to one or zero elements (null).
If hashCode() is as close to unique as possible, the chance of an immediate hit is high.
Maybe I am not using the right data structure. I need to use a set, but also want to efficiently return the k-th smallest element. Can TreeSet in Java do this? There seems no built-in method of TreeSet to do this.
I don't believe that TreeSet has a method that directly does this. There are binary search trees that do support O(log n) random access (they are sometimes called order statistic trees), and there are Java implementations of this data structure available. These structures are typically implemented as binary search trees that store, in each node, a count of how many elements are to its left or right, so a search down the tree can find the appropriate element by descending into the appropriate subtree at each step. The classic "Introduction to Algorithms, Third Edition" by Cormen, Leiserson, Rivest, and Stein explores this data structure in the chapter "Augmenting Data Structures" if you are curious how to implement one yourself.
Alternatively, you may be able (in some cases) to use the TreeSet's tailSet method and a modified binary search to try to find the kth element. Specifically, look at the first and last elements of the TreeSet, then (if possible given the contents) pick some element halfway between the two and pass it as an argument to tailSet to get a view of the elements of the set after the midpoint. Using the number of elements in the tailSet, you could then decide whether you've found the element, or whether to explore the left or right half of the tree. This is a slightly modified interpolation search over the tree, and could potentially be fast. However, I don't know the internal complexity of the tailSet methods, so this could actually be worse than the order statistic tree. It also might fail if you can't compute the "midpoint" of two elements, for example if you are storing Strings in your TreeSet.
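To make that idea concrete, here is a rough sketch for a TreeSet of Integers (so a midpoint is computable), binary-searching on the value range and counting with headSet. Keep in mind that size() on a TreeSet view is itself O(n), which is part of why this may lose to an order statistic tree:

import java.util.TreeSet;

// Sketch only: requires 0 <= k < set.size().
static int kthSmallest(TreeSet<Integer> set, int k) { // k is 0-based
    int lo = set.first(), hi = set.last();
    while (lo < hi) {
        // Math.floorDiv avoids int overflow and always gives lo <= mid < hi.
        int mid = (int) Math.floorDiv((long) lo + hi, 2);
        if (set.headSet(mid, true).size() >= k + 1) { // count of elements <= mid
            hi = mid;      // the kth smallest is at most mid
        } else {
            lo = mid + 1;  // the kth smallest is above mid
        }
    }
    return lo; // converges to a value that is actually in the set
}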
You just need to iterate to element k. One way to do that would be to use one of Guava's Iterables.get methods:
T element = Iterables.get(set, k);
There's no built-in method to do this because a Set is not a List, and index-based operations like that are generally reserved for Lists. A TreeSet is more appropriate for things like finding the closest contained element that is >= some value.
One thing you could do if the fastest possible access to the kth smallest element were really important would be to use an ArrayList rather than a TreeSet and handle inserts by binary searching for the insertion point and either inserting the element at that index or replacing the existing element at that index, depending on the result of the search. Then you could get the kth smallest element in O(1) by just calling get(k).
You could even create an implementation of SortedSet that handles all that and adds the get(index) method if you really wanted.
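A sketch of that ArrayList idea, using Collections.binarySearch to find the insertion point (it returns -(insertionPoint) - 1 when the element is absent):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SortedElements<T extends Comparable<? super T>> {

    private final List<T> elements = new ArrayList<>();

    void add(T element) {
        int i = Collections.binarySearch(elements, element);
        if (i < 0) {
            elements.add(-i - 1, element); // absent: insert at the insertion point
        }                                  // present: set semantics, keep the existing element
    }

    T get(int k) { // kth smallest, 0-based, O(1)
        return elements.get(k);
    }
}

The trade-off is that add is O(n) because of the array shift, so this only pays off when reads dominate writes.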
Use TreeSet.iterator() to get an iterator in ascending order and call next() K times:
// Example for Integers
Iterator<Integer> it = treeSet.iterator();
int i = 0;
Integer current = null;
while (it.hasNext() && i < k) {
    current = it.next();
    i++;
}
https://github.com/geniot/indexed-tree-map
I had the same problem. So I took the source code of java.util.TreeMap and wrote IndexedTreeMap. It implements my own IndexedNavigableMap:
public interface IndexedNavigableMap<K, V> extends NavigableMap<K, V> {
K exactKey(int index);
Entry<K, V> exactEntry(int index);
int keyIndex(K k);
}
The implementation is based on updating node weights in the red-black tree when it changes. Weight is the number of nodes beneath a given node, plus one for the node itself. For example, when a tree is rotated to the left:
private void rotateLeft(Entry<K, V> p) {
    if (p != null) {
        Entry<K, V> r = p.right;
        int delta = getWeight(r.left) - getWeight(p.right);
        p.right = r.left;
        p.updateWeight(delta);
        if (r.left != null) {
            r.left.parent = p;
        }
        r.parent = p.parent;
        if (p.parent == null) {
            root = r;
        } else if (p.parent.left == p) {
            delta = getWeight(r) - getWeight(p.parent.left);
            p.parent.left = r;
            p.parent.updateWeight(delta);
        } else {
            delta = getWeight(r) - getWeight(p.parent.right);
            p.parent.right = r;
            p.parent.updateWeight(delta);
        }
        delta = getWeight(p) - getWeight(r.left);
        r.left = p;
        r.updateWeight(delta);
        p.parent = r;
    }
}
updateWeight simply updates weights up to the root:
void updateWeight(int delta) {
    weight += delta;
    Entry<K, V> p = parent;
    while (p != null) {
        p.weight += delta;
        p = p.parent;
    }
}
}
And when we need to find the element by index here is the implementation that uses weights:
public K exactKey(int index) {
    if (index < 0 || index > size() - 1) {
        throw new ArrayIndexOutOfBoundsException();
    }
    return getExactKey(root, index);
}

private K getExactKey(Entry<K, V> e, int index) {
    if (e.left == null && index == 0) {
        return e.key;
    }
    if (e.left == null && e.right == null) {
        return e.key;
    }
    if (e.left != null && e.left.weight > index) {
        return getExactKey(e.left, index);
    }
    if (e.left != null && e.left.weight == index) {
        return e.key;
    }
    return getExactKey(e.right, index - (e.left == null ? 0 : e.left.weight) - 1);
}
Finding the index of a key also comes in very handy:
public int keyIndex(K key) {
    if (key == null) {
        throw new NullPointerException();
    }
    Entry<K, V> e = getEntry(key);
    if (e == null) {
        throw new NullPointerException();
    }
    if (e == root) {
        return getWeight(e) - getWeight(e.right) - 1; // index to return
    }
    int index = 0;
    int cmp;
    if (e.left != null) {
        index += getWeight(e.left);
    }
    Entry<K, V> p = e.parent;
    // split comparator and comparable paths
    Comparator<? super K> cpr = comparator;
    if (cpr != null) {
        while (p != null) {
            cmp = cpr.compare(key, p.key);
            if (cmp > 0) {
                index += getWeight(p.left) + 1;
            }
            p = p.parent;
        }
    } else {
        Comparable<? super K> k = (Comparable<? super K>) key;
        while (p != null) {
            if (k.compareTo(p.key) > 0) {
                index += getWeight(p.left) + 1;
            }
            p = p.parent;
        }
    }
    return index;
}
You can find the result of this work at https://github.com/geniot/indexed-tree-map.
TreeSet<Integer> a = new TreeSet<>();
a.add(1);
a.add(2);
a.add(-1);
System.out.println(a.toArray()[0]);
It can be helpful: a.toArray()[k] gives the kth smallest element, though note that toArray() copies the entire set on every call.
[Below, I abbreviate "kth smallest element search operation" as "Kth op."]
You need to give more details. Which operations will your data structure provide? Is K in the Kth operation very small compared to N, or can it be anything? How often will you have insertions and deletions compared to lookups? How often will you have a Kth-smallest-element search compared to lookups? Are you looking for a quick solution of a couple of lines from the Java library, or are you willing to spend some effort building a custom data structure?
The operations to provide could be any subset of:
LookUp (find an element by its key; where key is comparable and can be anything)
Insert
Delete
Kth
Here are some possibilities:
If there will be no or very few insertions and deletions, you can just sort the elements and use an array, with O(log N) lookup time and O(1) for Kth (see the sketch after this list).
If O(Log(N)) for LookUp, Insert, Delete and O(k) for Kth op. is good enough, probably the easiest implementation would be Skip Lists. (Wikipedia article is very good if you need more detail)
If K is small enough, or Kth operations only come after an "insertions & deletions phase", you can keep the smallest K elements in a heap, sorting them after the insertions and deletions, for O(N + K log K) time. (You will also need a separate Hash for LookUp.)
If K is arbitrary and O(N) is good enough for Kth operation, you can use a Hash for O(1) time lookup, and use a "one-sided-QuickSort" algorithm for Kth operations (the basic idea is do a quick sort but on every binary divide recurse only on the side you really need; which would give (this is a gross simplification) N (1/2 + 1/4 + 1/8 + ... ) = O(N) expected time)
You can build an augmented "simple" interval tree structure with each node keeping the number of its children, so that LookUp, Insert, Delete, and Kth all compute in O(log N) time as long as the tree is balanced, though it may be difficult to implement if you are a novice.
etc., etc. The set of alternatives is endless, as are the possible interpretations of your question.
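For the first option (no or very few modifications), the sketch is short:

import java.util.Arrays;

public class SortedArrayKth {
    public static void main(String[] args) {
        int[] elements = {30, 10, 40, 20}; // hypothetical data
        Arrays.sort(elements);             // one-time O(N log N)
        System.out.println(elements[2]);                            // Kth op in O(1): prints 30
        System.out.println(Arrays.binarySearch(elements, 40) >= 0); // LookUp in O(log N): prints true
    }
}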
Could you use a ConcurrentSkipListSet and its toArray() method? ConcurrentSkipListSet is sorted by the natural order of its elements. The only thing I am not sure about is whether toArray() is O(n), or O(1) as it would be if the set were backed by an array like an ArrayList.
If toArray() is O(1), then you should be able to do skipList.toArray()[k] to get the kth smallest element.
I know this question is quite old, but since TreeSet implements NavigableSet you have access to the subSet method, which runs in constant time.
subSet(k, k + 1).first();
The first() call takes log(n) time, where n is the size of the original set. Note that subSet takes element values, not indices, so this only yields the kth smallest element when the set's elements are the integers 0..n-1 themselves. This does create some unnecessary objects, which could be avoided with a more robust implementation of TreeSet, but it avoids using a third-party library.