@SuppressWarnings("unchecked")
public static final Ordering<EmailTemplate> ARBITRARY_ORDERING = (Ordering)Ordering.arbitrary();
public static final Ordering<EmailTemplate> ORDER_BY_NAME = Ordering.natural().nullsFirst().onResultOf(GET_NAME);
public static final Ordering<EmailTemplate> ORDER_BY_NAME_SAFE = Ordering.allEqual().nullsFirst()
.compound(ORDER_BY_NAME)
.compound(ARBITRARY_ORDERING);
Here's the code I use to order EmailTemplate instances.
If I have a list of EmailTemplate, I want the null elements of the list to appear at the beginning, then the elements with a null name, then the rest in natural name order, and, if they have the same name, in an arbitrary order.
Is this how I'm supposed to do it? It seems strange to start the comparator with "allEqual", I think...
I also wonder what's the best way to deal with Ordering.arbitrary(), since it's a static method that returns Ordering<Object>. Is there any elegant way to use it? I don't really like this kind of noisy line that needs its warning suppressed:
@SuppressWarnings("unchecked")
public static final Ordering<EmailTemplate> ARBITRARY_ORDERING = (Ordering)Ordering.arbitrary();
By the way, the documentation says:
Returns an arbitrary ordering over all objects, for which compare(a, b) == 0 implies a == b (identity equality). There is no meaning whatsoever to the order imposed, but it is constant for the life of the VM.
Does this mean that my object being compared with this Ordering will never be garbage collected?
Regarding the second question: no. Guava uses the identity hash codes of the objects to sort them arbitrarily.
Regarding the first question: I would use a comparison chain to sort by name, then by arbitrary order:
private static class ByNameThenArbitrary implements Comparator<EmailTemplate> {
    @Override
    public int compare(EmailTemplate e1, EmailTemplate e2) {
        return ComparisonChain.start()
                .compare(e1.getName(), e2.getName(), Ordering.natural().nullsFirst())
                .compare(e1, e2, Ordering.arbitrary())
                .result();
    }
}
Then I would create the real ordering to order the templates with nulls first:
private static final Ordering<EmailTemplate> ORDER =
        Ordering.from(new ByNameThenArbitrary()).nullsFirst();
Not tested, though.
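If it does work, usage would presumably look like this (a hypothetical sketch; templates stands for some List<EmailTemplate>):
// Nulls first, then by name (null names first), then arbitrarily:
List<EmailTemplate> sorted = ORDER.sortedCopy(templates);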
I'm pretty sure you're making this more complicated than it needs to be:
Ordering.arbitrary() works with any Object, and compound doesn't require restricting it to EmailTemplate
nullsFirst() takes priority when a null gets compared, so I'd suggest applying it last
You don't need to define multiple constants; it all should be easy
I'd go for
public static final Ordering<EmailTemplate> ORDER_BY_NAME_SAFE = Ordering
.natural()
.onResultOf(GET_NAME)
.compound(Ordering.arbitrary())
.nullsFirst();
but I haven't tested it.
What's confusing here is the way compound and nullsFirst work. With the former, the receiver takes precedence, while with the latter, the test for null wins. Both are logical:
compound works left to right
nullsFirst must test for null first, otherwise we'd get an exception
but taken together it's confusing.
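For illustration, a minimal sketch (not from the original answer, untested) of how the wrapping plays out:
// nullsFirst() wraps the whole compound comparator, so nulls are handled
// before any of the chained comparisons run:
Ordering<Integer> o = Ordering.<Integer>natural().compound(Ordering.arbitrary()).nullsFirst();
o.compare(null, 1);  // negative: the outer nullsFirst() short-circuits
o.compare(1, 2);     // negative: natural() decides, the compound part is never consulted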
Does this mean that my object being compared with this Ordering will never be garbage collected?
No, it uses weak references. Whenever an object isn't referenced elsewhere, it can be garbage collected. This is no contradiction to "the ordering is constant for the life of the VM", since an object that no longer exists can no longer be compared.
Note that Ordering.arbitrary() is indeed arbitrary and based on object identity rather than on equals, which means that
Ordering.arbitrary().compare(new String("a"), new String("a"))
doesn't return 0.
I wonder if an "equals-compatible arbitrary ordering" could be implemented.
Related
Consider the following compareTo method, implementing the Comparable<T> interface:
@Override
public int compareTo(MyObject o)
{
    if (o.value.equals(value))
        return 0;
    return 1;
}
Apparently, the programmer implemented compareTo as if it were equals(). Obviously a mistake. I would expect this to cause Collections.sort() to crash, but it doesn't. Instead it just gives an arbitrary result: the sorted output depends on the initial ordering.
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class MyObject implements Comparable<MyObject>
{
    public static void main(String[] args)
    {
        List<MyObject> objects = Arrays.asList(
                new MyObject(1), new MyObject(2), new MyObject(3));
        Collections.sort(objects);
        System.out.println(objects);

        List<MyObject> objects2 = Arrays.asList(
                new MyObject(3), new MyObject(1), new MyObject(2));
        Collections.sort(objects2);
        System.out.println(objects2);
    }

    public int value;

    public MyObject(int value)
    {
        this.value = value;
    }

    @Override
    public int compareTo(MyObject o)
    {
        if (value == o.value)
            return 0;
        return 1;
    }

    public String toString()
    {
        return "" + value;
    }
}
Result:
[3, 2, 1]
[2, 1, 3]
Can we come up with a use case for this curious implementation of compareTo, or is it always invalid? And in the case of the latter, should it throw an exception, or perhaps not even compile?
There's no reason for it to crash or throw an exception.
You're required to fulfil the contract when you implement the method, but if you don't, it just means that you'll get arbitrary results from anything that relies on it. Nothing is going to go out of its way to check the correctness of your implementation, because that would just slow everything down.
A sorting algorithm's efficiency is defined in terms of the number of comparisons it makes. That means it's not going to add extra comparisons just to check that your implementation is consistent, any more than a HashMap is going to call .hashCode() on everything twice just to check that it gives the same result both times.
If it happens to spot a problem during the course of sorting, then it might throw an exception; but don't rely on it.
Violating the contract of Comparable or Comparator does not necessarily result in an exception. The sort method won’t spend additional efforts to detect such a situation. Therefore, it might result in an inconsistent order, an apparently correct result or in an exception being thrown.
The actual behavior depends on the input data and the current implementation. E.g. Java 7 introduced TimSort in its sort implementation, which is more likely to throw an exception for inconsistent Comparable or Comparator implementations than the implementations of earlier Java releases. This might spot errors that remained undetected when using previous Java versions; however, that's not a feature to aid debugging in the first place, it's just a side-effect of more sophisticated optimizations.
Note that it isn’t entirely impossible for a compiler or code audit tool to detect asymmetrical behavior of a compare method for simple cases like in your question. However, as far as I know, there are no compilers performing such a check automatically. If you want to be on the safe side, you should always implement unit tests for classes implementing Comparable or Comparator.
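For example, a minimal test sketch (JUnit 4 assumed, not from the original answer) that checks the antisymmetry rule sgn(compare(x, y)) == -sgn(compare(y, x)):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MyObjectCompareToTest {
    @Test
    public void compareToIsAntisymmetric() {
        MyObject a = new MyObject(1);
        MyObject b = new MyObject(2);
        // Fails for the implementation above: both calls return 1.
        assertEquals(-Integer.signum(a.compareTo(b)), Integer.signum(b.compareTo(a)));
    }
}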
According to the documentation, Collections.sort() uses a variant of merge sort, which divides the list into multiple sublists and then repeatedly merges those lists; the sorting part is done during the merging. If your comparison method is arbitrary, this merging will be done in an arbitrary order.
As a result of this implementation, every element compares as bigger than every other (distinct) element.
Depending on the mathematical group or order you want to represent, it may be a valid case, and therefore there is no reason to throw any errors. However, the example you show does not represent the natural ordering of numbers as you know it.
The presented order is not total.
EDIT: Thanks for the comment; indeed, the specification does not allow using the compareTo method to implement non-total or non-antisymmetric orders.
Which of these instructions is better in terms of performance and memory usage:
if(val.equals(CONSTANT1) || val.equals(CONSTANT2) ..... || val.equals(CONSTANTn)) {
}
OR
if(Arrays.asList(CONSTANT1,CONSTANT2, ..... ,CONSTANTn).contains(val)) {
}
A better question to ask would be how to write this code more clearly (and faster, if performance actually matters). The answer to that would be a switch statement (or possibly even polymorphism, if you want to convert your constants into an enum) or a lookup array.
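For example, a hedged sketch of the set-based lookup (assuming the constants are Strings; names taken from the question, the set is built once and reused):
private static final Set<String> CONSTANTS =
        new HashSet<String>(Arrays.asList(CONSTANT1, CONSTANT2, /* ... */ CONSTANTn));

// Hash-based contains() is O(1) instead of a linear scan:
if (CONSTANTS.contains(val)) {
    // ...
}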
But if you insist on comparing your two approaches, the first is slightly faster. To see this, let's look at what the second approach entails:
create a new array with the constants, to pass them to the vararg parameter of Arrays.asList
create a new list object wrapping that array
iterate over that array, comparing each element with equals
The third step is equivalent to your first approach.
Finally, it's worth noting that such an operation will likely take far less than a microsecond, so unless you invoke this method millions of times per second, any approach will be fast enough.
Theoretically #1 is faster, but insignificantly so, because Arrays.asList creates only one object - a list view (wrapper) of the specified array; there is no array copying:
public static <T> List<T> asList(T... a) {
    return new ArrayList<T>(a);
}

private static class ArrayList<E> extends AbstractList<E>
    implements RandomAccess, java.io.Serializable
{
    private static final long serialVersionUID = -2764017481108945198L;
    private final E[] a;

    ArrayList(E[] array) {
        if (array == null)
            throw new NullPointerException();
        a = array;
    }
    // ... (remainder of the JDK source omitted)
}
Since you are not using a loop I guess that the number of values is so low that in practice any differences will be irrelevant.
However, having said that, if one were to iterate by hand and use equals() versus asList() and contains()... it would still be the same.
Arrays.asList() returns a private implementation of a list which extends AbstractList and simply wraps the existing array by reference (no copy is made). The contains() method uses indexOf(), which goes through the array calling equals() on each element until it finds a match. If you break out of your loop as soon as you find a match, both implementations are quite equivalent.
The only difference would be a tiny memory footprint for the additional list structure that Arrays.asList() creates, other than that...
if(val.equals(CONSTANT1) || val.equals(CONSTANT2) ..... || val.equals(CONSTANTn)) {
}
is better in terms of performance and memory, because the second one takes time to build a list and then search for val in that list. Extra memory is required to maintain the list, and extra time is spent iterating through it. Comparing val with the constants directly, on the other hand, makes use of short-circuit evaluation.
Consider a class with a comparable (consistent with equals) and a non-comparable field (of a class about which I do not know whether it overrides Object#equals or not).
The class's instances shall be compared, where the resulting order shall be consistent with equals, i.e. 0 is returned iff both fields are equal (as per Object#equals), and consistent with the order of the comparable field. I used System.identityHashCode to cover most of the cases not covered by these requirements (the order of instances with the same comparable value but different other values is arbitrary), but I am not sure whether this is the best approach.
public class MyClass implements Comparable<MyClass> {
    private Integer intField;
    private Object nonCompField;

    public int compareTo(MyClass other) {
        int intFieldComp = this.intField.compareTo(other.intField);
        if (intFieldComp != 0)
            return intFieldComp;
        if (this.nonCompField.equals(other.nonCompField))
            return 0;
        // ...and now? My current approach:
        if (System.identityHashCode(this.nonCompField) < System.identityHashCode(other.nonCompField))
            return -1;
        else
            return 1;
    }
}
Two problems I see here:
If System.identityHashCode is the same for two objects, each is greater than the other. (Can this happen at all?)
The order of instances with the same intField value and different nonCompField values need not be consistent between runs of the program, as far as I understand what System.identityHashCode does.
Is that correct? Are there more problems? Most importantly, is there a way around this?
The first problem, although highly unlikely, could happen (I think you would need an enormous amount of memory, and very bad luck). But it's solved by Guava's Ordering.arbitrary(), which uses the identity hash code behind the scenes, but maintains a cache of comparison results for the cases where two different objects have the same identity hash code.
Regarding your second question, no, the identity hash codes are not preserved between runs.
System.identityHashCode […] the same for two objects […] (Can this happen at all?)
Yes it can. Quoting from the Java API Documentation:
As much as is reasonably practical, the hashCode method defined by class Object does return distinct integers for distinct objects.
identityHashCode(Object x) returns the same hash code for the given object as would be returned by the default method hashCode(), whether or not the given object's class overrides hashCode().
So you may encounter hash collisions, and with memory ever growing but hash codes staying fixed at 32 bit, they will become increasingly more likely.
The order of instances with same intField value and different nonCompField values need not be consistent between runs of the program, as far as I understand what System.identityHashCode does.
Right. It might even be different during a single invocation of the same program: You could have (1,foo) < (1,bar) < (1,baz) even though foo.equals(baz).
Most importantly, is there a way around this?
You can maintain a map which maps each distinct value of the non-comparable type to a sequence number that you increase for each distinct value you encounter (see the sketch below).
Memory management will be tricky, though: You cannot use a WeakHashMap as the code might make your key object unreachable but still hold a reference to another object of the same value. So either you maintain a list of weak references to all the objects of a given value, or you simply use strong references and accept the fact that any uncomparable value ever encountered will never be garbage collected.
Note that this scheme will still not result in reproducible sequence numbers unless you create values reproducibly in just the same order.
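A minimal sketch of that scheme with strong references (all names here are assumptions, and it is untested; as noted, every distinct value encountered is retained forever):
private static final Map<Object, Integer> SEQUENCE_NUMBERS = new HashMap<Object, Integer>();

private static synchronized int sequenceNumberOf(Object value) {
    Integer seq = SEQUENCE_NUMBERS.get(value);  // keyed on equals(), as required
    if (seq == null) {
        seq = SEQUENCE_NUMBERS.size();
        SEQUENCE_NUMBERS.put(value, seq);
    }
    return seq;
}

// The fallback in compareTo then becomes (Java 7+):
// return Integer.compare(sequenceNumberOf(this.nonCompField), sequenceNumberOf(other.nonCompField));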
If the class of the nonCompField has implemented a reasonably good toString(), you might be able to use
return String.valueOf(this.nonCompField).compareTo(String.valueOf(other.nonCompField));
Unfortunately, the default Object.toString() uses the hash code, which has the potential issues noted by others.
I'm using enumerations to replace String constants in my java app (JRE 1.5).
Is there a performance hit when I treat the enum as a static array of names in a method that is called constantly (e.g. when rendering the UI)?
My code looks a bit like this:
public String getValue(int col) {
return ColumnValues.values()[col].toString();
}
Clarifications:
I'm concerned with a hidden cost related to enumerating values() repeatedly (e.g. inside paint() methods).
I can now see that all my scenarios include some int => enum conversion - which is not Java's way.
What is the actual price of extracting the values() array? Is it even an issue?
Android developers
Read Simon Langhoff's answer below, which was pointed out earlier by Geeks On Hugs in the accepted answer's comments. Enum.values() must do a defensive copy.
For enums, in order to maintain immutability, the backing array is cloned every time you call the values() method. This means that it has a performance impact. How much depends on your specific scenario.
I have been monitoring my own Android app and found out that this simple call used 13.4% CPU time in my specific case!
In order to avoid cloning the values array, I decided to simply cache the values as a private field and then loop through those values whenever needed:
private final static Protocol[] values = Protocol.values();
After this small optimisation, my method call hogged only a negligible 0.0% CPU time.
In my use case, this was a welcome optimisation. However, it is important to note that this approach trades away the immutability of your enum's values array. Who knows what people might put into your values array once you give them a reference to it!?
Enum.values() gives you a reference to an array, and iterating over an array of enums costs the same as iterating over an array of strings. Meanwhile, comparing enum values to other enum values can actually be faster than comparing strings to strings.
Meanwhile, if you're worried about the cost of invoking the values() method versus already having a reference to the array, don't worry. Method invocation in Java is (now) blazingly fast, and any time it actually matters to performance, the method invocation will be inlined by the compiler anyway.
So, seriously, don't worry about it. Concentrate on code readability instead, and use Enum so that the compiler will catch it if you ever try to use a constant value that your code wasn't expecting to handle.
If you're curious about why enum comparisons might be faster than string comparisons, here are the details:
It depends on whether the strings have been interned or not. For Enum objects, there is always only one instance of each enum value in the system, and so each call to Enum.equals() can be done very quickly, just as if you were using the == operator instead of the equals() method. In fact, with Enum objects, it's safe to use == instead of equals(), whereas that's not safe to do with strings.
For strings, if the strings have been interned, then the comparison is just as fast as with an Enum. However, if the strings have not been interned, then the String.equals() method actually needs to walk the list of characters in both strings until either one of the strings ends or it discovers a character that is different between the two strings.
But again, this likely doesn't matter, even in Swing rendering code that must execute quickly. :-)
@Ben Lings points out that Enum.values() must do a defensive copy, since arrays are mutable and it's possible you could replace a value in the array that is returned by Enum.values(). This means that you do have to consider the cost of that defensive copy. However, copying a single contiguous array is generally a fast operation, assuming that it is implemented "under the hood" using some kind of memory-copy call, rather than naively iterating over the elements in the array. So, I don't think that changes the final answer here.
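A quick way to observe the defensive copy (using the Protocol enum from an earlier answer):
// Each call clones the backing array, so two calls never return the same array:
System.out.println(Protocol.values() == Protocol.values());  // prints false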
As a rule of thumb: before thinking about optimizing, do you have any clue that this code could slow down your application?
Now, the facts.
Enums are, for a large part, syntactic sugar scattered across the compilation process. As a consequence, the values() method, defined for an enum class, returns a static collection (that is to say, one loaded at class initialization) with performance that can be considered roughly equivalent to an array's.
If you're concerned about performance, then measure.
From the code, I wouldn't expect any surprises, but 90% of all performance guesswork is wrong. If you want to be safe, consider moving the enums up into the calling code (i.e. public String getValue(ColumnValues value) { return value.toString(); }).
use this:
private enum ModelObject { NODE, SCENE, INSTANCE, URL_TO_FILE, URL_TO_MODEL,
ANIMATION_INTERPOLATION, ANIMATION_EVENT, ANIMATION_CLIP, SAMPLER, IMAGE_EMPTY,
BATCH, COMMAND, SHADER, PARAM, SKIN }
private static final ModelObject int2ModelObject[] = ModelObject.values();
If you're iterating through your enum values just to look for a specific value, you can statically map the enum values to integers. This pushes the performance impact on class load, and makes it easy/low impact to get specific enum values based on a mapped parameter.
import java.util.HashMap;
import java.util.Map;

public enum ExampleEnum {
    value1(1),
    value2(2),
    valueUndefined(Integer.MAX_VALUE);

    private final int enumValue;
    private static final Map<Integer, ExampleEnum> enumMap;

    ExampleEnum(int value) {
        enumValue = value;
    }

    static {
        enumMap = new HashMap<Integer, ExampleEnum>();
        for (ExampleEnum exampleEnum : ExampleEnum.values()) {
            enumMap.put(exampleEnum.enumValue, exampleEnum);
        }
    }

    public static ExampleEnum getExampleEnum(int value) {
        return enumMap.containsKey(value) ? enumMap.get(value) : valueUndefined;
    }
}
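Usage would then look like this:
ExampleEnum e = ExampleEnum.getExampleEnum(2);         // value2
ExampleEnum unknown = ExampleEnum.getExampleEnum(42);  // valueUndefined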
I think yes. And it is more convenient to use constants.
With a TreeMap it's trivial to provide a custom Comparator, thus overriding the semantics provided by Comparable objects added to the map. HashMaps however cannot be controlled in this manner; the functions providing hash values and equality checks cannot be 'side-loaded'.
I suspect it would be both easy and useful to design an interface and to retrofit this into HashMap (or a new class)? Something like this, except with better names:
interface Hasharator<T> {
int alternativeHashCode(T t);
boolean alternativeEquals(T t1, T t2);
}
class HasharatorMap<K, V> {
HasharatorMap(Hasharator<? super K> hasharator) { ... }
}
class HasharatorSet<T> {
HasharatorSet(Hasharator<? super T> hasharator) { ... }
}
The case insensitive Map problem gets a trivial solution:
new HasharatorMap(String.CASE_INSENSITIVE_EQUALITY);
Would this be doable, or can you see any fundamental problems with this approach?
Is the approach used in any existing (non-JRE) libs? (Tried google, no luck.)
EDIT: Nice workaround presented by hazzen, but I'm afraid this is the workaround I'm trying to avoid... ;)
EDIT: Changed title to no longer mention "Comparator"; I suspect this was a bit confusing.
EDIT: Accepted answer with relation to performance; would love a more specific answer!
EDIT: There is an implementation; see the accepted answer below.
EDIT: Rephrased the first sentence to indicate more clearly that it's the side-loading I'm after (and not ordering; ordering does not belong in HashMap).
.NET has this via IEqualityComparer (for a type which can compare two objects) and IEquatable (for a type which can compare itself to another instance).
In fact, I believe it was a mistake to define equality and hashcodes in java.lang.Object or System.Object at all. Equality in particular is hard to define in a way which makes sense with inheritance. I keep meaning to blog about this...
But yes, basically the idea is sound.
A bit late for you, but for future visitors, it might be worth knowing that commons-collections has an AbstractHashedMap (in 3.2.2 and with generics in 4.0). You can override these protected methods to achieve your desired behaviour:
protected int hash(Object key) { ... }
protected boolean isEqualKey(Object key1, Object key2) { ... }
protected boolean isEqualValue(Object value1, Object value2) { ... }
protected HashEntry createEntry(
HashEntry next, int hashCode, Object key, Object value) { ... }
An example implementation of such an alternative HashedMap is commons-collections' own IdentityMap (only up to 3.2.2 as Java has its own since 1.4).
This is not as powerful as providing an external "Hasharator" to a Map instance. You have to implement a new map class for every hashing strategy (composition vs. inheritance striking back...). But it's still good to know.
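As a rough, untested sketch against the commons-collections 4.0 API (the class name here is made up; null keys are not handled), a case-insensitive map might look like this. Note that commons-collections in fact ships its own CaseInsensitiveMap built on the same base class:
import org.apache.commons.collections4.map.AbstractHashedMap;

public class CaseInsensitiveHashedMap<V> extends AbstractHashedMap<String, V> {
    public CaseInsensitiveHashedMap() {
        super(DEFAULT_CAPACITY);  // protected constant inherited from AbstractHashedMap
    }

    @Override
    protected int hash(Object key) {
        // Keys are hashed by their lower-case form.
        return key.toString().toLowerCase().hashCode();
    }

    @Override
    protected boolean isEqualKey(Object key1, Object key2) {
        return ((String) key1).equalsIgnoreCase((String) key2);
    }
}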
HashingStrategy is the concept you're looking for. It's a strategy interface that allows you to define custom implementations of equals and hashCode.
public interface HashingStrategy<E>
{
int computeHashCode(E object);
boolean equals(E object1, E object2);
}
You can't use a HashingStrategy with the built in HashSet or HashMap. GS Collections includes a java.util.Set called UnifiedSetWithHashingStrategy and a java.util.Map called UnifiedMapWithHashingStrategy.
Let's look at an example.
public class Data
{
private final int id;
public Data(int id)
{
this.id = id;
}
public int getId()
{
return id;
}
// No equals or hashcode
}
Here's how you might set up a UnifiedSetWithHashingStrategy and use it.
java.util.Set<Data> set =
new UnifiedSetWithHashingStrategy<>(HashingStrategies.fromFunction(Data::getId));
Assert.assertTrue(set.add(new Data(1)));
// contains returns true even without hashcode and equals
Assert.assertTrue(set.contains(new Data(1)));
// Second call to add() doesn't do anything and returns false
Assert.assertFalse(set.add(new Data(1)));
Why not just use a Map? UnifiedSetWithHashingStrategy uses half the memory of a UnifiedMap, and one quarter the memory of a HashMap. And sometimes you don't have a convenient key and have to create a synthetic one, like a tuple. That can waste more memory.
How do we perform lookups? Remember that Sets have contains(), but not get(). UnifiedSetWithHashingStrategy implements Pool in addition to Set, so it also implements a form of get().
Here's a simple approach to handle case-insensitive Strings.
UnifiedSetWithHashingStrategy<String> set =
new UnifiedSetWithHashingStrategy<>(HashingStrategies.fromFunction(String::toLowerCase));
set.add("ABC");
Assert.assertTrue(set.contains("ABC"));
Assert.assertTrue(set.contains("abc"));
Assert.assertFalse(set.contains("def"));
Assert.assertEquals("ABC", set.get("aBc"));
This shows off the API, but it's not appropriate for production. The problem is that the HashingStrategy constantly delegates to String.toLowerCase() which creates a bunch of garbage Strings. Here's how you can create an efficient hashing strategy for case-insensitive Strings.
public static final HashingStrategy<String> CASE_INSENSITIVE =
        new HashingStrategy<String>()
        {
            @Override
            public int computeHashCode(String string)
            {
                int hashCode = 0;
                for (int i = 0; i < string.length(); i++)
                {
                    hashCode = 31 * hashCode + Character.toLowerCase(string.charAt(i));
                }
                return hashCode;
            }

            @Override
            public boolean equals(String string1, String string2)
            {
                return string1.equalsIgnoreCase(string2);
            }
        };
Note: I am a developer on GS collections.
Trove4j has the feature I'm after and they call it hashing strategies.
Their map implementation has different limitations and thus different prerequisites, so this does not automatically mean that an implementation for Java's "native" HashMap would be feasible.
Note: As noted in all other answers, HashMaps don't have an explicit ordering. They only recognize "equality". Getting an order out of a hash-based data structure is meaningless, as each object is turned into a hash - essentially a random number.
You can always write a hash function for a class (and often times must), as long as you do it carefully. This is a hard thing to do properly because hash-based data structures rely on a random, uniform distribution of hash values. In Effective Java, there is a large amount of text devoted to properly implementing a hash method with good behaviour.
With all that being said, if you just want your hashing to ignore the case of a String, you can write a wrapper class around String for this purpose and insert those in your data structure instead.
A simple implementation:
public class LowerStringWrapper {
    public LowerStringWrapper(String s) {
        this.s = s;
        this.lowerString = s.toLowerCase();
    }
    // getter methods omitted

    // Rely on the hashing of String, as we know it to be good.
    public int hashCode() { return lowerString.hashCode(); }

    // We overrode hashCode, so we MUST also override equals. It is required
    // that if a.equals(b), then a.hashCode() == b.hashCode(), so we must
    // preserve that invariant.
    public boolean equals(Object obj) {
        if (obj instanceof LowerStringWrapper) {
            return lowerString.equals(((LowerStringWrapper) obj).lowerString);
        } else {
            return lowerString.equals(obj);
        }
    }

    private String s;
    private String lowerString;
}
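Usage would then look something like:
Set<LowerStringWrapper> set = new HashSet<LowerStringWrapper>();
set.add(new LowerStringWrapper("ABC"));
System.out.println(set.contains(new LowerStringWrapper("abc")));  // true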
Good question; ask Josh Bloch. I submitted that concept as an RFE in Java 7, but it was dropped; I believe the reason was something performance related. I agree, though, that it should have been done.
I suspect this has not been done because it would prevent hashCode caching?
I attempted creating a generic Map solution where all keys are silently wrapped. It turned out that the wrapper would have to hold the wrapped object, the cached hashCode and a reference to the callback interface responsible for equality checks. This is obviously not as efficient as using a wrapper class, where you'd only have to cache the original key plus one more object (see hazzen's answer).
(I also bumped into a problem related to generics; the get-method accepts Object as input, so the callback interface responsible for hashing would have to perform an additional instanceof-check. Either that, or the map class would have to know the Class of its keys.)
This is an interesting idea, but it's absolutely horrendous for performance. The reason for this is quite fundamental to the idea of a hashtable: the ordering cannot be relied upon. Hashtables are very fast (constant time) because of the way in which they index elements in the table: by computing a pseudo-unique integer hash for that element and accessing that location in an array. It's literally computing a location in memory and directly storing the element.
This contrasts with a balanced binary search tree (TreeMap) which must start at the root and work its way down to the desired node every time a lookup is required. Wikipedia has some more in-depth analysis. To summarize, the efficiency of a tree map is dependent upon a consistent ordering, thus the order of the elements is predictable and sane. However, because of the performance hit imposed by the "traverse to your destination" approach, BSTs are only able to provide O(log(n)) performance. For large maps, this can be a significant performance hit.
It is possible to impose a consistent ordering on a hashtable, but to do so involves using techniques similar to LinkedHashMap and manually maintaining the ordering. Alternatively, two separate data structures can be maintained internally: a hashtable and a tree. The table can be used for lookups, while the tree can be used for iteration. The problem of course is this uses more than double the required memory. Also, insertions are only as fast as the tree: O(log(n)). Concurrent tricks can bring this down a bit, but that isn't a reliable performance optimization.
In short, your idea sounds really good, but if you actually tried to implement it, you would see that doing so would impose massive performance limitations. The final verdict is (and has been for decades): if you need performance, use a hashtable; if you need ordering and can live with degraded performance, use a balanced binary search tree. I'm afraid there's really no way to efficiently combine the two structures without losing some of the guarantees of one or the other.
There's such a feature in com.google.common.collect.CustomConcurrentHashMap; unfortunately, there's currently no public way to set the Equivalence (their Hasharator). Maybe they're not yet done with it, or maybe they don't consider the feature useful enough. Ask on the guava mailing list.
I wonder why it hasn't happened yet, as it was mentioned in this talk over two years ago.