Java Hashcode gives integer overflow - java

Background information:
In my project I'm applying Reinforcement Learning (RL) to the Mario domain. For my state representation I chose to use a hashtable with custom objects as keys. My custom objects are immutable and override .equals() and .hashCode() (both generated by the IntelliJ IDE).
This is the resulting .hashCode(); I've added the possible values in comments as extra information:
@Override
public int hashCode() {
    int result = (stuck ? 1 : 0);                // 2 possible values: 0, 1
    result = 31 * result + (facing ? 1 : 0);     // 2 possible values: 0, 1
    result = 31 * result + marioMode;            // 3 possible values: 0, 1, 2
    result = 31 * result + (onGround ? 1 : 0);   // 2 possible values: 0, 1
    result = 31 * result + (canJump ? 1 : 0);    // 2 possible values: 0, 1
    result = 31 * result + (wallNear ? 1 : 0);   // 2 possible values: 0, 1
    result = 31 * result + nearestEnemyX;        // 33 possible values: -16 to 16
    result = 31 * result + nearestEnemyY;        // 33 possible values: -16 to 16
    return result;
}
The Problem:
The problem here is that the result in the above code can exceed Integer.MAX_VALUE. I've read online that this doesn't have to be a problem, but in my case it is. This is partly due to the algorithm used, Q-Learning (an RL method), which depends on the correct Q-values being stored in the hashtable. Basically, I cannot have conflicts when retrieving values. When running my experiments I see that the results are not good at all, and I'm 95% certain the problem lies with the retrieval of the Q-values from the hashtable. (If needed I can expand on why I'm certain about this, but that requires some extra information on the project which isn't relevant to the question.)
The Question:
Is there a way to avoid the integer overflow, or am I overlooking something here? Or is there another way (perhaps another data structure) to retrieve the values reasonably fast given my custom key?
Remark:
After reading some comments I realise that my choice of a Hashtable maybe wasn't the best one, as I want unique keys that do not cause collisions. If I still want to use the Hashtable I will probably need a proper encoding.

You need a dedicated Key Field to guarantee uniqueness
.hashCode() isn't designed for what you are using it for
.hashCode() is designed to give good general results in bucketing algorithms, which can tolerate minor collisions. It is not designed to provide a unique key. The default algorithm is a trade-off between time, space, and minor collisions; it isn't supposed to guarantee uniqueness.
Perfect Hash
What you need to implement is a perfect hash or some other unique key based on the contents of the object. This is possible within the boundaries of an int, but I wouldn't use .hashCode() for this representation. I would use an explicit key field on the object.
Unique Hashing
One way is to use the SHA1 hashing that is built into the standard library, which has an extremely low chance of collisions for small data sets. You don't have a huge combinatorial explosion in the values you posted, so SHA1 will work.
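For example, a minimal sketch using the JDK's MessageDigest (the canonical String form of the state is an assumption, not something from the question):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: derive a SHA-1 digest from a canonical String form of the state.
public static byte[] stateDigest(String canonicalState) throws NoSuchAlgorithmException {
    MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
    return sha1.digest(canonicalState.getBytes(StandardCharsets.UTF_8));
}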
You should be able to calculate a way to generate a minimal perfect hash with the limited values that you are showing in your question.
A minimal perfect hash function is a perfect hash function that maps n
keys to n consecutive integers—usually [0..n−1] or [1..n]. A more
formal way of expressing this is: Let j and k be elements of some
finite set K. F is a minimal perfect hash function iff F(j) = F(k)
implies j = k (injectivity) and there exists an integer a such that the
range of F is a..a+|K|−1. It has been proved that a general purpose
minimal perfect hash scheme requires at least 1.44 bits/key.[2] The
best currently known minimal perfect hashing schemes use around 2.6
bits/key.[3]
A minimal perfect hash function F is order preserving if keys are
given in some order a1, a2, ..., an and for any keys aj and ak, j < k
implies F(aj) < F(ak).
A minimal perfect hash function F is monotone if it preserves the
lexicographical order of the keys. In this case, the function value is
just the position of each key in the sorted ordering of all of the
keys. If the keys to be hashed are themselves stored in a sorted
array, it is possible to store a small number of additional bits per
key in a data structure that can be used to compute hash values
quickly.[6]
Solution
Note: where it talks about a URL, it can be any byte[] representation of any String that you calculate from your object.
I usually override the toString() method to make it generate something unique, and then feed that into the UUID.nameUUIDFromBytes() method.
A Type 3 UUID can be just as useful as well: UUID.nameUUIDFromBytes()
Version 3 UUIDs use a scheme deriving a UUID via MD5 from a URL, a
fully qualified domain name, an object identifier, a distinguished
name (DN as used in Lightweight Directory Access Protocol), or on
names in unspecified namespaces. Version 3 UUIDs have the form
xxxxxxxx-xxxx-3xxx-yxxx-xxxxxxxxxxxx where x is any hexadecimal digit
and y is one of 8, 9, A, or B.
To determine the version 3 UUID of a given name, the UUID of the
namespace (e.g., 6ba7b810-9dad-11d1-80b4-00c04fd430c8 for a domain) is
transformed to a string of bytes corresponding to its hexadecimal
digits, concatenated with the input name, hashed with MD5 yielding 128
bits. Six bits are replaced by fixed values, four of these bits
indicate the version, 0011 for version 3. Finally, the fixed hash is
transformed back into the hexadecimal form with hyphens separating the
parts relevant in other UUID versions.
My preferred solution is a Type 5 UUID (the SHA-1 version of Type 3):
Version 5 UUIDs use a scheme with SHA-1 hashing; otherwise it is the
same idea as in version 3. RFC 4122 states that version 5 is preferred
over version 3 name based UUIDs, as MD5's security has been
compromised. Note that the 160 bit SHA-1 hash is truncated to 128 bits
to make the length work out. An erratum addresses the example in
appendix B of RFC 4122.
Key objects should be immutable
That way you can calculate toString() and .hashCode(), and generate a unique primary key, inside the constructor and set them once instead of calculating them over and over.
Here is a straw man example of an idiomatic immutable object and calculating a unique key based on the contents of the object.
package com.stackoverflow;

import javax.annotation.Nonnull;

import java.util.Date;
import java.util.UUID;

public class Q23633894
{
    public static class Person
    {
        private final String firstName;
        private final String lastName;
        private final Date birthday;
        private final UUID key;
        private final String strRep;

        public Person(@Nonnull final String firstName, @Nonnull final String lastName, @Nonnull final Date birthday)
        {
            this.firstName = firstName;
            this.lastName = lastName;
            this.birthday = birthday;
            this.strRep = String.format("%s%s%d", firstName, lastName, birthday.getTime());
            this.key = UUID.nameUUIDFromBytes(this.strRep.getBytes());
        }

        @Nonnull
        public UUID getKey()
        {
            return this.key;
        }

        // Other getters/setters omitted for brevity

        @Override
        @Nonnull
        public String toString()
        {
            return this.strRep;
        }

        @Override
        public boolean equals(final Object o)
        {
            if (this == o) { return true; }
            if (o == null || getClass() != o.getClass()) { return false; }
            final Person person = (Person) o;
            return key.equals(person.key);
        }

        @Override
        public int hashCode()
        {
            return key.hashCode();
        }
    }
}

For a unique representation of your object's state, you would need 19 bits in total. Thus, it is possible to represent it by a "perfect hash" integer value (which can have up to 32 bits):
@Override
public int hashCode() {
    int result = (stuck ? 1 : 0);            // needs 1 bit (2 possible values)
    result += (facing ? 1 : 0) << 1;         // needs 1 bit (2 possible values)
    result += marioMode << 2;                // needs 2 bits (3 possible values)
    result += (onGround ? 1 : 0) << 4;       // needs 1 bit (2 possible values)
    result += (canJump ? 1 : 0) << 5;        // needs 1 bit (2 possible values)
    result += (wallNear ? 1 : 0) << 6;       // needs 1 bit (2 possible values)
    result += (nearestEnemyX + 16) << 7;     // needs 6 bits (33 possible values)
    result += (nearestEnemyY + 16) << 13;    // needs 6 bits (33 possible values)
    return result;
}

Instead of using 31 as your magic number, you need to use the number of possible values for each field (normalised to start at 0):
@Override
public int hashCode() {
    int result = (stuck ? 1 : 0);                  // 2 possible values: 0, 1
    result = 2 * result + (facing ? 1 : 0);        // 2 possible values: 0, 1
    result = 3 * result + marioMode;               // 3 possible values: 0, 1, 2
    result = 2 * result + (onGround ? 1 : 0);      // 2 possible values: 0, 1
    result = 2 * result + (canJump ? 1 : 0);       // 2 possible values: 0, 1
    result = 2 * result + (wallNear ? 1 : 0);      // 2 possible values: 0, 1
    result = 33 * result + (16 + nearestEnemyX);   // 33 possible values: -16 to 16
    result = 33 * result + (16 + nearestEnemyY);   // 33 possible values: -16 to 16
    return result;
}
This will give you 104,544 possible hashCode() values (2 × 2 × 3 × 2 × 2 × 2 × 33 × 33). By the way, you can reverse this process to get the original values back from the code by using a series of / and % operations.
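Because each multiplier matches the number of possible values at that step, a mixed-radix decode recovers the fields in reverse order. A rough sketch:

// Reversing the mixed-radix hash above with % and / (sketch).
int code = hashCode();
int nearestEnemyY = code % 33 - 16;  code /= 33;
int nearestEnemyX = code % 33 - 16;  code /= 33;
boolean wallNear  = code % 2 == 1;   code /= 2;
boolean canJump   = code % 2 == 1;   code /= 2;
boolean onGround  = code % 2 == 1;   code /= 2;
int marioMode     = code % 3;        code /= 3;
boolean facing    = code % 2 == 1;   code /= 2;
boolean stuck     = code % 2 == 1;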

Try Guava's Objects.hashCode() method or JDK 7's Objects.hash(). They're better than writing your own. Don't repeat code yourself (or anyone else) when you can use an out-of-the-box solution:
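A minimal sketch using java.util.Objects.hash (JDK 7+), with the field names taken from the question; note this only satisfies the hashCode contract, it does not make the code collision-free:

import java.util.Objects;

@Override
public int hashCode() {
    // Objects.hash boxes the fields and delegates to Arrays.hashCode.
    return Objects.hash(stuck, facing, marioMode, onGround, canJump,
                        wallNear, nearestEnemyX, nearestEnemyY);
}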

Related

Bad Hash Function [duplicate]

The accepted answer in Best implementation for hashCode method gives a seemingly good method for finding Hash Codes. But I'm new to Hash Codes, so I don't quite know what to do.
For 1), does it matter what nonzero value I choose? Is 1 just as good as other numbers such as the prime 31?
For 2), do I add each value to c? What if I have two fields that are both a long, int, double, etc?
Did I interpret it right in this class:
public class MyClass {
    long a, b, c; // these are the only fields

    // some code and methods

    public int hashCode() {
        return 37 * (37 * ((int) (a ^ (a >>> 32))) + (int) (b ^ (b >>> 32)))
                + (int) (c ^ (c >>> 32));
    }
}
The value is not important; it can be whatever you want. Prime numbers will result in a better distribution of the hashCode values, therefore they are preferred.
You do not necessarily have to add them; you are free to implement whatever algorithm you want, as long as it fulfills the hashCode contract:
Whenever it is invoked on the same object more than once during an execution of a Java application, the hashCode method must consistently return the same integer, provided no information used in equals comparisons on the object is modified. This integer need not remain consistent from one execution of an application to another execution of the same application.
If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
It is not required that if two objects are unequal according to the equals(java.lang.Object) method, then calling the hashCode method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hash tables.
There are some algorithms which can be considered bad hashCode implementations, simple adding of the attribute values being one of them. The reason is that if you have a class with two fields, Integer a and Integer b, and your hashCode() just sums up these values, then the distribution of the hashCode values is highly dependent on the values your instances store. For example, if most values of a are between 0-10 and most values of b are between 0-10, then the hashCode values will be between 0-20. This implies that if you store instances of this class in e.g. a HashMap, numerous instances will be stored in the same bucket (because numerous instances with different a and b values but the same sum will be put into the same bucket). This will have a bad impact on the performance of operations on the map, because during a lookup all the elements in the bucket will be compared using equals().
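As a tiny illustration of that point (my own example, not from the answer): with plain addition the pairs (a=1, b=3) and (a=2, b=2) land in the same bucket, while the usual 31-multiplier keeps them apart:

int sumHash1   = 1 + 3;        // 4
int sumHash2   = 2 + 2;        // 4  -> same bucket
int primeHash1 = 31 * 1 + 3;   // 34
int primeHash2 = 31 * 2 + 2;   // 64 -> different buckets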
Regarding the algorithm, it looks fine; it is very similar to the one that Eclipse generates, except that Eclipse uses a different prime number, 31 instead of 37:
@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + (int) (a ^ (a >>> 32));
    result = prime * result + (int) (b ^ (b >>> 32));
    result = prime * result + (int) (c ^ (c >>> 32));
    return result;
}
A well-behaved hashCode method already exists for long values - don't reinvent the wheel:
int hashCode = Long.hashCode((a * 31 + b) * 31 + c);           // Java 8+
int hashCode = Long.valueOf((a * 31 + b) * 31 + c).hashCode(); // Java < 8
Multiplying by a prime number (usually 31 in JDK classes) and cumulating the sum is a common method of creating a "unique" number from several numbers.
The hashCode() method of Long keeps the result properly distributed across the int range, making the hash "well behaved" (basically pseudo random).

Collision strength of Java's Arrays.hashCode

How strong is the hashing mechanism used in the Arrays.hashCode methods against collisions? What is the probability of two different arrays (of, say, double) having the exact same hash value calculated with these methods?
Arrays.hashCode(double[]) is specified to return the equivalent value of a List containing Double values representing the same numeric value.
List.hashCode in turn is specified with a fairly simple algorithm:
int hashCode = 1;
for (E e : list)
    hashCode = 31 * hashCode + (e == null ? 0 : e.hashCode());
In general the multiplication with a prime number is a good practice for general-purpose hash functions, but it's far from a cryptographically strong hash function.
This means that while collisions are unlikely in the general (effectively random) case, they can usually be constructed quite easily if you can influence (or select) the hashCode of the items in the List.
As a constructed example consider these two statements:
System.out.println(Arrays.hashCode(new double[] {4.753E-321d}));
System.out.println(Arrays.hashCode(new double[] {4.9E-324d, 4.9E-324d}));
Both of these will output 993, despite being clearly different arrays.
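A rough sanity check of the arithmetic behind that collision (bit patterns assumed from the IEEE-754 layout: 4.753E-321 has raw bits 962, while 4.9E-324 is Double.MIN_VALUE with raw bits 1, so its hashCode is 1):

int single = 31 * 1 + 962;          // one-element array  -> 993
int pair   = 31 * (31 * 1 + 1) + 1; // two-element array  -> 993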
This is the implementation of Arrays.hashCode that you use
public static int hashCode(int a[]) {
    if (a == null)
        return 0;

    int result = 1;
    for (int element : a)
        result = 31 * result + element;

    return result;
}
If your values happen to be smaller than 31, they are treated like distinct digits in base 31, so each combination results in a different number (if we ignore overflows for now). Let's call those pure hashes.
Now of course 31^11 is way larger than the number of integers in Java, so we will get tons of overflows. But since the powers of 31 and the maximum integer are "very different", you don't get an almost random distribution, but a very regular uniform distribution.
Let's consider a smaller example. I assume you have only 2 elements in your array, each in the range 0 to 4. I try to create a "hashCode" between 0 and 37 by taking the "pure hash" modulo 38. The result is that I get streaks of 5 integers with small gaps in between, and not a single collision.
val hashes = for {
  i <- 0 to 4
  j <- 0 to 4
} yield (i * 31 + j) % 38

println(hashes.size)       // prints 25
println(hashes.toSet.size) // prints 25
To verify if this is what happens to your numbers you might create a graph as follows: for each hash take the first 16 bits for x and the second 16 bits for y, and color that dot black. I bet you will see an extremely regular pattern.

Overriding hashCode() in Java

I created a class "Book":
public class Book {

    public static int idCount = 1;

    private int id;
    private String title;
    private String author;
    private String publisher;
    private int yearOfPublication;
    private int numOfPages;
    private Cover cover;
    ...
}
And then I need to override the hashCode() and equals() methods.
@Override
public int hashCode() {
    int result = id; // !!!
    result = 31 * result + (title != null ? title.hashCode() : 0);
    result = 31 * result + (author != null ? author.hashCode() : 0);
    result = 31 * result + (publisher != null ? publisher.hashCode() : 0);
    result = 31 * result + yearOfPublication;
    result = 31 * result + numOfPages;
    result = 31 * result + (cover != null ? cover.hashCode() : 0);
    return result;
}
There's no problem with equals(). I'm just wondering about one thing in the hashCode() method.
Note: IntelliJ IDEA generated that hashCode() method.
So, is it OK to set the result variable to id, or should I use some prime number?
What is the better choice here?
Thanks!
Note that only the initial value of the result is set to id, not the final one. The final value is calculated by combining that initial value with hash codes of other parts of the object, multiplied by a power of a small prime number (i.e. 31). Using id rather than an arbitrary prime is definitely right in this context.
In general, there is no advantage to hash code being prime (it's the number of hash buckets that needs to be prime). Using an int as its own hash code (in your case, that's id and numOfPages) is a valid approach.
It helps to know what the hashCode is used for. It's supposed to help you map a theoretically infinite set of objects into a small number of "bins", with each bin having a number, and each object saying which bin it wants to go in based on its hashCode. The question is not whether it's okay to do one thing or another, but whether what you want to do matches what the hashCode function is for.
As per http://docs.oracle.com/javase/6/docs/api/java/lang/Object.html#hashCode(), it's not about the number you return, it's about how it behaves for different objects of the same class.
If the object doesn't change, the hashCode must be the same value every time you call the hashCode() function.
Two objects that are equal according to .equals, must have the same hashCode.
Two objects that are not equal may have the same hashCode. (if this wasn't the case, there would be no point in using the hashCode at all, because every object already has a unique object pointer)
If you're reimplementing the hashCode function, the most important thing is to either rely on a tool to generate it for you, or to use code you understand that obeys those rules. The basic Java hashCode function uses an incredibly well-researched, seemingly simple bit of code for String hashing, so the code you see is based on turning everything into Strings and falling back to that.
If you don't know why that works, don't touch it. Just rely on it working and move on. That 31 is ridiculously important and ensures an even hashing distribution. See Why does Java's hashCode() in String use 31 as a multiplier? for the why on that one.
However, this might also be way more than you need. You could use id, but then you're basically negating the reason to use a hashCode (because now every object will want to be in a bin on its own, turning any hashed collection into a flat array. Kind of silly).
If you know the distribution of your id values, there are far easier hashCodes to come up with. Say you know they are always between 0 and Integer.MAX_VALUE, and you know there are never any gaps between ids; you could simply generate a hashCode like
final int modulus = Integer.MAX_VALUE / 255;

int hashCode() {
    return this.id % modulus;
}
Now you have a hashCode optimised for 255 bins, fulfilling the necessary requirements for an acceptable hashCode function.
Note: in my answer I am assuming that you know how a hash code is meant to be used. The following just talks about any potential optimization that using a non-zero constant for the initial value of result may produce.
If id is rarely 0 then it's fine to use it. However, if it's 0 frequently you should use some constant instead (just using 1 should be fine). The reason you want it to be non-zero is so that the 31 * result part always adds some value to the hash. That way, if, say, object A has all fields null or 0 except for yearOfPublication = 1, and object B has all fields null or 0 except for numOfPages = 1, the hash codes will be:
A.hashCode() => initialValue * 31 ^ 4 + 1
B.hashCode() => initialValue * 31 ^ 5 + 1
As you can see, if initialValue is 0 then both hash codes are the same; however, if it's not 0 then they will be different. It is preferable for them to be different so as to reduce collisions in data structures that use the hash code, like HashMap.
That said, in your example of the Book class it is likely that id will never be 0. In fact, if id uniquely identifies the Book then you can have the hashCode() method just return the id.
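If id really is unique and only equal books share an id, a minimal sketch of that shortcut could look like this (my own sketch, not code from the question):

@Override
public int hashCode() {
    return id;                 // id alone identifies the book
}

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    return id == ((Book) o).id;
}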

One-byte bool. Why?

In C++, why does a bool require one byte to store true or false when just one bit is enough for that, like 0 for false and 1 for true? (Why does Java also require one byte?)
Secondly, how much safer is it to use the following?
struct Bool {
    bool trueOrFalse : 1;
};
Thirdly, even if it is safe, is the above bit-field technique really going to help? I have heard that we save space there, but the compiler-generated code to access the bits is bigger and slower than the code generated to access primitives.
Why does a bool require one byte to store true or false where just one bit is enough
Because every object in C++ must be individually addressable* (that is, you must be able to have a pointer to it). You cannot address an individual bit (at least not on conventional hardware).
How much safer is it to use the following?
It's "safe", but it doesn't achieve much.
is the above field technique really going to help?
No, for the same reasons as above ;)
but still compiler generated code to access them is bigger and slower than the code generated to access the primitives.
Yes, this is true. On most platforms, this requires accessing the containing byte (or int or whatever), and then performing bit-shifts and bit-mask operations to access the relevant bit.
If you're really concerned about memory usage, you can use a std::bitset in C++ or a BitSet in Java, which pack bits.
* With a few exceptions.
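A minimal Java sketch of the BitSet suggestion (flags are packed roughly one bit each plus the object's fixed overhead, unlike a boolean[] which uses a byte per entry):

import java.util.BitSet;

public class Flags {
    public static void main(String[] args) {
        BitSet flags = new BitSet(64);
        flags.set(3);                      // turn flag 3 on
        System.out.println(flags.get(3));  // true
        flags.clear(3);                    // turn it off again
        System.out.println(flags.get(3));  // false
    }
}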
Using a single bit is much slower and much more complicated to allocate. In C/C++ there is no way to get the address of one bit, so you wouldn't be able to do &trueOrFalse on a bit.
Java has a BitSet and an EnumSet which both use bitmaps. If you have a very small number of flags it may not make much difference; e.g. objects have to be at least byte aligned, and in HotSpot they are 8-byte aligned (in C++ a new object can be 8- to 16-byte aligned). This means saving a few bits might not save any space.
In Java at least, bits are not faster unless they fit in cache better.
public static void main(String... ignored) {
    BitSet bits = new BitSet(4000);
    byte[] bytes = new byte[4000];
    short[] shorts = new short[4000];
    int[] ints = new int[4000];
    for (int i = 0; i < 100; i++) {
        long bitTime = timeFlip(bits) + timeFlip(bits);
        long bytesTime = timeFlip(bytes) + timeFlip(bytes);
        long shortsTime = timeFlip(shorts) + timeFlip(shorts);
        long intsTime = timeFlip(ints) + timeFlip(ints);
        System.out.printf("Flip time bits %.1f ns, bytes %.1f, shorts %.1f, ints %.1f%n",
                bitTime / 2.0 / bits.size(), bytesTime / 2.0 / bytes.length,
                shortsTime / 2.0 / shorts.length, intsTime / 2.0 / ints.length);
    }
}

private static long timeFlip(BitSet bits) {
    long start = System.nanoTime();
    for (int i = 0, len = bits.size(); i < len; i++)
        bits.flip(i);
    return System.nanoTime() - start;
}

private static long timeFlip(short[] shorts) {
    long start = System.nanoTime();
    for (int i = 0, len = shorts.length; i < len; i++)
        shorts[i] ^= 1;
    return System.nanoTime() - start;
}

private static long timeFlip(byte[] bytes) {
    long start = System.nanoTime();
    for (int i = 0, len = bytes.length; i < len; i++)
        bytes[i] ^= 1;
    return System.nanoTime() - start;
}

private static long timeFlip(int[] ints) {
    long start = System.nanoTime();
    for (int i = 0, len = ints.length; i < len; i++)
        ints[i] ^= 1;
    return System.nanoTime() - start;
}
prints
Flip time bits 5.0 ns, bytes 0.6, shorts 0.6, ints 0.6
for sizes of 40000 and 400K
Flip time bits 6.2 ns, bytes 0.7, shorts 0.8, ints 1.1
for 4M
Flip time bits 4.1 ns, bytes 0.5, shorts 1.0, ints 2.3
and 40M
Flip time bits 6.2 ns, bytes 0.7, shorts 1.1, ints 2.4
If you want to store only one bit of information, there is nothing more compact than a char, which is the smallest addressable memory unit in C/C++. (Depending on the implementation, a bool might have the same size as a char but it is allowed to be bigger.)
A char is guaranteed by the C standard to hold at least 8 bits, however, it can also consist of more. The exact number is available via the CHAR_BIT macro defined in limits.h (in C) or climits (C++). Today, it is most common that CHAR_BIT == 8 but you cannot rely on it (see here). It is guaranteed to be 8, however, on POSIX compliant systems and on Windows.
Though it is not possible to reduce the memory footprint for a single flag, it is of course possible to combine multiple flags. Besides doing all bit operations manually, there are some alternatives:
If you know the number of bits at compile time
bitfields (as in your question). But beware, the ordering of fields is not guaranteed, which may result in portability issues.
std::bitset
If you know the size only at runtime
boost::dynamic_bitset
If you have to deal with large bitvectors, take a look at the BitMagic library. It supports compression and is heavily tuned.
As others have pointed out already, saving a few bits is not always a good idea. Possible drawbacks are:
Less readable code
Reduced execution speed because of the extra extraction code.
For the same reason, increases in code size, which may outweigh the savings in data consumption.
Hidden synchronization issues in multithreaded programs. For example, flipping two different bits by two different threads may result in a race condition. In contrast, it is always safe for two threads to modify two different objects of primitive types (e.g., char).
Typically, it makes sense when you are dealing with huge data because then you will benefit from less pressure on memory and cache.
Why don't you just store the state in a byte? I haven't actually tested the code below, but it should give you an idea. You can even use a short or an int for 16 or 32 states. I believe I have a working Java example as well; I'll post it when I find it.
__int8 state = 0x0;

bool getState(int bit)
{
    return (state & (1 << bit)) != 0x0;
}

void setAllOnline(bool online)
{
    state = -online;
}

void reverseState(int bit)
{
    state ^= (1 << bit);
}
Alright, here's the Java version. I've stored the state in an int since, if I remember correctly, even using a byte would take up 4 bytes anyway. And this obviously isn't being used as an array.
public class State
{
    private int STATE;

    public State() {
        STATE = 0x0;
    }

    public State(int previous) {
        STATE = previous;
    }

    /*
     * @Usage - Used along side the #setMultiple(int, boolean);
     * @Returns the value of a single bit.
     */
    public static int valueOf(int bit)
    {
        return 1 << bit;
    }

    /*
     * @Usage - Used along side the #setMultiple(int, boolean);
     * @Returns the value of an array of bits.
     */
    public static int valueOf(int... bits)
    {
        int value = 0x0;
        for (int bit : bits)
            value |= (1 << bit);
        return value;
    }

    /*
     * @Returns the value currently stored or the values of all 32 bits.
     */
    public int getValue()
    {
        return STATE;
    }

    /*
     * @Usage - Turns all bits online or offline.
     * @Return - <TRUE> if all states are online. Otherwise <FALSE>.
     */
    public boolean setAll(boolean online)
    {
        STATE = online ? -1 : 0;
        return online;
    }

    /*
     * @Usage - sets multiple bits at once to a specific state.
     * @Warning - DO NOT SET BITS TO THIS! Use setMultiple(State.valueOf(#), boolean);
     * @Return - <TRUE> if states were set to online. Otherwise <FALSE>.
     */
    public boolean setMultiple(int value, boolean online)
    {
        STATE |= value;
        if (!online)
            STATE ^= value;
        return online;
    }

    /*
     * @Usage - sets a single bit to a specific state.
     * @Return - <TRUE> if this bit was set to online. Otherwise <FALSE>.
     */
    public boolean set(int bit, boolean online)
    {
        STATE |= (1 << bit);
        if (!online)
            STATE ^= (1 << bit);
        return online;
    }

    /*
     * @Return = the new current state of this bit.
     * @Usage = Good for situations that are reversed.
     */
    public boolean reverse(int bit)
    {
        // mask with the bit so the other bits don't affect the result
        return ((STATE ^= (1 << bit)) & (1 << bit)) != 0;
    }

    /*
     * @Return = <TRUE> if this bit is online. Otherwise <FALSE>.
     */
    public boolean online(int bit)
    {
        int value = 1 << bit;
        return (STATE & value) == value;
    }

    /*
     * @Return = a String containing full debug information.
     */
    @Override
    public String toString()
    {
        StringBuilder sb = new StringBuilder();
        sb.append("TOTAL VALUE: ");
        sb.append(STATE);
        for (int i = 0; i < 0x20; i++)
        {
            sb.append("\nState(");
            sb.append(i);
            sb.append("): ");
            sb.append(online(i));
            sb.append(", ValueOf: ");
            sb.append(State.valueOf(i));
        }
        return sb.toString();
    }
}
Also, I should point out that you really shouldn't use a special class for this, but just keep the variable inside the class that will most likely be using it. If you plan to have hundreds or even thousands of boolean values, consider packing them into an array, as below.
E.g. the below example.
boolean[] states = new boolean[4096];
can be converted into the below.
int[] states = new int[128];
Now you're probably wondering how you'll access index 4095 from a 128-element array. To simplify what's happening: 4095 is shifted 5 bits to the right, which is the same as dividing by 32. So 4095 / 32 = 127 (rounded down), which puts us at index 127 of the array. Then we perform 4095 & 31, which maps it to a value between 0 and 31. This only works with powers of two minus 1, e.g. 0, 1, 3, 7, 15, 31, 63, 127, 255, 511, 1023, etc...
So now we can access the bit at that position. As you can see this is very compact and beats having 4096 booleans in a file :) This will also provide a much faster read/write to a binary file. I have no idea what this BitSet stuff is, but it looks like complete garbage to me; since byte, short, int and long are already in their bit forms, you might as well use them as-is rather than creating some complex class to access the individual bits from memory, which is what I could grasp from reading a few posts.
boolean getState(int index)
{
    return (states[index >> 5] & (1 << (index & 0x1F))) != 0x0;
}
Further information...
Basically if the above was a bit confusing here's a simplified version of what's happening.
The types "byte", "short", "int", "long" all are data types which have different ranges.
You can view this link: http://msdn.microsoft.com/en-us/library/s3f49ktz(v=vs.80).aspx
To see the data ranges of each.
So a byte is equal to 8 bits. So an int which is 4 bytes will be 32 bits.
Now there isn't any easy way to raise some value to the Nth power. However, thanks to bit shifting we can simulate it somewhat: performing 1 << N equates to 1 * 2^N, and if we did 2 << N we'd be doing 2 * 2^N. So to compute powers of two, always do "1 << N".
Now we know that an int has 32 bits, so we can use each bit and simply index them.
To keep things simple, think of the "&" operator as a way to check if a value contains the bits of another value. Let's say we had the value 31. To get to 31 we must add bits 0 through 4, which are 1, 2, 4, 8, and 16; these all add up to 31. Now when we perform 31 & 16, this returns 16, because bit 4 (2^4 = 16) is present in that value. Now let's say we performed 31 & 20, which checks whether bits 2 and 4 are present in the value. This returns 20, since both bits 2 and 4 are present: 2^2 = 4 plus 2^4 = 16 gives 20. Now let's say we did 31 & 48. This checks for bits 4 and 5. Well, we don't have bit 5 in 31, so this will only return 16; it will not return 0. So when performing multiple checks you must check that the result actually equals the value you checked for, instead of checking whether it equals 0.
The below will verify if an individual bit is at 0 or 1. 0 being false, and 1 being true.
bool getState(int bit)
{
    return (state & (1 << bit)) != 0x0;
}
The below is an example of checking whether two values contain certain bits. Think of it like each bit is represented as 2^BIT.
I'll quickly go over some of the operators. We've just explained the "&" operator briefly; now for the "|" operator.
When performing the following
int value = 31;
value |= 16;
value |= 16;
value |= 16;
value |= 16;
The value will still be 31. This is because bit 4 or 2^4=16 is already turned on or set to 1. So performing "|" returns that value with that bit turned on. If it's already turned on no changes are made. We utilize "|=" to actually set the variable to that returned value.
Instead of writing "value = value | 16;", we just write "value |= 16;".
Now let's look a bit further into how the "&" and "|" can be utilized.
/*
 * This contains bits 0,1,2,3,4,8,9 turned on.
 */
const int CHECK = 1 | 2 | 4 | 8 | 16 | 256 | 512;

/*
 * This is some value where we add bits 0 through 9, but we skip 0 and 8.
 */
int value = 2 | 4 | 8 | 16 | 32 | 64 | 128 | 512;
So when we perform the below code.
int return_code = value & CHECK;
The return code will be 2 + 4 + 8 + 16 + 512 = 542
So we were checking for 799, but we received 542. This is because bits 0 and 8 are offline in value; they equal 256 + 1 = 257, and 799 - 257 = 542.
The above is a great way to check things like input state: say we were making a video game and wanted to check which of several buttons were pressed. We could check all of those bits with a single check, and it would be many times more efficient than performing a boolean check on every single state.
Now let's say we have a boolean value which is constantly being toggled.
Normally you'd do something like
bool state = false;
state = !state;
Well this can be done with bits as well utilizing the "^" operator.
Just as we performed "1 << N" to select the value of that bit, we can do the same with the reversal. Just like we showed how "|=" stores the result, we do the same with "^=". What this does is: if that bit is on, we turn it off; if it's off, we turn it on.
void reverseState(int bit)
{
    state ^= (1 << bit);
}
You can even have it return the current state. If you want it to return the previous state, just swap "!=" to "==". What this does is perform the reversal and then check the current state.
bool reverseAndGet(int bit)
{
    return ((state ^= (1 << bit)) & (1 << bit)) != 0x0;
}
Storing multiple values that need more than a single bit (i.e. non-boolean values) in an int can also be done. Let's say we normally write out our coordinate position like the below.
int posX = 0;
int posY = 0;
int posZ = 0;
Now let's say these never go past 1023, so 0 through 1023 is the maximum range for all of them. I chose 1023 for another reason: as previously mentioned, you can use the "&" operator as a way to force a value into the range 0 to 2^N - 1. So let's say your range was 0 through 1023. We can perform "value & 1023" and it will always be a value between 0 and 1023, without any index parameter checks. Keep in mind, as previously mentioned, this only works with powers of two minus one: 2^10 - 1 = 1023.
E.g. no more if (value >= 0 && value <= 1023).
So 2^10 = 1024, which requires 10 bits in order to hold a number between 0 and 1023.
So 10 × 3 = 30, which is still less than or equal to 32, and is sufficient for holding all these values in an int.
So we can perform the following. To see how many bits we used, we add the shifts 0 + 10 + 20. The reason I put the 0 there is to show you visually that 2^0 = 1, so x is just multiplied by 1. The reason we need y << 10 is that x uses up 10 bits (0 through 1023), so we need to multiply y by 1024 for each combination to be unique. Then z needs to be multiplied by 2^20, which is 1,048,576.
int position = (x << 0) | (y << 10) | (z << 20);
This makes comparisons fast.
We can now do
return this.position == position;
as opposed to
return this.x == x && this.y == y && this.z == z;
Now what if we wanted the actual positions of each?
For the x we simply do the following.
int getX()
{
    return position & 1023;
}
Then for the y we need to perform a right bit shift and then AND it.
int getY()
{
    return (position >> 10) & 1023;
}
As you may guess, the z is the same as the y, but instead of 10 we shift by 20.
int getZ()
{
    return (position >> 20) & 1023;
}
I hope whoever views this will find it worthwhile information :).
If you really want to use 1 bit, you can use a char to store 8 booleans, and bitshift to get the value of the one you want. I doubt it will be faster, and it's probably going to give you a lot of headaches working that way, but technically it's possible.
On a side note, an attempt like this could prove useful for systems that don't have a lot of memory available for variables but do have more processing power than you need. I highly doubt you will ever need it though.

How to convert an 18 Character String into a Unique ID?

I have an 18 Character String that I need to convert into a unique long (in Java).
A sample String would be: AAA2aNAAAAAAADnAAA
My String is actually an Oracle ROWID, so it can be broken down if needs be, see:
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/datatype.htm#CNCPT713
The long number generated (1) must be unique, as no two results can point to the same database row, and (2) must be reversible, so I can get the ROWID String back from the long.
Any suggestions on an algorithm to use would be welcome.
Oracle forum question on this from a few years ago : http://forums.oracle.com/forums/thread.jspa?messageID=1059740
Ro
You can't, with those requirements.
18 characters of (assumed) upper and lower case letters has 56^18, or about 2.93348915 × 10^31, combinations. This is (way) more than the approximately 1.84467441 × 10^19 combinations available in 64 bits.
UPDATE: I had the combinatorics wrong, heh. Same result though.
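A quick check of that counting argument with BigInteger (my own sketch, keeping the answer's figure of 56 symbols per character):

import java.math.BigInteger;

public class Combinations {
    public static void main(String[] args) {
        BigInteger keys  = BigInteger.valueOf(56).pow(18); // ~2.93 * 10^31 possible strings
        BigInteger longs = BigInteger.valueOf(2).pow(64);  // ~1.84 * 10^19 long values
        System.out.println(keys.compareTo(longs) > 0);     // true: a long cannot hold them uniquely
    }
}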
Just create a map (dictionary / hashtable) that maps ROWID strings to an (incremented) long. If you keep two such dictionaries and wrap them up in a nice class, you will have a bidirectional lookup between the strings and the long IDs.
Pseudocode:
class BidirectionalLookup:
    dict<string, long> stringToLong
    dict<long, string> longToString
    long lastId

    addString(string): long
        newId = atomic(++lastId)
        stringToLong[string] = newId
        longToString[newId] = string
        return newId

    lookUp(string): long
        return stringToLong[string]

    lookUp(long): string
        return longToString[long]
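A minimal Java sketch of that pseudocode (class and method names are illustrative, not from the original answer):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class RowIdRegistry {
    private final Map<String, Long> stringToLong = new ConcurrentHashMap<>();
    private final Map<Long, String> longToString = new ConcurrentHashMap<>();
    private final AtomicLong lastId = new AtomicLong();

    public long add(String rowId) {
        // Assign an incrementing id the first time a ROWID string is seen.
        return stringToLong.computeIfAbsent(rowId, key -> {
            long newId = lastId.incrementAndGet();
            longToString.put(newId, key);
            return newId;
        });
    }

    public Long lookUp(String rowId) { return stringToLong.get(rowId); }

    public String lookUp(long id)    { return longToString.get(id); }
}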
Your String of 18 characters representing a base 64 encoding represents a total of 108 bits of information, which is almost twice that of long's 64. We have a bit of a problem here if we want to represent every possible key and have the representation be reversible.
The string can be broken down into 4 numbers easily enough. Each of those 4 numbers represents something - a block number, an offset in that block, whatever. If you manage to establish upper limits on the underlying quantities such that you know larger numbers will not occur (i.e. if you find a way to identify at least 44 of those bits that will always be 0), then you can map the rest onto a long, reversibly.
Another possibility would be to relax the requirement that the equivalent be a long. How about a BigInteger? That would make it easy.
I'm assuming that's a case-sensitive alpha-numeric string, and so drawn from the set [a-zA-Z0-9]*
In that case you have
26 + 26 + 10 = 62
possible values for each character.
62 < 64 = 2^6
In other words you need (at least) 6 bits to store each of the 18 characters of the key.
6 * 18 = 108 bits
to store the entire string uniquely.
108 bits = (108 / 8) = 13.5 bytes.
Therefore as long as your data type can store at least 13.5 bytes then you can fairly simply define a mapping:
Map from raw ASCII for each character to a representation using only 6 bits
Concatenate all 18 reduced representations into a single 14-byte value
Cast this to your final data value
Obviously Java has nothing more than an 8 byte long. So if you have to use a long then it is NOT possible to uniquely map the strings, unless there is something else which reduces the space of valid input strings.
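A sketch of that 6-bit packing idea using BigInteger, since a long is too small (the alphabet and its ordering here are assumptions, not from the original answer):

import java.math.BigInteger;

public class KeyPacker {
    private static final String ALPHABET =
            "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

    public static BigInteger pack(String key) {            // 18 chars -> 108 bits
        BigInteger result = BigInteger.ZERO;
        for (char c : key.toCharArray()) {
            int code = ALPHABET.indexOf(c);                 // 6-bit code, 0..61
            result = result.shiftLeft(6).or(BigInteger.valueOf(code));
        }
        return result;
    }

    public static String unpack(BigInteger packed, int length) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(packed.and(BigInteger.valueOf(63)).intValue()));
            packed = packed.shiftRight(6);
        }
        return sb.reverse().toString();                     // reversible mapping
    }
}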
Theoretically, you can't represent ROWID in a long (8 bytes). However, depending on the size of your databases (the whole server, not only your table), you might be able to encode it into a long.
Here is the layout of ROWID,
OOOOOO-FFF-BBBBBB-RRR
Where O is ObjectID, F is FileNo, B is Block and R is Row Number. All of them are Base64-encoded. As you can see, O and B can have 36 bits each, and F and R can have 18.
If your database is not huge, you can use 2 bytes for each part. Basically, your ObjectId and block number will be limited to 64K. Our DBA believes our database has to be several orders of magnitude bigger for us to get close to these limits.
I would suggest you find max of each part in your database and see if you are close. I wouldn't use long if they are anywhere near the limit.
Found a way to extract the ROWID in a different manner from the database....
SQL> select DBMS_ROWID.ROWID_TO_RESTRICTED( ROWID, 1 ) FROM MYTABLE;
0000EDF4.0001.0000
0000EDF4.0002.0000
0000EDF4.0004.0000
0000EDF4.0005.0000
0000EDF4.0007.0000
0000EDF5.0000.0000
0000EDF5.0002.0000
0000EDF5.0003.0000
Then convert it to a number like so :
final String hexNum = rowid.replaceAll( "\\.", "" );
final long lowerValue = Long.parseLong( hexNum.substring( 1 ), 16 );
long upperNibble = Integer.parseInt( hexNum.substring( 0, 1 ), 16 );
if ( upperNibble >= 8 ) {
    // Catch case where ROWID > 8F000000.0000.0000
    upperNibble -= 8;
    return -( 9223372036854775807L - ( lowerValue - 1 + ( upperNibble << 60 ) ) );
} else {
    return ( lowerValue + ( upperNibble << 60 ) );
}
Then reverse that number back to String format like so:
String s = Long.toHexString( featureID );
// Place 0's at the start of the String, making a String of size 16
s = StringUtil.padString( s, 16, '0', true );
StringBuffer sb = new StringBuffer( s );
sb.insert( 8, '.' );
sb.insert( 13, '.' );
return sb.toString();
Cheers for all the responses.
This sounds ... icky, but I don't know your context so trying not to pass judgement. 8)
Have you considered converting the characters in the string into their ASCII equivalents?
ADDENDUM: Of course this requires truncating out semi-superfluous characters to fit, which sounds like an option you may have, based on the comments.
