We are using striped locks for one of our implementations. We take a read lock on some final constant and a write lock on a key. We noticed that we ran into a deadlock because the hash codes of two different keys turned out to map to the same stripe, so the second acquisition was effectively an upgrade from a read lock to a write lock on the same lock, and we deadlocked. Below is the code used by the Striped library to map a key to a stripe index. What is the best way to handle this deadlock?
Striped code:

import java.math.RoundingMode;
import com.google.common.math.IntMath;

static int smear(int hashCode)
{
    hashCode ^= (hashCode >>> 20) ^ (hashCode >>> 12);
    return hashCode ^ (hashCode >>> 7) ^ (hashCode >>> 4);
}

static final int indexFor(Object key)
{
    int hash = smear(key.hashCode());
    int mask = ceilToPowerOfTwo(2003) - 1;
    return hash & mask;
}

static int ceilToPowerOfTwo(int x)
{
    return 1 << IntMath.log2(x, RoundingMode.CEILING);
}

public static void main(String[] args)
{
    String publicKey = "$public";
    int hash = indexFor(publicKey);
    for (int i = 0; i < 1000; i++)
    {
        String key = "key" + i;
        if (indexFor(key) == hash)
        {
            System.out.println("Hash of " + key + " is same as hash of public");
        }
    }
}
Our logic:
1. Take the read lock on publicKey.
2. Take the write lock on key.
3. Release the write lock.
4. Release the read lock.
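For what it's worth, here is a minimal sketch of one way to avoid the self-deadlock, assuming the locks come from Guava's Striped.readWriteLock (the stripe count, method, and variable names below are hypothetical): compare the two stripes up front, and when both keys map to the same stripe, take only the write lock, since the write lock already excludes all readers.

import com.google.common.util.concurrent.Striped;
import java.util.concurrent.locks.ReadWriteLock;

public class StripeSafeExample
{
    private final Striped<ReadWriteLock> striped = Striped.readWriteLock(2003);

    public void update(String publicKey, String key)
    {
        ReadWriteLock publicLock = striped.get(publicKey);
        ReadWriteLock keyLock = striped.get(key);
        if (publicLock == keyLock)
        {
            // Both keys landed on the same stripe: a read lock followed by a
            // write lock would self-deadlock, so take only the write lock.
            keyLock.writeLock().lock();
            try
            {
                // ... critical section ...
            }
            finally
            {
                keyLock.writeLock().unlock();
            }
        }
        else
        {
            publicLock.readLock().lock();
            keyLock.writeLock().lock();
            try
            {
                // ... critical section ...
            }
            finally
            {
                keyLock.writeLock().unlock();
                publicLock.readLock().unlock();
            }
        }
    }
}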
Each entity in my game has a Tag object, and there needs to be a way to add and remove collisions between Tags.
This is my code:

public final class CollisionMatrix {
    // TODO: Longs have at most 64 bits, so the current implementation fails
    // when there are more than 64 tags.
    private Map<Integer, Long> matrix = new HashMap<Integer, Long>();

    public CollisionMatrix add(Tag tag1, Tag tag2) {
        int id1 = tag1.id;
        int id2 = tag2.id;
        matrix.put(id1, matrix.getOrDefault(id1, 0L) | (1L << id2));
        matrix.put(id2, matrix.getOrDefault(id2, 0L) | (1L << id1));
        return this;
    }

    public CollisionMatrix remove(Tag tag1, Tag tag2) {
        int id1 = tag1.id;
        int id2 = tag2.id;
        matrix.put(id1, matrix.getOrDefault(id1, 0L) & ~(1L << id2));
        matrix.put(id2, matrix.getOrDefault(id2, 0L) & ~(1L << id1));
        return this;
    }

    public boolean collidesWith(Tag tag1, Tag tag2) {
        return 0 != (matrix.getOrDefault(tag1.id, 0L) & (1L << tag2.id));
    }
}
This is a very ugly implementation of what I'm trying to achieve, but it works (if the number of tags is no more than 64).
I'm looking for a solution that is efficient and not an anti-pattern.
Tag could have a list of tags that indicate collision:

public void add(Tag tag1, Tag tag2) {
    tag1.collisions.add(tag2);
    tag2.collisions.add(tag1);
}

public void remove(Tag tag1, Tag tag2) {
    if (collidesWith(tag1, tag2)) {
        tag1.collisions.remove(tag2);
        tag2.collisions.remove(tag1);
    }
}

public boolean collidesWith(Tag tag1, Tag tag2) {
    return tag1.collisions.contains(tag2) && tag2.collisions.contains(tag1);
}
I wonder, is it just me, or are bitwise operators very illegible? I've actually never used them, and I haven't really seen them used either.
On to the topic: what about a simple two-dimensional symmetric array of booleans? array[x][y] represents whether or not x collides with y (those could be the IDs of two objects, assuming they are not random and start from 0).
Somehow I have the feeling that you're trying too hard to be clever there. I'd never have come up with the idea of representing an array of booleans as a long, and I assume that's what you're trying to do there.
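For illustration, a minimal sketch of that idea, assuming tag IDs are dense and zero-based (maxTags is a hypothetical upper bound on the number of tags):

public final class CollisionMatrix {
    private final boolean[][] matrix;

    public CollisionMatrix(int maxTags) {
        matrix = new boolean[maxTags][maxTags];
    }

    public void add(int id1, int id2) {
        matrix[id1][id2] = true;
        matrix[id2][id1] = true; // keep the matrix symmetric
    }

    public void remove(int id1, int id2) {
        matrix[id1][id2] = false;
        matrix[id2][id1] = false;
    }

    public boolean collidesWith(int id1, int id2) {
        return matrix[id1][id2];
    }
}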
So, I am attempting to create an RSA algorithm from scratch.
So far, I have successfully created the ability to select two primes (which I have as 11 and 13 in my current example). Then, I calculate N by doing p × q, which gets me 143.
Then, I move on to my public BigInteger findZ() method, which calculates ϕ = (p-1)(q-1).
Using this newly calculated ϕ, I want to find a number e that satisfies 1 < e < ϕ and gcd(e, ϕ) = 1. Thus, I initially set temp to equal my constant ONE (which is equal to one) plus 1 to satisfy the range. However, after continuous debugging attempts, the loop never finds a value whose GCD with ϕ is equal to one (I've created a constant to represent one, since I am required to use BigInteger). Is there a reason for this?
Here is my code.
import java.math.BigInteger;

public class RSA
{
    // Initialize the variables.
    private BigInteger p;
    private BigInteger q;
    private BigInteger n;
    private BigInteger z;
    private BigInteger e;
    final private BigInteger ONE = BigInteger.valueOf(1);

    public BigInteger getP()
    {
        return p;
    }

    public BigInteger getQ()
    {
        return q;
    }

    // Computes N, which is just p*q.
    public BigInteger findN()
    {
        n = p.multiply(q);
        return n;
    }

    public BigInteger findZ()
    {
        long pMinusOne = p.intValue() - 1;
        long qMinusOne = q.intValue() - 1;
        z = BigInteger.valueOf(pMinusOne * qMinusOne);
        return z;
    }

    public BigInteger getE()
    {
        int temp = ONE.intValue() + 1;
        BigInteger GCD = BigInteger.valueOf(temp);
        while (GCD.gcd(z).compareTo(ONE) != 0)
        {
            temp++;
        }
        e = BigInteger.valueOf(temp);
        return e;
    }
}
Any help is greatly appreciated.
Thanks!
Since you asked for any help, I'll answer your question and give other tips.
How to get e
One tip is to use equals() instead of compareTo() when you're just checking for equality. Sometimes that can reduce the amount of work being done, and it's easier to read as well.
The biggest error in your code is that temp is used to set the original value of GCD, but that doesn't link temp to GCD. They stay disconnected. If you change temp later, GCD won't know about it and won't change. You need to add one to GCD directly. Here's some example code:
BigInteger e = BigInteger.valueOf(3);
while (!phi.gcd(e).equals(BigInteger.ONE)) {
    e = e.add(BigInteger.ONE);
}
Look over BigInteger's methods
Get a sense of what you can easily do with BigIntegers by using your favorite search engine and searching for "BigInteger 8 API". The 8 is for the version of Java you're using, so that may change. The "API" part gets you a list of methods.
Early on in the search results, you should find the API page (https://docs.oracle.com/javase/8/docs/api/java/math/BigInteger.html). BigInteger has a lot of nice and convenient methods, so check them out. It even has a constructor that'll give you a BigInteger of whatever size you want that's very likely to be a prime, which is nice for generating the primes for a new random RSA key.
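For example, the static probablePrime factory does the same job as that constructor; the 512-bit size below is just an illustrative choice:

import java.math.BigInteger;
import java.security.SecureRandom;

// Each call returns a number of the requested bit length that is prime
// with very high probability.
BigInteger p = BigInteger.probablePrime(512, new SecureRandom());
BigInteger q = BigInteger.probablePrime(512, new SecureRandom());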
Use BigInteger's built-in constants
Don't recreate the following constants (which show up in the API page above):
BigInteger.ZERO
BigInteger.ONE
BigInteger.TEN
Never convert BigInteger to long unless you're sure it'll fit
You're converting BigIntegers to long, which is a bad idea, since there are a lot of BigIntegers that won't fit in a long, giving you incorrect results. For correctness (which is more important than speed), do arithmetic directly with BigIntegers.
You also use intValue() a lot when you're getting a long. Use longValueExact(). For that matter, use intValueExact() when you're getting an int.
So, to calculate ϕ:
BigInteger pMinusOne = p.subtract(BigInteger.ONE);
BigInteger qMinusOne = q.subtract(BigInteger.ONE);
BigInteger phi = pMinusOne.multiply(qMinusOne);
Now you know that it will give correct results, even for larger BigIntegers. It's also not that hard to read, which is good for maintaining the code later.
What to store
You should also store just n and e (and d, but only if it's a private key). Never store p, q, or ϕ with RSA, because those allow you to easily figure out the private key from the public key.
In general, don't calculate in getZZZ methods
You should figure out n and e (and d but only if it's a private key) in the constructor method(s) and store only those in instance variables. Then, you can have a getN() and getE() method to get the precomputed instance variables. For example (and you don't have to use this code, it's just to give an idea):
public class RSA {
    private final BigInteger n;
    private final BigInteger e;
    private final BigInteger d;

    public RSA(final BigInteger p, final BigInteger q) {
        this.n = p.multiply(q);

        // Calculate phi
        final BigInteger pMinusOne = p.subtract(BigInteger.ONE);
        final BigInteger qMinusOne = q.subtract(BigInteger.ONE);
        final BigInteger phi = pMinusOne.multiply(qMinusOne);

        // Calculate e
        BigInteger e = BigInteger.valueOf(3L);
        while (!phi.gcd(e).equals(BigInteger.ONE)) {
            e = e.add(BigInteger.ONE);
        }
        this.e = e;

        // Calculate d
        this.d = e.modInverse(phi);
    }

    public BigInteger getN() {
        return n;
    }

    public BigInteger getE() {
        return e;
    }

    public BigInteger getD() {
        return d;
    }
}
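For instance, feeding in the toy primes from your question (this just exercises the sketch above; real keys would use large random primes):

RSA rsa = new RSA(BigInteger.valueOf(11), BigInteger.valueOf(13));
System.out.println(rsa.getN()); // 143
System.out.println(rsa.getE()); // 7, the first e >= 3 with gcd(e, 120) = 1
System.out.println(rsa.getD()); // 103, since 7 * 103 = 721 = 6 * 120 + 1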
I had a question when I was reading the HashMap source code in Java 8.
The source code is so complicated; how much efficiency does it gain?
So I wrote some code to exercise hash collisions.
public class Test {
    final int i;

    public Test(int i) {
        this.i = i;
    }

    public static void main(String[] args) {
        java.util.HashMap<Test, Test> set = new java.util.HashMap<Test, Test>();
        long time;
        Test last;
        java.util.Random random = new java.util.Random(0);
        int i = 0;
        for (int max = 1; max < 200000; max <<= 1) {
            long c1 = 0, c2 = 0;
            int t = 0;
            for (; i < max; i++, t++) {
                last = new Test(random.nextInt());
                time = System.nanoTime();
                set.put(last, last);
                c1 += (System.nanoTime() - time);

                last = new Test(random.nextInt());
                time = System.nanoTime();
                set.get(last);
                c2 += (System.nanoTime() - time);
            }
            System.out.format("%d\t%d\t%d\n", max, (c1 / t), (c2 / t));
        }
    }

    public int hashCode() {
        return 0; // deliberately force every key into the same bucket
    }

    public boolean equals(Object obj) {
        if (obj == null)
            return false;
        if (!(obj instanceof Test))
            return false;
        Test t = (Test) obj;
        return t.i == this.i;
    }
}
I plotted the results in Excel:
[chart of the measured put/get times omitted]
I am using Java 6u45, 7u80, and 8u131.
I do not understand why the performance of Java 8 is so bad.
I'm trying to write my own HashMap.
I would like to understand the Java 8 HashMap, which is supposed to be better, but I did not find an explanation.
Your test scenario is non-optimal for Java 8 HashMap. HashMap in Java 8 optimizes collisions by using binary trees for any hash chains longer than a given threshold. However, this only works if the key type is comparable. If it isn't then the overhead of testing to see if the optimization is possible actually makes Java 8 HashMap slower. (The slow-down is more than I expected ... but that's another topic.)
Change your Test class to implement Comparable<Test> ... and you should see that Java 8 performs better than the others when the proportion of hash collisions is large enough.
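For instance, a minimal sketch of that change (the rest of Test stays exactly as in the question):

public class Test implements Comparable<Test> {
    final int i;

    public Test(int i) {
        this.i = i;
    }

    @Override
    public int compareTo(Test other) {
        // Gives HashMap a total order to build a balanced tree from
        // when all hash codes collide.
        return Integer.compare(this.i, other.i);
    }

    // hashCode() and equals() unchanged from the question ...
}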
Note that the tree optimization should be considered a defensive measure for the case where the hash function doesn't perform well. The optimization turns O(N) worst-case performance into O(log N) worst-case.
If you want your HashMap instances to have O(1) lookup, you should make sure that you use a good hash function for the key type. If the probability of collision is minimized, the optimization is moot.
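For example, for the Test class in the question, simply delegating to the stored int (rather than returning the constant 0) gives a perfectly good hash function:

@Override
public int hashCode() {
    // Distinct values of i now get distinct hash codes.
    return Integer.hashCode(i);
}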
The source code is so complicated; how much efficiency does it gain?
It is explained in the comments in the source code. And probably other places that Google can find for you :-)
I'm currently using Hibernate as a JPA provider, and I want to move ID generation from the persistence layer to the application layer.
My schema (on MySQL 5.7) currently uses the BIGINT(20) data type for IDs, and I don't want to refactor it to use UUIDs.
So I thought that something like a "System" UID should be enough:
public static long getUID()
{
    long key = getSystemKey() << 56; // e.g. 0x0100000000000000L
    long ts = System.currentTimeMillis() << 12;
    int r = new Random().nextInt(0xFFF);
    long id = key + ts + r;
    return id;
}
The generated id is in the form
KK TTTTTTTTTTT RRR
where getSystemKey() [K] returns a unique fixed byte for each "machine" the application runs on (it's declared inside the configuration file),
the timestamp ts [T] uses 11 nybbles, which is enough millis to last until 2527-06-23 08:20:44.415,
and the random part r [R] adds randomness per machine per millisecond (the last 3 nybbles).
So I'm wondering whether this approach is sound enough, what its pros and cons are, and whether there's a better way.
Thanks
UPDATE
I tested this method with 100 threads and 10,000 executions:
public static void main(String[] args) throws Exception
{
    List<Callable<Long>> runners = new ArrayList<>();
    for (int i = 0; i < 10000; i++)
    {
        runners.add(SUID::getUID);
    }

    ExecutorService pool = Executors.newFixedThreadPool(100);
    List<Future<Long>> results = pool.invokeAll(runners);
    pool.shutdown();

    int dups = 0;
    Set<Long> ids = new HashSet<>();
    for (Future<Long> future : results)
    {
        if (!ids.add(future.get()))
        {
            dups++;
        }
    }
    System.out.println(dups);
}
I got around 6% collisions.
So the only way seems to be to use some synchronization:
public final class SUID
{
    private static final AtomicLong SEQUENCE =
        new AtomicLong(Config.getSystemKey() << 56 | System.currentTimeMillis() << 12);

    private SUID()
    {
        super();
    }

    public static long generate()
    {
        return SEQUENCE.incrementAndGet();
    }
}
I have updated a Java application to Java 8. The application relies heavily on HashMaps.
When I run the benchmarks, I see unpredictable behavior. For some inputs the application runs faster than before, but for larger inputs it's consistently slower.
I've checked the profiler, and the most time-consuming operation is HashMap.get. I suspect the change is due to the HashMap modification in Java 8, but it may not be true, as I have changed some other parts as well.
Is there an easy way to hook the original Java 7 HashMap into my Java 8 application, so that I change only the HashMap implementation, to see if I still observe the difference in performance?
The following is a minimal program that tries to simulate what my application is doing.
The basic idea is that I need to share nodes in the application. At some point at runtime, a node should be retrieved, or created if it does not already exist, based on some integer properties. The following uses only two integers, but in the real application I have one-, two-, and three-integer keys.
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class Test1 {
    static int max_k1 = 500;
    static int max_k2 = 500;
    static Map<Node, Node> map;
    static Random random = new Random();

    public static void main(String[] args) {
        for (int i = 0; i < 15; i++) {
            long start = System.nanoTime();
            run();
            long end = System.nanoTime();
            System.out.println((end - start) / 1000_000);
        }
    }

    private static void run() {
        map = new HashMap<>();
        for (int i = 0; i < 10_000_000; i++) {
            Node key = new Node(random.nextInt(max_k1), random.nextInt(max_k2));
            Node val = getOrElseUpdate(key);
        }
    }

    private static Node getOrElseUpdate(Node key) {
        Node val;
        if ((val = map.get(key)) == null) {
            val = key;
            map.put(key, val);
        }
        return val;
    }

    private static class Node {
        private int k1;
        private int k2;

        public Node(int k1, int k2) {
            this.k1 = k1;
            this.k2 = k2;
        }

        @Override
        public int hashCode() {
            int result = 17;
            result = 31 * result + k1;
            result = 31 * result + k2;
            return result;
        }

        @Override
        public boolean equals(Object obj) {
            if (this == obj)
                return true;
            if (!(obj instanceof Node))
                return false;
            Node other = (Node) obj;
            return k1 == other.k1 && k2 == other.k2;
        }
    }
}
The benchmarking is primitive, but still, these are the results of 15 runs on Java 8:
8143
7919
7984
7973
7948
7984
7931
7992
8038
7975
7924
7995
6903
7758
7627
and this is for Java 7:
7247
6955
6510
6514
6577
6489
6510
6570
6497
6482
6540
6462
6514
4603
6270
The benchmarking is primitive, so I'd appreciate it if someone familiar with JMH or another benchmarking tool could run it properly, but from what I observe the results are better on Java 7. Any ideas?
Your hashCode() is very poor. In the example you posted, you have 250,000 unique values but only 15,969 unique hash codes. Because of the large number of collisions, Java 8 swaps the lists for trees. In your case this only adds overhead, because many elements not only have the same position in the hash table but also the same hash code, so the tree ends up as a linked list anyway.
There are a couple of ways to fix this (a sketch of the first and third fixes follows the list):
Improve your hashCode: return k1 * 500 + k2; resolves the issue.
Use THashMap. Open addressing should work better in the case of collisions.
Make Node implement Comparable. This will be used by HashMap to construct a balanced tree in case of collisions.
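A minimal sketch of the first and third fixes applied to the Node class from the question:

private static class Node implements Comparable<Node> {
    private final int k1;
    private final int k2;

    public Node(int k1, int k2) {
        this.k1 = k1;
        this.k2 = k2;
    }

    @Override
    public int hashCode() {
        return k1 * 500 + k2; // distinct for all k1, k2 in [0, 500)
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (!(obj instanceof Node))
            return false;
        Node other = (Node) obj;
        return k1 == other.k1 && k2 == other.k2;
    }

    @Override
    public int compareTo(Node other) {
        // Order by k1, then k2, consistent with equals().
        int c = Integer.compare(k1, other.k1);
        return c != 0 ? c : Integer.compare(k2, other.k2);
    }
}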