I am trying to create a function that generates a hash key based upon where in the hash table I want the value to go.
My hash function is (a + b * (key) ) % c = hash value. I've seen a similar question to this on SO, and what I tried is replacing b * (key) with d and just doing:
private int ReverseModulus(int a, int b, int c, int hashValue)
{
if(hashValue >= c)
return -1;
if(a < hashValue)
return (hashValue - a) / b;
return (c + hashValue - a) / b;
}
but it seems that most of the time hashValue != Hash(ReverseModulus(a,b,c, hashValue)).
I was wondering if the approach is wrong or if there is just an error in the code.
You're using the wrong kind of division. You're doing integer division, but you need to be doing modular division. In Java you can use BigInteger:
BigInteger bh = BigInteger.valueOf(hashValue);
BigInteger ba = BigInteger.valueOf(a);
BigInteger bb = BigInteger.valueOf(b);
BigInteger bc = BigInteger.valueOf(c);
BigInteger bn = bh.subtract(ba);
// modular division: key = (hashValue - a) * b^-1 (mod c); requires gcd(b, c) == 1
return bn.multiply(bb.modInverse(bc)).mod(bc).intValue();
and C# presumably has similar library functions.
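For illustration, here is a rough, self-contained Java sketch of the idea; the names hash and reverseModulus are just illustrative, and it assumes gcd(b, c) == 1 so the modular inverse exists:
import java.math.BigInteger;

public class ReverseHashSketch {
    static int hash(int a, int b, int c, int key) {
        return (a + b * key) % c;
    }

    // Recovers a key for the given hashValue: key = (hashValue - a) * b^-1 (mod c).
    static int reverseModulus(int a, int b, int c, int hashValue) {
        BigInteger bb = BigInteger.valueOf(b);
        BigInteger bc = BigInteger.valueOf(c);
        BigInteger diff = BigInteger.valueOf(hashValue - a);
        return diff.multiply(bb.modInverse(bc)).mod(bc).intValue();
    }

    public static void main(String[] args) {
        int a = 3, b = 7, c = 101, hashValue = 42;
        int key = reverseModulus(a, b, c, hashValue);
        System.out.println(hash(a, b, c, key) == hashValue); // prints true
    }
}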
I got this problem from an online course, where I had to write a small program to find quadratic roots, with the return type Set<Integer>. I am still learning Java and am not yet familiar with working with those types.
I think everything is correct up to this part:
if(discriminant > 0) {
root1 = (int)(-b + Math.sqrt(discriminant)) / (2 * a);
root2 = (int)(-b - Math.sqrt(discriminant)) / (2 * a);
result.add(root1);
result.add(root2);
}
As I have to return the final roots as a Set<Integer>, I had to force-convert the double returned by Math.sqrt to int. I am not sure if this is what is causing the issues, and if so, I am not sure how to solve it, because I can't add double values to a Set<Integer>.
I tested this code with a few test cases, and it failed when using really big values, like ~2,000,000,000 for c.
And this is the code I came up with so far.
import java.util.HashSet;
import java.util.Set;

public class Quadratic {
    public static Set<Integer> roots(int a, int b, int c) {
        int root1;
        int root2;
        int discriminant = b * b - 4 * a * c;
        Set<Integer> result = new HashSet<Integer>();
        if (discriminant < 0) {
            String rootsAreImaginary = "Roots are imaginary";
            System.out.println(rootsAreImaginary);
        }
        if (discriminant == 0) {
            root1 = (-b) / (2 * a);
            root2 = root1;
            result.add(root1);
            result.add(root2);
        }
        if (discriminant > 0) {
            root1 = (int)(-b + Math.sqrt(discriminant)) / (2 * a);
            root2 = (int)(-b - Math.sqrt(discriminant)) / (2 * a);
            result.add(root1);
            result.add(root2);
        }
        return result;
    }
}
If there are better ways to do this, please feel free to show me. Thank you so much in advance.
You can use BigDecimal or BigInteger to do the computations. These would have to be stored in a Set of the proper type, e.g. Set<BigDecimal>.
Both of those classes have methods to return the value of the related primitive (BigDecimal#doubleValue() and BigInteger#longValue()). But precision and size concerns still apply as you may not be able to fit the result into the class's related primitive.
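For illustration, here is a rough sketch of just the discriminant computed with BigInteger, which avoids the int overflow that shows up with c around 2,000,000,000 (the class name and sample values are only illustrative):
import java.math.BigInteger;

public class DiscriminantSketch {
    public static void main(String[] args) {
        int a = 1, b = 0, c = 2_000_000_000;
        // b*b - 4*a*c computed without overflow
        BigInteger discriminant = BigInteger.valueOf(b).multiply(BigInteger.valueOf(b))
                .subtract(BigInteger.valueOf(4L).multiply(BigInteger.valueOf(a)).multiply(BigInteger.valueOf(c)));
        System.out.println(discriminant);      // -8000000000, no overflow
        System.out.println(b * b - 4 * a * c); // 589934592, the int computation overflows
    }
}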
I am trying to multiply two numbers using Karatsuba multiplication. My Java code is not working. I have used String parameters and arguments so that we can multiply two n-digit numbers (n is even). Also, I don't want to use long or BigInteger. Please help me figure out my mistake.
class karat{
public static String karatsuba(String first, String second){
if(first.length() <= 1 || second.length() <= 1)
return String.valueOf(Long.parseLong(first)*Long.parseLong(second));
String a = karatsuba(first.substring(0, first.length()/2), second.substring(0, second.length()/2));
String b = karatsuba(first.substring(first.length() - first.length()/2, first.length()), second.substring(second.length() - second.length()/2, second.length()));
String c = karatsuba(String.valueOf(Long.parseLong(first.substring(0, first.length()/2)) + Long.parseLong(first.substring(first.length() - first.length()/2, first.length()))), String.valueOf(Long.parseLong(second.substring(0, second.length()/2)) + Long.parseLong(second.substring(second.length() - second.length()/2, second.length()))));
String d = String.valueOf(Long.parseLong(c) - Long.parseLong(b) - Long.parseLong(a));
return String.valueOf(((int)Math.pow(10, first.length()))*(Long.parseLong(a)) + (((int)Math.pow(10, first.length()/2))*Long.parseLong(d)) + (Long.parseLong(c)));
}
public static void main(String[] args){
String result = karatsuba("1234", "5678");
System.out.println(result); }
}
Can you also please refine my code.
Numbers passed for multiplication - 1234 and 5678
Output is - 6655870 (Incorrect)
Output should be - 7006652 (Correct)
Thank you
First of all, I tried to look at your code, and it's easy for a programmer to get lost in it. A few things before we get to the solution.
General advice: it is not good practice to convert from string to value and back and forth the way you do; it does not work well like this. I also tried to debug your code, and it is just a vicious circle.
So I would start by checking the lengths of the values and taking the maximum.
Then, if the maximum length is less than 2 (meaning everything is less than 10), do a plain multiplication; otherwise apply the Karatsuba recursion.
Here is the solution:
public static long karatsuba(long num1, long num2) {
int m = Math.max(
String.valueOf(num1).length(),
String.valueOf(num2).length()
);
if (m < 2)
return num1 * num2;
m = (m / 2) + (m % 2);
long b = num1 >> m;
long a = num1 - (b << m);
long d = num2 >> m;
long c = num2 - (d << m);
long ac = karatsuba(a, c);
long bd = karatsuba(b, d);
long abcd = karatsuba(a + b, c + d);
return ac + (abcd - ac - bd << m) + (bd << 2 * m);
}
Some tests:
public static void main(String[] args) {
System.out.println(karatsuba(1, 9));
System.out.println(karatsuba(1234, 5678));
System.out.println(karatsuba(12345, 6789));
}
The output would be
9
7006652
83810205
It is less painful than your String-based code. By the way, the solution is inspired by the pseudocode on the Wikipedia page and this class.
Interesting algorithm. One mistake is in
return String.valueOf(((int)Math.pow(10, first.length()))*(Long.parseLong(a)) + (((int)Math.pow(10, first.length()/2))*Long.parseLong(d)) + (Long.parseLong(c)));
At the end, it should be Long.parseLong(b) instead of Long.parseLong(c).
And in intermediate calculations, it can happen that the two strings are of different lengths. That also doesn't work correctly.
Please allow some comments to improve the implementation. The idea of using strings seems to allow for big numbers, but then you introduce things like Long.parseLong() or (int)Math.pow(10, first.length()), limiting you to the long or int range.
If you really want to do big numbers, write your own String-based addition and power-of-ten multiplication (that one being trivial by appending some zeroes).
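For example, a rough sketch of two such string-based helpers (the names addStrings and multiplyByPowerOfTen are just illustrative):
// Adds two non-negative decimal numbers given as strings, digit by digit with carry.
static String addStrings(String x, String y) {
    StringBuilder sb = new StringBuilder();
    int i = x.length() - 1, j = y.length() - 1, carry = 0;
    while (i >= 0 || j >= 0 || carry != 0) {
        int sum = carry;
        if (i >= 0) sum += x.charAt(i--) - '0';
        if (j >= 0) sum += y.charAt(j--) - '0';
        sb.append((char) ('0' + sum % 10));
        carry = sum / 10;
    }
    return sb.reverse().toString();
}

// Multiplies a non-negative decimal string by 10^k by appending k zeroes.
static String multiplyByPowerOfTen(String x, int k) {
    if (x.equals("0")) return x;
    StringBuilder sb = new StringBuilder(x);
    for (int i = 0; i < k; i++) sb.append('0');
    return sb.toString();
}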
And, try to avoid names like a, b, c, or d - it's too easy to forget what they mean, as was your original mistake. E.g. the names from Wikipedia are a little bit better (using z0, z1 and z2), but still not perfect...
I can calculate the multiplication of two BigIntegers (say a and b) modulo n.
This can be done by:
a.multiply(b).mod(n);
However, assuming that a and b are of the same order as n, this implies that during the calculation a new BigInteger is created whose length (in bytes) is roughly twice that of n.
I wonder whether there is more efficient implementation that I can use. Something like modMultiply that is implemented like modPow (which I believe does not calculate the power and then the modulo).
I can only think of
a.mod(n).multiply(b.mod(n)).mod(n)
and you seem already to be aware of this.
BigInteger has a toByteArray(), but internally ints are used, hence n must be quite large for this to have an effect. Maybe in key-generation cryptographic code there might be such work.
Furthermore, if you think of short-cutting the multiplication, you'll get something like the following:
public static BigInteger multiply(BigInteger a, BigInteger b, int mod) {
if (a.signum() == -1) {
return multiply(a.negate(), b, mod).negate();
}
if (b.signum() == -1) {
return multiply(a, b.negate(), mod).negate();
}
int n = (Integer.bitCount(mod - 1) + 7) / 8; // mod in bytes.
byte[] aa = a.toByteArray(); // Highest byte at [0] !!
int na = Math.min(n, aa.length); // Heuristic.
byte[] bb = b.toByteArray();
int nb = Math.min(n, bb.length); // Heuristic.
byte[] prod = new byte[n];
for (int ia = 0; ia < na; ++ia) {
int m = ia + nb >= n ? n - ia - 1 : nb; // Heuristic.
for (int ib = 0; ib < m; ++ib) {
int p = (0xFF & aa[aa.length - 1 - ia]) * (0xFF & bb[bb.length - 1 - ib]);
addByte(prod, ia + ib, p & 0xFF);
if (ia + ib + 1 < n) {
addByte(prod, ia + ib + 1, (p >> 8) & 0xFF);
}
}
}
// Still need to do an expensive mod:
return new BigInteger(prod).mod(BigInteger.valueOf(mod));
}
private static void addByte(byte[] prod, int i, int value) {
while (value != 0 && i < prod.length) {
value += prod[prod.length - 1 - i] & 0xFF;
prod[prod.length - 1 - i] = (byte) value;
value >>= 8;
++i;
}
}
That code does not look appetizing. BigInteger has the problem of exposing the internal value only as big-endian byte[] where the first byte is the most significant one.
Much better would be to have the digits in base N. That is not unimaginable: if N is a power of 2 some nice optimizations are feasible.
(BTW the code is untested - as it does not seem convincingly faster.)
First, the bad news: I couldn't find any existing Java libraries that provided this functionality.
I couldn't find any pure-Java big integer libraries ... apart from java.math.BigInteger.
There are Java / JNI wrappers for the GMP library, but GMP doesn't implement this either.
So what are your options?
Maybe there is some pure-Java library that I missed.
Maybe some other native (C / C++) big integer library supports this operation ... though you may need to write your own JNI wrappers.
You should be able to implement such a method for yourself, by copying the source code of java.math.BigInteger and adding an extra custom method. Alternatively, it looks like you could extend it.
Having said that, I'm not sure that there is a "substantially faster" algorithm for computing a * b mod n in Java, or any other language. (Apart from special cases; e.g. when n is a power of 2).
Specifically, the "Montgomery Reduction" approach wouldn't help for a single multiplication step. (The Wikipedia page says: "Because numbers have to be converted to and from a particular form suitable for performing the Montgomery step, a single modular multiplication performed using a Montgomery step is actually slightly less efficient than a "naive" one.")
So maybe the most effective way to speedup the computation would be to use the JNI wrappers for GMP.
You can use generic maths, like:
(A*B) mod N = ((A mod N) * (B mod N)) mod N
It may be more CPU intensive, but one should choose between CPU and memory, right?
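A quick sanity check of that identity with BigInteger (values chosen arbitrarily):
import java.math.BigInteger;

public class ModIdentityCheck {
    public static void main(String[] args) {
        BigInteger a = new BigInteger("123456789012345678901234567890");
        BigInteger b = new BigInteger("987654321098765432109876543210");
        BigInteger n = new BigInteger("1000000007");

        BigInteger direct  = a.multiply(b).mod(n);
        BigInteger reduced = a.mod(n).multiply(b.mod(n)).mod(n);
        System.out.println(direct.equals(reduced)); // prints true
    }
}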
If we are talking about modular arithmetic then indeed Montgomery reduction may be what you need. Don't know any out of box solutions though.
You can write a BigInteger multiplication as a standard long multiplication in a very large base -- for example, in base 2^32. It is fairly straightforward. If you want only the result modulo n, then it is advantageous to choose a base that is a factor of n or of which n is a factor. Then you can ignore all but one or a few of the lowest-order result (Big)digits as you perform the computation, saving space and maybe time.
That's most practical if you know n in advance, of course, but such pre-knowledge is not essential. It's especially nice if n is a power of two, and it's fairly messy if n is neither a power of 2 nor smaller than the maximum operand handled directly by the system's arithmetic unit, but all of those cases can be handled in principle.
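As a rough sketch of the power-of-two special case (the method name is just illustrative): when n = 2^k, reducing modulo n is simply masking off the low k bits, so the operands can be truncated before the multiply and the intermediate product stays small:
import java.math.BigInteger;

public class PowerOfTwoModMultiply {
    // Computes (a * b) mod n for non-negative a and b, assuming n is a power of two.
    static BigInteger modMultiplyPow2(BigInteger a, BigInteger b, BigInteger n) {
        BigInteger mask = n.subtract(BigInteger.ONE); // 2^k - 1
        return a.and(mask).multiply(b.and(mask)).and(mask);
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.ONE.shiftLeft(64); // n = 2^64
        BigInteger a = new BigInteger("123456789123456789123456789");
        BigInteger b = new BigInteger("987654321987654321987654321");
        System.out.println(modMultiplyPow2(a, b, n).equals(a.multiply(b).mod(n))); // prints true
    }
}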
If you must do this specifically with Java BigInteger instances, however, then be aware that any approach not provided by the BigInteger class itself will incur overhead for converting between internal and external representations.
Maybe this:
static BigInteger multiply(BigInteger c, BigInteger x)
{
BigInteger sum = BigInteger.ZERO;
BigInteger addOperand;
for (int i=0; i < FIELD_ELEMENT_BIT_SIZE; i++)
{
if (c.testBit(i))
addOperand = x;
else
addOperand = BigInteger.ZERO;
sum = add(sum, addOperand);
x = modOrder(x.shiftLeft(1)); // double x for the next bit of c, reduced mod FIELD_ORDER
}
return sum;
}
with the following helper functions:
static BigInteger add(BigInteger a, BigInteger b)
{
return modOrder(a.add(b));
}
static BigInteger modOrder(BigInteger n)
{
return n.remainder(FIELD_ORDER);
}
To be honest though, I'm not sure if this is really efficient at all since none of these operations are performed in-place.
Is it better to write
int primitive1 = 3, primitive2 = 4;
Integer a = new Integer(primitive1);
Integer b = new Integer(primitive2);
int compare = a.compareTo(b);
or
int primitive1 = 3, primitive2 = 4;
int compare = (primitive1 > primitive2) ? 1 : 0;
if(compare == 0){
compare = (primitive1 == primitive2) ? 0 : -1;
}
I think the second one is better; it should be faster and more memory-efficient. But aren't they equivalent?
For performance, it is usually best to make the code as simple and clear as possible, and it will often perform well (as the JIT will optimise such code best). In your case, the simplest examples are also likely to be the fastest.
I would do either
int cmp = a > b ? +1 : a < b ? -1 : 0;
or a longer version
int cmp;
if (a > b)
cmp = +1;
else if (a < b)
cmp = -1;
else
cmp = 0;
or
int cmp = Integer.compare(a, b); // in Java 7
int cmp = Double.compare(a, b); // before Java 7
It's best not to create an object if you don't need to.
Performance wise, the first is best.
If you know for sure that you won't get an overflow you can use
int cmp = a - b; // if you know there won't be an overflow.
you won't get faster than this.
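For illustration, here is a case where the subtraction trick overflows and reports the wrong order:
int a = Integer.MIN_VALUE; // -2147483648
int b = 1;

int cmp = a - b;                           // overflows to 2147483647
System.out.println(cmp > 0);               // true, wrongly suggests a > b
System.out.println(Integer.compare(a, b)); // -1, the correct result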
Use Integer.compare(int, int). And don't try to micro-optimize your code unless you can prove that you have a performance issue.
May I propose a third option:
((Integer) a).compareTo(b)
Wrapping an int primitive into an Integer object will cost you some memory, but the difference is only significant in very rare (memory-demanding) cases, e.g. an array with 1000+ elements. I would not recommend using the new Integer(int a) constructor this way. This will suffice:
Integer a = 3;
For comparison there is Math.signum(double d):
compare= (int) Math.signum(a-b);
They're already ints. Why not just use subtraction?
compare = a - b;
Note that Integer.compareTo() doesn't necessarily return only -1, 0 or 1 either.
For pre-1.7, I would say an equivalent of Integer.compare(x, y) is:
Integer.valueOf(x).compareTo(y);
If you are using Java 8, you can create a Comparator with this method:
Comparator<Integer> natural = Comparator.comparingInt(i -> i);
and if you would like to compare in reversed order:
Comparator<Integer> reversed = Comparator.comparingInt(i -> -i);
If you just need a boolean value (as is almost always the case), the following one-liner will help you:
boolean ifIntsEqual = !((Math.max(a,b) - Math.min(a, b)) > 0);
And it works in Java 1.5+, maybe even in 1.1 (I don't have one to test). Please tell us if you can test it on versions older than 1.5.
This one will do too:
boolean ifIntsEqual = !((Math.abs(a-b)) > 0);
I've recently encountered an odd situation when computing the hash code of tuples of doubles in Java. Suppose that you have the two tuples (1.0, 1.0) and (Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY). Using the idiom stated in Joshua Bloch's Effective Java (Item 7), these two tuples would not be considered equal (imagine that these tuples are objects). However, using the formula stated in Item 8 to compute hashCode() for each tuple evaluates to the same value.
So my question is: is there something strange about this formula that I missed out on when I was writing my formulas, or is it just an odd case of hash-code collisions?
Here is my short, comparative method to illustrate the situation (I wrote it as a JUnit4 test, but it should be pretty easily converted to a main method).
@Test
public void testDoubleHashCodeAndInfinity(){
double a = 1.0;
double b = 1.0;
double c = Double.POSITIVE_INFINITY;
double d = Double.POSITIVE_INFINITY;
int prime = 31;
int result1 = 17;
int result2 = 17;
long temp1 = Double.doubleToLongBits(a);
long temp2 = Double.doubleToLongBits(c);
//this assertion passes successfully
assertTrue("Double.doubleToLongBits(Double.POSITIVE_INFINITY" +
"==Double.doubleToLongBits(1.0)",temp1!=temp2);
result1 = prime*result1 + (int)(temp1^(temp1>>>32));
result2 = prime*result2 + (int)(temp2^(temp2>>>32));
//this assertion passes successfully
assertTrue("Double.POSITIVE_INFINITY.hashCode()" +
"==(1.0).hashCode()",result1!=result2);
temp1 = Double.doubleToLongBits(b);
temp2 = Double.doubleToLongBits(d);
//this assertion should pass successfully
assertTrue("Double.doubleToLongBits(Double.POSITIVE_INFINITY" +
"==Double.doubleToLongBits(1.0)",temp1!=temp2);
result1 = prime*result1+(int)(temp1^(temp1>>>32));
result2 = prime*result2+(int)(temp2^(temp2>>>32));
//this assertion fails!
assertTrue("(1.0,1.0).hashCode()==" +
"(Double.POSITIVE_INFINITY,Double.POSITIVE_INFINITY).hashCode()",
result1!=result2);
}
It's just a coincidence. However, it's an interesting one. Try this:
Double d1 = 1.0;
Double d2 = Double.POSITIVE_INFINITY;
int hash1 = d1.hashCode();
int hash2 = d2.hashCode();
// These both print -1092616192
// This was me using the wrong hash combinator *and*
// the wrong tuples... but it's interesting
System.out.println(hash1 * 17 + hash2);
System.out.println(hash2 * 17 + hash1);
// These both print -33554432
System.out.println(hash1 * 31 + hash1);
System.out.println(hash2 * 31 + hash2);
Basically the bit patterns of the hashes determine this. hash1 (1.0's hash code) is 0x3ff00000 and hash2 (infinity's hash code) is 0x7ff00000. They differ only in bit 30, and combining with 31 * h + h amounts to multiplying by 32, i.e. shifting left by 5 bits, which pushes that differing bit out of the top of the 32-bit int, hence the collision.
Executive summary: it's a coincidence, but don't worry about it :)
It may be a coincidence, but that sure does not help when you are trying to use the hashCode in a Map to cache objects that have doubles in tuples. I ran into this when creating a map of Thermostat temp settings classes. Then other tests were failing because I was getting the wrong object out of the Map when using the hashCode as the key.
The solution I found to fix this was to build a concatenated String of the two double parameters and call hashCode() on that String. To avoid the String overhead I cached the hash code.
private volatile int hashCode;
@Override public int hashCode()
{
int result = hashCode;
if (result == 0) {
String value = new StringBuilder().append(d1).append(d2).toString();
result = value.hashCode();
hashCode = result;
}
return result;
}