So, I am attempting to create an RSA algorithm from scratch.
So far, I have successfully created the ability to select two primes (which I have as 11 and 13 in my current example). Then, I calculate N by doing p × q, which gets me 143.
Then, I move on to my public BigInteger findZ() method, which calculates ϕ = (p-1)(q-1).
Using this newly calculated ϕ, I want to find a number e that satisfies 1 < e < ϕ and gcd(e, ϕ) = 1. Thus, I initially set temp to equal my constant ONE (which is equal to one) plus 1, to satisfy the lower end of the range. However, after continuous debugging attempts, the loop never finds a value whose GCD with ϕ is equal to one (I've created a constant to represent one, since I am required to use BigInteger). Is there a reason for this?
Here is my code.
import java.math.BigInteger;

public class RSA
{
    // Initialize the variables.
    private BigInteger p;
    private BigInteger q;
    private BigInteger n;
    private BigInteger z;
    private BigInteger e;
    final private BigInteger ONE = BigInteger.valueOf(1);

    public BigInteger getP()
    {
        return p;
    }

    public BigInteger getQ()
    {
        return q;
    }

    // Computes N, which is just p*q.
    public BigInteger findN()
    {
        n = p.multiply(q);
        return n;
    }

    public BigInteger findZ()
    {
        long pMinusOne = p.intValue() - 1;
        long qMinusOne = q.intValue() - 1;
        z = BigInteger.valueOf(pMinusOne * qMinusOne);
        return z;
    }

    public BigInteger getE()
    {
        int temp = ONE.intValue() + 1;
        BigInteger GCD = BigInteger.valueOf(temp);
        while (GCD.gcd(z).compareTo(ONE) != 0)
        {
            temp++;
        }
        e = BigInteger.valueOf(temp);
        return e;
    }
}
Any help is greatly appreciated.
Thanks!
Since you asked for any help, I'll answer your question and give other tips.
How to get e
One tip is to use equals() instead of compareTo() when you're just checking for equality. Sometimes that can reduce the amount of work being done, and it's easier to read as well.
The biggest error in your code is that temp is used to set the initial value of GCD, but that doesn't link temp to GCD; they stay disconnected. If you change temp later, GCD won't know about it and won't change, so the loop condition never changes and the loop never terminates. You need to add one to GCD directly. Here's some example code:
BigInteger e = BigInteger.valueOf(3);
while (! phi.gcd(e).equals(BigInteger.ONE)) {
    e = e.add(BigInteger.ONE);
}
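For the primes in the question this terminates quickly: ϕ = (11 − 1)(13 − 1) = 120, and the loop checks gcd(120, 3) = 3, gcd(120, 4) = 4, gcd(120, 5) = 5, gcd(120, 6) = 6, and stops at gcd(120, 7) = 1, so e = 7.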
Look over BigInteger's methods
Get a sense of what you can easily do with BigInteger by using your favorite search engine and searching for BigInteger 8 API. The 8 is for the version of Java you're using, so that may change; the API part gets you the list of methods.
Early on in the search results, you should find this API page. BigInteger has a lot of nice and convenient methods, so check them out. It even has a constructor that'll give you a BigInteger of whatever size you want that's very likely to be a prime, which is nice for generating the primes for a new random RSA key.
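For example, here is a minimal sketch of both options (the 512-bit size and the certainty of 100 are arbitrary choices for illustration, not requirements):

SecureRandom rnd = new SecureRandom(); // from java.security
// The constructor mentioned above: a random 512-bit BigInteger that is
// prime with probability at least 1 - 2^-100.
BigInteger p = new BigInteger(512, 100, rnd);
// Or the static factory method, which picks the certainty for you.
BigInteger q = BigInteger.probablePrime(512, rnd);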
Use BigInteger's built-in constants
Don't recreate the following constants (which show up in the API page above):
BigInteger.ZERO
BigInteger.ONE
BigInteger.TEN
Never convert BigInteger to long unless you're sure it'll fit
You're converting BigIntegers to long, which is a bad idea, since there are a lot of BigIntegers that won't fit in a long, giving you incorrect results. For correctness (which is more important than speed), do arithmetic directly with BigIntegers.
You also use intValue() a lot when you're getting a long. Use longValueExact(). For that matter, use intValueExact() when you're getting an int.
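For instance, a small sketch of the difference:

BigInteger big = BigInteger.valueOf(Long.MAX_VALUE);
long ok  = big.longValueExact(); // fits in a long, returns the value
int bad  = big.intValue();       // silently truncates to -1
int boom = big.intValueExact();  // throws ArithmeticException instead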
So, to calculate ϕ:
BigInteger pMinusOne = p.subtract(BigInteger.ONE);
BigInteger qMinusOne = q.subtract(BigInteger.ONE);
BigInteger phi = pMinusOne.multiply(qMinusOne);
Now you know that it will give correct results, even for larger BigIntegers. It's also not that hard to read, which is good for maintaining the code later.
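With the example primes this gives ϕ = 10 × 12 = 120, same as before, but these three lines keep working when p and q are hundreds of digits long.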
What to store
You should also store just n and e (and d, but only if it's a private key). Never store p, q, or ϕ with RSA, because those allow you to easily figure out the private key from the public key.
In general, don't calculate in getZZZ methods
You should figure out n and e (and d, but only if it's a private key) in the constructor(s) and store only those in instance variables. Then you can have getN() and getE() methods that return the precomputed values. For example (you don't have to use this code; it's just to give an idea):
public class RSA {

    private final BigInteger n;
    private final BigInteger e;
    private final BigInteger d;

    public RSA(final BigInteger p, final BigInteger q) {
        this.n = p.multiply(q);

        // Calculate phi
        final BigInteger pMinusOne = p.subtract(BigInteger.ONE);
        final BigInteger qMinusOne = q.subtract(BigInteger.ONE);
        final BigInteger phi = pMinusOne.multiply(qMinusOne);

        // Calculate e
        BigInteger e = BigInteger.valueOf(3L);
        while (! phi.gcd(e).equals(BigInteger.ONE)) {
            e = e.add(BigInteger.ONE);
        }
        this.e = e;

        // Calculate d
        this.d = e.modInverse(phi);
    }

    public BigInteger getN() {
        return n;
    }

    public BigInteger getE() {
        return e;
    }

    public BigInteger getD() {
        return d;
    }
}
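A quick usage sketch of the class above (textbook RSA with no padding; the message must be smaller than n, and with p = 11, q = 13 the loop above lands on e = 7 and d = 103):

RSA rsa = new RSA(BigInteger.valueOf(11), BigInteger.valueOf(13));
BigInteger m = BigInteger.valueOf(42);              // any message < n = 143
BigInteger c = m.modPow(rsa.getE(), rsa.getN());    // encrypt: c = m^e mod n = 81
BigInteger back = c.modPow(rsa.getD(), rsa.getN()); // decrypt: m = c^d mod n
System.out.println(back);                           // prints 42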
Related
I had a question when I was learning the HashMap source code in Java 8.
The source code is so complicated; how much efficiency does that complexity actually buy?
So I wrote some code to test hash conflicts.
import java.util.HashMap;
import java.util.Random;

public class Test {
    final int i;

    public Test(int i) {
        this.i = i;
    }

    public static void main(String[] args) {
        HashMap<Test, Test> set = new HashMap<Test, Test>();
        long time;
        Test last;
        Random random = new Random(0);
        int i = 0;
        for (int max = 1; max < 200000; max <<= 1) {
            long c1 = 0, c2 = 0;
            int t = 0;
            for (; i < max; i++, t++) {
                last = new Test(random.nextInt());
                time = System.nanoTime();
                set.put(last, last);
                c1 += (System.nanoTime() - time);
                last = new Test(random.nextInt());
                time = System.nanoTime();
                set.get(last);
                c2 += (System.nanoTime() - time);
            }
            System.out.format("%d\t%d\t%d\n", max, (c1 / t), (c2 / t));
        }
    }

    // Deliberately awful: every instance collides into the same bucket.
    @Override
    public int hashCode() {
        return 0;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null)
            return false;
        if (!(obj instanceof Test))
            return false;
        Test t = (Test) obj;
        return t.i == this.i;
    }
}
I plotted the results in Excel.
I am using Java 6u45, Java 7u80, and Java 8u131.
I do not understand why the performance of Java 8 is so bad here.
I'm trying to write my own HashMap. I would like to learn what makes the Java 8 HashMap better, but I have not found an explanation.
Your test scenario is non-optimal for the Java 8 HashMap. HashMap in Java 8 optimizes collisions by using binary trees for any hash chains longer than a given threshold. However, this only works if the key type is comparable. If it isn't, the overhead of testing to see whether the optimization is possible actually makes the Java 8 HashMap slower. (The slow-down is more than I expected ... but that's another topic.)
Change your Test class to implement Comparable<Test> ... and you should see that Java 8 performs better than the others when the proportion of hash collisions is large enough.
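Something like this minimal sketch of the change:

public class Test implements Comparable<Test> {
    final int i;

    public Test(int i) {
        this.i = i;
    }

    @Override
    public int compareTo(Test o) {
        return Integer.compare(this.i, o.i);
    }

    // hashCode() and equals() as in the question
}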
Note that the tree optimization should be considered a defensive measure for the case where the hash function doesn't perform. The optimization turns O(N) worst-case performance into O(log N) worst-case performance.
If you want your HashMap instances to have O(1) lookup, you should make sure that you use a good hash function for the key type. If the probability of collision is minimized, the optimization is moot.
The source code is so complicated; how much efficiency does that complexity actually buy?
It is explained in the comments in the source code. And probably other places that Google can find for you :-)
Just to put my question in context: I have a class that sorts a list in its constructor, based on some calculated score per element. Now I want to extend my code to a version of the class that does not sort the list. The easiest (but obviously not clean; I'm fully aware, but time is pressing and I don't have time to refactor my code at the moment) solution would be to just use a score calculator that assigns the same score to every element.
Which double value should I pick? I was personally thinking +Infinity or -Infinity, since I assume these have a special representation, meaning they can be compared fast. Is this a correct assumption? I do not know enough about the low-level implementation of Java to figure out if I am correct.
In general, avoid 0.0, -0.0, and NaN. Any other number would be fine. You can look into the Double.compare implementation to see that they are handled specially:
if (d1 < d2)
    return -1;          // Neither val is NaN, thisVal is smaller
if (d1 > d2)
    return 1;           // Neither val is NaN, thisVal is larger

// Cannot use doubleToRawLongBits because of possibility of NaNs.
long thisBits    = Double.doubleToLongBits(d1);
long anotherBits = Double.doubleToLongBits(d2);

return (thisBits == anotherBits ?  0 : // Values are equal
        (thisBits < anotherBits ? -1 : // (-0.0, 0.0) or (!NaN, NaN)
         1));                          // (0.0, -0.0) or (NaN, !NaN)
However that depends on how your sorting comparator is implemented. If you don't use Double.compare, then probably it doesn't matter.
Note that, except for these special cases with 0.0/-0.0/NaN, comparison of double numbers is wired into the CPU and really fast, so you are unlikely to see any significant comparison overhead compared to the other code you already have.
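If you want to see those special cases for yourself, here is a quick sketch:

System.out.println(Double.compare(0.0, -0.0));              // 1: distinguished by compare
System.out.println(0.0 == -0.0);                            // true under primitive ==
System.out.println(Double.compare(Double.NaN, Double.NaN)); // 0: equal under compare
System.out.println(Double.NaN == Double.NaN);               // false under primitive ==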
Not sure how this would fit in, but have you considered writing your own?
It just seems a little concerning that you are looking for an object with specific performance characteristics that are unlikely to appear consistently in a general implementation. Even if you found a perfect candidate by experiment, or even from the source code, you could not guarantee the contract.
static class ConstDouble extends Number implements Comparable<Number> {

    private final Double d;
    private final int intValue;
    private final long longValue;
    private final float floatValue;

    public ConstDouble(Double d) {
        this.d = d;
        this.intValue = d.intValue();
        this.longValue = d.longValue();
        this.floatValue = d.floatValue();
    }

    public ConstDouble(long i) {
        this((double) i);
    }

    // Implement Number
    @Override
    public int intValue() {
        return intValue;
    }

    @Override
    public long longValue() {
        return longValue;
    }

    @Override
    public float floatValue() {
        return floatValue;
    }

    @Override
    public double doubleValue() {
        return d;
    }

    // Implement Comparable<Number> fast.
    @Override
    public int compareTo(Number o) {
        // Core requirement - comparing with myself will always be fastest.
        if (o == this) {
            return 0;
        }
        return Double.compare(d, o.doubleValue());
    }
}

// Special constant to use appropriately.
public static final ConstDouble ZERO = new ConstDouble(0);

public void test() {
    // Will use ordinary compare.
    int d1 = new ConstDouble(0).compareTo(new Double(0));
    // Will use fast compare.
    int d2 = ZERO.compareTo(new Double(0));
    // Guaranteed to return 0 in the shortest time.
    int d3 = ZERO.compareTo(ZERO);
}
Obviously you would need to use Comparable<Number> rather than Double in your collections, but that may not be a bad thing. You could probably craft a mechanism to ensure that the fast-track compare is always used in preference (it depends on your usage).
I have updated a Java application to Java 8. The application heavily relies on HashMaps.
When I run the benchmarks, I see unpredictable behavior. For some inputs, the application runs faster than before, but for larger inputs, it's consistently slower.
I've checked the profiler, and the most time-consuming operation is HashMap.get. I suspect the changes are due to the HashMap modification in Java 8, but that may not be true, as I have changed some other parts as well.
Is there an easy way to hook the original Java 7 HashMap into my Java 8 application, so that I only change the HashMap implementation, to see if I still observe the change in performance?
The following is a minimal program that tries to simulate what my application is doing.
The basic idea is that I need to share nodes in the application. At some point at runtime, a node should be retrieved, or created if it does not already exist, based on some integer properties. The following uses only two integers, but in the real application I have one-, two-, and three-integer keys.
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class Test1 {

    static int max_k1 = 500;
    static int max_k2 = 500;
    static Map<Node, Node> map;
    static Random random = new Random();

    public static void main(String[] args) {
        for (int i = 0; i < 15; i++) {
            long start = System.nanoTime();
            run();
            long end = System.nanoTime();
            System.out.println((end - start) / 1000_000);
        }
    }

    private static void run() {
        map = new HashMap<>();
        for (int i = 0; i < 10_000_000; i++) {
            Node key = new Node(random.nextInt(max_k1), random.nextInt(max_k2));
            Node val = getOrElseUpdate(key);
        }
    }

    private static Node getOrElseUpdate(Node key) {
        Node val;
        if ((val = map.get(key)) == null) {
            val = key;
            map.put(key, val);
        }
        return val;
    }

    private static class Node {

        private int k1;
        private int k2;

        public Node(int k1, int k2) {
            this.k1 = k1;
            this.k2 = k2;
        }

        @Override
        public int hashCode() {
            int result = 17;
            result = 31 * result + k1;
            result = 31 * result + k2;
            return result;
        }

        @Override
        public boolean equals(Object obj) {
            if (this == obj)
                return true;
            if (!(obj instanceof Node))
                return false;
            Node other = (Node) obj;
            return k1 == other.k1 && k2 == other.k2;
        }
    }
}
The benchmarking is primitive, but still, this is the result of 15 runs on Java 8:
8143
7919
7984
7973
7948
7984
7931
7992
8038
7975
7924
7995
6903
7758
7627
and this is for Java 7:
7247
6955
6510
6514
6577
6489
6510
6570
6497
6482
6540
6462
6514
4603
6270
The benchmarking is primitive, so I would appreciate it if someone familiar with JMH or another benchmarking tool could run it; but from what I observe, the results are better on Java 7. Any ideas?
Your hashCode() is very poor. In the example you posted you have 250,000 unique values but only 15,969 unique hash codes. Because of the large number of collisions, Java 8 swaps lists for trees. In your case this only adds overhead, because many elements not only have the same position in the hash table but also the same hash code, so the tree ends up as a linked list anyway.
There are a couple of ways to fix this (a sketch combining two of them follows the list):
Improve your hashCode. return k1 * 500 + k2; resolves the issue.
Use THashMap. Open addressing should work better in the case of collisions.
Make Node implement Comparable. This will be used by HashMap to construct a balanced tree in case of collisions.
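Here is a minimal sketch combining the first and third fixes, under the assumption (true in your test) that k2 stays below 500, so k1 * 500 + k2 is collision-free:

private static class Node implements Comparable<Node> {

    private final int k1;
    private final int k2;

    Node(int k1, int k2) {
        this.k1 = k1;
        this.k2 = k2;
    }

    @Override
    public int hashCode() {
        return k1 * 500 + k2; // unique while 0 <= k2 < 500
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (!(obj instanceof Node))
            return false;
        Node other = (Node) obj;
        return k1 == other.k1 && k2 == other.k2;
    }

    @Override
    public int compareTo(Node o) {
        int c = Integer.compare(k1, o.k1);
        return c != 0 ? c : Integer.compare(k2, o.k2);
    }
}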
I am doing calculations with BigIntegers that use a loop that calls multiply() about 100 billion times, and the new-object creation that BigInteger forces is making it very slow. I was hoping somebody had written or found a MutableBigInteger class. I found the MutableBigInteger in the java.math package, but it is private, and when I copy the code into a new class, many errors come up, most of which I don't know how to fix.
What implementations exist of a Java class like MutableBigInteger that allows modifying the value in place?
Is there any particular reason you cannot use reflection to gain access to the class?
I was able to do so without any problems, here is the code:
public static void main(String[] args) throws Exception {
    Constructor<?> constructor = Class.forName("java.math.MutableBigInteger").getDeclaredConstructor(int.class);
    constructor.setAccessible(true);
    Object x = constructor.newInstance(new Integer(17));
    Object y = constructor.newInstance(new Integer(19));
    Constructor<?> constructor2 = Class.forName("java.math.MutableBigInteger").getDeclaredConstructor(x.getClass());
    constructor2.setAccessible(true);
    Object z = constructor.newInstance(new Integer(0));
    Object w = constructor.newInstance(new Integer(0));
    Method m = x.getClass().getDeclaredMethod("multiply", new Class[] { x.getClass(), x.getClass() });
    Method m2 = x.getClass().getDeclaredMethod("mul", new Class[] { int.class, x.getClass() });
    m.setAccessible(true);
    m2.setAccessible(true);

    // Slightly faster than BigInteger
    for (int i = 0; i < 200000; i++) {
        m.invoke(x, y, z);
        w = z;
        z = x;
        x = w;
    }

    // Significantly faster than BigInteger and the above loop
    for (int i = 0; i < 200000; i++) {
        m2.invoke(x, 19, x);
    }

    BigInteger n17 = new BigInteger("17");
    BigInteger n19 = new BigInteger("19");
    BigInteger bigX = n17;

    // Slowest
    for (int i = 0; i < 200000; i++) {
        bigX = bigX.multiply(n19);
    }
}
Edit:
I decided to play around with it a bit more, and it appears that java.math.MutableBigInteger doesn't behave exactly as you would expect.
It operates differently when you multiply, and it will throw a nice exception when it has to increase the size of the internal array while assigning to itself, which I guess is fairly expected. Instead, I have to swap the objects around so that the result is always placed into a different MutableBigInteger. After a couple thousand calculations the overhead from reflection becomes negligible, and MutableBigInteger pulls ahead, offering increasingly better performance as the number of operations increases. If you use the mul function with an integer primitive as the multiplier, MutableBigInteger runs almost 10 times faster than BigInteger. I guess it really boils down to what value you need to multiply by. Either way, if you ran this calculation "100 billion times" using reflection with MutableBigInteger, it would run faster than BigInteger, because there would be less memory allocation and the reflective operations would be cached, removing most of the reflection overhead.
JScience has a class called LargeInteger, which is also immutable, but which they claim has significantly improved performance compared to BigInteger.
http://jscience.org/
APFloat's Apint might be worth checking out too. http://www.apfloat.org/apfloat_java/
I copied MutableBigInteger, then commented out the bodies of some methods I didn't need, adding a nice
throw new UnsupportedOperationException("...");
when invoked.
Here is how it looks.
In the revisions you can see what changed from the original java.math.MutableBigInteger.
I also added some convenience methods:

public void init(long val) {}
public MutableBigInteger(long val) {}
// To save the previous value before modifying.
public void addAndBackup(MutableBigInteger addend) {}
// To restore the previous value after modifying.
public void restoreBackup() {}
Here is how I used it:
private BigInteger traverseToFactor(BigInteger offset, BigInteger toFactorize, boolean forward) {
    MutableBigInteger mbiOffset = new MutableBigInteger(offset);
    MutableBigInteger mbiToFactorize = new MutableBigInteger(toFactorize);
    MutableBigInteger blockSize = new MutableBigInteger(list.size);
    if (! MutableBigInteger.ZERO.equals(mbiOffset.remainder(blockSize))) {
        throw new ArithmeticException("Offset not multiple of blockSize");
    }
    LongBigArrayBigList pattern = (LongBigArrayBigList) list.getPattern();
    while (true) {
        MutableBigInteger divisor = new MutableBigInteger(mbiOffset);
        for (long i = 0; i < pattern.size64(); i++) {
            long testOperand = pattern.getLong(i);
            MutableBigInteger.UNSAFE_AUX_VALUE.init(testOperand);
            divisor.addAndBackup(MutableBigInteger.UNSAFE_AUX_VALUE);
            if (MutableBigInteger.ZERO.equals(mbiToFactorize.remainder(divisor))) {
                return divisor.toBigInteger();
            }
            divisor.restoreBackup();
        }
        if (forward) {
            mbiOffset.add(blockSize);
        } else {
            mbiOffset.subtract(blockSize);
        }
        System.out.println(mbiOffset);
    }
}
What's the most idiomatic way in Java to verify that a cast from long to int does not lose any information?
This is my current implementation:
public static int safeLongToInt(long l) {
    int i = (int) l;
    if ((long) i != l) {
        throw new IllegalArgumentException(l + " cannot be cast to int without changing its value.");
    }
    return i;
}
A method was added in Java 8:
import static java.lang.Math.toIntExact;
long foo = 10L;
int bar = toIntExact(foo);
Will throw an ArithmeticException in case of overflow.
See: Math.toIntExact(long)
Several other overflow-safe methods have been added in Java 8. Their names end with Exact.
Examples (a short demo follows the list):
Math.incrementExact(long)
Math.subtractExact(long, long)
Math.decrementExact(long)
Math.negateExact(long)
Math.subtractExact(int, int)
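For instance, a small sketch of the flavor of these methods (all in java.lang.Math as of Java 8):

int ok   = Math.toIntExact(10L);                   // 10
int sum  = Math.addExact(1, 2);                    // 3
int boom = Math.incrementExact(Integer.MAX_VALUE); // throws ArithmeticException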
I think I'd do it as simply as:
public static int safeLongToInt(long l) {
    if (l < Integer.MIN_VALUE || l > Integer.MAX_VALUE) {
        throw new IllegalArgumentException(
            l + " cannot be cast to int without changing its value.");
    }
    return (int) l;
}
I think that expresses the intent more clearly than the repeated casting... but it's somewhat subjective.
Note of potential interest - in C# it would just be:
return checked ((int) l);
With Google Guava's Ints class, your method can be changed to:
public static int safeLongToInt(long l) {
return Ints.checkedCast(l);
}
From the linked docs:
checkedCast
public static int checkedCast(long value)
Returns the int value that is equal to value, if possible.
Parameters:
value - any value in the range of the int type
Returns:
the int value that equals value
Throws:
IllegalArgumentException - if value is greater than Integer.MAX_VALUE or less than Integer.MIN_VALUE
Incidentally, you don't need the safeLongToInt wrapper, unless you want to leave it in place for changing out the functionality without extensive refactoring of course.
With BigDecimal:
long aLong = ...;
int anInt = new BigDecimal(aLong).intValueExact(); // throws ArithmeticException if outside int bounds
Here is a solution for the case where you don't care about the value if it is bigger than needed ;)

public static int safeLongToInt(long l) {
    return (int) Math.max(Math.min(Integer.MAX_VALUE, l), Integer.MIN_VALUE);
}
DON'T: This is not a solution!
My first approach was:

public int longToInt(long theLongOne) {
    return Long.valueOf(theLongOne).intValue();
}

But that merely casts the long to an int, potentially creating new Long instances or retrieving them from the Long pool along the way.
The drawbacks
Long.valueOf creates a new Long instance if the number is not within Long's pool range [-128, 127].
The intValue implementation does nothing more than:
return (int)value;
So this can be considered even worse than just casting the long to int.
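You can see the pooling behavior directly; the cache is only guaranteed for values in [-128, 127], and outside it you typically get fresh instances:

System.out.println(Long.valueOf(127L) == Long.valueOf(127L)); // true: both pooled
System.out.println(Long.valueOf(128L) == Long.valueOf(128L)); // false in practice: outside the cache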
I claim that the obvious way to see whether casting a value changed the value would be to cast and check the result. I would, however, remove the unnecessary cast when comparing. I'm also not too keen on one-letter variable names (except x and y, but not when they mean row and column, sometimes respectively).
public static int intValue(long value) {
    int valueInt = (int) value;
    if (valueInt != value) {
        throw new IllegalArgumentException(
            "The long value " + value + " is not within range of the int type"
        );
    }
    return valueInt;
}
However, really I would want to avoid this conversion if at all possible. Obviously sometimes it's not possible, but in those cases IllegalArgumentException is almost certainly the wrong exception to be throwing as far as client code is concerned.
Java integer types are represented as signed. With an input between 2^31 and 2^32 (or -2^31 and -2^32) the cast would succeed but your test would fail.
What to check is whether all of the high bits of the long are the same:
public static final long LONG_HIGH_BITS = 0xFFFFFFFF80000000L;

public static int safeLongToInt(long l) {
    if ((l & LONG_HIGH_BITS) == 0 || (l & LONG_HIGH_BITS) == LONG_HIGH_BITS) {
        return (int) l;
    } else {
        throw new IllegalArgumentException("...");
    }
}
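For example, -1L has all 64 bits set, so l & LONG_HIGH_BITS == LONG_HIGH_BITS and the method returns (int) -1, which is safe; 0x80000000L (that is, 2^31) sets only bit 31, so the masked value matches neither case and the method throws, which is right because 2^31 does not fit in an int.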
(int) (longType + 0)
but the long must not exceed the maximum int value :)
One other solution can be:

public int longToInt(Long longVariable) {
    try {
        return Integer.valueOf(longVariable.toString());
    } catch (NumberFormatException e) {
        e.printStackTrace(); // log it however you prefer
        throw e;
    }
}

I have tried this for cases where the client is doing a POST and the server DB understands only Integers, while the client has a Long.