What am I trying to do
I have a huge text file of size 8.5 GB containing 3 million lines in the format of a word, followed by 300 numbers, like this:
word 0.056646 -0.0256464 0.05246 (and so on)
The 300 numbers behind the word form a vector that represents the word. Given 3 words, I must find the vector that most closely represents the 4th word, using an analogy model (I'm using addition, multiplication and direction).
For addition, it would look like this:
Say you have the word vectors a, b and c; I then compute c - a + b. I iterate through all 3 million lines and use cosine similarity to find the fourth word d by looking for the maximum result. So it looks like this: d = argmax(cos(d', c - a + b)), where d' stands for the word at the current line.
What is the problem
The example stated above represents one query. I have to perform a total of 20000 queries. And I'm not just performing it for the addition analogy model, but for multiplication and direction as well. When I run my program, it's still trying to calculate the 4th word for the first analogy model (addition) for the first query, after a total of 30 seconds! I'm in dire need of optimizations in my program.
First, I do a simple iteration over the 3 million lines (3 times) to find the vectors for the words a, b and c. Using System.nanoTime() I learned that finding each of these vectors takes about 1.5 milliseconds, so about 5 milliseconds to find all 3.
Next, I do the calculations between vectors, using classes I wrote myself (I could not find any standard API that handles vector calculations):
public class VectorCalculation {
public static List<Double> plus(List<Double> v1, List<Double> v2){
return operation(new Plus(), v1, v2);
}
public static List<Double> minus(List<Double> v1, List<Double> v2){
return operation(new Minus(), v1, v2);
}
public static List<Double> operation(Operator op, List<Double> v1, List<Double> v2){
if(v1.size() != v2.size()) throw new IllegalArgumentException("The dimension of the given lists are not the same.");
List<Double> resultVector = new ArrayList<Double>();
for(int i = 0; i < v1.size(); i++){
resultVector.add(op.calculate(v1.get(i), v2.get(i)));
}
return resultVector;
}
}
public interface Operator {
public Double calculate(Double e1, Double e2);
}
public class Plus implements Operator {
@Override
public Double calculate(Double e1, Double e2) {
return e1+e2;
}
}
public class Minus implements Operator {
@Override
public Double calculate(Double e1, Double e2) {
return e1-e2;
}
}
The calculation of the vector is here:
public class Addition extends AnalogyModel {
@Override
double calculateWordVector(List<Double> a, List<Double> b, List<Double> c, List<Double> d) {
//long startTime1 = System.nanoTime();
List<Double> result = VectorCalculation.plus(VectorCalculation.minus(c, a), b);
//long endTime1 = System.nanoTime() - startTime1;
double result2 = cosineSimilarity(d, result);
//long endTime2 = System.nanoTime() - startTime1;
//System.out.println(endTime1 + " | " + endTime2);
return result2;
}
Double cosineSimilarity(List<Double> v1, List<Double> v2){
if(v1.size() != v2.size()) throw new IllegalArgumentException("Vector dimensions are not the same.");
// find the dividend
Double dividend = dotProduct(v1, v2);
// find the divisor
Double divisor = Math.sqrt(dotProduct(v1, v1)) * Math.sqrt(dotProduct(v2, v2)); // the divisor is the product of the vector norms
if(divisor == 0) divisor = 0.0001; // safety net against dividing by 0.
return dividend/divisor;
}
/**
* @return Returns the dot product of two vectors.
*/
Double dotProduct(List<Double> v1, List<Double> v2){
Double result = 0.0;
for(int i = 0; i < v1.size(); i++){
result += v1.get(i)*v2.get(i);
}
return result;
}
}
The time it takes to calculate result starts out high (about 0.1 milliseconds) but soon drops to about 0.025 milliseconds. The time it takes to calculate result2 is usually modest as well, around 0.005 milliseconds. d' is found by iterating through the 3 million lines and saving the vector list; that operation takes about 0.06 milliseconds per line.
To summarize: the estimated time to finish one query, for one analogy model, is 5 + 3000000*(0.025 + 0.005 + 0.06) = 270005 milliseconds, or 270 seconds, or 4.5 minutes for ONE query... Considering I need to do this two more times for the other analogy models, and 20000 times in total, this is clearly not fast enough.
The words in the text file are not ordered. It seems like the vector computation is too heavy, but the time it takes to find the vector of a word in the text file must be shortened as well. Would it help if the text file were split up in smaller ones?
Update - code for reading the file
/**
* @param vocabularyPath The path of the vector text file.
* @param word The word to find the vector for.
* @return Returns the vector of the given word as an array list.
*/
List<Double> getStringVector(String vocabularyPath, String word) throws IOException{
BufferedReader br = new BufferedReader(new FileReader(vocabularyPath));
String input = br.readLine();
boolean found = false;
while(!found && input != null){
if(input.startsWith(word + " ")) found = true; // match the whole word, not any substring
else input = br.readLine();
}
br.close();
if(input == null) return null;
else return getVector(input);
}
/**
* @param inputLine A line from the vector text file.
* @return Returns the vector of the given line as an array list.
*/
List<Double> getVector(String inputLine){
String[] splitString = inputLine.split("\\s+");
List<String> stringList = new ArrayList<>(Arrays.asList(splitString));
stringList.remove(0); // remove the word at the front
stringList.remove(stringList.size()-1); // remove the empty string at the end
List<Double> vectorList = new ArrayList<>();
for(String s : stringList){
vectorList.add(Double.parseDouble(s));
}
return vectorList;
}
There are two obvious problems: List<Double> and Operator.
The first means that instead of using 8 bytes per double (by the way, a float would most probably do), you need more than twice as much (an object containing the value, plus a reference to it). What's worse: you lose spatial locality, as your numbers may be anywhere in memory.
The second means that you perform N virtual calls for each dot product. This may not be a problem right now, but when you switch between operators, it may slow you down a lot.
Recommendation
I guess all your vectors have the same length, so use a double[]. You save tons of memory and get a nice speedup.
Rewrite your operation to something like
public static void operationTo(double[] result, Operator op, double[] v1, double[] v2){
int length = result.length;
if(v1.length != length || v2.length != length) {
throw new IllegalArgumentException("The dimension of the given lists are not the same.");
}
switch (op) { // use an enum
case PLUS:
for(int i = 0; i < length; i++) {
result[i] = v1[i] + v2[i];
}
break;
...
}
}
Word lookup
The fastest way is a HashMap<String, double[]>, assuming it all fits into memory. Otherwise, a database (as already suggested) could be the way to go. A sorted file with binary search would do as well. However, note that any solution other than a Map is 10+ times slower.
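For illustration, a minimal sketch of building that map from the question's file format (one word followed by 300 numbers per line; the method name and initial capacity are my assumptions, and using float[] instead of double[] would roughly halve the footprint, per the float remark above):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
static Map<String, double[]> loadVectors(String vocabularyPath) throws IOException {
    Map<String, double[]> map = new HashMap<>(4_000_000); // ~3M entries, sized up front to avoid rehashing
    try (BufferedReader br = new BufferedReader(new FileReader(vocabularyPath))) {
        String line;
        while ((line = br.readLine()) != null) {
            String[] parts = line.trim().split("\\s+");
            double[] vector = new double[parts.length - 1];
            for (int i = 1; i < parts.length; i++) {
                vector[i - 1] = Double.parseDouble(parts[i]); // the numbers follow the word
            }
            map.put(parts[0], vector);
        }
    }
    return map;
}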
Word lookup in case memory is tight
You have only 3M words, which fits into memory nicely. Place them into an ArrayList and sort it. Write the vectors into a binary file ordered by the words. Now, to find a vector, all you need to do is
int index = Collections.binarySearch(wordList, word);
randomAccessFile.seek((long) index * vectorLength * Double.SIZE / Byte.SIZE);
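Fleshed out into a hedged sketch (the raw-double file layout and all names here are assumptions; the vectors are expected to have been written in the same sorted order as the word list):
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Collections;
import java.util.List;
static double[] lookup(List<String> sortedWords, RandomAccessFile file, String word, int vectorLength) throws IOException {
    int index = Collections.binarySearch(sortedWords, word);
    if (index < 0) return null; // word not in the vocabulary
    file.seek((long) index * vectorLength * Double.SIZE / Byte.SIZE);
    double[] vector = new double[vectorLength];
    for (int i = 0; i < vectorLength; i++) {
        vector[i] = file.readDouble(); // stored as raw big-endian doubles
    }
    return vector;
}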
So you are trying to answer 20000 nearest-neighbor searches over a set of 3 million points in a 300-dimensional space?
Iterating over the entire dataset for each query is bound to be rather slow. You'll probably get the biggest speedup by inserting the dataset into a data structure that can answer nearest neighbor queries efficiently, such as a Ball Tree.
Related
Sometimes when you do calculations with very small probabilities using common data types such as doubles, numerical inaccuracies cascade over multiple calculations and lead to incorrect results. Because of this it is recommended to use log probabilities, which improve numerical stability. I have implemented log probabilities in Java and my implementation works, but it has worse numerical stability than using raw doubles. What is wrong with my implementation? What is an accurate and efficient way to perform many consecutive calculations with small probabilities in Java?
I'm unable to provide a neatly contained demonstration of this problem because the inaccuracies cascade over many calculations. However, here is proof that a problem exists: this submission to a CodeForces contest fails due to numerical accuracy. Running test #7 and adding debug prints clearly show that from day 1774, numerical errors begin cascading until the sum of probabilities drops to 0 (when it should be 1). After replacing my Prob class with a simple wrapper over doubles the exact same solution passes tests.
My implementation of multiplying probabilities:
a * b = Math.log(a) + Math.log(b)
My implementation of addition:
a + b = Math.log(a) + Math.log(1 + Math.exp(Math.log(b) - Math.log(a)))
The stability problem is most likely contained within those 2 lines, but here is my entire implementation:
class Prob {
/** Math explained: https://en.wikipedia.org/wiki/Log_probability
* Quick start:
* - Instantiate probabilities, eg. Prob a = new Prob(0.75)
* - add(), multiply() return new objects, can perform on nulls & NaNs.
* - get() returns probability as a readable double */
/** Logarithmized probability. Note: 0% represented by logP NaN. */
private double logP;
/** Construct instance with real probability. */
public Prob(double real) {
if (real > 0) this.logP = Math.log(real);
else this.logP = Double.NaN;
}
/** Construct instance with already logarithmized value. */
static boolean dontLogAgain = true;
public Prob(double logP, boolean anyBooleanHereToChooseThisConstructor) {
this.logP = logP;
}
/** Returns real probability as a double. */
public double get() {
return Math.exp(logP);
}
@Override
public String toString() {
return ""+get();
}
/***************** STATIC METHODS BELOW ********************/
/** Note: returns NaN only when a && b are both NaN/null. */
public static Prob add(Prob a, Prob b) {
if (nullOrNaN(a) && nullOrNaN(b)) return new Prob(Double.NaN, dontLogAgain);
if (nullOrNaN(a)) return copy(b);
if (nullOrNaN(b)) return copy(a);
double x = a.logP;
double y = b.logP;
double sum = x + Math.log(1 + Math.exp(y - x));
return new Prob(sum, dontLogAgain);
}
/** Note: multiplying by null or NaN produces NaN (repping 0% real prob). */
public static Prob multiply(Prob a, Prob b) {
if (nullOrNaN(a) || nullOrNaN(b)) return new Prob(Double.NaN, dontLogAgain);
return new Prob(a.logP + b.logP, dontLogAgain);
}
/** Returns true if p is null or NaN. */
private static boolean nullOrNaN(Prob p) {
return (p == null || Double.isNaN(p.logP));
}
/** Returns a new instance with the same value as original. */
private static Prob copy(Prob original) {
return new Prob(original.logP, dontLogAgain);
}
}
The problem was caused by the way Math.exp(z) was used in this line:
a + b = Math.log(a) + Math.log(1 + Math.exp(Math.log(b) - Math.log(a)))
When z reaches extreme values, the numerical range of double is not enough for the output of Math.exp(z). This loses information and produces an inaccurate result, and these errors then cascade over multiple calculations.
When z >= 710 then Math.exp(z) = Infinity
When z <= -746 then Math.exp(z) = 0
In the original code I was calling Math.exp with y - x and arbitrarily choosing which is x and which is y. Let's instead choose x and y based on which is larger, so that z is negative rather than positive. The point where we run out of range is further out on the negative side (-746 rather than 710), and more importantly, when we do go past it, we end up at 0 rather than infinity, which is what we want for a low probability.
double x = Math.max(a.logP, b.logP);
double y = Math.min(a.logP, b.logP);
double sum = x + Math.log(1 + Math.exp(y - x));
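In context, the whole fixed add looks like this (a sketch using the names from the Prob class above; Math.log1p is an extra refinement on my part, being slightly more accurate than log(1 + z) for small z):
public static Prob add(Prob a, Prob b) {
    if (nullOrNaN(a) && nullOrNaN(b)) return new Prob(Double.NaN, dontLogAgain);
    if (nullOrNaN(a)) return copy(b);
    if (nullOrNaN(b)) return copy(a);
    double x = Math.max(a.logP, b.logP); // keep the argument to exp() non-positive
    double y = Math.min(a.logP, b.logP);
    double sum = x + Math.log1p(Math.exp(y - x));
    return new Prob(sum, dontLogAgain);
}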
I have an array of operations and a target number.
The operations could be
+ 3
- 3
* 4
/ 2
I want to find out how close I can get to the target number by using those operations.
I start from 0 and I need to iterate through the operations in that order, and I can choose to either use the operation or not use it.
So if the target number is 13, I can use + 3 and * 4 to get 12 which is the closest I can get to the target number 13.
I guess I need to compute all possible combinations (I guess the number of calculations is thus 2^n where n is the number of operations).
I have tried to do this in java with
import java.util.*;
public class Instruction {
public static void main(String[] args) {
// create scanner
Scanner sc = new Scanner(System.in);
// number of instructions
int N = sc.nextInt();
// target number
int K = sc.nextInt();
//
String[] instructions = new String[N];
// N instructions follow
for (int i=0; i<N; i++) {
//
instructions[i] = sc.nextLine();
}
//
System.out.println(search(instructions, 0, N, 0, K, 0, K));
}
public static int search(String[] instructions, int index, int length, int progressSoFar, int targetNumber, int bestTarget, int bestDistance) {
//
for (int i=index; i<length; i++) {
// get operator
char operator = instructions[i].charAt(0);
// get number
int number = Integer.parseInt(instructions[i].split("\\s+")[1]);
//
if (operator == '+') {
progressSoFar += number;
} else if (operator == '*') {
progressSoFar *= number;
} else if (operator == '-') {
progressSoFar -= number;
} else if (operator == '/') {
progressSoFar /= number;
}
//
int distance = Math.abs(targetNumber - progressSoFar);
// if the absolute distance between progress so far
// and the target number is less than what we have
// previously accomplished, we update best distance
if (distance < bestDistance) {
bestTarget = progressSoFar;
bestDistance = distance;
}
//
if (true) {
return bestTarget;
} else {
return search(instructions, index + 1, length, progressSoFar, targetNumber, bestTarget, bestDistance);
}
}
}
}
It doesn't work yet, but I guess I'm a little closer to solving my problem. I just don't know how to end my recursion.
Or maybe I shouldn't use recursion at all, but should instead just list all combinations. I just don't know how to do that.
If I, for instance, have 3 operations and I want to compute all combinations, I get the 2^3 combinations
111
110
101
011
000
001
010
100
where 1 indicates that the operation is used and 0 indicates that it is not used.
It should be rather simple to do this and then choose which combination gave the best result (the number closest to the target number), but I don't know how to do this in java.
In pseudocode, you could try brute-force back-tracking, as in:
// ops: list of ops that have not yet been tried out
// target: goal result
// currentOps: list of ops used so far
// best: reference to the best result achieved so far (can be altered; use
// an int[1], for example)
// opsForBest: list of ops used to achieve best result so far
test(ops, target, currentOps, best, opsForBest)
if ops is now empty,
current = evaluate(currentOps)
if current is closer to target than best,
best = current
opsForBest = a copy of currentOps
otherwise,
// try including next op
with the next operator in ops,
test(opsAfterNext, target,
currentOps concatenated with next, best, opsForBest)
// try *not* including next op
test(opsAfterNext, target, currentOps, best, opsForBest)
This is guaranteed to find the best answer. However, it will repeat many operations again and again. You can save some time by avoiding repeated calculations, which can be achieved using a cache of "how does this subexpression evaluate". When you include the cache, you enter the realm of "dynamic programming" (= reusing earlier results in later computation).
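For concreteness, a hedged Java translation of the pseudocode above (the op string format "+ 3" and the start value 0 come from the question; the class and helper names are mine):
import java.util.ArrayList;
import java.util.List;
class BruteForce {
    int best = 0;                        // closest result found so far
    long bestDistance = Long.MAX_VALUE;
    List<String> opsForBest = new ArrayList<>();
    void test(List<String> ops, int index, int target, List<String> currentOps) {
        if (index == ops.size()) {       // every op decided: evaluate this subset
            int current = evaluate(currentOps);
            long distance = Math.abs((long) current - target);
            if (distance < bestDistance) {
                bestDistance = distance;
                best = current;
                opsForBest = new ArrayList<>(currentOps);
            }
            return;
        }
        currentOps.add(ops.get(index));  // try including the next op
        test(ops, index + 1, target, currentOps);
        currentOps.remove(currentOps.size() - 1);
        test(ops, index + 1, target, currentOps); // try *not* including it
    }
    static int evaluate(List<String> ops) {
        int value = 0;
        for (String op : ops) {
            int n = Integer.parseInt(op.substring(1).trim());
            switch (op.charAt(0)) {
                case '+': value += n; break;
                case '-': value -= n; break;
                case '*': value *= n; break;
                case '/': value /= n; break;
            }
        }
        return value;
    }
}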
Edit: adding a more OO-ish variant
Variant returning the best result, and avoiding the use of that best[] array-of-one. Requires the use of an auxiliary class Answer with fields ops and result.
// ops: list of ops that have not yet been tried out
// target: goal result
// currentOps: list of ops used so far
Answer test(ops, target, currentOps)
if ops is now empty,
return new Answer(currentOps, evaluate(currentOps))
otherwise,
// try including next op
with the next operator in ops,
Answer withOp = test(opsAfterNext, target,
currentOps concatenated with next)
// try *not* including next op
Answer withoutOp = test(opsAfterNext, target, currentOps)
if withOp.result closer to target than withoutOp.result,
return withOp
else
return withoutOp
Dynamic programming
If the target value is t, and there are n operations in the list, and the largest absolute value you can create by combining some subsequence of them is k, and the absolute value of the product of all values that appear as an operand of a division operation is d, then there's a simple O(dkn)-time and -space dynamic programming algorithm that determines whether it's possible to compute the value i using some subset of the first j operations and stores this answer (a single bit) in dp[i][j]:
dp[i][j] = dp[i][j-1] || dp[invOp(i, j)][j-1]
where invOp(i, j) computes the inverse of the jth operation on the value i. Note that if the jth operation is a multiplication by, say, x, and i is not divisible by x, then the operation is considered to have no inverse, and the term dp[invOp(i, j)][j-1] is deemed to evaluate to false. All other operations have unique inverses.
To avoid loss-of-precision problems with floating point code, first multiply the original target value t, as well as all operands to addition and subtraction operations, by d. This ensures that any division operation / x we encounter will only ever be applied to a value that is known to be divisible by x. We will essentially be working throughout with integer multiples of 1/d.
Because some operations (namely subtractions and divisions) require solving subproblems for higher target values, we cannot in general calculate dp[i][j] in a bottom-up way. Instead we can use memoisation of the top-down recursion, starting at the (scaled) target value t*d and working outwards in steps of 1 in each direction.
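For illustration, a minimal Java sketch of that memoised top-down recursion (names are mine; the author's actual implementation is the C++ version below, and all values here are assumed pre-scaled by d so that every division encountered is exact):
import java.util.HashMap;
import java.util.Map;
class ReachDP {
    char[] opChar;   // operator of the j-th operation, e.g. '+', '*'
    long[] operand;  // its (pre-scaled) operand
    Map<String, Boolean> memo = new HashMap<>();
    // Can value i be produced using some subset of the first j operations?
    boolean canReach(long i, int j) {
        if (j == 0) return i == 0;        // we start from 0
        String key = i + ":" + j;
        Boolean cached = memo.get(key);
        if (cached != null) return cached;
        boolean res = canReach(i, j - 1); // skip operation j
        long x = operand[j - 1];
        switch (opChar[j - 1]) {          // or undo operation j (invOp)
            case '+': res = res || canReach(i - x, j - 1); break;
            case '-': res = res || canReach(i + x, j - 1); break;
            case '*': res = res || (i % x == 0 && canReach(i / x, j - 1)); break;
            case '/': res = res || canReach(i * x, j - 1); break;
        }
        memo.put(key, res);
        return res;
    }
}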
C++ implementation
I've implemented this in C++ at https://ideone.com/hU1Rpq. The "interesting" part is canReach(i, j); the functions preceding it are just plumbing to handle the memoisation table. Specify the inputs on stdin with the target value first, then a space-separated list of operations in which operators immediately precede their operand values, e.g.
10 +8 +11 /2
or
10 +4000 +5500 /1000
The second example, which should give the same answer (9.5) as the first, seems to be around the ideone (and my) memory limits, although this could be extended somewhat by using long long int instead of int and a 2-bit table for _m[][][] instead of wasting a full byte on each entry.
Exponential worst-case time and space complexity
Note that in general, dk or even just k by itself could be exponential in the size of the input: e.g. if there is an addition, followed by n-1 multiplication operations, each of which involves a number larger than 1. It's not too difficult to compute k exactly via a different DP that simply looks for the largest and smallest numbers reachable using the first i operations for all 1 <= i <= n, but all we really need is an upper bound, and it's easy enough to get a (somewhat loose) one: simply discard the signs of all multiplication operands, convert all - operations to + operations, and then perform all multiplication and addition operations (i.e., ignoring divisions).
There are other optimisations that could be applied, for example dividing through by any common factor.
Here's a Java 8 example, using memoization. I wonder if annealing can be applied...
public class Tester {
public static interface Operation {
public int doOperation(int cur);
}
static Operation ops[] = { // lambdas for the operations
(x -> x + 3),
(x -> x - 3),
(x -> x * 4),
(x -> x / 2),
};
private static int getTarget(){
return 2;
}
public static void main (String args[]){
int map[];
int val = 0;
int MAX_BITMASK = (1 << ops.length) - 1;//means ops.length < 31 [int overflow]
map = new int[MAX_BITMASK + 1]; // one slot per bitmask, including the all-ops mask
map[0] = val;
final int target = getTarget();// To get rid of dead code warning
int closest = val, delta = target < 0? -target: target;
int bestSeq = 0;
if (0 == target) {
System.out.println("Winning sequence: Do nothing");
}
int lastBitMask = 0, opIndex = 0;
int i = 0;
for (i = 1; i <= MAX_BITMASK; i++){// brute force algo
val = map[i & lastBitMask]; // get prev memoized value
val = ops[opIndex].doOperation(val); // compute
map[i] = val; //add new memo
//the rest just logic to find the closest
// except the last part
int d = val - target;
d = d < 0? -d: d;
if (d < delta) {
bestSeq = i;
closest = val;
delta = d;
}
if (val == target){ // no point to continue
break;
}
//advance memo mask 0b001 to 0b011 to 0b111, etc.
// as well as the computing operation.
if ((i & (i + 1)) == 0){ // check for 2^n -1
lastBitMask = (lastBitMask << 1) + 1;
opIndex++;
}
}
System.out.println("Winning sequence: " + bestSeq);
System.out.println("Closest to \'" + target + "\' is: " + closest);
}
}
Worth noting, the "winning sequence" is the bit representation (displayed as decimal) of what was used and what wasn't, as the OP has done in the question.
For those of you coming from Java 7, this is what I was referencing for lambdas: Lambda Expressions in GUI Applications. So if you're constrained to Java 7, you can still make this work quite easily.
I have been thinking about it but have run out of ideas. I have 10 arrays, each of length 18, holding 18 double values. These 18 values are features of an image. Now I have to apply k-means clustering on them.
For implementing k-means clustering I need a unique computational value for each array. Is there any mathematical, statistical or other logic that would help me create a computational value for each array that is unique to it, based on the values inside it? Thanks in advance.
Here is an example array. I have 10 more:
[0.07518284315321135
0.002987851573676068
0.002963866526639678
0.002526139418225552
0.07444872939213325
0.0037219653347541617
0.0036979802877177715
0.0017920256571474585
0.07499695903867931
0.003477831820276616
0.003477831820276616
0.002036159171625004
0.07383539747505984
0.004311312204791184
0.0043352972518275745
0.0011786937400740452
0.07353130134299131
0.004339580295941216]
Did you check Arrays.hashCode in Java 7?
/**
* Returns a hash code based on the contents of the specified array.
* For any two <tt>double</tt> arrays <tt>a</tt> and <tt>b</tt>
* such that <tt>Arrays.equals(a, b)</tt>, it is also the case that
* <tt>Arrays.hashCode(a) == Arrays.hashCode(b)</tt>.
*
* <p>The value returned by this method is the same value that would be
* obtained by invoking the {@link List#hashCode() <tt>hashCode</tt>}
* method on a {@link List} containing a sequence of {@link Double}
* instances representing the elements of <tt>a</tt> in the same order.
* If <tt>a</tt> is <tt>null</tt>, this method returns 0.
*
* @param a the array whose hash value to compute
* @return a content-based hash code for <tt>a</tt>
* @since 1.5
*/
public static int hashCode(double a[]) {
if (a == null)
return 0;
int result = 1;
for (double element : a) {
long bits = Double.doubleToLongBits(element);
result = 31 * result + (int)(bits ^ (bits >>> 32));
}
return result;
}
I don't understand why @Marco13 mentioned "this is not returning unique values for arrays".
UPDATE
See @Marco13's comment for the reason why it cannot be unique.
UPDATE
If we draw a graph of your input points (18 elements), there is one spike followed by 3 low values, and the pattern repeats.
If that is true, you can find the mean of your peaks (positions 1, 5, 9, 13, 17) and the mean of the remaining low values.
So you will have a peak mean and a low mean, and you can find a unique number to represent the two while preserving their values, using the bijective algorithm described here.
The algorithm also provides formulas to reverse the pairing, i.e. recover the peak and low means from the unique value.
To find the unique pair: <x, y> = x + (y + (x+1)/2)^2
Also refer to Exercise 1 on page 2 of the PDF to reverse x and y.
For finding the means and the pairing value:
public static double mean(double[] array){
double peakMean = 0;
double lowMean = 0;
for (int i = 0; i < array.length; i++) {
if (i % 4 == 0){ // peaks sit at 0-based positions 0, 4, 8, 12, 16
peakMean = peakMean + array[i];
}else{
lowMean = lowMean + array[i];
}
}
peakMean = peakMean / 5;
lowMean = lowMean / 13;
return bijective(lowMean, peakMean);
}
public static double bijective(double x,double y){
double tmp = ( y + ((x+1)/2));
return x + ( tmp * tmp);
}
For testing:
public static void main(String[] args) {
double[] arrays = {0.07518284315321135,0.002987851573676068,0.002963866526639678,0.002526139418225552,0.07444872939213325,0.0037219653347541617,0.0036979802877177715,0.0017920256571474585,0.07499695903867931,0.003477831820276616,0.003477831820276616,0.002036159171625004,0.07383539747505984,0.004311312204791184,0.0043352972518275745,0.0011786937400740452,0.07353130134299131,0.004339580295941216};
System.out.println(mean(arrays));
}
You can then use the peak and low values to find similar images.
You can simply sum the values using double precision; the resulting value will be unique most of the time. On the other hand, if the position of each value is relevant, you can apply a sum that uses the index as a multiplier.
The code could be as simple as:
public static double sum(double[] values) {
double val = 0.0;
for (double d : values) {
val += d;
}
return val;
}
public static double hash_w_order(double[] values) {
double val = 0.0;
for (int i = 0; i < values.length; i++) {
val += values[i] * (i + 1);
}
return val;
}
public static void main(String[] args) {
double[] myvals =
{ 0.07518284315321135, 0.002987851573676068, 0.002963866526639678, 0.002526139418225552, 0.07444872939213325, 0.0037219653347541617, 0.0036979802877177715, 0.0017920256571474585, 0.07499695903867931, 0.003477831820276616,
0.003477831820276616, 0.002036159171625004, 0.07383539747505984, 0.004311312204791184, 0.0043352972518275745, 0.0011786937400740452, 0.07353130134299131, 0.004339580295941216 };
System.out.println("Computed value based on sum: " + sum(myvals));
System.out.println("Computed value based on values and its position: " + hash_w_order(myvals));
}
The output for that code, using your list of values is:
Computed value based on sum: 0.41284176550504803
Computed value based on values and its position: 3.7396448842464496
Well, here's a method that works for any number of doubles.
public BigInteger uniqueID(double[] array) {
final BigInteger twoToTheSixtyFour = BigInteger.ONE.shiftLeft(64);
final BigInteger unsignedMask = twoToTheSixtyFour.subtract(BigInteger.ONE);
BigInteger count = BigInteger.ZERO;
for (double d : array) {
long bitRepresentation = Double.doubleToRawLongBits(d);
count = count.multiply(twoToTheSixtyFour);
// mask keeps the 64 bits unsigned, so each "digit" is always in [0, 2^64)
count = count.add(BigInteger.valueOf(bitRepresentation).and(unsignedMask));
}
return count;
}
Explanation
Each double is a 64-bit value, which means there are 2^64 different possible double values. Since a long is easier to work with for this sort of thing, and it's the same number of bits, we can get a 1-to-1 mapping from doubles to longs using Double.doubleToRawLongBits(double).
This is awesome, because now we can treat this like a simple combinations problem. You know how you know that 1234 is a unique number? There's no other number with the same value. This is because we can break it up by its digits like so:
1234 = 1 * 10^3 + 2 * 10^2 + 3 * 10^1 + 4 * 10^0
The powers of 10 would be "basis" elements of the base-10 numbering system, if you know linear algebra. In this way, base-10 numbers are like arrays consisting of only values from 0 to 9 inclusively.
If we want something similar for double arrays, we can discuss the base-(2^64) numbering system. Each double value would be a digit in a base-(2^64) representation of a value. If there are 18 digits, there are (2^64)^18 unique values for a double[] of length 18.
That number is gigantic, so we're going to need to represent it with a BigInteger data-structure instead of a primitive number. How big is that number?
(2^64)^18 = 61172327492847069472032393719205726809135813743440799050195397570919697796091958321786863938157971792315844506873509046544459008355036150650333616890210625686064472971480622053109783197015954399612052812141827922088117778074833698589048132156300022844899841969874763871624802603515651998113045708569927237462546233168834543264678118409417047146496
There are that many unique configurations of 18-length double arrays and this code lets you uniquely describe them.
I'm going to suggest three methods, with different pros and cons which I will outline.
Hash Code
This is the obvious "solution", though it has been correctly pointed out that it will not be unique. However, it will be very unlikely that any two arrays will have the same value.
Weighted Sum
Your elements appear to be bounded; perhaps they range from a minimum of 0 to a maximum of 1. If this is the case, you can multiply the first number by N^0, the second by N^1, the third by N^2 and so on, where N is some large number (ideally the inverse of your precision). This is easily implemented, particularly if you use a matrix package, and very fast. We can make this unique if we choose.
Euclidean Distance from Mean
Subtract the mean of your arrays from each array, square the results, sum the squares. If you have an expected mean, you can use that. Again, not unique, there will be collisions, but you (almost) can't avoid that.
The difficulty of uniqueness
It has already been explained that hashing will not give you a unique solution. A unique number is possible in theory, using the Weighted Sum, but we have to use numbers of a very large size. Let's say your numbers are 64 bits in memory. That means that there are 2^64 possible numbers they can represent (slightly less using floating point). Eighteen such numbers in an array could represent 2^(64*18) different numbers. That's huge. If you use anything less, you will not be able to guarantee uniqueness due to the pigeonhole principle.
Let's look at a trivial example. If you have four letters, a, b, c and d, and you have to number them each uniquely using the numbers 1 to 3, you can't. That's the pigeonhole principle. You have 2^(18*64) possible numbers. You can't number them uniquely with less than 2^(18*64) numbers, and hashing doesn't give you that.
If you use BigDecimal, you can represent (almost) arbitrarily large numbers. If the largest element you can get is 1 and the smallest 0, then you can set N = 1/(precision) and apply the Weighted Sum mentioned above. This will guarantee uniqueness. The precision for doubles in Java is Double.MIN_VALUE. Note that the array of weights needs to be stored as BigDecimals!
That satisfies this part of your question:
create a computational value for each array, which is unique to it
based upon values inside it
However, there is a problem:
1 and 2 suck for K Means
I am assuming from your discussion with @Marco13 that you are performing the clustering on the single values, not the length-18 arrays. As Marco has already mentioned, hashing sucks for k-means. The whole idea of hashing is that the smallest change in the data results in a large change in the hash value. That means that two similar images produce two very similar arrays, which produce two very different "unique" numbers. Similarity is not preserved. The result will be pseudo-random!
Weighted Sums are better, but still bad. They will basically ignore all the elements except the last one, unless the last elements are equal; only then will they look at the next-to-last, and so on. Similarity is not really preserved.
Euclidean distance from the mean (or at least some point) will at least group things together in a sort of sensible way. Direction will be ignored, but at least things that are far from the mean won't be grouped with things that are close. Similarity of one feature is preserved, the other features are lost.
In summary
1 is very easy, but is not unique and doesn't preserve similarity.
2 is easy, can be unique and doesn't preserve similarity.
3 is easy, but is not unique and preserves some similarity.
Implementation of Weighted Sum. Not really tested.
public class Array2UniqueID {
private final double min;
private final double max;
private final double prec;
private final int length;
/**
* Used to provide a {@code BigDecimal} that is unique to the given array.
* <p>
* This uses weighted sum to guarantee that two IDs match if and only if
* every element of the array also matches. Similarity is not preserved.
*
* @param min smallest value an array element can possibly take
* @param max largest value an array element can possibly take
* @param prec smallest difference possible between two array elements
* @param length length of each array
*/
public Array2UniqueID(double min, double max, double prec, int length) {
this.min = min;
this.max = max;
this.prec = prec;
this.length = length;
}
/**
* A convenience constructor which assumes the array consists of doubles of
* full range.
* <p>
* This will result in very large IDs being returned.
*
* @see Array2UniqueID#Array2UniqueID(double, double, double, int)
* @param length
*/
public Array2UniqueID(int length) {
this(-Double.MAX_VALUE, Double.MAX_VALUE, Double.MIN_VALUE, length);
}
public BigDecimal createUniqueID(double[] array) {
// Validate the data
if (array.length != length) {
throw new IllegalArgumentException("Array length must be "
+ length + " but was " + array.length);
}
for (double d : array) {
if (d < min || d > max) {
throw new IllegalArgumentException("Each element of the array"
+ " must be in the range [" + min + ", " + max + "]");
}
}
double range = max - min;
/* maxNums is the maximum number of numbers that could possibly exist
* between max and min.
* The ID will be in the range 0 to maxNums^length.
* maxNums = range / prec + 1
* Stored as a BigDecimal for convenience, but is an integer
*/
BigDecimal maxNums = BigDecimal.valueOf(range)
.divide(BigDecimal.valueOf(prec))
.add(BigDecimal.ONE);
BigDecimal id = BigDecimal.ZERO;
// id = sum over i of ((array[i] - min) / prec) * maxNums^i
for (int i = 0; i < array.length; i++) {
BigDecimal num = BigDecimal.valueOf(array[i] - min)
.divide(BigDecimal.valueOf(prec))
.multiply(maxNums.pow(i));
id = id.add(num);
}
return id;
}
}
As I understand it, you are going to do k-means clustering based on the double values.
Why not just wrap double value in an object, with array and position identifier, so you would know in which cluster it ended up?
Something like:
public class Element {
final public double value;
final public int array;
final public int position;
public Element(double value, int array, int position) {
this.value = value;
this.array = array;
this.position = position;
}
}
If you need to cluster each array as a whole,
you can transform the original arrays of length 18 into arrays of length 19, with the last or first element being a unique id that you ignore during clustering but can refer to after clustering has finished. This has a small memory footprint (8 additional bytes per array) and an easy association with the original values.
If space is absolutely a problem, and all values of an array are less than 1, you can add a unique id greater than or equal to 1 to each array's values and cluster based on the remainder of division by 1: 0.07518284315321135 stays 0.07518284315321135 for the 1st array, and 0.07518284315321135 becomes 1.07518284315321135 for the 2nd, although this increases the complexity of computation during clustering.
First of all, let's try to understand what you need mathematically:
Uniquely mapping an array of m real numbers to a single number is in fact a bijection between R^m and R, or at least N.
Since floating-point values are in fact rational numbers, your problem is to find a bijection between Q^m and N, which can be reduced to a bijection between N^m and N, because you know your values will always be greater than 0 (just multiply your values by the precision).
Thus you need to map N^m to N. Take a look at the Cantor Pairing Function for some ideas
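For example, a hedged sketch of the Cantor pairing function, folded left to right to reduce N^m to N (BigInteger because the result grows quickly; names are mine):
import java.math.BigInteger;
class CantorPairing {
    // pi(x, y) = (x + y)(x + y + 1)/2 + y, a bijection from N^2 to N
    static BigInteger pair(BigInteger x, BigInteger y) {
        BigInteger s = x.add(y);
        return s.multiply(s.add(BigInteger.ONE)).divide(BigInteger.valueOf(2)).add(y);
    }
    // Reduce an array to one number: pair(pair(v0, v1), v2), and so on
    static BigInteger pairAll(BigInteger[] values) {
        BigInteger acc = values[0];
        for (int i = 1; i < values.length; i++) {
            acc = pair(acc, values[i]);
        }
        return acc;
    }
}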
A guaranteed way to generate a unique result based on the array is to convert it to one big string, and use that for your computational value.
It may be slow, but it will be unique based on the array's values.
Implementation examples:
Best way to convert an ArrayList to a string
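A minimal sketch of the idea (Arrays.toString is one approach from that link; distinct arrays of the same length always produce distinct strings):
import java.util.Arrays;
class ArrayAsString {
    static String uniqueValue(double[] features) {
        return Arrays.toString(features); // e.g. "[0.07518284315321135, 0.002987851573676068, ...]"
    }
}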
I'm trying to create a partition function that accepts three parameters: a text string, a pattern string, and an integer k.
The goal is to store the contents of the pattern of length m in a string array of k+1 fragments, where each fragment is of length m/(k+1) (or as close as possible).
For instance, suppose the string "ABCDEFGHIJKLMNOPQRSTUVWXYZ" is passed to the method with k = 2.
The array should look something like this [ABCDEFGHI, JKLMNOPQ, RSTUVWXYZ]
The program runs fine when m/(k+1) divides evenly; however, when the division produces a remainder, the results are off. I've noticed that the errors seem to correspond with the remainder of m/(k+1).
This is the part of the code I'm having problems with:
public static String[] partition(String text, String pattern, int k) {
String[] fragment = new String[k+1];
int f = k+1;
int m = pattern.length();
int fragmentSize = (int)Math.floor(m/f);
int lastCharIndex;
// cannot partition evenly
int i = 0;
while(i < f) {
// set the first partition as the largest
if(fragment[i] == fragment[0]) {
fragmentSize = (int)Math.ceil(m/f);
lastCharIndex = i * fragmentSize;
fragment[i] = pattern.substring(lastCharIndex, lastCharIndex+fragmentSize);
}
else {
fragmentSize = (int)Math.floor(m/f);
lastCharIndex = i * fragmentSize;
fragment[i] = pattern.substring(lastCharIndex, lastCharIndex+fragmentSize);
}
i++;
}
return fragment;
}
Using the example above the output I’m currently receiving is [ABCDEFGHI, IJKLMNOP, QRSTUVWX]
I have a feeling it has something to do with the explicit cast of fragmentSize, but I can't figure out a way around it.
Any help would be much appreciated.
Your logic is incorrect. Let's say you have 26 letters, and want 3 fragments. That makes a first fragment of 9 elements, a second of 9 elements, and a last one of 8 elements.
Your logic makes each fragment of length 8 (floor(26 / 3)), except the first one, which is of length 9 (ceil(26 / 3)). Not only that, but you add the extra letter of the first fragment to the second one as well.
Side note: the test if(fragment[i] == fragment[0]) should in fact be if (i == 0). And you should make your numbers double to avoid losing the decimal part.
All your operations are made with int, which means two things:
they also produce an int, so you lose the decimal part
your calls to Math.ceil and Math.floor are useless: they need a double as argument but already get an int (as you pass the result of an operation involving only ints), so there is no floor or ceil left to apply.
You should use double when declaring f and m:
double f = (double) k+1;
double m = (double) pattern.length();
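Alternatively, a hedged sketch of a corrected partition that avoids floating point altogether by spreading the remainder over the first fragments (signature taken from the question; for 26 letters and k = 2 it yields fragments of lengths 9, 9 and 8):
public static String[] partition(String text, String pattern, int k) {
    int f = k + 1;                 // number of fragments
    int m = pattern.length();
    String[] fragment = new String[f];
    int base = m / f;              // minimum fragment length
    int remainder = m % f;         // the first 'remainder' fragments get one extra char
    int start = 0;
    for (int i = 0; i < f; i++) {
        int size = base + (i < remainder ? 1 : 0);
        fragment[i] = pattern.substring(start, start + size);
        start += size;
    }
    return fragment;
}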
I have a list of items. Each of these items has its own probability.
Can anyone suggest an algorithm to pick an item based on its probability?
Generate a uniformly distributed random number.
Iterate through your list until the cumulative probability of the visited elements is greater than the random number
Sample code:
double p = Math.random();
double cumulativeProbability = 0.0;
for (Item item : items) {
cumulativeProbability += item.probability();
if (p <= cumulativeProbability) {
return item;
}
}
So with each item, store a number that marks its relative probability. For example, if you have 3 items and one should be twice as likely to be selected as either of the other two, then your list will have:
[{A,1},{B,1},{C,2}]
Then sum the numbers of the list (i.e. 4 in our case).
Now generate a random number from 0 (inclusive) to that sum (exclusive):
int index = rand.nextInt(4);
Return the item in whose cumulative range the index falls.
Java code:
class Item {
int relativeProb;
String name;
//Getters Setters and Constructor
}
...
class RandomSelector {
List<Item> items = new ArrayList<>();
Random rand = new Random();
int totalSum = 0;
RandomSelector() {
for(Item item : items) {
totalSum = totalSum + item.relativeProb;
}
}
public Item getRandom() {
int index = rand.nextInt(totalSum);
int sum = 0;
int i=0;
while(sum <= index) {
sum = sum + items.get(i++).relativeProb;
}
return items.get(i-1); // i has advanced one past the chosen item
}
}
Pretend that we have the following list:
Item A 25%
Item B 15%
Item C 35%
Item D 5%
Item E 20%
Let's pretend that all the probabilities are integers, and assign each item a "range" calculated as follows:
Start - sum of the probabilities of all items before it
End - start + its own probability
The new numbers are as follows
Item A 0 to 24
Item B 25 to 39
Item C 40 to 74
Item D 75 to 79
Item E 80 to 99
Now pick a random number from 0 to 99. Let's say that you pick 32. 32 falls in Item B's range.
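A small sketch of those ranges in Java (half-open integer ranges; the names are mine, and the weights must sum to the bound given to nextInt):
import java.util.Random;
class RangePicker {
    static final String[] names = {"A", "B", "C", "D", "E"};
    static final int[] weights = {25, 15, 35, 5, 20}; // sums to 100
    static final Random rand = new Random();
    static String pick() {
        int r = rand.nextInt(100); // 0..99
        int end = 0;
        for (int i = 0; i < weights.length; i++) {
            end += weights[i];
            if (r < end) return names[i]; // r fell inside this item's range
        }
        throw new IllegalStateException("weights must sum to 100");
    }
}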
You can try the Roulette Wheel Selection.
First, add up all the probabilities, then scale them so they sum to 1 by dividing each one by the total. Suppose the scaled probabilities are A(0.4), B(0.3), C(0.25) and D(0.05). Then you can generate a random floating-point number in the range [0, 1) and decide like this:
random number in [0.00, 0.40) -> pick A
in [0.40, 0.70) -> pick B
in [0.70, 0.95) -> pick C
in [0.95, 1.00) -> pick D
You can also do it with random integers - say you generate a random integer between 0 and 99 (inclusive); then you can make the decision in the same way.
The algorithms described in Ushman's, Brent's and @kaushaya's answers are implemented in the Apache commons-math library.
Take a look at the EnumeratedDistribution class (Groovy code follows):
def probabilities = [
new Pair<String, Double>("one", 25),
new Pair<String, Double>("two", 30),
new Pair<String, Double>("three", 45)]
def distribution = new EnumeratedDistribution<String>(probabilities)
println distribution.sample() // here you get one of your values
Note that sum of probabilities doesn't need to be equal to 1 or 100 - it will be normalized automatically.
My method is pretty simple. Generate a random number. Since the probabilities of your items are known, simply iterate through the sorted list of probabilities and pick the item whose probability is less than the randomly generated number.
For more details, read my answer here.
A slow but simple way to do it is to have every member pick a random number based on its probability and then select the one with the highest value.
Analogy:
Imagine 1 of 3 people needs to be chosen, but they have different probabilities. You give them dice with different numbers of faces. The first person's die has 4 faces, the second's 6, and the third's 8. They roll their dice and the one with the biggest number wins.
Let's say we have the following list:
[{A,50},{B,100},{C,200}]
Pseudocode:
A.value = random(0 to 50);
B.value = random(0 to 100);
C.value = random (0 to 200);
We pick the one with the highest value.
The method above does not map the probabilities exactly. For example, 100 will not have twice the chance of 50. But we can fix that by tweaking the method a bit.
Method 2
Instead of picking a number from 0 to each item's weight, we can limit each item's range to run from the previous item's upper limit to that limit plus the item's own weight.
[{A,50},{B,100},{C,200}]
Pseudocode:
A.lowLimit= 0; A.topLimit=50;
B.lowLimit= A.topLimit+1; B.topLimit= B.lowLimit+100
C.lowLimit= B.topLimit+1; C.topLimit= C.lowLimit+200
resulting limits
A.limits = 0,50
B.limits = 51,151
C.limits = 152,352
Then we pick a random number from 0 to 352 and compare it to each variable's limits to see whether the random number is in its limits.
I believe this tweak has better performance since there is only 1 random generation.
There is a similar method in other answers but this method does not require the total to be 100 or 1.00.
Brent's answer is good, but it doesn't account for the possibility of erroneously choosing an item with a probability of 0 in cases where p = 0. That's easy enough to handle by checking the probability (or perhaps not adding the item in the first place):
double p = Math.random();
double cumulativeProbability = 0.0;
for (Item item : items) {
cumulativeProbability += item.probability();
if (p <= cumulativeProbability && item.probability() != 0) {
return item;
}
}
A space-costly way is to clone each item a number of times proportional to its probability. Selection is then done in O(1).
For example
//input
[{A,1},{B,1},{C,3}]
// transform into
[{A,1},{B,1},{C,1},{C,1},{C,1}]
Then simply pick any item randomly from this transformed list.
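A minimal sketch of that transformation (reusing the name/relativeProb fields of the Item class from an earlier answer):
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
static String pickByCloning(List<Item> items, Random rand) {
    List<String> expanded = new ArrayList<>();
    for (Item item : items) {
        for (int i = 0; i < item.relativeProb; i++) {
            expanded.add(item.name); // the item appears relativeProb times
        }
    }
    return expanded.get(rand.nextInt(expanded.size())); // O(1) per pick once built
}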
Adapted the code from https://stackoverflow.com/a/37228927/11257746 into a general extension method. This will allow you to get a weighted random value from a Dictionary with the structure <TKey, int>, where the int is a weight value.
A Key that has a value of 50 is 10 times more likely to be chosen than a key with the value of 5.
C# code using LINQ:
/// <summary>
/// Get a random key out of a dictionary which has integer values treated as weights.
/// A key in the dictionary with a weight of 50 is 10 times more likely to be chosen than an element with the weight of 5.
///
/// Example usage to get 1 item:
/// Dictionary<MyType, int> myTypes;
/// MyType chosenType = myTypes.GetWeightedRandomKey<MyType, int>().First();
///
/// Adapted into a general extension method from https://stackoverflow.com/a/37228927/11257746
/// </summary>
public static IEnumerable<TKey> GetWeightedRandomKey<TKey, TValue>(this Dictionary<TKey, int> dictionaryWithWeights)
{
int totalWeights = 0;
foreach (KeyValuePair<TKey, int> pair in dictionaryWithWeights)
{
totalWeights += pair.Value;
}
System.Random random = new System.Random();
while (true)
{
int randomWeight = random.Next(0, totalWeights);
foreach (KeyValuePair<TKey, int> pair in dictionaryWithWeights)
{
int weight = pair.Value;
if (randomWeight >= weight) // skip this item's whole weight band
randomWeight -= weight;
else
{
yield return pair.Key;
break;
}
}
}
}
Example usage:
public enum MyType { Thing1, Thing2, Thing3 }
public Dictionary<MyType, int> MyWeightedDictionary = new Dictionary<MyType, int>();
public void MyVoid()
{
MyWeightedDictionary.Add(MyType.Thing1, 50);
MyWeightedDictionary.Add(MyType.Thing2, 25);
MyWeightedDictionary.Add(MyType.Thing3, 5);
// Get a single random key
MyType myChosenType = MyWeightedDictionary.GetWeightedRandomKey<MyType, int>().First();
// Get 20 random keys
List<MyType> myChosenTypes = MyWeightedDictionary.GetWeightedRandomKey<MyType, int>().Take(20).ToList();
}
If you don't mind adding a third party dependency in your code you can use the MockNeat.probabilities() method.
For example:
String s = mockNeat.probabilites(String.class)
.add(0.1, "A") // 10% chance to pick A
.add(0.2, "B") // 20% chance to pick B
.add(0.5, "C") // 50% chance to pick C
.add(0.2, "D") // 20% chance to pick D
.val();
Disclaimer: I am the author of the library, so I might be biased when I am recommending it.
All the solutions mentioned so far take linear effort per pick. The following takes only logarithmic effort and also deals with unnormalized probabilities. I'd recommend using a TreeMap rather than a List:
import java.util.*;
import java.util.stream.IntStream;
public class ProbabilityMap<T> extends TreeMap<Double,T>{
private static final long serialVersionUID = 1L;
public static Random random = new Random();
public double sumOfProbabilities;
public Map.Entry<Double,T> next() {
return ceilingEntry(random.nextDouble()*sumOfProbabilities);
}
@Override public T put(Double key, T value) {
return super.put(sumOfProbabilities+=key, value);
}
public static void main(String[] args) {
ProbabilityMap<Integer> map = new ProbabilityMap<>();
map.put(0.1,1); map.put(0.3,3); map.put(0.2,2);
IntStream.range(0, 10).forEach(i->System.out.println(map.next()));
}
}
You could use this Julia code:
function selrnd(a::Vector{Int})
c = a[:]
sumc = c[1]
for i=2:length(c)
sumc += c[i]
c[i] += c[i-1]
end
r = rand()*sumc
for i=1:length(c)
if r <= c[i]
return i
end
end
end
This function returns the index of an item efficiently.