I am trying to perform PCA reducing 900 dimensions to 10. So far I have:
covariancex = cov(labels);
[V, d] = eigs(covariancex, 40);
pcatrain = (trainingData - repmat(mean(trainingData), 699, 1)) * V;
pcatest = (test - repmat(mean(trainingData), 225, 1)) * V;
Here labels is 1x699, holding the labels for the characters (1-26). trainingData is 699x900: 900-dimensional data for the images of 699 characters. test is 225x900: 225 characters, each 900-dimensional.
Basically I want to reduce this down to 225x10, i.e. 10 dimensions, but am kind of stuck at this point.
The covariance should be computed from your trainingData:
X = bsxfun(@minus, trainingData, mean(trainingData,1)); % center the training data
covariancex = (X'*X)./(size(X,1)-1);                    % 900x900 sample covariance
[V D] = eigs(covariancex, 10);                          % top 10 eigenvectors: reduce to 10 dimensions
pcatrain = X*V;                                         % 699x10 projected training data
Xtest = bsxfun(@minus, test, mean(trainingData,1));     % center the test data with the training mean
pcatest = Xtest*V;                                      % 225x10 projected test data
From your code it seems you are taking the covariance of the labels, not of trainingData. The point of PCA is to find the N directions (N = 10 here) of greatest variance in your data.
Your covariance matrix should be 900x900 (if 900 is the dimension of each image, presumably the result of 30x30-pixel images). The diagonal elements [i,i] of covariancex give the variance of pixel i across all training samples, and the off-diagonal elements [i,j] give the covariance between pixel i and pixel j. The matrix is symmetric, since [i,j] == [j,i].
Furthermore, when calling eigs(covariancex, N), N should be 10 instead of 40 if you want to reduce the dimensionality to 10.
I need a fast way to find the maximum value where intervals overlap. Unlike finding the point covered by the most intervals, there is an "order" here: I have int[][] data with 2 values per int[], where the first number is the center and the second is the radius; the closer a point is to the center, the larger the value of that interval at that point. For example, given data like:
int[][] data = new int[][]{
        {1, 1},
        {3, 3},
        {2, 4}};
Then on a number line, this is what it looks like:
x axis: -2 -1  0  1  2  3  4  5  6  7
1 1:           1  2  1
3 3:           1  2  3  4  3  2  1
2 4:     1  2  3  4  5  4  3  2  1
So for the value of my point to be as large as possible, I need to pick the point x = 2, which gives the largest possible total value of 1 + 3 + 5 = 9. Is there a way to do this fast, like with time complexity O(n) or O(n log n)?
This can be done with a simple O(n log n) algorithm.
Consider the value function v(x), and then consider its discrete derivative dv(x) = v(x) - v(x-1). Suppose you only have one interval, say {3,3}: dv(x) is 0 from -infinity to -1, then 1 from 0 to 3, then -1 from 4 to 7, then 0 from 8 to infinity. That is, the derivative changes by +1 "just after" -1, by -2 just after 3, and by +1 just after 7 (one past the end of the interval, since the value is still 1 at x = 6).
For n intervals there are 3*n derivative changes (some of which may occur at the same point). So build the list of all derivative changes (x, change), sort them by x, and then just iterate through the list, accumulating v as you go.
Behold:
intervals = [(1, 1), (3, 3), (2, 4)]

# One event per derivative change: (position, change in dv just after it).
events = []
for mid, width in intervals:
    before_start = mid - width - 1
    after_end = mid + width + 1
    events += [(before_start, 1), (mid, -2), (after_end, 1)]
events.sort()

prev_x = events[0][0]  # position of the previous event
v = 0                  # current value v(x)
dv = 0                 # current derivative dv(x)
best_v = v
best_x = prev_x
for x, change in events:
    v += dv * (x - prev_x)  # advance v to the current event position
    if v > best_v:
        best_v = v
        best_x = x
    dv += change
    prev_x = x

print(best_x, best_v)
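Note that sorting the (x, change) tuples also orders simultaneous events by their change value; this is harmless, because dx is 0 for the second event at the same x, so v is unchanged whichever order they are processed in. For the sample intervals the script prints 2 9, matching the hand computation above.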
And here is the Java code:
TreeMap<Integer, Integer> ts = new TreeMap<Integer, Integer>();
for (int i = 0; i < cows.size(); i++) {
    // dv becomes +1 at the start of the interval
    int index = cows.get(i)[0] - cows.get(i)[1];
    ts.merge(index, 1, Integer::sum);
    // dv drops by 2 just after the peak
    index = cows.get(i)[0] + 1;
    ts.merge(index, -2, Integer::sum);
    // dv returns toward 0 just past the end of the interval
    index = cows.get(i)[0] + cows.get(i)[1] + 2;
    ts.merge(index, 1, Integer::sum);
}
int value = 0;
int best = 0;
int change = 0;
int indexBefore = ts.firstKey();
while (ts.size() > 1) {
    int index = ts.firstKey();
    value += (index - indexBefore) * change; // advance the running value to this event
    best = Math.max(value, best);
    change += ts.get(index);
    indexBefore = index;
    ts.remove(index);
}
where cows is the data (a List<int[]> of {center, radius} pairs).
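With the sample data above, this sweep also reports a best value of 9, matching the Python version.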
Hmmm, a general O(n log n) or better solution would be tricky; it is probably solvable via linear programming, but that can get rather complex.
After a bit of wrangling, I think this can be solved via line intersections and summation of functions (represented by line-segment intersections). Basically, think of each input as a triangle sitting on a line. If the input is (C,R), the triangle is centered on C and has a radius of R. The points on the line are C-R (value 0), C (value R) and C+R (value 0). Each line segment of the triangle represents a value.
Consider any 2 such "triangles"; the max value occurs in one of 2 places:
The peak of one of the triangles
The intersection point of the triangles, i.e. where the two triangles overlap
Multiple triangles just mean more possible intersection points. Sadly, the number of possible intersections grows quadratically, so O(N log N) or better may be impossible with this method (unless some good optimizations are found), unless the number of intersections is O(N) or less.
To find all the intersection points, we can use a standard line-intersection algorithm, but with one modification: add a vertical line extending up from each peak, high enough to be above every other line, basically from (C,C) to (C,Max_R), and then run the algorithm. Output-sensitive intersection-finding algorithms are O(N log N + k), where k is the number of intersections. Sadly k can be as high as O(N^2): consider (1,100), (2,100), (3,100), and so on up to (50,100); every line would intersect every other line. Once you have the O(N + k) intersections, at each one you can calculate the value by summing over all the segments active at that point. The running sum can be kept as a cached value so it only changes O(k) times, though that might not be possible, in which case it would be O(N*k) instead, making it potentially O(N^3) in the worst case for k. That still seems reasonable: for each intersection you need to sum up to O(N) lines to get the value at that point, though in practice performance would likely be better.
There are optimizations possible given that you aim for the max rather than all intersections. Some intersections are likely not worth pursuing, though I can also imagine situations so close that you cannot prune them. It reminds me of convex hulls: in many cases you can easily discard 90% of the data, but there are cases where every point (or almost every point) is a hull point. For example, in practice there are certainly cases where you can be sure a sum will be less than the current known max value.
Another optimization might be building an interval tree.
The following code produces a curve that should fit the points
1, 1
150, 250
10000, 500
100000, 750
1000000, 1000
I built this code based on the documentation here; however, I am not entirely sure how to use the resulting data correctly for further calculations, or whether PolynomialCurveFitter.create(3) will affect the answers in those future calculations.
For example, how would I use the output to calculate the x value where the y value is 200, and how would the result differ if I had PolynomialCurveFitter.create(2) instead of PolynomialCurveFitter.create(3)?
import java.util.ArrayList;
import java.util.Arrays;
import org.apache.commons.math3.fitting.PolynomialCurveFitter;
import org.apache.commons.math3.fitting.WeightedObservedPoints;

public class MyFuncFitter {
    public static void main(String[] args) {
        ArrayList<Integer> keyPoints = new ArrayList<Integer>();
        keyPoints.add(1);
        keyPoints.add(150);
        keyPoints.add(10000);
        keyPoints.add(100000);
        keyPoints.add(1000000);

        WeightedObservedPoints obs = new WeightedObservedPoints();
        if (keyPoints != null && keyPoints.size() != 1) {
            int size = keyPoints.size();
            int sectionSize = 1000 / (size - 1);
            for (int i = 0; i < size; i++) {
                if (i != 0)
                    obs.add(keyPoints.get(i), i * sectionSize);
                else
                    obs.add(keyPoints.get(0), 1);
            }
        } else if (keyPoints.size() == 1 && keyPoints.get(0) >= 1) {
            obs.add(1, 1);
            obs.add(keyPoints.get(0), 1000);
        }

        PolynomialCurveFitter fitter = PolynomialCurveFitter.create(3);
        // Note: withStartPoint returns a new fitter rather than mutating this one,
        // so as written the call below has no effect on the fit.
        fitter.withStartPoint(new double[] {keyPoints.get(0), 1});
        double[] coeff = fitter.fit(obs.toList());
        System.out.println(Arrays.toString(coeff));
    }
}
About the consequences of changing d for your function
PolynomialCurveFitter.create takes the degree of the polynomial as a parameter.
Very (very) roughly speaking, the polynomial degree describes the "complexity" of the curve you want to fit. A low degree produces simple curves (just a parabola for d=2), whereas higher degrees produce more intricate curves, with lots of peaks and valleys of highly varying sizes, which are therefore more able to perfectly "fit" all your data points, at the expense of not necessarily being a good "prediction" of all other values.
Like the blue curve in the classic overfitting graphic (not reproduced here): a wiggly high-degree curve passes through every data point, while a straight line misses some of them. You can see how the straight line would be the better "approximation", while not fitting the data points exactly.
How to compute x for any y values in the computed function
You "simply" need to solve the polynomial ! Using the very same library. Add the inverted y value to your coefficents list, and find its root.
Let's say you chose a degree of 2.
Your coefficients array coeffs will contain 3 factors {a0, a1, a2}, which describe the equation:
y = a0 + a1*x + a2*x^2
If you want to solve this for a particular value, like y = 600, you need to solve:
600 = a0 + a1*x + a2*x^2
So, basically:
0 = (a0 - 600) + a1*x + a2*x^2
So, just subtract 600 from a0:
coeffs[0] -= 600
and find the root of the polynomial using the dedicated function:
import org.apache.commons.math3.analysis.polynomials.PolynomialFunction;
import org.apache.commons.math3.analysis.solvers.LaguerreSolver;

PolynomialFunction polynomial = new PolynomialFunction(coeffs);
LaguerreSolver laguerreSolver = new LaguerreSolver();
// search for a root of the shifted polynomial in [0, 1000000]
double x = laguerreSolver.solve(100, polynomial, 0, 1000000);
System.out.println("For y = 600, we found x = " + x);
What's the difference between Matrix.times() and Matrix.arrayTimes() in JAMA (a Java library for matrix calculations)?
If I have a d-dimensional vector x and a k-dimensional vector z and I want to compute x z^T (x times z transpose), should I use Matrix.times or Matrix.arrayTimes?
How can I calculate this multiplication using JAMA?
arrayTimes is simply element-by-element multiplication:
C[i][j] = A[i][j] * B[i][j];
(the matrices are treated as grids of corresponding individual numbers)
while times is the matrix multiplication,
where each element of the product is the sum of the products of the corresponding row and column entries.
The dimensions must match according to what you want to achieve.
Given your problem of x z^T, the only viable option is to turn these into d×1 and k×1 matrices respectively and perform x.times(z.transpose()). The result will be a matrix of d × k dimensions.
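For illustration, a minimal sketch with JAMA (the example vectors are mine; note that arrayTimes would throw here, because the dimensions differ):

import Jama.Matrix;

public class OuterProduct {
    public static void main(String[] args) {
        Matrix x = new Matrix(new double[][] {{1}, {2}, {3}}); // d x 1, d = 3
        Matrix z = new Matrix(new double[][] {{4}, {5}});      // k x 1, k = 2
        Matrix outer = x.times(z.transpose());                 // (3x1)(1x2) -> 3x2
        outer.print(6, 1);  // x.arrayTimes(z) would fail: dimensions must agree
    }
}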
I need to calculate the most efficient size of squares to fill the screen.
If you look at the images below, there are different screen sizes and square counts.
I need an algorithm to calculate the x-axis square count and the y-axis square count that fill the screen most efficiently (with minimum empty area left after placing the squares).
I looked at the post below, but it does not solve my question:
Pack squares into a rectangle
1 - The square count can change (3, 5, 10 and so on ...)
2 - The screen size can differ
For example:
on 1280 x 800 with 15 squares?
on 800 x 480 with 12 squares?
on 600 x 1024 with 9 squares?
on 720 x 1280 with 45 squares?
** I need an algorithm which calculates the squares' width (the height is the same as the width) **
If you look at the differences between Image 3 and Image 3-1, you will see that Image 3-1 uses the screen more effectively, because less area is left unused.
Image 3
Or maybe this is a better way to fill:
Image 3-1
If you look at the differences between Image 4 and Image 4-1, you will see that Image 4-1 uses the screen more effectively, because less area is left unused.
Image 4
** 4. The image must be like the one below, because less area is left unused on the screen **
Image 4-1
I believe what you mean by "efficient" is: the larger the area covered by the squares, the better.
let :
a : x axis square count
b : y axis square count
s : size of a square (length of one side)
w : width of screen
h : height of screen
c : number of squares to put
then we have
a * s <= w
b * s <= h
a * b >= c
With these inequalities it is possible to find an upper bound for s.
Examining an example where c = 20, w = 1280 and h = 800:
a * s <= 1280
b * s <= 800
a * b >= 20
a * b = (1280 / s) * (800 / s) >= 20 ---> s^2 <= (1280 * 800) / 20 ---> s <= 226.27...
With an upper bound for s, we can estimate a and b as:
a * s <= 1280 ---> a ~= 5.6568
b * s <= 800 ---> b ~= 3.53
with these values the inequality a * b >= 20 does not hold.
But both a and b must be whole numbers, so we try the 4 roundings that a and b can take:
a = 5, b = 3 // round down both
a = 5, b = 4 // one down, one up
a = 6, b = 3 // one down, one up
a = 6, b = 4 // round up both
Since a * b >= 20 must hold, the first and third cases are eliminated as valid answers.
Choosing a = 5, b = 4 follows as the next step, since their product (20) is closer to the desired number of squares than 6 * 4 = 24. The square size is then s = min(1280 / 5, 800 / 4) = 200.
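A small sketch of this procedure in Java (the class and variable names are mine; it tries the four roundings and keeps the feasible pair giving the largest square, s = min(w / a, h / b)):

public class SquareFill {
    public static void main(String[] args) {
        int w = 1280, h = 800, c = 20;
        double sMax = Math.sqrt((double) w * h / c);  // upper bound for s
        int aDown = (int) Math.floor(w / sMax);       // rounded-down counts
        int bDown = (int) Math.floor(h / sMax);
        double best = 0;
        int bestA = 0, bestB = 0;
        for (int a = aDown; a <= aDown + 1; a++) {
            for (int b = bDown; b <= bDown + 1; b++) {
                if (a * b < c) continue;              // not enough cells for c squares
                double s = Math.min((double) w / a, (double) h / b);
                if (s > best) { best = s; bestA = a; bestB = b; }
            }
        }
        // prints "5 x 4 squares of size 200.0" for the example above
        System.out.println(bestA + " x " + bestB + " squares of size " + best);
    }
}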
What you're looking for is the greatest common factor between the width and the height of the display.
Since most displays have a ratio of 4:3 or 16:9, the greatest common factor will give you the biggest square that you can use to fill the display area.
In your 400 x 400 pixel display, the greatest common factor is 400, and one square will fill the display.
In your 1280 X 800 pixel display, the greatest common factor is 160. You'll need 40 squares (8 x 5) to fill the display.
If you want to calculate one greatest common factor for all display sizes, the answer is 1: every pixel is a square. You should calculate a separate greatest common factor for each display size you want to support.
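For reference, a minimal sketch of that calculation (standard Euclidean algorithm; the class and method names are mine):

public class BiggestSquare {
    // Greatest common factor of the display dimensions: the largest
    // square size that tiles the screen exactly.
    static int gcd(int a, int b) {
        while (b != 0) {
            int t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    public static void main(String[] args) {
        int s = gcd(1280, 800);                     // 160
        System.out.println(s + "-px squares, "
                + (1280 / s) + " x " + (800 / s));  // 8 x 5 grid, 40 squares
    }
}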
I know how to rotate an entire 2D array by 90 degrees around the center (my 2D array lengths are always odd numbers), but I need an algorithm that rotates specific indices of a 2D array of known length. For example, I know the 2D array is a 17 by 17 grid, and I want a method that rotates the index [4][5] around the center by 90 degrees and returns the new indices as two separate ints (y, x). Please point me in the right direction, or if you're feeling charitable I would very much appreciate some bits of code, preferably in Java. Thanks!
Assuming your coordinates are in the form array[y][x], with y increasing downwards as is usual for arrays, the center [cx, cy] of your 17x17 grid is [8, 8].
Calculate the offset [dx, dy] of your point [px, py] being [4, 5] from there, i.e. [-4, -3].
For a clockwise rotation, the new location will be [cx - dy, cy + dx].
If you use true Cartesian coordinates (with the Y axis pointing up), the same formula gives a counter-clockwise rotation; reverse the signs in the formula for clockwise.
For a non-geometric solution, consider that element [0][16] needs to map to [16][16], and [0][0] to [0][16]; i.e. the first row maps to the last column, the second row maps to the second-last column, etc.
If n is one less than the size of the grid (i.e. 16), that just means that point [y][x] maps to [x][n - y].
In theory, the geometric solution should provide the same answer - here's the equivalence:
n = 17 - 1;
c = n / 2;
dx = x - c;
dy = y - c;
nx = c - dy = c - (y - c) = 2 * c - y = n - y
ny = c + dx = c + (x - c) = x
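As a quick sanity check, here is a minimal sketch of that mapping in Java (the class and method names are mine; zero-based [row][col] indices on an n-by-n grid):

public class RotateIndex {
    // Rotate one cell of a size-by-size grid 90 degrees clockwise
    // around the center, using the mapping [y][x] -> [x][n - y].
    static int[] rotateClockwise(int y, int x, int size) {
        int n = size - 1;              // largest valid index
        return new int[] { x, n - y }; // new {row, column}
    }

    public static void main(String[] args) {
        int[] p = rotateClockwise(4, 5, 17);    // the [4][5] cell from the question
        System.out.println(p[0] + ", " + p[1]); // prints 5, 12
    }
}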
If you have a square array with N elements in each row/column, a 90-degree anti-/counter-clockwise turn sends (x,y) to (N+1-y,x), doesn't it?
That is, if, like me, you take the top-left element of a square array to be (1,1), with row numbers increasing downwards and column numbers increasing to the right. Anyone who counts from 0 will have to adjust the formula somewhat.
The point in Cartesian space x,y rotated 90 degrees counterclockwise maps to -y,x.
An array with N columns and M rows maps to an array of M columns and N rows. The new "x" index will be non-positive; with zero-based indices it is made non-negative by adding M - 1:
a[x][y] maps to a[M-1-y][x]