Color Logic Algorithm - java

We are building a sports application and would like to incorporate team colors in various portions of the app.
Now each team can be represented using several different colors.
What I would like to do is to perform a check to verify whether the two team colors are within a certain range of each other, so that I do not display two similar colors.
So, if team 1's primary team color has a value of rgb(255,0,0) (or #FF0000), and team 2's primary color is similar, say rgb(250,0,0), then we would choose a different color for one of the teams.
If possible, what approach could I take to perform the check?
Thanks

Here is a theoretical explanation
And the algo in C:
#include <math.h>

typedef struct {
    unsigned char r, g, b;
} RGB;

double ColourDistance(RGB e1, RGB e2)
{
    long rmean = ( (long)e1.r + (long)e2.r ) / 2;
    long r = (long)e1.r - (long)e2.r;
    long g = (long)e1.g - (long)e2.g;
    long b = (long)e1.b - (long)e2.b;
    return sqrt((((512+rmean)*r*r)>>8) + 4*g*g + (((767-rmean)*b*b)>>8));
}

Here is pgras' algorithm in Java:
public double ColourDistance(Color c1, Color c2)
{
    double rmean = ( c1.getRed() + c2.getRed() ) / 2;
    int r = c1.getRed() - c2.getRed();
    int g = c1.getGreen() - c2.getGreen();
    int b = c1.getBlue() - c2.getBlue();
    double weightR = 2 + rmean / 256;
    double weightG = 4.0;
    double weightB = 2 + (255 - rmean) / 256;
    return Math.sqrt(weightR*r*r + weightG*g*g + weightB*b*b);
}
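For example, a quick usage sketch of the method above (this assumes java.awt.Color and that the method lives in the same class; the threshold of 200 is just an assumed starting value you would tune by eye):

Color team1 = new Color(255, 0, 0);   // #FF0000
Color team2 = new Color(250, 0, 0);   // the similar red from the question
double threshold = 200;               // hypothetical cutoff, tune by experiment
if (ColourDistance(team1, team2) < threshold) {
    // too close: fall back to one team's secondary color
}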

Most answers to this question suggest calculating the distance between two colors by mapping the RGB values into a 3D space. The problem with this technique is that two colors with similar hues but different saturation or brightness levels may end up farther apart in 3D RGB space than two colors with different hues but very similar saturation and brightness levels. In other words, a blue and a green may be closer in 3D RGB space than two shades of red. In this application, where the goal is to make sure the team colors look different, hue differences should weigh much more heavily than brightness and saturation differences.
So I would convert the color mapping from RGB to hue, saturation, and brightness levels, and then check just the hue values for sufficient distance.
Wikipedia has an explanation for converting RGB to HSV. LiteratePrograms has some sample code.
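A rough sketch of that idea in Java, using java.awt.Color.RGBtoHSB (which returns hue, saturation and brightness on a 0..1 scale); the 0.1 hue cutoff is just an assumed starting point:

import java.awt.Color;

// Compare only the hue component; hue is circular, so take the wrap-around distance.
public static boolean huesTooClose(Color c1, Color c2) {
    float[] hsb1 = Color.RGBtoHSB(c1.getRed(), c1.getGreen(), c1.getBlue(), null);
    float[] hsb2 = Color.RGBtoHSB(c2.getRed(), c2.getGreen(), c2.getBlue(), null);
    float dh = Math.abs(hsb1[0] - hsb2[0]);   // hue difference, 0..1 scale
    dh = Math.min(dh, 1.0f - dh);             // hue wraps around at 1.0
    return dh < 0.1f;                         // assumed threshold, roughly 36 degrees
}

Note that a pure hue check tells you little for near-gray colors (very low saturation), so you may still want a saturation/brightness fallback for those cases.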

I would use the 3D distance between the two colors, where x, y, z are the R, G, B values.
Take a look at this Perl Library:
http://metacpan.org/pod/Color::Similarity::RGB
This is easy to implement yourself.
Just make sure that (R1-R2)^2 + (G1-G2)^2 + (B1-B2)^2 >= threshold^2
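In Java that check might look like the following sketch (comparing the squared distance against a squared threshold avoids the square root; the threshold value is something you would tune):

import java.awt.Color;

// Plain Euclidean check in RGB space; compare squared values to avoid the sqrt.
public static boolean farEnough(Color c1, Color c2, int threshold) {
    int dr = c1.getRed() - c2.getRed();
    int dg = c1.getGreen() - c2.getGreen();
    int db = c1.getBlue() - c2.getBlue();
    return dr * dr + dg * dg + db * db >= threshold * threshold;
}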

Wikipedia has details on a number of algorithms which can be used for this.
There is also this previous StackOverflow question: Finding an accurate “distance” between colors

From an algorithm viewpoint, this is fairly simple. Each color represents a point in a 3D space, and the difference between colors is the distance between those points.
Presumably the point here is to ensure that the colors are visibly different. If that's the case, deciding on the minimum distance is probably going to be fairly difficult. The problem is that (at least for people with normal vision) some differences are easier to see than others. For example, most people are more sensitive to small differences in shades of green than equally small changes in shades of red or blue. There are algorithms to take this into account, but they're based on average human vision, so none of them is guaranteed to be precisely correct for any one person.
Just for fun, you might want to take a look at X-Rite's online color vision test.

I used the algorithms given in the first replies, but the results did not meet expectations until I found the DeltaE library, which calculates the distance between colors much better.
It's for Node.js, check it here.
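DeltaE itself targets Node.js, but the underlying idea is easy to reproduce: the simplest variant (CIE76) is just the Euclidean distance in L*a*b* space. A minimal Java sketch, assuming the RGB-to-Lab conversion has already been done elsewhere:

// CIE76 delta-E: plain Euclidean distance between two colors in L*a*b* space.
// lab[0] = L*, lab[1] = a*, lab[2] = b* (conversion from RGB not shown here).
public static double deltaE76(double[] lab1, double[] lab2) {
    double dL = lab1[0] - lab2[0];
    double da = lab1[1] - lab2[1];
    double db = lab1[2] - lab2[2];
    return Math.sqrt(dL * dL + da * da + db * db);
}

A delta-E of roughly 2-3 is around a just-noticeable difference, so for clearly distinct team colors you would want a much larger cutoff; the newer CIE94 and CIEDE2000 formulas refine the weighting further.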

Related

Least squares Levenberg-Marquardt with Apache Commons

I'm using the non-linear least squares Levenberg-Marquardt algorithm in Java to fit a number of exponential curves (A + B*exp(C*x)). Although the data is quite clean and is well approximated by the model, the algorithm is not able to fit the majority of the curves, even with an excessive number of iterations (5000-6000). For the curves it can fit, it does so in about 150 iterations.
LeastSquaresProblem problem = new LeastSquaresBuilder()
        .start(start).model(jac).target(dTarget)
        .lazyEvaluation(false).maxEvaluations(5000)
        .maxIterations(6000).build();
LevenbergMarquardtOptimizer optimizer = new LevenbergMarquardtOptimizer();
LeastSquaresOptimizer.Optimum optimum = optimizer.optimize(problem);
My question is how would I define a convergence criteria in apache commons in order to stop it hitting a max number of iterations?
I don't believe Java is your problem. Let's address the mathematics.
This problem is easier to solve if you change your function.
Your assumed equation is:
y = A + B*exp(C*x)
It'd be easier if you could do this:
y-A = B*exp(C*x)
Now A is just a constant that can be zero or whatever value you need to shift the curve up or down. Let's call that variable z:
z = B*exp(C*x)
Taking the natural log of both sides:
ln(z) = ln(B*exp(C*x))
We can simplify that right hand side to get the final result:
ln(z) = ln(B) + C*x
Transform your (x, y) data to (x, z) and you can use least squares fitting of a straight line where C is the slope in (x, z) space and ln(B) is the intercept. Lots of software available to do that.
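For instance, sticking with Apache Commons Math, a sketch of that transformed fit using SimpleRegression. This assumes A is known (or zero) so that y - A stays positive; xs and ys are placeholder example data:

import org.apache.commons.math3.stat.regression.SimpleRegression;

public class ExpFitSketch {
    public static void main(String[] args) {
        // Example data, roughly y = exp(x); replace with your own series.
        double[] xs = {0, 1, 2, 3, 4};
        double[] ys = {1.0, 2.7, 7.4, 20.1, 54.6};
        double A = 0.0;   // assumed known (or zero); y - A must stay positive

        // Fit ln(y - A) = ln(B) + C*x with ordinary linear least squares.
        SimpleRegression reg = new SimpleRegression();
        for (int i = 0; i < xs.length; i++) {
            reg.addData(xs[i], Math.log(ys[i] - A));
        }
        double C = reg.getSlope();                 // exponent
        double B = Math.exp(reg.getIntercept());   // ln(B) is the intercept
        System.out.println("B = " + B + ", C = " + C);
    }
}

One caveat: least squares on the log weights the residuals differently than least squares on y itself, so if you need the exact nonlinear optimum you can still feed the resulting B and C back into Levenberg-Marquardt as a starting point; a good start usually fixes the convergence trouble.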

How to generate some identically distributed pseudo-random vectors with a fixed length r in a programming language?

For example, for dimension d=2 this means we could generate a random angle 0 <= a < 2*pi and then just use
(x_1, x_2) = (r*cos(a), r*sin(a)) as the random vector.
However, for dimension d >= 3 we cannot represent the vector with a single angle. How, then, can we generate such a vector (x_1, ..., x_d) that is identically (i.e. uniformly) distributed on x_1^2 + x_2^2 + ... + x_d^2 = r^2?
I have just come up with a new idea: we could generate a vector (x_1, ..., x_d) with -r <= x_i < r for all i, normalize it if x_1^2 + ... + x_d^2 <= r^2, and abandon it if x_1^2 + ... + x_d^2 > r^2.
However, there is a drawback: the probability that x_1^2 + ... + x_d^2 <= r^2 becomes very small when d is large. Are there better solutions?
Generate random variables (X_1, X_2, ... X_d) that are independent and have standard normal distributions, and then normalize by dividing by sqrt(X_1^2+...+X_d^2)/r.
That the joint distribution of independent normal distributions is rotationally symmetric is not just true, it characterizes normal distributions.
You can generate pairs of independent variables with a standard normal distribution efficiently from uniform random variables using the Box-Muller transform.
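A sketch of that in Java (java.util.Random.nextGaussian() provides standard normal samples; the resampling guard against a zero-length vector is purely defensive):

import java.util.Random;

// Uniform random point on the sphere x_1^2 + ... + x_d^2 = r^2:
// draw d independent standard normals, then rescale the vector to length r.
public static double[] randomOnSphere(int d, double r, Random rng) {
    double[] x = new double[d];
    double normSq;
    do {
        normSq = 0.0;
        for (int i = 0; i < d; i++) {
            x[i] = rng.nextGaussian();
            normSq += x[i] * x[i];
        }
    } while (normSq == 0.0);                  // essentially impossible, but guard anyway
    double scale = r / Math.sqrt(normSq);
    for (int i = 0; i < d; i++) {
        x[i] *= scale;
    }
    return x;
}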
I see two ways around it.
The straightforward way is, in pseudo-code:
1. build n-dimensional vector x[0] through x[n-1] with random components
2. find radius
3. go to step 1 if radius > r; otherwise, normalize to radius r
This is non-deterministic: there is no way to know in advance how many times you will need to cycle before you find an acceptable point. Additionally, the probability of rejecting a candidate goes up with the number of dimensions.
To understand why (thanks, commenters!), imagine a 2x2 square centered at the origin with an r=1 circle inscribed in it. Fill the square with random points. All the points between the center and the circle are evenly distributed when projected onto the circle. The points between the circle and the square's border are not: there are too many of them toward, say, 45°, and none at, say, 90°.
The non-straightforward version is a generalization of your 2-dimensional approach:
1. assume that we are on an n-sphere; generate angles phi[0], ...phi[n-2]
for a polar-coordinates point
2. convert to cartesian coordinates x[0] through x[n-1]
According to the n-sphere page in wikipedia, the formula is
x[0] = r*cos(phi[0]);
x[1] = r*sin(phi[0])*cos(phi[1]);
x[2] = r*sin(phi[0])*sin(phi[1])*cos(phi[2]);
...
x[n-2] = r*sin(phi[0])*sin(phi[1])* /*...*/ *sin(phi[n-3])*cos(phi[n-2]);
x[n-1] = r*sin(phi[0])*sin(phi[1])* /*...*/ *sin(phi[n-3])*sin(phi[n-2]);
The actual computation can be implemented a lot more efficiently than a literal transcription of the formula (sin(phi[0]) gets recalculated for every coordinate, for example).
To avoid non-determinism, I recommend the second approach.
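A compact sketch of that conversion in Java (it just walks the formula above, reusing a running product of sines so each sin(phi[i]) is computed once; how you draw the angles determines what distribution you end up with on the sphere):

// Convert spherical angles phi[0..n-2] and radius r to Cartesian x[0..n-1].
public static double[] sphericalToCartesian(double r, double[] phi) {
    int n = phi.length + 1;
    double[] x = new double[n];
    double sines = r;                // running product r * sin(phi[0]) * ... * sin(phi[i-1])
    for (int i = 0; i < n - 1; i++) {
        x[i] = sines * Math.cos(phi[i]);
        sines *= Math.sin(phi[i]);
    }
    x[n - 1] = sines;                // last coordinate has no cosine factor
    return x;
}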
Edit
The recommended approach, not listed above, is in Douglas's answer and many reference sites:
https://mathoverflow.net/questions/136314/what-is-a-good-method-to-find-random-points-on-the-n-sphere-when-n-is-large
http://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform
http://mathworld.wolfram.com/HyperspherePointPicking.html

Arrangement-of-rectangles algorithm and investigating it on a per-coordinate basis

(I am not sure if this qualifies as CodeGolf, as I do actually have a use for this despite its abstract-sounding nature... xD If it does, let me know and I'll add it to the tags...)
This problem consists of two joined-at-the-hip parts: one about finding a suitable formula/algorithm, and the other about using that algorithm to deal with the basic needs that I am eventually interested in.
Suppose you have a sufficiently large two-dimensional field that needs to be filled with rectangles. Pre-generating the locations of these non-overlapping rectangles (of various sizes, subject to a minimum and maximum size) is unfeasible due to memory constraints. The rectangles themselves are not bound to the 'grid' implied by the field; a rectangle can be rotated to any angle (30, 45, 87 degrees or whatever), as long as it does not overlap any other rectangle. The 'density' of the rectangles would be rather low, leaving a lot of empty space between them. (If the algorithm could easily be tweaked as far as this density is concerned, that would be very interesting!) The positions of the rectangles are generated from a single 64-bit 'seed' input, and different seeds should lead to different arrangements of rectangles. Finally, the output should not be overly grid-like or recognizably regular; visually speaking, it should be a fairly chaotic arrangement if plotted.
// Note: This algorithm would not actually generate or produce anything to
// start off with. The logic is concentrated in the questions that follow below.
// Please note: there is NO initialization phase of any sort.
Visual aid:
Now, given only the knowledge of a certain (x, y) coordinate and the generating 'seed', how can I determine whether this coordinate is inside of a rectangle? How do I determine the corners of said rectangle? And finally, how do I get the center coordinate of the nearest Rectangle?
public Boolean isInsideRectangle(int x, int y, long seed) { ... }
public Boolean isCornerOfRectangle(int x, int y, long seed) { ... }
public Point getCenterOfNearestRectangle(int x, int y, long seed) { ... }
I simply have no clue how to approach this problem, or I would include some code. The world of math is one I am not well-versed in. So while I most definitely welcome code, explanations are even more welcome. :-)
(Pre-emptive 'No, it is not homework or anything of the sort.' needs to be included: this is just one facet of a personal project.)

Coloring the Barnsley Fern fractal

While rendering the Barnsley fern fractal I end up with single-color images, or at most four-color images (the bottom left, the bottom right, the bottom stem, and the rest of the leaves). Here is the image I get, for example:
What I want, however, is to bring shades into the leaves and to make the stem thicker and a different color, like this:
I dug around a bit for algorithms that could be used, and then read in Draves's paper on fractal flames that during iteration of an Iterated Function System a single point may be hit many times; rendering it in a single color therefore loses information, so we should build a histogram of how many times each point was hit and then perform a rendering pass using that histogram with shades of color (log-density coloring).
I have gotten to the point where I have the histogram, but I don't know how to use it to render the shades or to apply the log-density rendering technique. Can someone help me with this type of rendering, or at least point me to a source where I can read more about it, with practical examples?
Here is what I have tried:
AffineTransformation f1 = new AffineTransformation(0,0,0,0.25,0,-0.4);
AffineTransformation f2 = new AffineTransformation(0.95,0.005,-0.005,0.93,-0.002,0.5);
AffineTransformation f3 = new AffineTransformation(0.035,-0.2,0.16,0.04,-0.09,0.02);
AffineTransformation f4 = new AffineTransformation(-0.04,0.2,0.16,0.04,0.083,0.12);

// W and H are the image Width and Height
int N = W*H;
int pixelhistogram[] = new int[N];
Point point = new Point();   // current point of the iteration, starts at (0,0)

for(int i=0; i<N*25; i++)
{
    Point newpoint;
    double probability = Math.random();
    if(probability < 0.01)
    {
        newpoint = f1.transform(point);
    }
    else if(probability < 0.94)
    {
        newpoint = f2.transform(point);
    }
    else if(probability < 0.97)
    {
        newpoint = f3.transform(point);
    }
    else
    {
        newpoint = f4.transform(point);
    }
    point = newpoint;

    // Translate the point to a pixel in the image and
    // increment that index in the pixelhistogram array by 1
    int X = ((int)(point.getX()*W/3) + W/2)/2 + W/4 - 1;
    int Y = H - ((int)(point.getY()*H/8) + H/9) - 1;
    pixelhistogram[W*Y+X]++;
}
// Now that I have the pixelhistogram,
// I don't know how to render the shades from it
AffineTransformation is a simple class which performs Affine Transformation on a point. I omitted the code because otherwise the question would have become too lengthy.
A simple coloring would be to render pixel (X,Y) light green, green, or brown according to whether pixels[W*Y+X] is less than n1, between n1 and n2, or greater than n2. To determine n1 and n2, trial and error would probably be the simplest solution, but you could make an actual histogram of the log of the pixel counts that you have recorded to help judge where to put the cuts (or more generally you could use clustering algorithms to do it automatically).
PS: In the image that you show it looks like the stem is rendered with an L-system and the fronds are rendered using the three leaf transforms only (i.e. omit the fourth "stem-transform"); I would guess they are using the log pixel counts to shade the level of green but not to shade the stem.
Addition: I was asked, below, to discuss log-histograms. To avoid getting bogged down, I'd recommend first using a full featured data analysis software like R to see if this gets you what you want. Write out the pixels array to a text file with one number per line, then start R and run:
ct=scan('pixels_data.txt')
hist(log(ct))
If you see a multimodal histogram (i.e. one with clear peaks and valleys), that will suggest how to choose n1 and n2: put them in the valleys (i.e. if a valley on the plot is at y, set n1 = exp(y)).
If you wind up plotting histograms in Java, it can apparently be done with the JFreeChart library. Just create an array with the logs of the values in the pixels array and create the histogram from that.
At best I expect you to see only one valley in the histogram, if you use the standard 3-transform Barnsley fern, separating the really high stem values from the fronds. To color the fronds, if n is the cut between frond and stem, and pixels[W*Y+X] is less than n, you could color it using, say:
v=128.0*(log(n)-log(pixels[W*Y+X]))/log(n);
RGB=(v,255,v)
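To make that concrete, here is a rough Java sketch of such a shading pass over the pixelhistogram array from the question (BufferedImage output; the frond/stem cut n is whatever value you pick from the histogram as described above and must be greater than 1, and the brown stem color is just an assumption to tune):

import java.awt.image.BufferedImage;

// Log-density shading: frond pixels get a shade of green from the log of the hit
// count, heavily hit pixels (count >= n, i.e. the stem) get a fixed brown.
public static BufferedImage shade(int[] pixelhistogram, int W, int H, int n) {
    BufferedImage img = new BufferedImage(W, H, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int count = pixelhistogram[W * y + x];
            if (count == 0) continue;                        // background stays black
            int rgb;
            if (count >= n) {
                rgb = (139 << 16) | (69 << 8) | 19;          // brown-ish stem color
            } else {
                int v = (int) (128.0 * (Math.log(n) - Math.log(count)) / Math.log(n));
                v = Math.max(0, Math.min(255, v));
                rgb = (v << 16) | (255 << 8) | v;            // (v, 255, v): shades of green
            }
            img.setRGB(x, y, rgb);
        }
    }
    return img;
}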
PS: Getting thick stems using the random iteration algorithm only is going to be a problem. If you change the 3rd transform to be less singular, your stems will wind up looking like thin ferns and not sticks. E.g.
{"title":"Thick Stem Fern","alist":[[
[0.11378443003074948,-0.005060836319767042,0.013131296101198788,0.21863066144310556,0.44540023470694723,0.01726296943557673],
[0.15415337683611596,-0.17449052243042712,0.23850452316465576,0.2090228040695959,0.3652068203134602,0.11052918709831461],
[-0.09216684947824424,0.20844742602316002,0.2262266208270773,0.22553569847678284,0.6389950926444947,-0.008256440681230735],
[0.8478159879190097,0.027115858923993468,-0.05918196850293869,0.8521840120809901,0.08189073762585078,0.1992198482087391]
]],"va":[1,0,0,1,0,0],"word_length":6,"level_max":40,"rect_size":1}
is the JSON data describing such a thicker-stem fern.

how to discover an angle between two objects?

What I want to do is the following: I have an object (blue point) and I want to point it toward another object (green point), no matter where that object is located around it. So I need to know the angle between these two objects to do that, right?
http://s13.postimage.org/6jeuphcdj/android_angle.jpg
The problem is, I don't know how to achieve this. I've already tried atan, Math.tan and so many other functions, but without any good results.
Could you help me? Thanks in advance.
Calculate the dot product of the two object vectors, divide by the product of their lengths, and call Math.acos on the result. That will give you an angle in radians.
So, say your blue dot is at vec1 = (50, 100) and the green one at vec2 = (100, 400).
A tuple (x, y), read as a two-dimensional vector, describes the object's position relative to (0, 0) on your screen. To find the angle between these two vectors, take their dot product and divide by the product of their magnitudes; that gives you a scalar, cos(Theta), and applying the inverse (acos) gives the angle you're looking for.
You can get a better understanding on the matter here
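A small sketch of that in Java (note this is the angle between the two position vectors as seen from the origin (0, 0), which is usually not the same as the heading from the blue point to the green one; for the latter, see the atan2 answer below):

// Angle in radians between two position vectors, measured at the origin (0, 0).
public static double angleBetween(double x1, double y1, double x2, double y2) {
    double dot = x1 * x2 + y1 * y2;
    double lengths = Math.hypot(x1, y1) * Math.hypot(x2, y2);
    double cos = dot / lengths;
    cos = Math.max(-1.0, Math.min(1.0, cos));   // clamp rounding error outside [-1, 1]
    return Math.acos(cos);
}

// e.g. angleBetween(50, 100, 100, 400)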
Suppose the coordinates of the blue and green points are (xblue, yblue) and (xgreen, ygreen) respectively.
The angle at which the blue point sees the green point is:
double angleRadians = Math.atan2(ygreen-yblue, xgreen-xblue);
If you want the angle in degrees:
double angleDegrees = Math.toDegrees(angleRadians);
