Generating unique random colors (in Android/Java) - java

My goal is to generate unique random colors that can be easily distinguished by people. That means I try to avoid having both ocean blue and sky blue; I would rather have violet red and ocean blue.
I think I need a variable that acts as a "gap" between any two generated RGB values. For example, if the gap is 60, then the first and second colors can't be RGB(12,27,38) and RGB(45,102,177), because the difference in the red values is less than 60 (45-12=33).
This is what i tried so far :
int temp;
for (int i = 0; i < size; i++)
{
    Rectangle r = new Rectangle();
    temp = COLOR_LIMIT - colourGap;
    if (i == 0 || temp <= 0)
        colour = Color.argb(255, randomColour.nextInt(COLOR_LIMIT), randomColour.nextInt(COLOR_LIMIT), randomColour.nextInt(COLOR_LIMIT));
    else
        colour = Color.argb(255, randomColour.nextInt(temp), randomColour.nextInt(temp), randomColour.nextInt(temp));
    temp -= colourGap;
}
With the above code, I notice the generated colors are more distinct than with plain random generation. However, the code is not flexible and is just a temporary fix (I need to tune the iteration size and gap variable every time). I believe there should be a cleverer way to do this. Basically, I just keep decreasing the limit by 60, and if the limit is less than or equal to 0, I generate a basic random color.
Any help is appreciated, and sorry English is not my mother language.
Thanks very much.

You need to think of the "colour space" as a three-dimensional space and look at the distance between your selected colours in it.
i.e. the distance between (r1,g1,b1) and (r2,g2,b2) is sqrt((r1-r2)^2+(g1-g2)^2+(b1-b2)^2)
You can then check how close colours are to each other and define a threshold.
A much easier way, though, is just to define a "grid" of acceptable colours spaced far enough away from each other to look different.
i.e. r = random.nextInt(8)*32, g = random.nextInt(8)*32, b = random.nextInt(8)*32
Now you just have to check that none of your other generated colours exactly matches, and you know there is a minimum distance between the selected colours. To guarantee a bit more distance you could reject candidates where any two, or even any one, of the components exactly match, although doing that massively reduces the number of possible colours.
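The distance-threshold idea above can be combined with simple rejection sampling into a small sketch. The class and parameter names here are my own, not from the question:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class DistinctColors {
    // Euclidean distance between two packed 24-bit RGB colors,
    // treating (r, g, b) as a point in 3D space.
    static double distance(int rgb1, int rgb2) {
        int r1 = (rgb1 >> 16) & 0xFF, g1 = (rgb1 >> 8) & 0xFF, b1 = rgb1 & 0xFF;
        int r2 = (rgb2 >> 16) & 0xFF, g2 = (rgb2 >> 8) & 0xFF, b2 = rgb2 & 0xFF;
        int dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }

    // Keep generating random colors, rejecting any that fall within
    // minDistance of an already accepted one. Note this loops forever
    // if count is too large for the requested minDistance.
    static List<Integer> generate(int count, double minDistance, Random rnd) {
        List<Integer> colors = new ArrayList<>();
        while (colors.size() < count) {
            int candidate = rnd.nextInt(0x1000000); // random 24-bit RGB
            boolean tooClose = false;
            for (int c : colors) {
                if (distance(c, candidate) < minDistance) { tooClose = true; break; }
            }
            if (!tooClose) colors.add(candidate);
        }
        return colors;
    }
}
```

For a handful of colors a gap of around 100 leaves plenty of room in the 0-255 cube; the tighter the gap relative to the count, the longer the rejection loop runs.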

Possibly this question may give you a clue.
The accepted answer suggests changing the Hue value of the HSV color scheme at some fixed interval, while making the other two parameters random. This is how it could look in Android:
// hsv[0] is Hue [0..360), hsv[1] is Saturation [0..1], hsv[2] is Value [0..1]
float[] hsv = new float[3];
for (int i = 0; i < colorsNumber; i++) {
    hsv[0] = i * (360f / colorsNumber); // spread hues evenly around the wheel
    hsv[1] = (float) Math.random();     // Some restrictions here?
    hsv[2] = (float) Math.random();
    colors[i] = Color.HSVToColor(hsv);
}
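Outside Android, the same hue-spacing idea can be sketched with java.awt.Color, whose hue parameter runs over [0,1) instead of [0,360). Fixing saturation and value (rather than randomizing them, as above) is one way to keep every color clearly visible; that choice is my own assumption, not part of the original answer:

```java
import java.awt.Color;

public class HuePalette {
    // Evenly space hues around the color wheel. java.awt.Color takes
    // hue in [0,1), unlike Android's HSVToColor which uses degrees.
    static Color[] palette(int n) {
        Color[] colors = new Color[n];
        for (int i = 0; i < n; i++) {
            colors[i] = Color.getHSBColor((float) i / n, 0.9f, 0.9f);
        }
        return colors;
    }
}
```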

Related

Image Resizing Bilinear Interpolation and Nearest Neighbour

I am currently having problems coming up with an algorithm for rescaling an image.
I want to implement both bilinear interpolation and nearest neighbour. I understand how both of them work conceptually, but I cannot seem to translate that into code; I am still stuck on nearest neighbour.
I have written some pseudo-code for it below (based on what I know):
Resizing Images: Nearest Neighbour
Use a loop:
for j = 0 to Yb-1
    for i = 0 to Xb-1
        for c = 0 to 2
            y = floor(j*Ya/Yb)
            x = floor(i*Xa/Xb)
            Ib[j][i][c] = Ia[y][x][c]
My original data set (the volume of data) is stored in a 3D array indexed [x][y][z]. I read each pixel separately and can calculate the colors for each pixel using Color.color in Java. However, I need to know how to get the color (c = [0,1,2]) for each pixel position (x, y), excluding z (for one view), so that I can map one old pixel onto each new pixel in my new data set with the new width and height. I have written most of the code translated above in Java, but I still cannot work out a complete implementation.
Thanks in Advance😊
I am not very familiar with Java, but here is working code in Python:
import cv2
import numpy as np

img = cv2.imread("image.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
scaleX = 0.5
scaleY = 0.5
newImg = np.zeros((int(img.shape[0]*scaleX), int(img.shape[1]*scaleY))).astype(np.uint8)
for y in range(newImg.shape[0]):
    for x in range(newImg.shape[1]):
        samplex = x/scaleX
        sampley = y/scaleY
        dx = samplex - np.floor(samplex)
        dy = sampley - np.floor(sampley)
        val = img[int(sampley-dy), int(samplex-dx)]*(1-dx)*(1-dy)
        val += img[int(sampley + 1 - dy), int(samplex-dx)]*(1-dx)*(dy)
        val += img[int(sampley-dy), int(samplex + 1 - dx)]*(dx)*(1-dy)
        val += img[int(sampley + 1 - dy), int(samplex + 1 - dx)]*(dx)*(dy)
        newImg[y, x] = val.astype(np.uint8)
cv2.imshow("img", newImg)
cv2.waitKey(0)
You could simply add one more for loop inside the y and x loops to account for channels.
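Since the question asks for Java, here is a sketch of the nearest-neighbour pseudo-code in Java. It assumes the image is stored as int[row][col][channel]; the question's exact data layout may differ:

```java
public class NearestNeighbour {
    // Resize src[h][w][3] (rows, cols, channels) to newH x newW by
    // nearest-neighbour sampling, as in the pseudo-code above.
    static int[][][] resize(int[][][] src, int newW, int newH) {
        int oldH = src.length, oldW = src[0].length;
        int[][][] dst = new int[newH][newW][3];
        for (int j = 0; j < newH; j++) {
            int y = j * oldH / newH;      // integer division floors
            for (int i = 0; i < newW; i++) {
                int x = i * oldW / newW;
                for (int c = 0; c < 3; c++) {
                    dst[j][i][c] = src[y][x][c];
                }
            }
        }
        return dst;
    }
}
```

Each destination pixel simply copies the source pixel whose floored back-projected coordinate it lands on, which is exactly the (floor) step in the pseudo-code.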
If I get it right, you are interpolating volumes (voxels) instead of pixels. In that case:
Let's have source volume vol1[xs1][ys1][zs1] and target vol0[xs0][ys0][zs0], where xs, ys, zs are the resolutions. Then nearest neighbour would be:
// vol0 <- vol1
for (x0=0; x0<xs0; x0++)
 for (x1=(x0*xs1)/xs0, y0=0; y0<ys0; y0++)
  for (y1=(y0*ys1)/ys0, z0=0; z0<zs0; z0++)
   {
   z1=(z0*zs1)/zs0;
   vol0[x0][y0][z0]=vol1[x1][y1][z1];
   }
The color stays the same for nearest neighbour. In case vol0 has a smaller resolution than vol1, you can run the for loops at vol1's resolution and compute x0,y0,z0 from x1,y1,z1 instead, to speed things up. Btw, all the variables are integers; no floats are needed for this...
Now for the color encoding: in case your voxels are a 1D array ({r,g,b}) instead of a scalar integral type, i.e.
vol0[xs0][ys0][zs0][3]
vol1[xs1][ys1][zs1][3]
the code would change to:
// vol0 <- vol1
for (x0=0; x0<xs0; x0++)
 for (x1=(x0*xs1)/xs0, y0=0; y0<ys0; y0++)
  for (y1=(y0*ys1)/ys0, z0=0; z0<zs0; z0++)
   for (z1=(z0*zs1)/zs0, i=0; i<3; i++)
    vol0[x0][y0][z0][i]=vol1[x1][y1][z1][i];

What's the best way to implement a color cycling background?

What's the best way to cycle the color of a background smoothly (as well as other things) using cos or sin in Java, without using more than one file? I've tried using randomness and increasing each individual r, g, and b value separately to make this look kind of normal, but it's jittery, not smooth, and the colors are horrid. Right now, it's just plain white. I included only the necessary code, and I am using Processing 3.
//background
int bg1 = 255; //r
int bg2 = 255; //g
int bg3 = 255; //b

void draw() {
  fill(bg1, bg2, bg3);
}
You've got the general idea down. It's a three-step process:
Step 1: Declare variables at the top of your sketch.
Step 2: Use those variables to draw your scene.
Step 3: Change those variables over time.
This is the basic approach to create any animation in Processing. Here is a tutorial with more information.
Here is a small example that shows a window that cycles between white and black:
float c = 0;
float cChange = 1;

void draw(){
  background(c);
  c += cChange;
  if(c < 0 || c > 255){
    cChange *= -1;
  }
}
You would need to do something similar, but with 3 color values instead of 1. Note that I'm only changing the color by a small amount each time, which makes it appear smooth instead of jittery.
If you're still having trouble, please post an updated MCVE in a new question and we'll go from there. Good luck.
If you specifically want to use a sine wave as input rather than the sawtooth wave then you need to map your input (e.g. time) to some color range. For example:
every 2000 milliseconds the input value increases from 0 to TWO_PI
the output of sin(value) then ranges from -1 to 1
map that output to a color range
map() works well for mapping values, but you can also use colorMode() for mapping color ranges -- so rather than moving your sine output values around, just make your output 0-2.0 and set the max RGB or HSB value to 2.0 rather than 255.
Here are some examples, all running simultaneously in one sketch:
float val;
float out;

void draw() {
  background(0);
  val = TWO_PI * (millis()%2000)/2000.0; // every 2000 milliseconds val increases from 0 to TWO_PI
  out = sin(val);

  // white-black (256-0)
  pushStyle();
  fill(128 + 128*out);
  rect(0,0,50,50);
  popStyle();

  // red-black (255-0)
  pushStyle();
  colorMode(RGB, 255);
  fill(255*(out+1), 0, 0);
  rect(50,0,50,50);
  popStyle();

  // hue rainbow (0-2)
  pushStyle();
  colorMode(HSB, 2.0);
  fill(out+1, 2, 2);
  rect(0,50,50,50);
  popStyle();

  // hue blue-green (3 to 5 / 9)
  pushStyle();
  colorMode(HSB, 9);
  fill(out+4, 9, 9);
  rect(50,50,50,50);
  popStyle();

  translate(width/2, height/2 - out * height/2);
  ellipse(0,0,10,10);
}
I don't understand what you mean by cos and sin in relation to background color, but maybe something like this is what you want?
void draw(){
  int H = frameCount%1536;
  background(color(abs(H-765)-256, 512-abs(H-512), 512-abs(H-1024)));
}

2D Dynamic Lighting in Java

I am making a game that has campfire objects. What I want to do is to brighten all pixels in a circle around each campfire. However, looping through every pixel and changing those within the radius is not all that efficient and makes the game run at ~7 fps. Ideas on how to either make this process efficient or simulate light differently?
I haven't written the code for the fires yet, but this is the basic loop that checks each pixel and changes its brightness based on a value:
public static BufferedImage updateLightLevels(BufferedImage img, float light)
{
    BufferedImage brightnessBuffer = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    brightnessBuffer.getGraphics().drawImage(img, 0, 0, null);
    for(int i = 0; i < brightnessBuffer.getWidth(); i++)
    {
        for(int a = 0; a < brightnessBuffer.getHeight(); a++)
        {
            //get the color at the pixel
            int rgb = brightnessBuffer.getRGB(i, a);
            //check to see if it is transparent
            int alpha = (rgb >> 24) & 0x000000FF;
            if(alpha != 0)
            {
                //make a new color
                Color rgbColor = new Color(rgb);
                //turn it into an hsb color
                float[] hsbCol = Color.RGBtoHSB(rgbColor.getRed(), rgbColor.getGreen(), rgbColor.getBlue(), null);
                //lower it by the certain amount
                //if the pixel is already darker then push it all the way to black
                if(hsbCol[2] <= light)
                    hsbCol[2] -= (hsbCol[2]) - .01f;
                else
                    hsbCol[2] -= light;
                //turn the hsb color into a rgb color
                int rgbNew = Color.HSBtoRGB(hsbCol[0], hsbCol[1], hsbCol[2]);
                //set the pixel to the new color
                brightnessBuffer.setRGB(i, a, rgbNew);
            }
        }
    }
    return brightnessBuffer;
}
I apologize if my code is not clean; I'm self-taught.
I can give you lots of approaches.
You're currently rendering on the CPU, and you're checking every single pixel. That's hardcore brute force, and brute force isn't what the CPU is best at. It works, but as you've seen, the performance is abysmal.
I'd point you in two directions that would massively improve your performance:
Method 1 - Culling. Does every single pixel really need to have its lighting calculated? If you could instead calculate a general "ambient light", then you could paint most of the pixels in that ambient light, and then only calculate the really proper lighting for pixels closest to lights; so lights throw a "spot" effect which fades into the ambient. That way you're only ever performing checks on a few of the pixels of the screen at a time (the circle area around each light). The code you posted just looks like it paints every pixel, I'm not seeing where the "circle" dropoff is even applied.
Edit:
Instead, sweep through the lights, and just loop through local offsets of the light position.
for(Light l : lights){
    for(int x = l.getX() - LIGHT_DISTANCE; x < l.getX() + LIGHT_DISTANCE; x++){
        for(int y = l.getY() - LIGHT_DISTANCE; y < l.getY() + LIGHT_DISTANCE; y++){
            //calculate light
            int rgb = brightnessBuffer.getRGB(x, y);
            //do stuff
        }
    }
}
You may want to add a check with that method so overlapping lights don't cause a bunch of rechecks, unless you DO want that behavior (ideally those pixels would be twice as bright)
Method 2 - Offload the calculation to the GPU. There's a reason we have graphics cards: they're specifically built to number-crunch in exactly those situations where you really need brute force. If you can offload this process to the GPU as a shader, then it'll run lickety-split, even if you run it on every pixel several times over. This will require you to learn graphics APIs, however; if you're working in Java, LibGDX makes it very painless to render using the GPU and pass a couple of shaders to the GPU.
I am uncertain about the way in which you are going about calculating light values, but I do know that using the BufferedImage.getRGB() and BufferedImage.setRGB() methods is very slow.
I would suggest accessing the pixels of the BufferedImage directly from an array (much faster IMO)
to do this:
BufferedImage lightImage = new BufferedImage(width,height,BufferedImage.TYPE_INT_ARGB);
Raster r = lightImage.getRaster();
int[] lightPixels = ((DataBufferInt)r.getDataBuffer()).getData();
Now, changing any pixel in this array will show on your image. Note that the values in this array are packed color values in the format of whatever type you defined your image with.
In this case it is TYPE_INT_ARGB, meaning you have to include the alpha value in the number when setting a color (the int layout is AARRGGBB).
Since this array is a 1D array, it is more difficult to access pixels using x and y co-ordinates. The following method is an implementation of accessing pixels from the lightPixels array more easily.
public void setLight(int x, int y, int[] array, int width, int value){
    array[width*y + x] = value;
}
*note: width is the width of your image (or the width of the 2D array your level might exist as, if it were a 2D array).
You can also get pixels from the lightPixels array with a similar method, just excluding the value and returning the array[width*y+x].
It is up to you how you use the setLight() and getLight() methods but in the cases that I have encountered, using this method is much faster than using getRGB and setRGB.
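For completeness, the getLight() counterpart mentioned above might look like this; it is a sketch using the same width*y+x index math, and the class name is my own:

```java
public class PixelIndex {
    // Write a packed ARGB value into a 1D pixel buffer laid out row by row.
    public static void setLight(int x, int y, int[] array, int width, int value) {
        array[width * y + x] = value;
    }

    // Read it back using the same width*y+x index math.
    public static int getLight(int x, int y, int[] array, int width) {
        return array[width * y + x];
    }
}
```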
Hope this helps

Image Processing Edge Detection in Java

This is my situation: it involves aligning a scanned image to account for incorrect scanning, and I must do the alignment in my Java program.
These are more details:
There is a table-like form printed on a sheet of paper, which will be scanned into an image file.
I will open the picture with Java, and I will have an OVERLAY of text boxes.
The text boxes are supposed to align correctly with the scanned image.
In order to align correctly, my Java program must analyze the scanned image and detect the coordinates of the edges of the table on the scanned image, and thus position the image and the textboxes so that the textboxes and the image both align properly (in case of incorrect scanning)
You see, the guy scanning the image might not necessarily place the image in a perfectly correct position, so I need my program to automatically align the scanned image as it loads it. This program will be reusable on many of such scanned images, so I need the program to be flexible in this way.
My question is one of the following:
How can I use Java to detect the y coordinate of the upper edge of the table and the x coordinate of the leftmost edge of the table? The table is a regular table with many cells, with a thin black border, printed on a white sheet of paper (horizontal printout).
If an easier method exists to automatically align the scanned image so that all scanned images have the graphical table aligned to the same x, y coordinates, then share this method :).
If you don't know the answer to the above two questions, do tell me where I should start. I don't know much about graphics programming in Java, and I have about one month to finish this program. Just assume that I have a tight schedule and that I have to make the graphics part as simple as possible.
Cheers and thank you.
Try to start from a simple scenario and then improve the approach.
1. Detect corners.
2. Find the corners at the boundaries of the form.
3. Using the form corner coordinates, calculate the rotation angle.
4. Rotate/scale the image.
5. Map the position of each field in the form relative to the form origin coordinates.
6. Match the textboxes.
The program presented at the end of this post does the steps 1 to 3. It was implemented using Marvin Framework. The image below shows the output image with the detected corners.
The program also outputs: Rotation angle:1.6365770416167182
Source code:
import java.awt.Color;
import java.awt.Point;
import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;
import marvin.plugin.MarvinImagePlugin;
import marvin.util.MarvinAttributes;
import marvin.util.MarvinPluginLoader;

public class FormCorners {

    public FormCorners(){
        // Load plug-in
        MarvinImagePlugin moravec = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.corner.moravec");
        MarvinAttributes attr = new MarvinAttributes();

        // Load image
        MarvinImage image = MarvinImageIO.loadImage("./res/printedForm.jpg");

        // Process and save output image
        moravec.setAttribute("threshold", 2000);
        moravec.process(image, null, attr);
        Point[] boundaries = boundaries(attr);
        image = showCorners(image, boundaries, 12);
        MarvinImageIO.saveImage(image, "./res/printedForm_output.jpg");

        // Print rotation angle
        double angle = (Math.atan2((boundaries[1].y*-1)-(boundaries[0].y*-1), boundaries[1].x-boundaries[0].x) * 180 / Math.PI);
        angle = angle >= 0 ? angle : angle + 360;
        System.out.println("Rotation angle:"+angle);
    }

    private Point[] boundaries(MarvinAttributes attr){
        Point upLeft = new Point(-1,-1);
        Point upRight = new Point(-1,-1);
        Point bottomLeft = new Point(-1,-1);
        Point bottomRight = new Point(-1,-1);
        double ulDistance=9999, blDistance=9999, urDistance=9999, brDistance=9999;
        double tempDistance=-1;
        int[][] cornernessMap = (int[][]) attr.get("cornernessMap");

        for(int x=0; x<cornernessMap.length; x++){
            for(int y=0; y<cornernessMap[0].length; y++){
                if(cornernessMap[x][y] > 0){
                    if((tempDistance = Point.distance(x, y, 0, 0)) < ulDistance){
                        upLeft.x = x; upLeft.y = y;
                        ulDistance = tempDistance;
                    }
                    if((tempDistance = Point.distance(x, y, cornernessMap.length, 0)) < urDistance){
                        upRight.x = x; upRight.y = y;
                        urDistance = tempDistance;
                    }
                    if((tempDistance = Point.distance(x, y, 0, cornernessMap[0].length)) < blDistance){
                        bottomLeft.x = x; bottomLeft.y = y;
                        blDistance = tempDistance;
                    }
                    if((tempDistance = Point.distance(x, y, cornernessMap.length, cornernessMap[0].length)) < brDistance){
                        bottomRight.x = x; bottomRight.y = y;
                        brDistance = tempDistance;
                    }
                }
            }
        }
        return new Point[]{upLeft, upRight, bottomRight, bottomLeft};
    }

    private MarvinImage showCorners(MarvinImage image, Point[] points, int rectSize){
        MarvinImage ret = image.clone();
        for(Point p : points){
            ret.fillRect(p.x-(rectSize/2), p.y-(rectSize/2), rectSize, rectSize, Color.red);
        }
        return ret;
    }

    public static void main(String[] args) {
        new FormCorners();
    }
}
Edge detection is typically done by enhancing the contrast between neighboring pixels, so that you get an easily detectable line suitable for further processing.
To do this, a "kernel" transforms a pixel according to the pixel's initial value and the values of that pixel's neighbors. A good edge detection kernel will enhance the differences between neighboring pixels, and reduce the strength of a pixel with similar neighbors.
I would start by looking at the Sobel operator. This might not return results that are immediately useful to you; however, it will get you far closer than you would be if you approached the problem with little knowledge of the field.
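As a concrete illustration, a minimal Sobel gradient-magnitude pass over a grayscale intensity array could look like this. It is a sketch: real code would handle the image border and normalization more carefully:

```java
public class Sobel {
    // Gradient magnitude via the 3x3 Sobel kernels; img is a grayscale
    // intensity array, and the 1-pixel border is left at zero.
    static int[][] magnitude(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                // horizontal kernel: [-1 0 1; -2 0 2; -1 0 1]
                int gx = -img[y-1][x-1] + img[y-1][x+1]
                       - 2*img[y][x-1]  + 2*img[y][x+1]
                       - img[y+1][x-1]  + img[y+1][x+1];
                // vertical kernel: [-1 -2 -1; 0 0 0; 1 2 1]
                int gy = -img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1]
                       + img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1];
                out[y][x] = (int) Math.min(255, Math.hypot(gx, gy));
            }
        }
        return out;
    }
}
```

A sharp vertical edge produces a strong response along the boundary column, while flat regions stay at zero, which is exactly the "enhance differences, suppress similar neighbors" behavior described above.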
After you have some crisp, clean edges, you can use larger kernels to detect points where a 90° bend between two lines occurs; that might give you the pixel coordinates of the outer rectangle, which could be enough for your purposes.
With those outer coordinates, it still takes a bit of math to make the new pixels be composited from the average values of the old pixels, rotated and moved to "match". The results (especially if you do not know about anti-aliasing math) can be pretty bad, adding blur to the image.
Sharpening filters might be a solution, but they come with their own issues: they mainly make the picture sharper by adding graininess. Too much, and it is obvious that the original image is not a high-quality scan.
I researched the libraries, but in the end I found it more convenient to code up my own edge detection methods.
The class below detects the black/grayed-out edges of a scanned sheet of paper and returns the x and y coordinates of those edges, scanning either from the right or bottom edge (reverse = true) or from the left or top edge (reverse = false). It also takes a range along vertical cuts (rangex) and along horizontal cuts (rangey), both measured in pixels; the ranges determine which of the detected points count as outliers.
The program makes 4 vertical cuts and 4 horizontal cuts at the coordinates given in the arrays, and records where it first finds dark dots. It then uses the ranges to eliminate outliers: sometimes a little spot on the paper causes an outlier point. The smaller the range, the fewer the outliers; however, the edge may be slightly tilted, so you don't want to make the range too small either.
Have fun. It works perfectly for me.
import java.awt.image.BufferedImage;
import java.awt.Color;
import java.util.ArrayList;
import java.lang.Math;
import java.awt.Point;

public class EdgeDetection {

    public App ap;
    public int[] horizontalCuts = {120, 220, 320, 420};
    public int[] verticalCuts = {300, 350, 375, 400};

    public void printEdgesTest(BufferedImage image, boolean reversex, boolean reversey, int rangex, int rangey){
        int[] mx = horizontalCuts;
        int[] my = verticalCuts;

        //you are getting edge points here
        //the "true" parameter indicates that it performs a cut starting at 0 (left edge)
        int[] xEdges = getEdges(image, mx, reversex, true);
        int edgex = getEdge(xEdges, rangex);
        for(int x = 0; x < xEdges.length; x++){
            System.out.println("EDGE = " + xEdges[x]);
        }
        System.out.println("THE EDGE = " + edgex);

        //the "false" parameter indicates you are doing your cut starting at the end (image.getHeight)
        //and ending at 0
        //if the parameter was true, it would mean it would start the cuts at y = 0
        int[] yEdges = getEdges(image, my, reversey, false);
        int edgey = getEdge(yEdges, rangey);
        for(int y = 0; y < yEdges.length; y++){
            System.out.println("EDGE = " + yEdges[y]);
        }
        System.out.println("THE EDGE = " + edgey);
    }

    //This function takes an array of coordinates, detects outliers,
    //and computes the average of the non-outlier points.
    public int getEdge(int[] edges, int range){
        ArrayList<Integer> result = new ArrayList<Integer>();
        boolean[] passes = new boolean[edges.length];
        int[][] differences = new int[edges.length][edges.length-1];

        //THIS CODE SEGMENT SAVES THE DIFFERENCES BETWEEN THE POINTS INTO AN ARRAY
        for(int n = 0; n<edges.length; n++){
            for(int m = 0; m<edges.length; m++){
                if(m < n){
                    differences[n][m] = edges[n] - edges[m];
                }else if(m > n){
                    differences[n][m-1] = edges[n] - edges[m];
                }
            }
        }

        //This array determines which points are outliers or not (fall within range of other points)
        for(int n = 0; n<edges.length; n++){
            passes[n] = false;
            for(int m = 0; m<edges.length-1; m++){
                if(Math.abs(differences[n][m]) < range){
                    passes[n] = true;
                    System.out.println("EDGECHECK = TRUE" + n);
                    break;
                }
            }
        }

        //Create a new array only using valid points
        for(int i = 0; i<edges.length; i++){
            if(passes[i]){
                result.add(edges[i]);
            }
        }

        //Calculate the rounded mean... This will be the x/y coordinate of the edge
        //Whether they are x or y values depends on the "reverse" variable used to calculate the edges array
        int divisor = result.size();
        int addend = 0;
        double mean = 0;
        for(Integer i : result){
            addend += i;
        }
        mean = (double)addend/(double)divisor;

        //returns the mean of the valid points: this is the x or y coordinate of your calculated edge
        if(mean - (int)mean >= .5){
            System.out.println("MEAN " + mean);
            return (int)mean+1;
        }else{
            System.out.println("MEAN " + mean);
            return (int)mean;
        }
    }

    //this function looks for "dark" points, which include light gray, to detect edges.
    //reverse - when true, starts counting from x = image.getWidth()-1 or y = image.getHeight()-1 and counts down to 0;
    //when false, counts up from 0
    //verticalEdge - determines whether you want to detect a vertical edge or a horizontal edge
    //arr[] - determines the coordinates of the vertical or horizontal cuts you will do
    //set the arr[] array according to the graphical layout of your scanned image
    //image - this is the image you want to detect black/white edges of
    public int[] getEdges(BufferedImage image, int[] arr, boolean reverse, boolean verticalEdge){
        int red = 255;
        int green = 255;
        int blue = 255;
        int[] result = new int[arr.length];
        for(int n = 0; n<arr.length; n++){
            for(int m = reverse ? (verticalEdge ? image.getWidth():image.getHeight())-1:0; reverse ? m>=0:m<(verticalEdge ? image.getWidth():image.getHeight());){
                Color c = new Color(image.getRGB(verticalEdge ? m:arr[n], verticalEdge ? arr[n]:m));
                red = c.getRed();
                green = c.getGreen();
                blue = c.getBlue();
                //determine if the point is considered "dark" or not.
                //modify the range if you want to only include really dark spots.
                //occasionally, though, the edge might be blurred out, and light gray helps
                if(red<239 && green<239 && blue<239){
                    result[n] = m;
                    break;
                }
                //count forwards or backwards depending on the reverse variable
                if(reverse){
                    m--;
                }else{
                    m++;
                }
            }
        }
        return result;
    }
}
A similar problem I solved in the past basically figured out the orientation of the form, re-aligned it, re-scaled it, and I was all set. You can use the Hough transform to detect the angular offset of the image (i.e. how much it is rotated), but you still need to detect the boundaries of the form. It also had to accommodate the boundaries of the piece of paper itself.
This was a lucky break for me, because it basically showed a black and white image in the middle of a big black border.
1. Apply an aggressive 5x5 median filter to remove some noise.
2. Convert from grayscale to black and white (rescale intensity values from [0,255] to [0,1]).
3. Calculate the Principal Component Analysis (i.e. calculate the eigenvectors of the covariance matrix for your image from the calculated eigenvalues) (http://en.wikipedia.org/wiki/Principal_component_analysis#Derivation_of_PCA_using_the_covariance_method).
4. This gives you a basis vector. You simply use that to re-orient your image to the standard basis matrix (i.e. [1,0],[0,1]).
Your image is now aligned beautifully. I did this for normalizing the orientation of MRI scans of entire human brains.
You also know that you have a massive black border around the actual image. You simply keep deleting rows from the top and bottom, and both sides of the image until they are all gone. You can temporarily apply a 7x7 median or mode filter to a copy of the image so far at this point. It helps rule out too much border remaining in the final image from thumbprints, dirt, etc.
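The row-deletion idea above can be sketched as follows for a grayscale array (columns work the same way, transposed; this is my own illustration, not the original code):

```java
public class BorderTrim {
    // Scan in from the top and bottom past all-black rows and return
    // the surviving row range as {top, bottom}, inclusive.
    static int[] contentRows(int[][] img) {
        int top = 0, bottom = img.length - 1;
        while (top <= bottom && isBlack(img[top])) top++;
        while (bottom >= top && isBlack(img[bottom])) bottom--;
        return new int[]{top, bottom};
    }

    static boolean isBlack(int[] row) {
        for (int v : row) if (v != 0) return false;
        return true;
    }
}
```

In practice you would test against a small darkness threshold rather than exactly zero, since scans are never perfectly black.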

Generating spectrum color palettes

Is there an easy way to convert between color models in Java (RGB, HSV and Lab)?
Assuming RGB color model:
How do I calculate black body spectrum color palette? I want to use it for a heatmap chart.
How about single-wavelength spectrum?
Edit: I found that the ColorSpace class can be used for conversions between RGB/CIE and many other color models.
Java has built-in RGB-to-HSB conversion. Whenever I need a quick palette of colors in Java I just do this:
public Color[] generateColors(int n)
{
    Color[] cols = new Color[n];
    for(int i = 0; i < n; i++)
    {
        cols[i] = Color.getHSBColor((float) i / (float) n, 0.85f, 1.0f);
    }
    return cols;
}
It is a quick and dirty hack (I would tweak the 'magic' numbers for your app), but for my simple uses it generates a nice, bright, pleasant palette.
Maybe I'm not understanding your question, but you can't really generate a true black-body spectrum from an RGB output device. Limited color gamut would be an issue, if nothing else. If all you want is something that visually resembles a black-body spectrum, that's probably a lot easier.
As an approximation, ramp from (R,G,B) (0,0,0) to (255,0,0), then to (255,255,0), then to (255,255,255). That'd give you the dull-red to orange, to yellow, to white transition.
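That ramp can be sketched as a single function of a normalized temperature t in [0,1]. The choice of three equal segments is arbitrary, for illustration only, not a physical model:

```java
import java.awt.Color;

public class HeatRamp {
    // Map t in [0,1] through black -> red -> yellow -> white by
    // ramping the R, G and B channels up one after another.
    static Color ramp(double t) {
        double x = t * 3.0; // three equal segments
        int r = (int) (255 * clamp01(x));
        int g = (int) (255 * clamp01(x - 1.0));
        int b = (int) (255 * clamp01(x - 2.0));
        return new Color(r, g, b);
    }

    static double clamp01(double v) {
        return Math.max(0.0, Math.min(1.0, v));
    }
}
```

Feeding evenly spaced t values through ramp() gives the dull-red, orange, yellow, white transition described above, which usually looks convincing enough for a heatmap.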
If you want something more scientific, the Wikipedia article on black body radiation has some plots of color vs temperature. Once you figure out the CIE coordinates, you can translate those to RGB in your favorite color space.
Edit: found some other online references:
What color is the Sun?
What color is a blackbody?
You can build such a palette using the HSV color model. That's easy once you have the HSV-to-RGB code in place and play around with the numbers for a few minutes.
However, I think it's not worth adding that code to your project just to generate a little palette.
It's much easier and less work to extract the palettes you need from a file and add them as a static array.
Photoshop lets you edit palettes and comes with a very nice black-body palette as a preset.
You can simply save these as a .act file. The file itself is just 256 colors at 3 bytes each (order is red, green, blue; 8 bits per channel).
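Working with that layout from Java is straightforward, since the body is just 256 × 3 bytes of red, green, blue. This sketch builds the raw byte layout in memory; it is my own illustration, and real .act files may also carry a short optional footer (color count and transparency index), which is ignored here:

```java
import java.awt.Color;

public class ActPalette {
    // Pack up to 256 colors into the raw .act body: one red, green
    // and blue byte per entry, 768 bytes in total.
    static byte[] toActBytes(Color[] palette) {
        byte[] out = new byte[256 * 3];
        for (int i = 0; i < 256 && i < palette.length; i++) {
            out[i * 3]     = (byte) palette[i].getRed();
            out[i * 3 + 1] = (byte) palette[i].getGreen();
            out[i * 3 + 2] = (byte) palette[i].getBlue();
        }
        return out;
    }
}
```

Reading a saved palette back is the mirror image: read 768 bytes and mask each with 0xFF to recover the unsigned channel values.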
You can generate this color spectrum https://i.stack.imgur.com/ktLmt.jpg using the following code:
public void render(Screen screen) {
    int green = 255;
    int red = 0;
    for (int i = 0; i <= 255 * 2; i++) {
        int rate = i / 255; // integer division: 0 for the first half, 1 for the second
        screen.fillRect((x + (i * width)/6), y, width, height, new Color(red, green, 0));
        red += 1 - rate;  // ramp red up from 0 to 255 in the first half
        green -= rate;    // then ramp green down from 255 to 0
    }
}
This is a nice way to make a HSL color square in AS3.
/**
* Generate a BitmapData HSL color square (n x n) of hue
* At a low n dimension you get cool blocky color palettes (e.g. try n=10)
*/
function generateColorSquare(n:uint, hue:uint):BitmapData
{
    var bd:BitmapData = new BitmapData(n, n, false, 0xFFFFFF);
    for (var i:uint = n*n; i > 0; i--)
    {
        bd.setPixel(i % n, Math.floor(i / n), HSBColor.convertHSBtoRGB(hue, i / (n*n), (1/n) * (i % n)));
    }
    return bd;
}
