I wrote the two methods below to automatically select N distinct colors. It works by defining a piecewise linear function on the RGB cube. The benefit of this is that you can also get a progressive scale if that's what you want, but when N gets large the colors can start to look similar. I can also imagine evenly subdividing the RGB cube into a lattice and then drawing points. Does anyone know any other methods? I'm ruling out defining a list and then just cycling through it. I should also say I don't generally care if they clash or don't look nice; they just have to be visually distinct.
public static List<Color> pick(int num) {
List<Color> colors = new ArrayList<Color>();
if (num < 2)
return colors;
float dx = 1.0f / (float) (num - 1);
for (int i = 0; i < num; i++) {
colors.add(get(i * dx));
}
return colors;
}
public static Color get(float x) {
float r = 0.0f;
float g = 0.0f;
float b = 1.0f;
if (x >= 0.0f && x < 0.2f) {
x = x / 0.2f;
r = 0.0f;
g = x;
b = 1.0f;
} else if (x >= 0.2f && x < 0.4f) {
x = (x - 0.2f) / 0.2f;
r = 0.0f;
g = 1.0f;
b = 1.0f - x;
} else if (x >= 0.4f && x < 0.6f) {
x = (x - 0.4f) / 0.2f;
r = x;
g = 1.0f;
b = 0.0f;
} else if (x >= 0.6f && x < 0.8f) {
x = (x - 0.6f) / 0.2f;
r = 1.0f;
g = 1.0f - x;
b = 0.0f;
} else if (x >= 0.8f && x <= 1.0f) {
x = (x - 0.8f) / 0.2f;
r = 1.0f;
g = 0.0f;
b = x;
}
return new Color(r, g, b);
}
This question appears in quite a few SO discussions:
Algorithm For Generating Unique Colors
Generate unique colours
Generate distinctly different RGB colors in graphs
How to generate n different colors for any natural number n?
Different solutions have been proposed, but none are optimal. Luckily, science comes to the rescue:
Arbitrary N
Colour displays for categorical images (free download)
A WEB SERVICE TO PERSONALISE MAP COLOURING (free download, a webservice solution should be available by next month)
An Algorithm for the Selection of High-Contrast Color Sets (the authors offer a free C++ implementation)
High-contrast sets of colors (The first algorithm for the problem)
The last 2 will be free via most university libraries / proxies.
N is finite and relatively small
In this case, one could go for a list solution. A very interesting article on the subject is freely available:
A Colour Alphabet and the Limits of Colour Coding
There are several color lists to consider:
Boynton's list of 11 colors that are almost never confused (available in the first paper of the previous section)
Kelly's 22 colors of maximum contrast (available in the paper above)
I also ran into this Palette by an MIT student.
Lastly, the following links may be useful in converting between different color systems / coordinates (some colors in the articles are not specified in RGB, for instance):
http://chem8.org/uch/space-55036-do-blog-id-5333.html
https://metacpan.org/pod/Color::Library::Dictionary::NBS_ISCC
Color Theory: How to convert Munsell HVC to RGB/HSB/HSL
For Kelly's and Boynton's list, I've already made the conversion to RGB (with the exception of white and black, which should be obvious). Some C# code:
public static ReadOnlyCollection<Color> KellysMaxContrastSet
{
get { return _kellysMaxContrastSet.AsReadOnly(); }
}
private static readonly List<Color> _kellysMaxContrastSet = new List<Color>
{
UIntToColor(0xFFFFB300), //Vivid Yellow
UIntToColor(0xFF803E75), //Strong Purple
UIntToColor(0xFFFF6800), //Vivid Orange
UIntToColor(0xFFA6BDD7), //Very Light Blue
UIntToColor(0xFFC10020), //Vivid Red
UIntToColor(0xFFCEA262), //Grayish Yellow
UIntToColor(0xFF817066), //Medium Gray
//The following will not be good for people with defective color vision
UIntToColor(0xFF007D34), //Vivid Green
UIntToColor(0xFFF6768E), //Strong Purplish Pink
UIntToColor(0xFF00538A), //Strong Blue
UIntToColor(0xFFFF7A5C), //Strong Yellowish Pink
UIntToColor(0xFF53377A), //Strong Violet
UIntToColor(0xFFFF8E00), //Vivid Orange Yellow
UIntToColor(0xFFB32851), //Strong Purplish Red
UIntToColor(0xFFF4C800), //Vivid Greenish Yellow
UIntToColor(0xFF7F180D), //Strong Reddish Brown
UIntToColor(0xFF93AA00), //Vivid Yellowish Green
UIntToColor(0xFF593315), //Deep Yellowish Brown
UIntToColor(0xFFF13A13), //Vivid Reddish Orange
UIntToColor(0xFF232C16), //Dark Olive Green
};
public static ReadOnlyCollection<Color> BoyntonOptimized
{
get { return _boyntonOptimized.AsReadOnly(); }
}
private static readonly List<Color> _boyntonOptimized = new List<Color>
{
Color.FromArgb(0, 0, 255), //Blue
Color.FromArgb(255, 0, 0), //Red
Color.FromArgb(0, 255, 0), //Green
Color.FromArgb(255, 255, 0), //Yellow
Color.FromArgb(255, 0, 255), //Magenta
Color.FromArgb(255, 128, 128), //Pink
Color.FromArgb(128, 128, 128), //Gray
Color.FromArgb(128, 0, 0), //Brown
Color.FromArgb(255, 128, 0), //Orange
};
static public Color UIntToColor(uint color)
{
var a = (byte)(color >> 24);
var r = (byte)(color >> 16);
var g = (byte)(color >> 8);
var b = (byte)(color >> 0);
return Color.FromArgb(a, r, g, b);
}
And here are the RGB values in hex and 8-bit-per-channel representations:
kelly_colors_hex = [
0xFFB300, # Vivid Yellow
0x803E75, # Strong Purple
0xFF6800, # Vivid Orange
0xA6BDD7, # Very Light Blue
0xC10020, # Vivid Red
0xCEA262, # Grayish Yellow
0x817066, # Medium Gray
# The following don't work well for people with defective color vision
0x007D34, # Vivid Green
0xF6768E, # Strong Purplish Pink
0x00538A, # Strong Blue
0xFF7A5C, # Strong Yellowish Pink
0x53377A, # Strong Violet
0xFF8E00, # Vivid Orange Yellow
0xB32851, # Strong Purplish Red
0xF4C800, # Vivid Greenish Yellow
0x7F180D, # Strong Reddish Brown
0x93AA00, # Vivid Yellowish Green
0x593315, # Deep Yellowish Brown
0xF13A13, # Vivid Reddish Orange
0x232C16, # Dark Olive Green
]
kelly_colors = dict(vivid_yellow=(255, 179, 0),
strong_purple=(128, 62, 117),
vivid_orange=(255, 104, 0),
very_light_blue=(166, 189, 215),
vivid_red=(193, 0, 32),
grayish_yellow=(206, 162, 98),
medium_gray=(129, 112, 102),
# these aren't good for people with defective color vision:
vivid_green=(0, 125, 52),
strong_purplish_pink=(246, 118, 142),
strong_blue=(0, 83, 138),
strong_yellowish_pink=(255, 122, 92),
strong_violet=(83, 55, 122),
vivid_orange_yellow=(255, 142, 0),
strong_purplish_red=(179, 40, 81),
vivid_greenish_yellow=(244, 200, 0),
strong_reddish_brown=(127, 24, 13),
vivid_yellowish_green=(147, 170, 0),
deep_yellowish_brown=(89, 51, 21),
vivid_reddish_orange=(241, 58, 19),
dark_olive_green=(35, 44, 22))
For all you Java developers, here are the JavaFX colors:
// Don't forget to import javafx.scene.paint.Color;
private static final Color[] KELLY_COLORS = {
Color.web("0xFFB300"), // Vivid Yellow
Color.web("0x803E75"), // Strong Purple
Color.web("0xFF6800"), // Vivid Orange
Color.web("0xA6BDD7"), // Very Light Blue
Color.web("0xC10020"), // Vivid Red
Color.web("0xCEA262"), // Grayish Yellow
Color.web("0x817066"), // Medium Gray
Color.web("0x007D34"), // Vivid Green
Color.web("0xF6768E"), // Strong Purplish Pink
Color.web("0x00538A"), // Strong Blue
Color.web("0xFF7A5C"), // Strong Yellowish Pink
Color.web("0x53377A"), // Strong Violet
Color.web("0xFF8E00"), // Vivid Orange Yellow
Color.web("0xB32851"), // Strong Purplish Red
Color.web("0xF4C800"), // Vivid Greenish Yellow
Color.web("0x7F180D"), // Strong Reddish Brown
Color.web("0x93AA00"), // Vivid Yellowish Green
Color.web("0x593315"), // Deep Yellowish Brown
Color.web("0xF13A13"), // Vivid Reddish Orange
Color.web("0x232C16"), // Dark Olive Green
};
The following shows the unsorted Kelly colors in the order given above.
The following shows the Kelly colors sorted by hue (note that some yellows are not very contrasting).
You can use the HSL color model to create your colors.
If all you want is differing hues (likely), and slight variations on lightness or saturation, you can distribute the hues like so:
// assumes hue [0, 360), saturation [0, 100), lightness [0, 100)
for(i = 0; i < 360; i += 360 / num_colors) {
HSLColor c;
c.hue = i;
c.saturation = 90 + randf() * 10;
c.lightness = 50 + randf() * 10;
addColor(c);
}
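A runnable Java version of the same idea (a minimal sketch; note that java.awt.Color works with HSB rather than HSL, so the brightness values are not a one-to-one translation of the lightness above, and everything is scaled to the [0, 1] range that Color.getHSBColor expects):
import java.awt.Color;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class HueSpread {
    // Distributes numColors hues evenly around the color wheel,
    // with a small random jitter on saturation and brightness.
    public static List<Color> hueSpread(int numColors) {
        List<Color> colors = new ArrayList<>();
        Random rnd = new Random();
        for (int i = 0; i < numColors; i++) {
            float hue = i / (float) numColors;                // evenly spaced in [0, 1)
            float saturation = 0.9f + rnd.nextFloat() * 0.1f; // 0.9 - 1.0
            float brightness = 0.9f + rnd.nextFloat() * 0.1f; // 0.9 - 1.0
            colors.add(Color.getHSBColor(hue, saturation, brightness));
        }
        return colors;
    }

    public static void main(String[] args) {
        for (Color c : hueSpread(8)) {
            System.out.printf("#%02X%02X%02X%n", c.getRed(), c.getGreen(), c.getBlue());
        }
    }
}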
Like Uri Cohen's answer, but implemented as a generator instead. It starts by using colors that are far apart, and it is deterministic.
Sample, left colors first:
#!/usr/bin/env python3
from typing import Iterable, Tuple
import colorsys
import itertools
from fractions import Fraction
from pprint import pprint
def zenos_dichotomy() -> Iterable[Fraction]:
"""
http://en.wikipedia.org/wiki/1/2_%2B_1/4_%2B_1/8_%2B_1/16_%2B_%C2%B7_%C2%B7_%C2%B7
"""
for k in itertools.count():
yield Fraction(1,2**k)
def fracs() -> Iterable[Fraction]:
"""
[Fraction(0, 1), Fraction(1, 2), Fraction(1, 4), Fraction(3, 4), Fraction(1, 8), Fraction(3, 8), Fraction(5, 8), Fraction(7, 8), Fraction(1, 16), Fraction(3, 16), ...]
[0.0, 0.5, 0.25, 0.75, 0.125, 0.375, 0.625, 0.875, 0.0625, 0.1875, ...]
"""
yield Fraction(0)
for k in zenos_dichotomy():
i = k.denominator # [1,2,4,8,16,...]
for j in range(1,i,2):
yield Fraction(j,i)
# can be used for the v in hsv to map linear values 0..1 to something that looks equidistant
# bias = lambda x: (math.sqrt(x/3)/Fraction(2,3)+Fraction(1,3))/Fraction(6,5)
HSVTuple = Tuple[Fraction, Fraction, Fraction]
RGBTuple = Tuple[float, float, float]
def hue_to_tones(h: Fraction) -> Iterable[HSVTuple]:
for s in [Fraction(6,10)]: # optionally use range
for v in [Fraction(8,10),Fraction(5,10)]: # could use range too
yield (h, s, v) # use bias for v here if you use range
def hsv_to_rgb(x: HSVTuple) -> RGBTuple:
return colorsys.hsv_to_rgb(*map(float, x))
flatten = itertools.chain.from_iterable
def hsvs() -> Iterable[HSVTuple]:
return flatten(map(hue_to_tones, fracs()))
def rgbs() -> Iterable[RGBTuple]:
return map(hsv_to_rgb, hsvs())
def rgb_to_css(x: RGBTuple) -> str:
uint8tuple = map(lambda y: int(y*255), x)
return "rgb({},{},{})".format(*uint8tuple)
def css_colors() -> Iterable[str]:
return map(rgb_to_css, rgbs())
if __name__ == "__main__":
# sample 100 colors in css format
sample_colors = list(itertools.islice(css_colors(), 100))
pprint(sample_colors)
For the sake of generations to come I add here the accepted answer in Python.
import numpy as np
import colorsys
def _get_colors(num_colors):
colors=[]
for i in np.arange(0., 360., 360. / num_colors):
hue = i/360.
lightness = (50 + np.random.rand() * 10)/100.
saturation = (90 + np.random.rand() * 10)/100.
colors.append(colorsys.hls_to_rgb(hue, lightness, saturation))
return colors
Here's an idea. Imagine an HSV cylinder.
Define the upper and lower limits you want for the brightness and saturation. This defines a square cross-section ring within the space.
Now, scatter N points randomly within this space.
Then apply an iterative repulsion algorithm on them, either for a fixed number of iterations, or until the points stabilise.
Now you should have N points representing N colours that are about as different as possible within the colour space you're interested in.
Hugo
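A minimal Java sketch of the idea above (illustrative only; the step size, iteration count and the capped inverse-square force are arbitrary choices): scatter points in the hue/saturation/value ring, then repeatedly nudge each point away from all the others, treating hue as wrapping around the cylinder.
import java.awt.Color;
import java.util.Random;

public class HsvRepulsion {
    // Returns n colors spread out by a crude repulsion simulation in HSV space.
    public static Color[] distinctColors(int n, float minSV, float maxSV, int iterations) {
        Random rnd = new Random(42);            // fixed seed for repeatable output
        float[][] pts = new float[n][3];        // {hue, saturation, value}
        for (float[] p : pts) {
            p[0] = rnd.nextFloat();
            p[1] = minSV + rnd.nextFloat() * (maxSV - minSV);
            p[2] = minSV + rnd.nextFloat() * (maxSV - minSV);
        }
        for (int it = 0; it < iterations; it++) {
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    if (i == j) continue;
                    float dh = pts[i][0] - pts[j][0];
                    dh -= Math.round(dh);       // hue difference wraps around, now in [-0.5, 0.5]
                    float ds = pts[i][1] - pts[j][1];
                    float dv = pts[i][2] - pts[j][2];
                    float d2 = dh * dh + ds * ds + dv * dv + 1e-6f;
                    float push = Math.min(0.0005f / d2, 0.02f); // capped inverse-square repulsion
                    pts[i][0] = (pts[i][0] + push * dh + 1f) % 1f;
                    pts[i][1] = clamp(pts[i][1] + push * ds, minSV, maxSV);
                    pts[i][2] = clamp(pts[i][2] + push * dv, minSV, maxSV);
                }
            }
        }
        Color[] out = new Color[n];
        for (int i = 0; i < n; i++) {
            out[i] = Color.getHSBColor(pts[i][0], pts[i][1], pts[i][2]);
        }
        return out;
    }

    private static float clamp(float x, float lo, float hi) {
        return Math.max(lo, Math.min(hi, x));
    }
}
For example, distinctColors(12, 0.5f, 1.0f, 500) keeps saturation and value in the upper half of their range.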
Everyone seems to have missed the existence of the very useful YUV color space which was designed to represent perceived color differences in the human visual system. Distances in YUV represent differences in human perception. I needed this functionality for MagicCube4D which implements 4-dimensional Rubik's cubes and an unlimited number of other 4D twisty puzzles having arbitrary numbers of faces.
My solution starts by selecting random points in YUV and then iteratively breaking up the closest two points, and only converting to RGB when returning the result. The method is O(n^3) but that doesn't matter for small numbers or ones that can be cached. It can certainly be made more efficient but the results appear to be excellent.
The function allows for optional specification of brightness thresholds so as not to produce colors in which no component is brighter or darker than given amounts. I.e. you may not want values close to black or white. This is useful when the resulting colors will be used as base colors that are later shaded via lighting, layering, transparency, etc. and must still appear different from their base colors.
import java.awt.Color;
import java.util.Random;
/**
* Contains a method to generate N visually distinct colors and helper methods.
*
* @author Melinda Green
*/
public class ColorUtils {
private ColorUtils() {} // To disallow instantiation.
private final static float
U_OFF = .436f,
V_OFF = .615f;
private static final long RAND_SEED = 0;
private static Random rand = new Random(RAND_SEED);
/*
* Returns an array of ncolors RGB triplets such that each is as unique from the rest as possible
* and each color has at least one component greater than minComponent and one less than maxComponent.
* Use min == 1 and max == 0 to include the full RGB color range.
*
* Warning: O N^2 algorithm blows up fast for more than 100 colors.
*/
public static Color[] generateVisuallyDistinctColors(int ncolors, float minComponent, float maxComponent) {
rand.setSeed(RAND_SEED); // So that we get consistent results for each combination of inputs
float[][] yuv = new float[ncolors][3];
// initialize array with random colors
for(int got = 0; got < ncolors;) {
System.arraycopy(randYUVinRGBRange(minComponent, maxComponent), 0, yuv[got++], 0, 3);
}
// continually break up the worst-fit color pair until we get tired of searching
for(int c = 0; c < ncolors * 1000; c++) {
float worst = 8888;
int worstID = 0;
for(int i = 1; i < yuv.length; i++) {
for(int j = 0; j < i; j++) {
float dist = sqrdist(yuv[i], yuv[j]);
if(dist < worst) {
worst = dist;
worstID = i;
}
}
}
float[] best = randYUVBetterThan(worst, minComponent, maxComponent, yuv);
if(best == null)
break;
else
yuv[worstID] = best;
}
Color[] rgbs = new Color[yuv.length];
for(int i = 0; i < yuv.length; i++) {
float[] rgb = new float[3];
yuv2rgb(yuv[i][0], yuv[i][1], yuv[i][2], rgb);
rgbs[i] = new Color(rgb[0], rgb[1], rgb[2]);
//System.out.println(rgb[i][0] + "\t" + rgb[i][1] + "\t" + rgb[i][2]);
}
return rgbs;
}
public static void hsv2rgb(float h, float s, float v, float[] rgb) {
// H is given on [0->6] or -1. S and V are given on [0->1].
// RGB are each returned on [0->1].
float m, n, f;
int i;
float[] hsv = new float[3];
hsv[0] = h;
hsv[1] = s;
hsv[2] = v;
System.out.println("H: " + h + " S: " + s + " V:" + v);
if(hsv[0] == -1) {
rgb[0] = rgb[1] = rgb[2] = hsv[2];
return;
}
i = (int) (Math.floor(hsv[0]));
f = hsv[0] - i;
if(i % 2 == 0)
f = 1 - f; // if i is even
m = hsv[2] * (1 - hsv[1]);
n = hsv[2] * (1 - hsv[1] * f);
switch(i) {
case 6:
case 0:
rgb[0] = hsv[2];
rgb[1] = n;
rgb[2] = m;
break;
case 1:
rgb[0] = n;
rgb[1] = hsv[2];
rgb[2] = m;
break;
case 2:
rgb[0] = m;
rgb[1] = hsv[2];
rgb[2] = n;
break;
case 3:
rgb[0] = m;
rgb[1] = n;
rgb[2] = hsv[2];
break;
case 4:
rgb[0] = n;
rgb[1] = m;
rgb[2] = hsv[2];
break;
case 5:
rgb[0] = hsv[2];
rgb[1] = m;
rgb[2] = n;
break;
}
}
// From http://en.wikipedia.org/wiki/YUV#Mathematical_derivations_and_formulas
public static void yuv2rgb(float y, float u, float v, float[] rgb) {
rgb[0] = 1 * y + 0 * u + 1.13983f * v;
rgb[1] = 1 * y + -.39465f * u + -.58060f * v;
rgb[2] = 1 * y + 2.03211f * u + 0 * v;
}
public static void rgb2yuv(float r, float g, float b, float[] yuv) {
yuv[0] = .299f * r + .587f * g + .114f * b;
yuv[1] = -.14713f * r + -.28886f * g + .436f * b;
yuv[2] = .615f * r + -.51499f * g + -.10001f * b;
}
private static float[] randYUVinRGBRange(float minComponent, float maxComponent) {
while(true) {
float y = rand.nextFloat(); // * YFRAC + 1-YFRAC);
float u = rand.nextFloat() * 2 * U_OFF - U_OFF;
float v = rand.nextFloat() * 2 * V_OFF - V_OFF;
float[] rgb = new float[3];
yuv2rgb(y, u, v, rgb);
float r = rgb[0], g = rgb[1], b = rgb[2];
if(0 <= r && r <= 1 &&
0 <= g && g <= 1 &&
0 <= b && b <= 1 &&
(r > minComponent || g > minComponent || b > minComponent) && // don't want all dark components
(r < maxComponent || g < maxComponent || b < maxComponent)) // don't want all light components
return new float[]{y, u, v};
}
}
private static float sqrdist(float[] a, float[] b) {
float sum = 0;
for(int i = 0; i < a.length; i++) {
float diff = a[i] - b[i];
sum += diff * diff;
}
return sum;
}
private static double worstFit(Color[] colors) {
float worst = 8888;
float[] a = new float[3], b = new float[3];
for(int i = 1; i < colors.length; i++) {
colors[i].getColorComponents(a);
for(int j = 0; j < i; j++) {
colors[j].getColorComponents(b);
float dist = sqrdist(a, b);
if(dist < worst) {
worst = dist;
}
}
}
return Math.sqrt(worst);
}
private static float[] randYUVBetterThan(float bestDistSqrd, float minComponent, float maxComponent, float[][] in) {
for(int attempt = 1; attempt < 100 * in.length; attempt++) {
float[] candidate = randYUVinRGBRange(minComponent, maxComponent);
boolean good = true;
for(int i = 0; i < in.length; i++)
if(sqrdist(candidate, in[i]) < bestDistSqrd)
good = false;
if(good)
return candidate;
}
return null; // after a bunch of passes, couldn't find a candidate that beat the best.
}
/**
* Simple example program.
*/
public static void main(String[] args) {
final int ncolors = 10;
Color[] colors = generateVisuallyDistinctColors(ncolors, .8f, .3f);
for(int i = 0; i < colors.length; i++) {
System.out.println(colors[i].toString());
}
System.out.println("Worst fit color = " + worstFit(colors));
}
}
The HSL color model may be well suited for "sorting" colors, but if you are looking for visually distinct colors you definitely need the Lab color model instead.
CIELAB was designed to be perceptually uniform with respect to human color vision, meaning that the same amount of numerical change in these values corresponds to about the same amount of visually perceived change.
Once you know that, finding the optimal subset of N colors from a wide range of colors is still an NP-hard problem, somewhat similar to the travelling salesman problem, and all the solutions using k-means algorithms or the like won't really help.
That said, if N is not too big and if you start with a limited set of colors, you will easily find a very good subset of distinct colors according to a Lab distance with a simple random function.
I've coded such a tool for my own usage (you can find it here: https://mokole.com/palette.html); here is what I got for N=7:
It's all JavaScript, so feel free to take a look at the source of the page and adapt it for your own needs.
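To illustrate the approach, here is a minimal Java sketch of a greedy max-min selection in Lab (illustrative only, not the code behind the tool linked above; the candidate pool built from a coarse RGB lattice, the white seed color and the plain CIE76 distance are arbitrary choices):
import java.awt.Color;
import java.util.ArrayList;
import java.util.List;

public class LabFarthestPoint {

    // Greedily picks n colors from a candidate pool, each time taking the candidate
    // whose minimum Lab distance to the already-chosen colors is largest.
    public static List<Color> pick(int n) {
        List<Color> pool = new ArrayList<>();
        for (int r = 0; r <= 255; r += 32)          // coarse RGB lattice as candidate pool
            for (int g = 0; g <= 255; g += 32)
                for (int b = 0; b <= 255; b += 32)
                    pool.add(new Color(r, g, b));

        List<Color> chosen = new ArrayList<>();
        chosen.add(Color.WHITE);                    // seed; any color works
        while (chosen.size() < n + 1) {
            Color best = null;
            double bestScore = -1;
            for (Color cand : pool) {
                double minDist = Double.MAX_VALUE;
                for (Color c : chosen)
                    minDist = Math.min(minDist, labDistance(cand, c));
                if (minDist > bestScore) {
                    bestScore = minDist;
                    best = cand;
                }
            }
            chosen.add(best);
        }
        return chosen.subList(1, chosen.size());    // drop the seed
    }

    /** Euclidean distance in CIELAB (CIE76). */
    static double labDistance(Color c1, Color c2) {
        double[] a = rgbToLab(c1), b = rgbToLab(c2);
        return Math.sqrt(Math.pow(a[0] - b[0], 2) + Math.pow(a[1] - b[1], 2) + Math.pow(a[2] - b[2], 2));
    }

    /** sRGB -> XYZ (D65) -> CIELAB. */
    static double[] rgbToLab(Color c) {
        double r = pivotRgb(c.getRed() / 255.0);
        double g = pivotRgb(c.getGreen() / 255.0);
        double b = pivotRgb(c.getBlue() / 255.0);
        double x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047;
        double y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 1.00000;
        double z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883;
        double fx = pivotXyz(x), fy = pivotXyz(y), fz = pivotXyz(z);
        return new double[]{116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)};
    }

    private static double pivotRgb(double c) {
        return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    private static double pivotXyz(double t) {
        return t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16.0 / 116.0;
    }
}
Swapping in a better color-difference formula (e.g. CIEDE2000) only requires changing labDistance.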
A lot of very nice answers up there, but it might be useful to mention the Python package distinctipy in case someone is looking for a quick Python solution. It is a lightweight package available from PyPI that is very straightforward to use:
from distinctipy import distinctipy
colors = distinctipy.get_colors(12)
print(colors)
# display the colours
distinctipy.color_swatch(colors)
It returns a list of RGB tuples:
[(0, 1, 0), (1, 0, 1), (0, 0.5, 1), (1, 0.5, 0), (0.5, 0.75, 0.5), (0.4552518132842178, 0.12660764790179446, 0.5467915225460569), (1, 0, 0), (0.12076092516775849, 0.9942188027771208, 0.9239958090462229), (0.254747094970068, 0.4768020779917903, 0.02444859177890535), (0.7854526395841417, 0.48630704929211144, 0.9902480906347156), (0, 0, 1), (1, 1, 0)]
It also has some nice additional functionality, such as generating colors that are distinct from an existing list of colors.
Here's a solution to manage your "distinct" issue, which is entirely overblown:
Create a unit sphere and drop points on it with repelling charges. Run a particle system until they no longer move (or the delta is "small enough"). At this point, each of the points are as far away from each other as possible. Convert (x, y, z) to rgb.
I mention it because for certain classes of problems, this type of solution can work better than brute force.
I originally saw this approach here for tessellating a sphere.
Again, the most obvious solutions of traversing HSL space or RGB space will probably work just fine.
We just need a set of RGB triplets with the maximum amount of distance between them.
We can define a simple linear ramp, and then resize that ramp to get the desired number of colors.
In python:
from skimage.transform import resize
import numpy as np
import random  # needed for the shuffle option below
def distinguishable_colors(n, shuffle = True,
sinusoidal = False,
oscillate_tone = False):
ramp = ([1, 0, 0],[1,1,0],[0,1,0],[0,0,1], [1,0,1]) if n>3 else ([1,0,0], [0,1,0],[0,0,1])
coltrio = np.vstack(ramp)
colmap = np.round(resize(coltrio, [n,3], preserve_range=True,
order = 1 if n>3 else 3
, mode = 'wrap'),3)
if sinusoidal: colmap = np.sin(colmap*np.pi/2)
colmap = [colmap[x,] for x in range(colmap.shape[0])]
if oscillate_tone:
oscillate = [0,1]*round(len(colmap)/2+.5)
oscillate = [np.array([osc,osc,osc]) for osc in oscillate]
colmap = [.8*colmap[x] + .2*oscillate[x] for x in range(len(colmap))]
#Whether to shuffle the output colors
if shuffle:
random.seed(1)
random.shuffle(colmap)
return colmap
I would try to fix saturation and luminance to the maximum and focus on hue only. As I see it, H can go from 0 to 255 and then wrap around. Now if you wanted two contrasting colours, you would take opposite sides of this ring, i.e. 0 and 128. If you wanted 4 colours, you would take some separated by 1/4 of the 256-length circle, i.e. 0, 64, 128, 192. And of course, as others suggested, when you need N colours you could just separate them by 256/N.
What I would add to this idea is to use a reversed representation of a binary number to form this sequence. Look at this:
0 = 00000000 after reversal is 00000000 = 0
1 = 00000001 after reversal is 10000000 = 128
2 = 00000010 after reversal is 01000000 = 64
3 = 00000011 after reversal is 11000000 = 192
...
This way, if you need N different colours, you can just take the first N numbers and reverse them, and you get points that are as distant as possible (for N being a power of two) while at the same time preserving that each prefix of the sequence differs a lot.
This was an important goal in my use case, as I had a chart where colours were sorted by the area covered by that colour. I wanted the largest areas of the chart to have large contrast, and I was OK with some small areas having colours similar to those in the top 10, as it was obvious to the reader which was which just by observing the area.
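A quick Java sketch of the bit-reversal trick (illustrative only; the 8-bit hue wheel and full saturation/value are assumptions):
public class BitReversalHues {
    // Reverses the lowest 8 bits of i, so consecutive indices land far apart on the hue wheel.
    static int reverse8(int i) {
        int r = 0;
        for (int bit = 0; bit < 8; bit++) {
            r = (r << 1) | (i & 1);
            i >>= 1;
        }
        return r;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 8; i++) {
            int hue = reverse8(i);  // 0, 128, 64, 192, 32, 160, 96, 224
            java.awt.Color c = java.awt.Color.getHSBColor(hue / 256f, 1f, 1f); // full S and V
            System.out.printf("i=%d hue=%d -> #%02X%02X%02X%n",
                    i, hue, c.getRed(), c.getGreen(), c.getBlue());
        }
    }
}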
This is trivial in MATLAB (there is an hsv command):
cmap = hsv(number_of_colors)
I have written a package for R called qualpalr that is designed specifically for this purpose. I recommend you look at the vignette to find out how it works, but I will try to summarize the main points.
qualpalr takes a specification of colors in the HSL color space (which was described previously in this thread), projects it to the DIN99d color space (which is perceptually uniform), and finds the n colors that maximize the minimum distance between any of them.
# Create a palette of 4 colors of hues from 0 to 360, saturations between
# 0.1 and 0.5, and lightness from 0.6 to 0.85
pal <- qualpal(n = 4, list(h = c(0, 360), s = c(0.1, 0.5), l = c(0.6, 0.85)))
# Look at the colors in hex format
pal$hex
#> [1] "#6F75CE" "#CC6B76" "#CAC16A" "#76D0D0"
# Create a palette using one of the predefined color subspaces
pal2 <- qualpal(n = 4, colorspace = "pretty")
# Distance matrix of the DIN99d color differences
pal2$de_DIN99d
#> #69A3CC #6ECC6E #CA6BC4
#> 6ECC6E 22
#> CA6BC4 21 30
#> CD976B 24 21 21
plot(pal2)
I think this simple recursive algorithm complements the accepted answer, in order to generate distinct hue values. I made it for HSV, but it can be used for other color spaces too.
It generates hues in cycles, as separated as possible from each other within each cycle.
/**
* 1st cycle: 0, 120, 240
* 2nd cycle (+60): 60, 180, 300
* 3rd cycle (+30): 30, 150, 270, 90, 210, 330
* 4th cycle (+15): 15, 135, 255, 75, 195, 315, 45, 165, 285, 105, 225, 345
*/
public static float recursiveHue(int n) {
// if 3: alternates red, green, blue variations
float firstCycle = 3;
// First cycle
if (n < firstCycle) {
return n * 360f / firstCycle;
}
// Each cycle has as many values as all previous cycles summed (powers of 2)
else {
// floor of log base 2
int numCycles = (int)Math.floor(Math.log(n / firstCycle) / Math.log(2));
// divDown stores the larger power of 2 that is still lower than n
int divDown = (int)(firstCycle * Math.pow(2, numCycles));
// same hues as the previous cycle, but adding an offset (half of the previous cycle's)
return recursiveHue(n % divDown) + 180f / divDown;
}
}
I was unable to find this kind of algorithm here. I hope it helps; it's my first post here.
Pretty neat with seaborn for Python users:
>>> import seaborn as sns
>>> sns.color_palette(n_colors=4)
It returns a list of RGB tuples:
[(0.12156862745098039, 0.4666666666666667, 0.7058823529411765),
(1.0, 0.4980392156862745, 0.054901960784313725),
(0.17254901960784313, 0.6274509803921569, 0.17254901960784313),
(0.8392156862745098, 0.15294117647058825, 0.1568627450980392)]
Janus's answer, but easier to read. I've also adjusted the color scheme slightly and marked where you can modify it for yourself.
I've made this a snippet that can be pasted directly into a Jupyter notebook.
import colorsys
import itertools
from fractions import Fraction
from IPython.display import HTML as html_print
def infinite_hues():
yield Fraction(0)
for k in itertools.count():
i = 2**k # zenos_dichotomy
for j in range(1,i,2):
yield Fraction(j,i)
def hue_to_hsvs(h: Fraction):
# tweak values to adjust scheme
for s in [Fraction(6,10)]:
for v in [Fraction(6,10), Fraction(9,10)]:
yield (h, s, v)
def rgb_to_css(rgb) -> str:
uint8tuple = map(lambda y: int(y*255), rgb)
return "rgb({},{},{})".format(*uint8tuple)
def css_to_html(css):
return f"<text style=background-color:{css}> </text>"
def show_colors(n=33):
hues = infinite_hues()
hsvs = itertools.chain.from_iterable(hue_to_hsvs(hue) for hue in hues)
rgbs = (colorsys.hsv_to_rgb(*hsv) for hsv in hsvs)
csss = (rgb_to_css(rgb) for rgb in rgbs)
htmls = (css_to_html(css) for css in csss)
myhtmls = itertools.islice(htmls, n)
display(html_print("".join(myhtmls)))
show_colors()
If N is big enough, you're going to get some similar-looking colors. There's only so many of them in the world.
Why not just evenly distribute them through the spectrum, like so:
IEnumerable<Color> CreateUniqueColors(int nColors)
{
    // Number of levels needed per channel so that levels^3 >= nColors
    int levels = (int)Math.Ceiling(Math.Pow(nColors, 1 / 3d));
    int step = 255 / Math.Max(levels - 1, 1);
    for (int r = 0; r <= 255; r += step)
        for (int g = 0; g <= 255; g += step)
            for (int b = 0; b <= 255; b += step)
                yield return Color.FromArgb(r, g, b);
}
If you want to mix up the sequence so that similar colors aren't next to each other, you could maybe shuffle the resulting list.
Am I underthinking this?
This OpenCV function uses the HSV color model to generate n evenly distributed colors around the hue circle (0 <= H < 360°) with maximum S = 1.0 and V = 1.0. The function outputs the BGR colors in bgr_mat:
void distributed_colors(int n, cv::Mat_<cv::Vec3f>& bgr_mat) {
    // n x 1 matrix of HSV triplets, all starting at (0, 1, 1)
    cv::Mat_<cv::Vec3f> hsv_mat(n, 1, cv::Vec3f(0.0, 1.0, 1.0));
    double step = 360.0 / n;
    double h = 0.0;
    for (int i = 0; i < n; i++, h += step) {
        hsv_mat.at<cv::Vec3f>(i)[0] = h;  // spread the hues evenly
    }
    cv::cvtColor(hsv_mat, bgr_mat, cv::COLOR_HSV2BGR);
    bgr_mat *= 255;
}
This generates the same colors as Janus Troelsen's solution, but instead of generators it uses start/stop semantics. It's also fully vectorized.
import numpy as np
import numpy.typing as npt
import matplotlib.colors
def distinct_colors(start: int=0, stop: int=20) -> npt.NDArray[np.float64]:
"""Returns an array of distinct RGB colors, from an infinite sequence of colors
"""
if stop <= start: # empty interval; return empty array
return np.array([], dtype=np.float64)
sat_values = [6/10] # other tones could be added
val_values = [8/10, 5/10] # other tones could be added
colors_per_hue_value = len(sat_values) * len(val_values)
# Get the start and stop indices within the hue value stream that are needed
# to achieve the requested range
hstart = start // colors_per_hue_value
hstop = (stop+colors_per_hue_value-1) // colors_per_hue_value
# Zero will cause a singularity in the calculation, so we will add the zero
# afterwards
prepend_zero = hstart==0
# Sequence (if hstart=1): 1,2,...,hstop-1
i = np.arange(1 if prepend_zero else hstart, hstop)
# The following yields (if hstart is 1): 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8,
# 1/16, 3/16, ...
hue_values = (2*i+1) / np.power(2,np.floor(np.log2(i*2))) - 1
if prepend_zero:
hue_values = np.concatenate(([0], hue_values))
# Make all combinations of h, s and v values, as if done by a nested loop
# in that order
hsv = np.array(np.meshgrid(hue_values, sat_values, val_values, indexing='ij')
).reshape((3,-1)).transpose()
# Select the requested range (only the necessary values were computed but we
# need to adjust the indices since start & stop are not necessarily multiples
# of colors_per_hue_value)
hsv = hsv[start % colors_per_hue_value :
start % colors_per_hue_value + stop - start]
# Use the matplotlib vectorized function to convert hsv to rgb
return matplotlib.colors.hsv_to_rgb(hsv)
Samples:
from matplotlib.colors import ListedColormap
ListedColormap(distinct_colors(stop=20))
ListedColormap(distinct_colors(start=30, stop=50))
I'm trying to find the theoretical output range of improved Perlin noise for 1, 2 and 3 dimensions. I'm aware of existing answers to this question, but they don't seem to accord with my practical findings.
If n is the number of dimensions then according to [1] it should be [-sqrt(n/4), sqrt(n/4)]. According to [2] (which refers to [3]) it should be [-0.5·sqrt(n), 0.5·sqrt(n)] (which amounts to the same thing).
This means that the ranges should be approximately:
Dimensions | Range
1 | [-0.5, 0.5]
2 | [-0.707, 0.707]
3 | [-0.866, 0.866]
However when I run the following code (which uses Ken Perlin's own reference implementation of improved noise from his website), I get higher values for 2 and 3 dimensions, namely approximately:
Dimensions | Range
1 | [-0.5, 0.5]
2 | [-0.891, 0.999]
3 | [-0.997, 0.999]
With different permutations I even sometimes get values slightly over 1.0 for 3 dimensions, and for some strange reason one of the bounds for two dimensions always seems to be about 0.89 while the other is about 1.00.
I can't figure out whether this is due to a bug in my code (I don't see how since this is Ken Perlin's own code) or due to those discussions not being correct or not being applicable somehow, in which case I would like to know what the theoretical ranges are for improved Perlin noise.
Can you replicate this? Are the results wrong, or can you point me to a discussion of the theoretical values that accords with this outcome?
The code:
public class PerlinTest {
public static void main(String[] args) {
double lowest1DValue = Double.MAX_VALUE, highest1DValue = -Double.MAX_VALUE;
double lowest2DValue = Double.MAX_VALUE, highest2DValue = -Double.MAX_VALUE;
double lowest3DValue = Double.MAX_VALUE, highest3DValue = -Double.MAX_VALUE;
final Random random = new SecureRandom();
for (int i = 0; i < 10000000; i++) {
double value = noise(random.nextDouble() * 256.0, 0.0, 0.0);
if (value < lowest1DValue) {
lowest1DValue = value;
}
if (value > highest1DValue) {
highest1DValue = value;
}
value = noise(random.nextDouble() * 256.0, random.nextDouble() * 256.0, 0.0);
if (value < lowest2DValue) {
lowest2DValue = value;
}
if (value > highest2DValue) {
highest2DValue = value;
}
value = noise(random.nextDouble() * 256.0, random.nextDouble() * 256.0, random.nextDouble() * 256.0);
if (value < lowest3DValue) {
lowest3DValue = value;
}
if (value > highest3DValue) {
highest3DValue = value;
}
}
System.out.println("Lowest 1D value: " + lowest1DValue);
System.out.println("Highest 1D value: " + highest1DValue);
System.out.println("Lowest 2D value: " + lowest2DValue);
System.out.println("Highest 2D value: " + highest2DValue);
System.out.println("Lowest 3D value: " + lowest3DValue);
System.out.println("Highest 3D value: " + highest3DValue);
}
static public double noise(double x, double y, double z) {
int X = (int)Math.floor(x) & 255, // FIND UNIT CUBE THAT
Y = (int)Math.floor(y) & 255, // CONTAINS POINT.
Z = (int)Math.floor(z) & 255;
x -= Math.floor(x); // FIND RELATIVE X,Y,Z
y -= Math.floor(y); // OF POINT IN CUBE.
z -= Math.floor(z);
double u = fade(x), // COMPUTE FADE CURVES
v = fade(y), // FOR EACH OF X,Y,Z.
w = fade(z);
int A = p[X ]+Y, AA = p[A]+Z, AB = p[A+1]+Z, // HASH COORDINATES OF
B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z; // THE 8 CUBE CORNERS,
return lerp(w, lerp(v, lerp(u, grad(p[AA ], x , y , z ), // AND ADD
grad(p[BA ], x-1, y , z )), // BLENDED
lerp(u, grad(p[AB ], x , y-1, z ), // RESULTS
grad(p[BB ], x-1, y-1, z ))),// FROM 8
lerp(v, lerp(u, grad(p[AA+1], x , y , z-1 ), // CORNERS
grad(p[BA+1], x-1, y , z-1 )), // OF CUBE
lerp(u, grad(p[AB+1], x , y-1, z-1 ),
grad(p[BB+1], x-1, y-1, z-1 ))));
}
static double fade(double t) { return t * t * t * (t * (t * 6 - 15) + 10); }
static double lerp(double t, double a, double b) { return a + t * (b - a); }
static double grad(int hash, double x, double y, double z) {
int h = hash & 15; // CONVERT LO 4 BITS OF HASH CODE
double u = h<8 ? x : y, // INTO 12 GRADIENT DIRECTIONS.
v = h<4 ? y : h==12||h==14 ? x : z;
return ((h&1) == 0 ? u : -u) + ((h&2) == 0 ? v : -v);
}
static final int p[] = new int[512], permutation[] = { 151,160,137,91,90,15,
131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
};
static { for (int i=0; i < 256 ; i++) p[256+i] = p[i] = permutation[i]; }
}
Ken’s not using unit vectors. As [1] says, with my emphasis:
Third, there are many different ways to select the random vectors at the grid cell corners. In Improved Perlin noise, instead of selecting any random vector, one of 12 vectors pointing to the edges of a cube are used instead. Here, I will talk strictly about a continuous range of angles since it is easier – however, the range of value of an implementation of Perlin noise using a restricted set of vectors will never be larger. Finally, the script in this repository assumes the vectors are of unit length. If they not, the range of value should be scaled according to the maximum vector length. Note that the vectors in Improved Perlin noise are not unit length.
For Ken’s improved noise, the maximum vector length is 1 in 1D and √2 in 2D, so the theoretical bounds are [−0.5, 0.5] in 1D and [−1, 1] in 2D. I don’t know why you’re not seeing the full range in 2D; if you shuffled the permutation I bet you would sometimes.
For 3D, the maximum vector length is still √2, but the extreme case identified by [1] isn’t a possible output, so the theoretical range of [−√(3/2), √(3/2)] is an overestimate. These folks tried to work it out exactly, and yes, the maximum absolute value does seem to be strictly greater than 1.
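To put the scaling argument into a formula (simply scaling the unit-vector bound ±√(n/4) from [1] by the maximum gradient length L): the bound becomes ±L·√(n/4), i.e. 1·√(1/4) = 0.5 in 1D, √2·√(2/4) = 1 in 2D, and √2·√(3/4) = √(3/2) ≈ 1.22 in 3D, which is the overestimate mentioned above.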
I want to find the first red pixel with OpenCV and cut off the rest of the picture to the right of it.
So far I have written this code, but it works very slowly:
int firstRedPixel = mat.Cols();
int len = 0;
for (int x = 0; x < mat.Rows(); x++)
{
for (int y = 0; y < mat.Cols(); y++)
{
double[] rgb = mat.Get(x, y);
double r = rgb[0];
double g = rgb[1];
double b = rgb[2];
if ((r > 175) && (r > 2 * g) && (r > 2 * b))
{
if (len == 3)
{
firstRedPixel = y - len;
break;
}
len++;
}
else
{
len = 0;
}
}
}
Any solutions?
You can:
1) find red pixels (see here)
2) get the bounding box of red pixels
3) crop your image
The code is in C++, but it's only OpenCV functions so it should not be difficult to port to Java:
#include <opencv2\opencv.hpp>
int main()
{
cv::Mat3b img = cv::imread("path/to/img");
// Find red pixels
// https://stackoverflow.com/a/32523532/5008845
cv::Mat3b bgr_inv = ~img;
cv::Mat3b hsv_inv;
cv::cvtColor(bgr_inv, hsv_inv, cv::COLOR_BGR2HSV);
cv::Mat1b red_mask;
inRange(hsv_inv, cv::Scalar(90 - 10, 70, 50), cv::Scalar(90 + 10, 255, 255), red_mask); // Cyan is 90
// Get the rect
std::vector<cv::Point> red_points;
cv::findNonZero(red_mask, red_points);
cv::Rect red_area = cv::boundingRect(red_points);
// Show green rectangle on red area
cv::Mat3b out = img.clone();
cv::rectangle(out, red_area, cv::Scalar(0, 255, 0));
// Define the non red area
cv::Rect not_red_area;
not_red_area.x = 0;
not_red_area.y = 0;
not_red_area.width = red_area.x - 1;
not_red_area.height = img.rows;
// Crop away red area
cv::Mat3b result = img(not_red_area);
return 0;
}
This is not the way to work with computer vision. I know this, because I did it the same way.
One way to achieve your goal would be to use template matching with a red bar that you cut out of your image, and thus locate the red border, and cut it away.
Another would be to transfer to HSV space, filter out red content, and use contour finding to locate a large red structure, as you need it.
There are plenty of ways to do this. Looping over pixel values yourself is rarely the right approach, though, and you won't take advantage of sophisticated vectorisation or algorithms that way.
This answer suggests that it's over 10 times faster to loop over the pixel array instead of using BufferedImage.getRGB. Such a difference is too important to be ignored in my computer vision program. For that reason, I rewrote my IntegralImage method to calculate the integral image using the pixel array:
/* Generate an integral image. Every pixel of such an image contains the sum of the colors of all
the pixels before it and itself.
*/
public static double[][][] integralImage(BufferedImage image) {
//Cache width and height in variables
int w = image.getWidth();
int h = image.getHeight();
//Create the 2D array as large as the image is
//Notice that I use [Y, X] coordinates to comply with the formula
double integral_image[][][] = new double[h][w][3];
//Variables for the image pixel array looping
final int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
//final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
//If the image has alpha, there will be 4 elements per pixel
final boolean hasAlpha = image.getAlphaRaster() != null;
final int pixel_size = hasAlpha?4:3;
//If there's alpha it's the first of 4 values, so we skip it
final int pixel_offset = hasAlpha?1:0;
//Coordinates, will be iterated too
//It's faster than calculating them using % and multiplication
int x=0;
int y=0;
int pixel = 0;
//Tmp storage for color
int[] color = new int[3];
//Loop through pixel array
for(int i=0, l=pixels.length; i<l; i+=pixel_size) {
//Prepare all the colors in advance
color[2] = ((int) pixels[pixel + pixel_offset] & 0xff); // blue;
color[1] = ((int) pixels[pixel + pixel_offset + 1] & 0xff); // green;
color[0] = ((int) pixels[pixel + pixel_offset + 2] & 0xff); // red;
//For every color, calculate the integrals
for(int j=0; j<3; j++) {
//Calculate the integral image field
double A = (x > 0 && y > 0) ? integral_image[y-1][x-1][j] : 0;
double B = (x > 0) ? integral_image[y][x-1][j] : 0;
double C = (y > 0) ? integral_image[y-1][x][j] : 0;
integral_image[y][x][j] = - A + B + C + color[j];
}
//Iterate coordinates
x++;
if(x>=w) {
x=0;
y++;
}
}
//Return the array
return integral_image;
}
The problem is that if I use this debug output in the for loop:
if(x==0) {
System.out.println("rgb["+pixels[pixel+pixel_offset+2]+", "+pixels[pixel+pixel_offset+1]+", "+pixels[pixel+pixel_offset]+"]");
System.out.println("rgb["+color[0]+", "+color[1]+", "+color[2]+"]");
}
This is what I get:
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
...
So how should I properly retrieve pixel array for BufferedImage images?
A bug in the code above that is easily missed is that the for loop doesn't advance the way you'd expect. The for loop updates i, while the loop body uses pixel for its array indexing. Thus, you will only ever see the values of pixel 1, 2 and 3.
Apart from that:
The "problem" with the negative pixel values is most likely that the code assumes a BufferedImage that stores its pixels in "pixel interleaved" form; however, they are stored "pixel packed". That is, all samples (R, G, B and A) for one pixel are stored in a single array element, an int. This will be the case for all BufferedImage.TYPE_INT_* types (while the BufferedImage.TYPE_nBYTE_* types are stored interleaved).
It's completely normal to have negative values in the raster; this will happen for any pixel that is less than 50% transparent (more than or equal to 50% opaque), because of how the 4 samples are packed into the int, and because int is a signed type in Java.
In this case, use:
int[] color = new int[3];
for (int i = 0; i < pixels.length; i++) {
// Assuming TYPE_INT_RGB, TYPE_INT_ARGB or TYPE_INT_ARGB_PRE
// For TYPE_INT_BGR, you need to reverse the colors.
// You seem to ignore alpha, is that correct?
color[0] = ((pixels[i] >> 16) & 0xff); // red;
color[1] = ((pixels[i] >> 8) & 0xff); // green;
color[2] = ( pixels[i] & 0xff); // blue;
// The rest of the computations...
}
Another possibility, is that you have created a custom type image (BufferedImage.TYPE_CUSTOM) that really uses a 32 bit unsigned int per sample. This is possible, however, int is still a signed entity in Java, so you need to mask off the sign bit. To complicate this a little, in Java -1 & 0xFFFFFFFF == -1 because any computation on an int will still be an int, unless you explicitly say otherwise (doing the same on a byte or short value would have "scaled up" to int). To get a positive value, you need to use a long value like this: -1 & 0xFFFFFFFFL (which is 4294967295).
In this case, use:
long[] color = new long[3];
for(int i = 0; i < pixels.length / pixel_size; i += pixel_size) {
// Somehow assuming BGR order in input, and RGB output (color)
// Still ignoring alpha
color[0] = (pixels[i + pixel_offset + 2] & 0xFFFFFFFFL); // red;
color[1] = (pixels[i + pixel_offset + 1] & 0xFFFFFFFFL); // green;
color[2] = (pixels[i + pixel_offset ] & 0xFFFFFFFFL); // blue;
// The rest of the computations...
}
I don't know what type of image you have, so I can't say for sure which one is the problem, but it's one of those. :-)
PS: BufferedImage.getAlphaRaster() is possibly an expensive and also inaccurate way to tell if the image has alpha. It's better to just use image.getColorModel().hasAlpha(). See also hasAlpha vs getAlphaRaster.
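For example (a minimal sketch, assuming a BufferedImage variable named image):
// Cheap and accurate alpha check via the ColorModel, instead of getAlphaRaster()
boolean hasAlpha = image.getColorModel().hasAlpha();
int pixelSize = hasAlpha ? 4 : 3; // samples per pixel for the interleaved byte layouts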
I am having a slightly odd problem trying to quantize and dither an RGB image. Ideally, I should be able to implement a suitable algorithm in Java or use a Java library, but references to implementations in other languages may be helpful as well.
The following is given as input:
image: 24-bit RGB bitmap
palette: a list of colors defined with their RGB values
max_cols: the maximum number of colours to be used in the output image
It is perhaps important that both the size of the palette and the maximum number of allowed colours are not necessarily powers of 2 and may be greater than 255.
So, the goal is to take the image, select up to max_cols colours from the provided palette and output an image using only the picked colours and rendered using some kind of error-diffusion dithering. Which dithering algorithm to use is not that important, but it should be an error-diffusion variant (e.g. Floyd-Steinberg) and not simple halftone or ordered dithering.
Performance is not particularly important and the size of the expected input data is relatively small. The images will rarely be larger than 500x500 pixels, the provided palette may contain some 300-400 colours, and the number of colours will usually be limited to fewer than 100. It is also safe to assume that the palette contains a wide selection of colours, covering variations in hue, saturation and brightness.
The palette selection and dithering used by scolorq would be ideal, but it does not seem easy to adapt the algorithm to select colours from an already defined palette instead of arbitrary colours.
To be more precise, the problem where I am stuck is the selection of suitable colours from the provided palette. Assume that I e.g. use scolorq to create a palette with N colours and later replace the colours defined by scolorq with the closest colours from the provided palette, and then use these colours combined with error-diffused dithering. This will produce a result at least similar to the input image, but due to the unpredictable hues of the selected colours, the output image may get a strong, undesired colour cast. E.g. when using a grey-scale input image and a palette with only a few neutral grey tones but a great range of brown tones (or, more generally, many colours with the same hue, low saturation and a great variation in brightness), my colour selection algorithm seems to prefer these colours over the neutral greys, since the brown tones are at least mathematically closer to the desired colour than the greys. The same problem remains even if I convert the RGB values to HSB and use different weights for the H, S and B channels when trying to find the nearest available colour.
Any suggestions how to implement this properly, or even better a library I can use to perform the task?
Since Xabster asked, I can also explain the goal of this exercise, although it has nothing to do with how the actual problem can be solved. The target for the output image is an embroidery or tapestry pattern. In the simplest case, each pixel in the output image corresponds to a stitch made on some kind of carrier fabric. The palette corresponds to the available yarns, which usually come in several hundred colours. For practical reasons, it is however necessary to limit the number of colours used in the actual work. Googling for gobelin embroideries will give several examples.
And to clarify exactly where the problem lies... The solution can indeed be split into two separate steps:
selecting the optimal subset of the original palette
using the subset to render the output image
Here, the first step is the actual problem. If the palette selection works properly, I could simply use the selected colours and e.g. Floyd-Steinberg dithering to produce a reasonable result (which is rather trivial to implement).
If I understand the implementation of scolorq correctly, it however combines these two steps, using knowledge of the dithering algorithm during palette selection to create an even better result. That would of course be the preferred solution, but the algorithms used in scolorq are slightly beyond my mathematical knowledge.
OVERVIEW
This is a possible approach to the problem:
1) Each color from the input pixels is mapped to the closest color from the input color palette.
2) If the resulting palette is larger than the allowed maximum number of colors, the palette gets reduced to the maximum allowed number by removing the colors that are most similar to each other from the computed palette (I chose the nearest distance for removal, so the resulting image remains high in contrast).
3) If the resulting palette is smaller than the allowed maximum number of colors, it gets filled with the most similar colors from the remaining colors of the input palette until the allowed number of colors is reached. This is done in the hope that the dithering algorithm can make use of these colors during dithering. Note, though, that I didn't see much difference between filling or not filling the palette for the Floyd-Steinberg algorithm...
4) As a last step the input pixels get dithered with the computed palette.
IMPLEMENTATION
Below is an implementation of this approach.
If you want to run the source code, you will need this class: ImageFrame.java. You can set the input image as the only program argument; all other parameters must be set in the main method. The Floyd-Steinberg algorithm used is from Floyd-Steinberg dithering.
One can choose between 3 different reduction strategies for the palette reduction algorithm:
1) ORIGINAL_COLORS: This algorithm tries to stay as true to the input pixel colors as possible by searching for the two colors in the palette that have the least distance. Of these two colors, it removes the one with the fewest mappings to pixels in the input map.
2) BETTER_CONTRAST: Works like ORIGINAL_COLORS, with the difference that, of the two colors, it removes the one with the lowest average distance to the rest of the palette.
3) AVERAGE_DISTANCE: This algorithm always removes the color with the lowest average distance from the pool. This setting can especially improve the quality of the resulting image for grayscale palettes.
Here is the complete code:
import java.awt.Color;
import java.awt.Image;
import java.awt.image.PixelGrabber;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;
public class Quantize {
public static class RGBTriple {
public final int[] channels;
public RGBTriple() { channels = new int[3]; }
public RGBTriple(int color) {
int r = (color >> 16) & 0xFF;
int g = (color >> 8) & 0xFF;
int b = (color >> 0) & 0xFF;
channels = new int[]{(int)r, (int)g, (int)b};
}
public RGBTriple(int R, int G, int B)
{ channels = new int[]{(int)R, (int)G, (int)B}; }
}
/* The authors of this work have released all rights to it and placed it
in the public domain under the Creative Commons CC0 1.0 waiver
(http://creativecommons.org/publicdomain/zero/1.0/).
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Retrieved from: http://en.literateprograms.org/Floyd-Steinberg_dithering_(Java)?oldid=12476
*/
public static class FloydSteinbergDither
{
private static int plus_truncate_uchar(int a, int b) {
if ((a & 0xff) + b < 0)
return 0;
else if ((a & 0xff) + b > 255)
return (int)255;
else
return (int)(a + b);
}
private static int findNearestColor(RGBTriple color, RGBTriple[] palette) {
int minDistanceSquared = 255*255 + 255*255 + 255*255 + 1;
int bestIndex = 0;
for (int i = 0; i < palette.length; i++) {
int Rdiff = (color.channels[0] & 0xff) - (palette[i].channels[0] & 0xff);
int Gdiff = (color.channels[1] & 0xff) - (palette[i].channels[1] & 0xff);
int Bdiff = (color.channels[2] & 0xff) - (palette[i].channels[2] & 0xff);
int distanceSquared = Rdiff*Rdiff + Gdiff*Gdiff + Bdiff*Bdiff;
if (distanceSquared < minDistanceSquared) {
minDistanceSquared = distanceSquared;
bestIndex = i;
}
}
return bestIndex;
}
public static int[][] floydSteinbergDither(RGBTriple[][] image, RGBTriple[] palette)
{
int[][] result = new int[image.length][image[0].length];
for (int y = 0; y < image.length; y++) {
for (int x = 0; x < image[y].length; x++) {
RGBTriple currentPixel = image[y][x];
int index = findNearestColor(currentPixel, palette);
result[y][x] = index;
for (int i = 0; i < 3; i++)
{
int error = (currentPixel.channels[i] & 0xff) - (palette[index].channels[i] & 0xff);
if (x + 1 < image[0].length) {
image[y+0][x+1].channels[i] =
plus_truncate_uchar(image[y+0][x+1].channels[i], (error*7) >> 4);
}
if (y + 1 < image.length) {
if (x - 1 > 0) {
image[y+1][x-1].channels[i] =
plus_truncate_uchar(image[y+1][x-1].channels[i], (error*3) >> 4);
}
image[y+1][x+0].channels[i] =
plus_truncate_uchar(image[y+1][x+0].channels[i], (error*5) >> 4);
if (x + 1 < image[0].length) {
image[y+1][x+1].channels[i] =
plus_truncate_uchar(image[y+1][x+1].channels[i], (error*1) >> 4);
}
}
}
}
}
return result;
}
public static void generateDither(int[] pixels, int[] p, int w, int h){
RGBTriple[] palette = new RGBTriple[p.length];
for (int i = 0; i < palette.length; i++) {
int color = p[i];
palette[i] = new RGBTriple(color);
}
RGBTriple[][] image = new RGBTriple[w][h];
for (int x = w; x-- > 0; ) {
for (int y = h; y-- > 0; ) {
int index = y * w + x;
int color = pixels[index];
image[x][y] = new RGBTriple(color);
}
}
int[][] result = floydSteinbergDither(image, palette);
convert(result, pixels, p, w, h);
}
public static void convert(int[][] result, int[] pixels, int[] p, int w, int h){
for (int x = w; x-- > 0; ) {
for (int y = h; y-- > 0; ) {
int index = y * w + x;
int index2 = result[x][y];
pixels[index] = p[index2];
}
}
}
}
private static class PaletteColor{
final int color;
public PaletteColor(int color) {
super();
this.color = color;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + color;
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
PaletteColor other = (PaletteColor) obj;
if (color != other.color)
return false;
return true;
}
public List<Integer> indices = new ArrayList<>();
}
public static int[] getPixels(Image image) throws IOException {
int w = image.getWidth(null);
int h = image.getHeight(null);
int pix[] = new int[w * h];
PixelGrabber grabber = new PixelGrabber(image, 0, 0, w, h, pix, 0, w);
try {
if (grabber.grabPixels() != true) {
throw new IOException("Grabber returned false: " +
grabber.status());
}
} catch (InterruptedException e) {
e.printStackTrace();
}
return pix;
}
/**
* Returns the color distance between color1 and color2
*/
public static float getPixelDistance(PaletteColor color1, PaletteColor color2){
int c1 = color1.color;
int r1 = (c1 >> 16) & 0xFF;
int g1 = (c1 >> 8) & 0xFF;
int b1 = (c1 >> 0) & 0xFF;
int c2 = color2.color;
int r2 = (c2 >> 16) & 0xFF;
int g2 = (c2 >> 8) & 0xFF;
int b2 = (c2 >> 0) & 0xFF;
return (float) getPixelDistance(r1, g1, b1, r2, g2, b2);
}
public static double getPixelDistance(int r1, int g1, int b1, int r2, int g2, int b2){
return Math.sqrt(Math.pow(r2 - r1, 2) + Math.pow(g2 - g1, 2) + Math.pow(b2 - b1, 2));
}
/**
* Fills the given fillColors palette with the nearest colors from the given colors palette until
* it has the given max_cols size.
*/
public static void fillPalette(List<PaletteColor> fillColors, List<PaletteColor> colors, int max_cols){
while (fillColors.size() < max_cols) {
int index = -1;
float minDistance = -1;
for (int i = 0; i < fillColors.size(); i++) {
PaletteColor color1 = fillColors.get(i); // compare colors already picked against the full input palette
for (int j = 0; j < colors.size(); j++) {
PaletteColor color2 = colors.get(j);
if (color1 == color2) {
continue;
}
float distance = getPixelDistance(color1, color2);
if (index == -1 || distance < minDistance) {
index = j;
minDistance = distance;
}
}
}
PaletteColor color = colors.get(index);
fillColors.add(color);
}
}
public static void reducePaletteByAverageDistance(List<PaletteColor> colors, int max_cols, ReductionStrategy reductionStrategy){
while (colors.size() > max_cols) {
int index = -1;
float minDistance = -1;
for (int i = 0; i < colors.size(); i++) {
PaletteColor color1 = colors.get(i);
float averageDistance = 0;
int count = 0;
for (int j = 0; j < colors.size(); j++) {
PaletteColor color2 = colors.get(j);
if (color1 == color2) {
continue;
}
averageDistance += getPixelDistance(color1, color2);
count++;
}
averageDistance/=count;
if (minDistance == -1 || averageDistance < minDistance) {
minDistance = averageDistance;
index = i;
}
}
PaletteColor removed = colors.remove(index);
// find the color with the least distance:
PaletteColor best = null;
minDistance = -1;
for (int i = 0; i < colors.size(); i++) {
PaletteColor c = colors.get(i);
float distance = getPixelDistance(c, removed);
if (best == null || distance < minDistance) {
best = c;
minDistance = distance;
}
}
best.indices.addAll(removed.indices);
}
}
/**
* Reduces the given color palette until it has the given max_cols size.
* The colors that are closest in distance to other colors in the palette
* get removed first.
*/
public static void reducePalette(List<PaletteColor> colors, int max_cols, ReductionStrategy reductionStrategy){
if (reductionStrategy == ReductionStrategy.AVERAGE_DISTANCE) {
reducePaletteByAverageDistance(colors, max_cols, reductionStrategy);
return;
}
while (colors.size() > max_cols) {
int index1 = -1;
int index2 = -1;
float minDistance = -1;
for (int i = 0; i < colors.size(); i++) {
PaletteColor color1 = colors.get(i);
for (int j = i+1; j < colors.size(); j++) {
PaletteColor color2 = colors.get(j);
if (color1 == color2) {
continue;
}
float distance = getPixelDistance(color1, color2);
if (index1 == -1 || distance < minDistance) {
index1 = i;
index2 = j;
minDistance = distance;
}
}
}
PaletteColor color1 = colors.get(index1);
PaletteColor color2 = colors.get(index2);
switch (reductionStrategy) {
case BETTER_CONTRAST:
// remove the color with the lower average distance to the other palette colors
int count = 0;
float distance1 = 0;
float distance2 = 0;
for (PaletteColor c : colors) {
if (c != color1 && c != color2) {
count++;
distance1 += getPixelDistance(color1, c);
distance2 += getPixelDistance(color2, c);
}
}
if (count != 0 && distance1 != distance2) {
distance1 /= (float)count;
distance2 /= (float)count;
if (distance1 < distance2) {
// remove color 1;
colors.remove(index1);
color2.indices.addAll(color1.indices);
} else{
// remove color 2;
colors.remove(index2);
color1.indices.addAll(color2.indices);
}
break;
}
//$FALL-THROUGH$
default:
// remove the color with fewer mappings to the input pixels
if (color1.indices.size() < color2.indices.size()) {
// remove color 1;
colors.remove(index1);
color2.indices.addAll(color1.indices);
} else{
// remove color 2;
colors.remove(index2);
color1.indices.addAll(color2.indices);
}
break;
}
}
}
/**
* Creates an initial color palette from the given pixels and the given palette by
* selecting the colors with the nearest distance to the given pixels.
* This method also stores the indices of the corresponding pixels inside the
* returned PaletteColor instances.
*/
public static List<PaletteColor> createInitialPalette(int pixels[], int[] palette){
Map<Integer, Integer> used = new HashMap<>();
ArrayList<PaletteColor> result = new ArrayList<>();
for (int i = 0, l = pixels.length; i < l; i++) {
double bestDistance = Double.MAX_VALUE;
int bestIndex = -1;
int pixel = pixels[i];
int r1 = (pixel >> 16) & 0xFF;
int g1 = (pixel >> 8) & 0xFF;
int b1 = (pixel >> 0) & 0xFF;
for (int k = 0; k < palette.length; k++) {
int pixel2 = palette[k];
int r2 = (pixel2 >> 16) & 0xFF;
int g2 = (pixel2 >> 8) & 0xFF;
int b2 = (pixel2 >> 0) & 0xFF;
double dist = getPixelDistance(r1, g1, b1, r2, g2, b2);
if (dist < bestDistance) {
bestDistance = dist;
bestIndex = k;
}
}
Integer index = used.get(bestIndex);
PaletteColor c;
if (index == null) {
index = result.size();
c = new PaletteColor(palette[bestIndex]);
result.add(c);
used.put(bestIndex, index);
} else{
c = result.get(index);
}
c.indices.add(i);
}
return result;
}
/**
* Creates a simple random color palette
*/
public static int[] createRandomColorPalette(int num_colors){
Random random = new Random(101);
int count = 0;
int[] result = new int[num_colors];
float add = 360f / (float)num_colors;
for(float i = 0; i < 360f && count < num_colors; i += add) {
float hue = i / 360f; // Color.HSBtoRGB expects hue, saturation and brightness in [0, 1]
float saturation = (90 + random.nextFloat() * 10) / 100f;
float brightness = (50 + random.nextFloat() * 10) / 100f;
result[count++] = Color.HSBtoRGB(hue, saturation, brightness);
}
return result;
}
public static int[] createGrayScalePalette(int count){
float[] grays = new float[count];
float step = 1f/(float)count;
grays[0] = 0;
for (int i = 1; i < count-1; i++) {
grays[i]=i*step;
}
grays[count-1]=1;
return createGrayScalePalette(grays);
}
/**
* Returns a grayscale palette based on the given shades of gray
*/
public static int[] createGrayScalePalette(float[] grays){
int[] result = new int[grays.length];
for (int i = 0; i < result.length; i++) {
float f = grays[i];
result[i] = Color.HSBtoRGB(0, 0, f);
}
return result;
}
private static int[] createResultingImage(int[] pixels,List<PaletteColor> paletteColors, boolean dither, int w, int h) {
int[] palette = new int[paletteColors.size()];
for (int i = 0; i < palette.length; i++) {
palette[i] = paletteColors.get(i).color;
}
if (!dither) {
for (PaletteColor c : paletteColors) {
for (int i : c.indices) {
pixels[i] = c.color;
}
}
} else{
FloydSteinbergDither.generateDither(pixels, palette, w, h);
}
return palette;
}
public static int[] quantize(int[] pixels, int width, int height, int[] colorPalette, int max_cols, boolean dither, ReductionStrategy reductionStrategy) {
// create the initial palette by finding the best match colors from the given color palette
List<PaletteColor> paletteColors = createInitialPalette(pixels, colorPalette);
// reduce the palette size to the given number of maximum colors
reducePalette(paletteColors, max_cols, reductionStrategy);
assert paletteColors.size() <= max_cols;
if (paletteColors.size() < max_cols) {
// fill the palette with the nearest remaining colors
List<PaletteColor> remainingColors = new ArrayList<>();
Set<PaletteColor> used = new HashSet<>(paletteColors);
for (int i = 0; i < colorPalette.length; i++) {
int color = colorPalette[i];
PaletteColor c = new PaletteColor(color);
if (!used.contains(c)) {
remainingColors.add(c);
}
}
fillPalette(paletteColors, remainingColors, max_cols);
}
assert paletteColors.size() == max_cols;
// create the resulting image
return createResultingImage(pixels, paletteColors, dither, width, height);
}
static enum ReductionStrategy{
ORIGINAL_COLORS,
BETTER_CONTRAST,
AVERAGE_DISTANCE,
}
public static void main(String args[]) throws IOException {
// input parameters
String imageFileName = args[0];
File file = new File(imageFileName);
boolean dither = true;
int colorPaletteSize = 80;
int max_cols = 3;
max_cols = Math.min(max_cols, colorPaletteSize);
// create some random color palette
// int[] colorPalette = createRandomColorPalette(colorPaletteSize);
int[] colorPalette = createGrayScalePalette(20);
ReductionStrategy reductionStrategy = ReductionStrategy.AVERAGE_DISTANCE;
// show the original image inside a frame
ImageFrame original = new ImageFrame();
original.setImage(file);
original.setTitle("Original Image");
original.setLocation(0, 0);
Image image = original.getImage();
int width = image.getWidth(null);
int height = image.getHeight(null);
int pixels[] = getPixels(image);
int[] palette = quantize(pixels, width, height, colorPalette, max_cols, dither, reductionStrategy);
// show the reduced image in another frame
ImageFrame reduced = new ImageFrame();
reduced.setImage(width, height, pixels);
reduced.setTitle("Quantized Image (" + palette.length + " colors, dither: " + dither + ")");
reduced.setLocation(100, 100);
}
}
POSSIBLE IMPROVEMENTS
1) The Floyd-Steinberg implementation used here currently only works for palettes with a maximum size of 256 colors. This could probably be fixed easily, but since the FloydSteinbergDither class requires quite a lot of conversions at the moment, it would certainly be better to implement the algorithm from scratch so that it fits the color model used in the end.
2) Another dithering algorithm such as scolorq might give better results. On the "To Do List" at the end of their homepage they write:
[TODO:] The ability to fix some colors to a predetermined set (supported by the algorithm but not the current implementation)
So it seems using a fixed palette should be possible with that algorithm. The Photoshop/GIMP plugin Ximagic seems to implement this functionality using scolorq. From their homepage:
Ximagic Quantizer is a Photoshop plugin for image color quantization (color reduction) & dithering.
Provides: Predefined palette quantization
3) The algorithm to fill the palette could perhaps be improved as well, e.g. by adding colors based on their average distance, as in the reduction algorithm (sketched below). Whether that actually helps should be tested against the dithering algorithm that is finally used.
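A rough sketch of what 3) could look like, reusing the PaletteColor class and getPixelDistance method from the code above; picking the candidate with the largest average distance to the colors already chosen is just one plausible criterion, I have not tested it against a dithering algorithm:
public static void fillPaletteByAverageDistance(List<PaletteColor> fillColors,
        List<PaletteColor> colors, int max_cols) {
    while (fillColors.size() < max_cols && !colors.isEmpty()) {
        int bestIndex = -1;
        float bestAverage = -1;
        for (int j = 0; j < colors.size(); j++) {
            PaletteColor candidate = colors.get(j);
            float sum = 0;
            for (PaletteColor picked : fillColors) {
                sum += getPixelDistance(candidate, picked);
            }
            float average = fillColors.isEmpty() ? 0 : sum / fillColors.size();
            // keep the candidate that is, on average, farthest from the palette so far
            if (bestIndex == -1 || average > bestAverage) {
                bestAverage = average;
                bestIndex = j;
            }
        }
        // move the winner from the candidate list into the palette
        fillColors.add(colors.remove(bestIndex));
    }
}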
EDIT: I think I may have answered a slightly different question. jarnbjo pointed out something that may be wrong with my solution, and I realized I misunderstood the question. I'm leaving my answer here for posterity, though.
I may have a solution to this in Matlab. To find the closest color, I used the weights given by Albert Renshaw in a comment here. I used the HSV colorspace, but all inputs to the code were in standard RGB. Grayscale images were converted to 3-channel grayscale images.
To select the best colors to use, I seeded kmeans with the sample palette and then reset the centroids to the values they were closest to in the sample palette.
function imo = recolor(im,new_colors,max_colors)
% Convert to HSV
im2 = rgb2hsv(im);
new_colors = rgb2hsv(new_colors);
% Get number of colors in palette
num_colors = uint8(size(new_colors,1));
% Reshape image so every row is a different pixel, and every column a channel
% this is necessary for kmeans in Matlab
im2 = reshape(im2, size(im,1)*size(im,2),size(im,3));
% Seed kmeans with the sample palette, drop empty clusters
[IDX, C] = kmeans(im2,max_colors,'emptyaction','drop');
% For each pixel, IDX tells which cluster in C it corresponds to
% C contains the centroids of each cluster
% Because centroids are adjusted away from their seeds, we need to find which
% original palette color each centroid corresponds to; we cannot assume the
% centroids in C still equal their seed values
% Note that Matlab starts indexing at 1 instead of 0
for i=1:size(C,1)
H = C(i,1);
S = C(i,2);
V = C(i,3);
bdel = 100;
% Find which color in the new_colors palette is closest
for j=1:size(new_colors,1)
H2 = new_colors(j,1);
S2 = new_colors(j,2);
V2 = new_colors(j,3);
dH = (H2-H)^2*0.475;
dS = (S2-S)^2*0.2875;
dV = (V2-V)^2*0.2375;
del = sqrt(dH+dS+dV);
if isnan(del)
continue
end
% update if the new delta is lower than the best
if del<bdel
bdel = del;
C(i,:) = new_colors(j,:);
end
end
end
% Update the colors; this is equivalent to the loop:
% for i=1:size(im2,1)
%     imo(i,:) = C(IDX(i),:);
% end
imo = C(IDX,:);
% put it back in its original shape
imo = reshape(imo, size(im));
imo = hsv2rgb(imo);
imshow(imo);
The problem is that, as currently written, it is very slow for color images (Lenna took several minutes).
Is this along the lines of what you are looking for?
Examples.
If you don't understand all the Matlab notation, let me know.
First of all, I'd like to stress that this is not an advanced color-distance computation.
So far I have assumed that the first palette is one you either configured or precalculated from an image.
Here I only configured it and focused on the sub-palette extraction problem. I did not follow a published algorithm, so it is quite possibly not the best approach.
Store an image in a 2D canvas context that will serve as a buffer; I'll refer to it as ctxHidden.
Store the pixel data of ctxHidden in a variable called img.
Loop through the entire img with the function constraintImageDataToPalette(img, palette), which transforms the current img pixels to the given palette colors with the help of the distance function nearestColor(palette, r, g, b, a). Note that this function returns a witness, which counts how many times each color of the palette was used. My example also applies Floyd-Steinberg dithering, even though you mentioned that it was not a concern.
Use the witness to sort the palette colors by frequency of use, descending.
Extract the most frequently used colors from the initial palette to get a sub-palette of size maxColors (or max_colors).
Draw the image with the final sub-palette, from the original ctxHidden data.
Expect the final image to look rather poor if maxColors is too low or if your original palette is too far from the original image colors.
I made a jsfiddle with Processing.js; it is clearly not necessary here, but I started with it so I left it as is.
Here is what the code looks like (the second canvas is the result; the final sub-palette is applied after a 3-second delay):
var image = document.getElementById('original'),
palettePanel = document.getElementById('palette'),
subPalettePanel = document.getElementById('subpalette'),
canvas = document.getElementById('main'),
maxColors = 12,
palette = [
0x7F8FB1FF,
0x000000FF,
0x404c00FF,
0xe46501FF,
0x722640FF,
0x40337fFF,
0x666666FF,
0x0e5940FF,
0x1bcb01FF,
0xbfcc80FF,
0x333333FF,
0x0033CCFF,
0x66CCFFFF,
0xFF6600FF,
0x000033FF,
0xFFCC00FF,
0xAA0033FF,
0xFF00FFFF,
0x00FFFFFF,
0x123456FF
],
nearestColor = function (palette, r, g, b, a) {
var rr, gg, bb, aa, color, closest,
distr, distg, distb, dista,
dist,
minDist = Infinity;
for (var i = 0; i < palette.length; i++) { // use the length of the palette actually passed in
color = palette[i];
rr = palette[i] >> 24 & 0xFF;
gg = palette[i] >> 16 & 0xFF;
bb = palette[i] >> 8 & 0xFF;
aa = palette[i] & 0xFF;
if (closest === undefined) {
closest = color;
}
// compute abs value
distr = Math.abs(rr - r);
distg = Math.abs(gg - g);
distb = Math.abs(bb - b);
dista = Math.abs(aa - a);
dist = (distr + distg + distb + dista * .5) / 3.5;
if (dist < minDist) {
closest = color;
minDist = dist;
}
}
return closest;
},
subpalette = [],
i, l = palette.length,
r, g, b, a,
img,
size = 5,
cols = palettePanel.width / size,
drawPalette = function (p, palette) {
var i, l = palette.length;
p.setup = function () {
p.size(50,50);
p.background(255);
p.noStroke();
for (i = 0; i < l; i++) {
r = palette[i] >> 24 & 0xFF;
g = palette[i] >> 16 & 0xFF;
b = palette[i] >> 8 & 0xFF;
a = palette[i] & 0xFF;
p.fill(r,g,b,a);
p.rect (i%cols*size, ~~(i/cols)*size, size, size);
}
}
},
constraintImageDataToPalette = function (img, palette) {
var i, l, index,
pixel,
right, bottom, bottomLeft, bottomRight,
color,
r, g, b, a,
pr, pg, pb, pa,
rErrorBase,
gErrorBase,
bErrorBase,
aErrorBase,
w = img.width,
w4 = w*4,
h = img.height,
witness = {};
for (i = 0, l = w*h*4; i < l; i += 4) {
// i is already the RGBA data offset of the current pixel
index = i;
// neighbour offsets in the data array (used for the error diffusion below)
right = index + 4;
bottomLeft = index - 4 + w4;
bottom = index + w4;
bottomRight = index + w4 + 4;
pixel = img.data;
r = pixel[index];
g = pixel[index+1];
b = pixel[index+2];
a = pixel[index+3];
color = nearestColor(palette, r,g,b,a);
witness[color] = (witness[color] || 0) + 1;
// explode channels
pr = color >> 24 & 0xFF;
pg = color >> 16 & 0xFF;
pb = color >> 8 & 0xFF;
pa = color & 0xFF;
// set new color
pixel[index] = pr;
pixel[index+1] = pg;
pixel[index+2] = pb;
pixel[index+3] = pa;
// calculate error
rErrorBase = (r - pr);
gErrorBase = (g - pg);
bErrorBase = (b - pb);
aErrorBase = (a - pa);
///*
// diffuse error right 7/16 = 0.4375
pixel[right] += 0.4375 * rErrorBase;
pixel[right+1] += 0.4375 * gErrorBase;
pixel[right+2] += 0.4375 * bErrorBase;
pixel[right+3] += 0.4375 * aErrorBase;
// diffuse error bottom-left 3/16 = 0.1875
pixel[bottomLeft] += 0.1875 * rErrorBase;
pixel[bottomLeft+1] += 0.1875 * gErrorBase;
pixel[bottomLeft+2] += 0.1875 * bErrorBase;
pixel[bottomLeft+3] += 0.1875 * aErrorBase;
// diffuse error bottom 5/16 = 0.3125
pixel[bottom] += 0.3125 * rErrorBase;
pixel[bottom+1] += 0.3125 * gErrorBase;
pixel[bottom+2] += 0.3125 * bErrorBase;
pixel[bottom+3] += 0.3125 * aErrorBase;
//diffuse error bottom-right 1/16 = 0.0625
pixel[bottomRight] += 0.0625 * rErrorBase;
pixel[bottomRight+1] += 0.0625 * gErrorBase;
pixel[bottomRight+2] += 0.0625 * bErrorBase;
pixel[bottomRight+3] += 0.0625 * aErrorBase;
//*/
}
return witness;
};
new Processing(palettePanel, function (p) { drawPalette(p, palette); });
image.onload = function () {
var l = palette.length;
new Processing(canvas, function (p) {
// palette colors are packed as 0xRRGGBBAA (32 bits)
p.setup = function () {
p.size(300, 200);
p.background(0);
p.noStroke();
var ctx = canvas.getContext('2d'),
ctxHidden = document.getElementById('buffer').getContext('2d'),
img, log = [],
witness = {};
ctxHidden.drawImage(image, 0, 0);
img = ctxHidden.getImageData(0, 0, canvas.width, canvas.height);
// constraint colors to largest palette
witness = constraintImageDataToPalette(img, palette);
// show which colors have been picked from the panel
new Processing(subPalettePanel, function (p) { drawPalette(p, Object.keys(witness)); });
ctx.putImageData(img, 0, 0);
var colorsWeights = [];
for (var key in witness) {
colorsWeights.push([+key, witness[key]]);
}
// sort colors by how often they appear, descending
colorsWeights.sort(function (a, b) {
return b[1] - a[1];
});
// keep the first maxColors entries, i.e. the most frequently used colors
subpalette = colorsWeights
.slice(0, maxColors)
.map(function (colorValueCount) {
// return the actual color code
return colorValueCount[0];
});
// reset image we previously modified
img = ctxHidden.getImageData(0, 0, canvas.width, canvas.height);
// this time constraint with new subpalette
constraintImageDataToPalette(img, subpalette);
// wait 3 seconds to apply new palette and show exactly how it changed
setTimeout(function () {
new Processing(subPalettePanel, function (p) { drawPalette(p, subpalette); });
ctx.putImageData(img, 0, 0);
}, 3000);
};
});
};
NOTE: I have no experience with image processing in Java, so I used JavaScript instead. I tried to comment my code; if you have any questions about it, I'll answer and explain it.
Below is an approach implemented in Java using the Marvin Framework. It might be a starting point for solving your problem.
Input:
Palette P with M colors.
Number of Colors N.
Image G
Steps:
Apply the palette P to the image G by replacing each pixel's color with the most similar color in the palette (smallest distance in RGB space). The output image then shows how the palette colors are distributed by usage.
Compute a histogram of how many times each palette color is used in the image (number of pixels).
Sort the palette by pixel usage, from most to least used.
Select the first N items of the sorted list and generate a new palette.
Apply this new palette to the image.
The output of this approach is presented below.
Original image:
(source: sourceforge.net)
The palette, and the image quantized with 32, 8 and 4 colors:
Source code:
public class ColorQuantizationExample {
public ColorQuantizationExample(){
MarvinImage imageOriginal = MarvinImageIO.loadImage("./res/quantization/lena.jpg");
MarvinImage imageOutput = new MarvinImage(imageOriginal.getWidth(), imageOriginal.getHeight());
Set<Color> palette = loadPalette("./res/quantization/palette_7.png");
quantitize(imageOriginal, imageOutput, palette, 32);
MarvinImageIO.saveImage(imageOutput, "./res/quantization/lena_7_32.jpg");
quantitize(imageOriginal, imageOutput, palette, 8);
MarvinImageIO.saveImage(imageOutput, "./res/quantization/lena_7_8.jpg");
quantitize(imageOriginal, imageOutput, palette, 4);
MarvinImageIO.saveImage(imageOutput, "./res/quantization/lena_7_4.jpg");
palette = loadPalette("./res/quantization/palette_8.png");
quantitize(imageOriginal, imageOutput, palette, 32);
MarvinImageIO.saveImage(imageOutput, "./res/quantization/lena_8_32.jpg");
quantitize(imageOriginal, imageOutput, palette, 8);
MarvinImageIO.saveImage(imageOutput, "./res/quantization/lena_8_8.jpg");
quantitize(imageOriginal, imageOutput, palette, 4);
MarvinImageIO.saveImage(imageOutput, "./res/quantization/lena_8_4.jpg");
}
/**
* Load a set of colors from a palette image.
*/
private Set<Color> loadPalette(String path){
Set<Color> ret = new HashSet<Color>();
MarvinImage image = MarvinImageIO.loadImage(path);
String key;
for(int y=0; y<image.getHeight(); y++){
for(int x=0; x<image.getWidth(); x++){
Color c = new Color
(
image.getIntComponent0(x, y),
image.getIntComponent1(x, y),
image.getIntComponent2(x, y)
);
ret.add(c);
}
}
return ret;
}
private void quantitize(MarvinImage imageIn, MarvinImage imageOut, Set<Color> palette, int colors){
applyPalette(imageIn, imageOut, palette);
HashMap<Color, Integer> hist = getColorHistogram(imageOut);
List<Map.Entry<Color, Integer>> list = new LinkedList<Map.Entry<Color, Integer>>( hist.entrySet() );
Collections.sort( list, new Comparator<Map.Entry<Color, Integer>>()
{
@Override
public int compare( Map.Entry<Color, Integer> o1, Map.Entry<Color, Integer> o2 )
{
// sort by usage, most used first
return o2.getValue().compareTo(o1.getValue());
}
} );
Set<Color> newPalette = reducedPalette(list, colors);
applyPalette(imageOut.clone(), imageOut, newPalette);
}
/**
* Apply a palette to an image.
*/
private void applyPalette(MarvinImage imageIn, MarvinImage imageOut, Set<Color> palette){
Color color;
for(int y=0; y<imageIn.getHeight(); y++){
for(int x=0; x<imageIn.getWidth(); x++){
int red = imageIn.getIntComponent0(x, y);
int green = imageIn.getIntComponent1(x, y);
int blue = imageIn.getIntComponent2(x, y);
color = getNearestColor(red, green, blue, palette);
imageOut.setIntColor(x, y, 255, color.getRed(), color.getGreen(), color.getBlue());
}
}
}
/**
* Reduce the palette colors to a given number. The list is sorted by usage.
*/
private Set<Color> reducedPalette(List<Map.Entry<Color, Integer>> palette, int colors){
Set<Color> ret = new HashSet<Color>();
for(int i=0; i<colors; i++){
ret.add(palette.get(i).getKey());
}
return ret;
}
/**
* Compute color histogram
*/
private HashMap<Color, Integer> getColorHistogram(MarvinImage image){
HashMap<Color, Integer> ret = new HashMap<Color, Integer>();
for(int y=0; y<image.getHeight(); y++){
for(int x=0; x<image.getWidth(); x++){
Color c = new Color
(
image.getIntComponent0(x, y),
image.getIntComponent1(x, y),
image.getIntComponent2(x, y)
);
if(ret.get(c) == null){
ret.put(c, 0);
}
ret.put(c, ret.get(c)+1);
}
}
return ret;
}
private Color getNearestColor(int red, int green, int blue, Set<Color> palette){
Color nearestColor=null, c;
double nearestDistance=Integer.MAX_VALUE;
double tempDist;
Iterator<Color> it = palette.iterator();
while(it.hasNext()){
c = it.next();
tempDist = distance(red, green, blue, c.getRed(), c.getGreen(), c.getBlue());
if(tempDist < nearestDistance){
nearestDistance = tempDist;
nearestColor = c;
}
}
return nearestColor;
}
private double distance(int r1, int g1, int b1, int r2, int g2, int b2){
double dist= Math.pow(r1-r2,2) + Math.pow(g1-g2,2) + Math.pow(b1-b2,2);
return Math.sqrt(dist);
}
public static void main(String args[]){
new ColorQuantizationExample();
}
}
I have a list of values from 0-1. I want to convert this list to an image by using a gradient that converts these floating point values to RGB values. Are there any tools in Java that provide you with this functionality?
0 should map to 0,
1 should map to 255.
Keep in mind that you need three of these values (R, G and B) to make a color,
so multiply each floating-point number by 255 and cast it to int.
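A minimal sketch of that mapping, assuming the values should become shades of gray (the array names here are made up for the example):
// Map each value in [0, 1] to a gray ARGB pixel: 0 -> black, 1 -> white
float[] values = { 0.0f, 0.25f, 0.5f, 0.75f, 1.0f }; // example input
int[] pixels = new int[values.length];
for (int i = 0; i < values.length; i++) {
    int v = Math.round(values[i] * 255f); // 0..255
    pixels[i] = 0xFF000000 | (v << 16) | (v << 8) | v; // same value for R, G and B
}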
Perhaps GradientPaint can do what you want. It's unclear how you want a list of floating-point values to be converted into a gradient; normally a gradient consists of two colors and some mechanism that interpolates between them. GradientPaint implements a linear gradient.
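If by "gradient" you mean interpolating between two fixed colors rather than painting with GradientPaint, a per-channel linear interpolation is enough; the two endpoint colors below are only placeholders:
import java.awt.Color;

// Linearly interpolate between two colors; x is expected to be in [0, 1]
public static Color lerp(Color from, Color to, float x) {
    int r = Math.round(from.getRed()   + x * (to.getRed()   - from.getRed()));
    int g = Math.round(from.getGreen() + x * (to.getGreen() - from.getGreen()));
    int b = Math.round(from.getBlue()  + x * (to.getBlue()  - from.getBlue()));
    return new Color(r, g, b);
}

// Example: 0 maps to blue, 1 maps to red
// Color c = lerp(Color.BLUE, Color.RED, 0.75f);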
Say you have an array made of 64 000 triples corresponding to RGB values, like this:
final Random rand = new Random();
final float[] f = new float[320*200*3];
for (int i = 0; i < f.length; i++) {
f[i] = rand.nextFloat(); // <-- generates a float between [0...1.0[
}
And say you have a BufferedImage with a size of 320x200 (64 000 pixels) of type TYPE_INT_ARGB (8 bits per color channel plus 8 bits for the alpha level):
final BufferedImage bi = new BufferedImage( 320, 200, BufferedImage.TYPE_INT_ARGB );
Then you can convert your float array to RGB values and fill the image like this:
for (int x = 0; x < 320; x++) {
for (int y = 0; y < 200; y++) {
// three consecutive floats (r, g, b) per pixel, stored row by row
final int offset = (y * 320 + x) * 3;
final int r = (int) (f[offset] * 255.0);
final int g = (int) (f[offset + 1] * 255.0);
final int b = (int) (f[offset + 2] * 255.0);
bi.setRGB( x, y, 0xFF000000 | (r << 16) | (g << 8) | b );
}
}
Note that if you display this image it will appear gray, but if you zoom in you'll see that it is actually made of perfectly random colorful pixels. It's just that the random number generator is so good that it all looks gray on screen :)
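If you want to check this yourself, one way (not part of the original snippet) is to write the image to a file or drop it into a Swing window; bi is the BufferedImage from above:
// Requires: import java.io.File; import javax.imageio.ImageIO; import javax.swing.*;
// ImageIO.write can throw an IOException, so call it from a method that declares it
ImageIO.write(bi, "png", new File("random.png")); // save to disk for a closer look

JFrame frame = new JFrame("Random pixels");       // ...or show it in a window
frame.add(new JLabel(new ImageIcon(bi)));
frame.pack();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setVisible(true);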