I'm using iText version 5.2.1.
To fully understand the setSkew method of the Chunk object, I began to play with it using the following code:
for (int i = 0; i <= 90; i += 5) {
    Chunk c = new Chunk("A" + i);
    c.setSkew((float) i, (float) -i);
    document.add(c);
}
To my big surprise, the text progressively gets bigger as the angle approaches 90 degrees. I can't understand this behaviour: according to the book "iText in Action, 2nd ed.", the first parameter of setSkew is the inclination of the baseline of the text, and the second is the angle between the characters and the (original) baseline.
So, what am I missing?
What you're missing, essentially, is that you expect skewing with your parameters to be merely something like a simple rotation, which would keep sizes as they are. That is not the case for skewing. Instead, skewing works like this:
(shamelessly copied out of "Figure 13 – Effects of Coordinate Transformations" in the PDF specification ISO 32000-1:2008)
What remains the same is the length of the projection of the skewed base line onto the regular base line of the text line:
Here x0 and x1 have the same length, but the skewed base line is longer. The steeper the angle, the longer the skewed base line, and with it the width of the glyphs along it.
The same applies analogously to the y axis. Thus, your sample results in something like this:
If you prefer a mathematical reasoning, consider that the skewing transformation matrix has the values [1 tan(a) tan(b) 1 0 0]. So (1, 0) is skewed to (1, tan(a)), (0, 1) to (tan(b), 1), and everything else correspondingly, by linearity.
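If you want to see the numbers, here is a small standalone sketch (plain Java, not the iText API) that applies that matrix to a unit step along the baseline; the step's length grows as 1/cos(a), which is exactly the growth of the glyphs you observed:
// Standalone sketch (not iText): apply the skew matrix [1 tan(a) tan(b) 1 0 0]
// to a unit step along the baseline and watch its length grow with the angle.
public class SkewDemo {
    public static void main(String[] args) {
        for (int deg = 0; deg < 90; deg += 5) {
            double a = Math.toRadians(deg);    // first setSkew parameter: baseline inclination
            double b = Math.toRadians(-deg);   // second parameter: character slant
            // The matrix maps (x, y) to (x + tan(b)*y, tan(a)*x + y),
            // so (1, 0) goes to (1, tan(a)) and (0, 1) goes to (tan(b), 1).
            double sx = 1 + Math.tan(b) * 0;
            double sy = Math.tan(a) * 1 + 0;
            System.out.printf("%2d° -> skewed baseline step length %.3f%n",
                    deg, Math.hypot(sx, sy));
        }
    }
}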
I am creating a vertex array for a mesh from given points. So far I have been able to create a continuous mesh with thickness. However, there is a problem at the intersection of two line segments: the vectors at the joint between those segments need to be longer or shorter, depending on the situation, in order to keep a continuous look.
What I have now:
With the given angles theta1 and theta2, how can I calculate the length of red vectors?
What I want:
How I structured my mesh:
You're probably making it more complicated than it needs to be.
Let's start by calculating the red arrows. For any line segment (p_i, p_j), we can calculate the segment's normal with:
dir = normalize(p_j - p_i)
normal = (-dir.y, dir.x) //negate if you want the other direction
At the connection point between two segments, we can just average (and re-normalize) the incident normals. This gives us the red arrows.
The only question that remains is how much you need to shift. The resulting offset for the line segment o_l given an offset of the vertex o_v is:
o_l = o_v * dot(normal_l, normal_v)
This means the following: both normals are unit vectors, so their dot product is at most one. That maximum is reached when both line segments are parallel; then the entire offset of the vertex is transferred to the line. The smaller the angle becomes, the smaller the transferred offset. E.g. if the angle between two consecutive line segments is 120°, then the dot product of the normals is 0.5. If you shift the vertex by 1 unit along its normal, both line segments will have a thickness of 0.5.
So, in order to produce a specific line thickness (o_l), we need to shift the vertex by o_v:
o_v = o_l / dot(normal_l, normal_v)
Constructing the vertex normal by averaging the line segments' normals ensures that dot(normal_l1, normal_v) = dot(normal_l2, normal_v), i.e. the resulting line thickness is equal for both lines in any case.
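If it helps to see it spelled out, here is a minimal Java sketch of those formulas (Vec2 and offsetVertices are made-up helper names, not from any particular library); the two sides of your strip would then be the points returned for +thickness and -thickness:
// Minimal sketch: per-vertex offsets that keep the strip thickness constant.
final class Vec2 {
    final double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }
    Vec2 sub(Vec2 o)     { return new Vec2(x - o.x, y - o.y); }
    Vec2 add(Vec2 o)     { return new Vec2(x + o.x, y + o.y); }
    Vec2 scale(double s) { return new Vec2(x * s, y * s); }
    double dot(Vec2 o)   { return x * o.x + y * o.y; }
    Vec2 normalized()    { double l = Math.hypot(x, y); return new Vec2(x / l, y / l); }
    Vec2 normal()        { return new Vec2(-y, x); } // negate if you want the other direction
}

class PolylineOffset {
    static Vec2[] offsetVertices(Vec2[] p, double thickness) {
        int n = p.length;
        Vec2[] out = new Vec2[n];
        for (int i = 0; i < n; i++) {
            // Normals of the segments before and after vertex i (clamped at the two ends).
            Vec2 nPrev = p[Math.max(i, 1)].sub(p[Math.max(i, 1) - 1]).normalized().normal();
            Vec2 nNext = p[Math.min(i, n - 2) + 1].sub(p[Math.min(i, n - 2)]).normalized().normal();
            Vec2 nVertex = nPrev.add(nNext).normalized();   // averaged vertex normal (the red arrow)
            double oV = thickness / nVertex.dot(nNext);     // o_v = o_l / dot(normal_l, normal_v)
            out[i] = p[i].add(nVertex.scale(oV));
        }
        return out;
    }
}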
While rendering the Barnsley fern fractal I end up with single-color images, or at most four-color images, i.e. the bottom-left leaflet, the bottom-right leaflet, the bottom stem and the rest of the leaves. Here is the image I get, for example:
What I want, however, is to bring shades into the leaves and to make the stem thicker and a different color, like:
I dug around a bit for algorithms that could be used, and then read in Draves's paper on fractal flames that, while iterating an Iterated Function System, a single point may be rendered many times; using a single color then results in a loss of information, so we need to create a histogram of how many times each point would have been rendered and then perform a rendering pass that maps the histogram to shades of color (log-density coloring).
I have brought myself to the point where I have the histogram, but I don't know how to use it to render the shades or to apply the log-density rendering technique. Can someone help me with this type of rendering, or at least direct me to a source with practical examples where I can read more about it?
Here is what I have tried:
AffineTransformation f1 = new AffineTransformation(0,0,0,0.25,0,-0.4);
AffineTransformation f2 = new AffineTransformation(0.95,0.005,-0.005,0.93,-0.002,0.5);
AffineTransformation f3 = new AffineTransformation(0.035,-0.2,0.16,0.04,-0.09,0.02);
AffineTransformation f4 = new AffineTransformation(-0.04,0.2,0.16,0.04,0.083,0.12);
int N = W * H;   // W and H are the image width and height
int[] pixelhistogram = new int[N];
Point point = new Point();   // current point of the chaos game
for (int i = 0; i < N * 25; i++)
{
    Point newpoint;
    double probability = Math.random();
    if (probability < 0.01)
    {
        newpoint = f1.transform(point);
    }
    else if (probability < 0.94)
    {
        newpoint = f2.transform(point);
    }
    else if (probability < 0.97)
    {
        newpoint = f3.transform(point);
    }
    else
    {
        newpoint = f4.transform(point);
    }
    point = newpoint;

    // Translate the point to a pixel in the image and
    // increment that index in the pixelhistogram array by 1
    int X = ((int) (point.getX() * W / 3) + W / 2) / 2 + W / 4 - 1;
    int Y = H - ((int) (point.getY() * H / 8) + H / 9) - 1;
    pixelhistogram[W * Y + X]++;
}
// Now that I have the pixelhistogram
// I don't know how to render the shades using this
AffineTransformation is a simple class which performs an affine transformation on a point. I omitted its code because otherwise the question would have become too lengthy.
A simple coloring would be to render pixel (X,Y) light green, green, or brown according to whether pixels[W*Y+X] is less than n1, between n1 and n2, or greater than n2. To determine n1 and n2, trial and error would probably be the simplest solution, but you could make an actual histogram of the log of the pixel counts that you have recorded to help judge where to put the cuts (or more generally you could use clustering algorithms to do it automatically).
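A minimal sketch of that cut-based coloring, written against the pixelhistogram array from the question (the cuts n1 and n2 and the actual RGB values below are placeholders you would tune):
// Sketch only: three-band coloring of the histogram; tune n1, n2 and the colors.
int n1 = 5, n2 = 50;                      // example cuts, adjust to your data
int[] rgb = new int[W * H];
for (int i = 0; i < W * H; i++) {
    int count = pixelhistogram[i];
    if (count == 0)      rgb[i] = 0xFFFFFF;   // background: white
    else if (count < n1) rgb[i] = 0x90EE90;   // light green
    else if (count < n2) rgb[i] = 0x228B22;   // green
    else                 rgb[i] = 0x8B4513;   // brown (the dense stem)
}
// e.g. image.setRGB(0, 0, W, H, rgb, 0, W) to copy this into a BufferedImage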
PS: In the image that you show it looks like the stem is rendered with an L-system and the fronds are rendered using the three leaf transforms only (i.e. omit the fourth "stem-transform"); I would guess they are using the log pixel counts to shade the level of green but not to shade the stem.
Addition: I was asked, below, to discuss log-histograms. To avoid getting bogged down, I'd recommend first using full-featured data analysis software like R to see if this gets you what you want. Write out the pixels array to a text file with one number per line, then start R and run:
ct=scan('pixels_data.txt')
hist(log(ct))
If you see a multimodal histogram (i.e. if it has clear peaks and valleys), that will suggest how to choose n1 and n2: put them in the valleys (i.e. if the valley on the plot is at y, set n1=exp(y)).
If you wind up plotting histograms in Java, it can apparently be done with the JFreeChart library. Just create an array with the logs of the values in the pixels array and create the histogram out of that.
At best I expect you to see only one valley in the histogram, if you use the standard 3-transform Barnsley fern, separating the really high stem values from the fronds. To color the fronds, if n is the cut between frond and stem, and pixels[W*Y+X] is less than n, you could color it using, say:
v=128.0*(log(n)-log(pixels[W*Y+X]))/log(n);
RGB=(v,255,v)
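Put together, a log-density shading pass over the histogram might look roughly like this (a sketch only; n is the frond/stem cut discussed above, and java.awt.image.BufferedImage is just one possible output container):
// Sketch of a log-density shading pass; 'n' separates fronds from the stem.
java.awt.image.BufferedImage img =
        new java.awt.image.BufferedImage(W, H, java.awt.image.BufferedImage.TYPE_INT_RGB);
for (int Y = 0; Y < H; Y++) {
    for (int X = 0; X < W; X++) {
        int count = pixelhistogram[W * Y + X];
        int color;
        if (count == 0) {
            color = 0xFFFFFF;                               // background
        } else if (count < n) {
            int v = (int) (128.0 * (Math.log(n) - Math.log(count)) / Math.log(n));
            color = (v << 16) | (255 << 8) | v;             // RGB = (v, 255, v): shaded green
        } else {
            color = 0x8B4513;                               // stem: brown
        }
        img.setRGB(X, Y, color);
    }
}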
PS: Getting thick stems using the random iteration algorithm only is going to be a problem. If you change the 3rd transform to be less singular, your stems will wind up looking like thin ferns and not sticks. E.g.
{"title":"Thick Stem Fern","alist":[[
[0.11378443003074948,-0.005060836319767042,0.013131296101198788,0.21863066144310556,0.44540023470694723,0.01726296943557673],
[0.15415337683611596,-0.17449052243042712,0.23850452316465576,0.2090228040695959,0.3652068203134602,0.11052918709831461],
[-0.09216684947824424,0.20844742602316002,0.2262266208270773,0.22553569847678284,0.6389950926444947,-0.008256440681230735],
[0.8478159879190097,0.027115858923993468,-0.05918196850293869,0.8521840120809901,0.08189073762585078,0.1992198482087391]
]],"va":[1,0,0,1,0,0],"word_length":6,"level_max":40,"rect_size":1}
is JSON data describing one such fern with a thicker stem.
So I decided to look up some collision detection, but I've been having trouble finding proper information on 2D collision detection between two images that also covers how to properly avoid detecting the transparent areas of an image. I did find one post which I got myself attached to, but the problem is that I don't really understand the post, nor do I understand why he does some of those things...
Here's the post in question: https://stackoverflow.com/posts/336615/revisions
So first of all I want to ask whether this solution is actually a good/proper one, or whether I should just look elsewhere.
Secondly, in his post he mentions using integer arrays (not 2D arrays either, it seems) holding 1s and 0s to mark whether or not a pixel is transparent, but I don't really know how I am supposed to achieve this. At first I thought it could be achieved by just forming a string of 1s and 0s and converting it to a Long, but even with a mere image width of 25, the Long gets... too long...
I also tried this with no luck, since the code does not function with this array:
long[] array = new long[30*30]; // height * width of the image
int x = 0;
int y = 0;
for (int i = 0; i < 30*30; i++) {
    if (image.getRGB(x, y) == 0) {
        array[i] = 0;
    } else {
        array[i] = 1;
    }
    x++;
    if (x == 30) {
        y++;
        x = 0;
    }
}
Thirdly, I was hoping someone could explain the whole process and why the things he does are necessary.
By the way, I do know how those bit wise operators work!
In other words, I don't understand his train of thought / motivation for doing all the things in the code, and I would like to gain an understanding of all this!
I don't really know how to proceed right now hehe...
The result of the bitwise AND operation (&) is true (1) for each bit where the corresponding bit is true in both operands, and false (0) otherwise.
The idea he's using is to create a version of the image (a mask) where each non-transparent pixel in the original image is stored as a '1' bit, and each transparent pixel as a '0' bit. These are packed into a single integer, which can be tested against the mask for another image with a single AND operation (before the AND he calculates the horizontal distance between the two images and shifts one of the masks if necessary).
For example, let's assume that we have the following two 4x1 pixel images:
5, 0, 0, 5
8, 8, 8, 8
Although I placed them on separate rows here for practical purposes, you should view them as being on the same row, so the last two pixels of the left image overlap with the first two of the right image.
The masks for the rows when viewed in binary representation would be:
1001
1111
The distance between the left and right image is -2, so we shift the first mask left by 2 bits:
1001 << 2 => 100100
So now we have these masks:
100100
001111
ANDing these gives us:
000100
The non-zero result tells us that we have a collision.
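A minimal Java sketch of the whole idea (the method names here are made up, and it assumes each image is at most 64 pixels wide so that one row fits into a single long):
// Sketch only: one long per row, one bit per non-transparent pixel.
static long[] buildRowMasks(java.awt.image.BufferedImage img) {
    long[] masks = new long[img.getHeight()];
    for (int y = 0; y < img.getHeight(); y++) {
        long row = 0L;
        for (int x = 0; x < img.getWidth(); x++) {
            int alpha = (img.getRGB(x, y) >>> 24) & 0xFF;
            if (alpha != 0) {
                row |= 1L << x;              // set bit x if the pixel is not transparent
            }
        }
        masks[y] = row;
    }
    return masks;
}

// dx, dy: position of image B relative to image A, in pixels.
static boolean collide(long[] masksA, long[] masksB, int dx, int dy) {
    if (dx >= 64 || dx <= -64) return false;                // no horizontal overlap possible
    for (int yA = 0; yA < masksA.length; yA++) {
        int yB = yA - dy;
        if (yB < 0 || yB >= masksB.length) continue;        // these rows don't overlap vertically
        long rowB = dx >= 0 ? masksB[yB] << dx : masksB[yB] >>> -dx;  // align B's row with A's columns
        if ((masksA[yA] & rowB) != 0) {
            return true;                                    // some opaque pixel overlaps in both masks
        }
    }
    return false;
}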
As you can see from the title, I'm busy writing a little program for visualizing fractals in Java. Anybody who deals with fractals will come to the point where he/she searches for a way to get rid of these stupid "bands" that appear when you just colour a pixel by the number of iterations it took to escape.
So I searched for a more advanced colouring algorithm, finding the "normalized iteration count". The formula I'm using is:
float loc = (float) (1 - Math.log(Math.log(c.abs())) / Math.log(2));
Everybody on the Internet is so happy about this algorithm, everybody uses it, everybody gets great results. Except me. I thought this algorithm should provide a float between 0 and 1. But that doesn't happen. I did some calculations and came to the conclusion that this algorithm only works for c.abs() >= Math.E && c.abs() <= Math.exp(2) (that is Math.E * Math.E).
In numbers this means my input into this equation has to be between about 2.718 and 7.389.
But a complex number c is considered to tend towards infinity when its magnitude gets greater than 2. Yet for any input smaller than Math.E, I get a value greater than one, and for any number greater than Math.exp(2), it gets negative. That is the case when a complex number escapes really fast.
So please tell me: what am I doing wrong? I'm desperate.
Thanks.
EDIT:
I was wrong: the code I posted is correct, I just
1. used it the wrong way and so it didn't provide the right output.
2. had to set the bailout value of the Mandelbrot/Julia algorithm to 10, otherwise I would've gotten stupid bands again.
Problem solved!
As you've already discovered, you need to increase the bailout radius before smoothing will look right.
Two is the minimum length that a coordinate can have such that when you square it and add the initial value, it cannot result in a smaller length. If the previous length was 2.0, and you squared it, you'd have a length of 4.0 (pointing in whichever direction), and the most that any value of c could reduce that by is 2.0 (by pointing in precisely the opposite direction). If c were larger than that then it would start to escape right away.
Now, to estimate the fractional part of the number of iterations we look at the final |z|. If z had simply been squared and c not added to it, then it would have a length between 2.0 and 4.0 (the new value must be larger than 2.0 to bail out, and the old value must have been less than 2.0 to have not bailed out earlier).
Without c, taking |z|'s proportional position between 2 and 4 gives us a fractional part of the number of iterations. If |z| is close to 4 then the previous length must have been close to 2, so it was already close to bailing out in the previous iteration and the smoothed result should be close to the previous iteration count to represent that. If it's close to 2, then the previous iteration was further from bailing out, and so the smoothed result should be closer to the new iteration count.
Unfortunately c messes that up. The larger c is, the larger the potential error is in that simple relationship. Even if the old length was nearly at 2.0, it might have landed such that c's influence made it look like it must have been smaller.
Increasing the bailout mitigates the effect of adding c. If the bailout is 64 then the resulting length will be between 64 and 4096, and c's maximum offset of 2 has a proportionally much smaller impact on the result.
You have left out the iteration value, try this:
float loc = (float) (<iteration_value> + 1 - Math.log(Math.log(c.abs())) / Math.log(2));
The iteration_value is the number of iterations which yielded c in the formula.
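As a sketch of where that formula sits in the escape-time loop (a Complex class with plus/times/abs methods is assumed here, mirroring the question's c.abs(); the bailout is raised above 2, as the question's edit and the other answer describe):
// Sketch only: normalized iteration count inside the escape loop.
static float smoothIterations(Complex z0, Complex c, int maxIter) {
    final double bailout = 10.0;     // noticeably larger than 2, to reduce banding
    Complex z = z0;
    for (int n = 0; n < maxIter; n++) {
        if (z.abs() > bailout) {
            // normalized iteration count: n + 1 - log(log|z|) / log 2
            return (float) (n + 1 - Math.log(Math.log(z.abs())) / Math.log(2));
        }
        z = z.times(z).plus(c);      // z = z^2 + c
    }
    return maxIter;                  // never escaped: treat as inside the set
}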
I have a function named resize, which takes a source array and resizes it to a new width and height. The method I'm using is, I think, inefficient; I heard there's a better way to do it. Anyway, the code below works when scale is an int. However, there's a second function called half, which uses resize to shrink an image to half its size. So I made scale a double and used a typecast to convert it back to an int. This is not working, and I don't know what the error is (the teacher uses his own grading and tests on these functions, and it's not passing them). Can you spot the error, or is there a more efficient way to write a resize function?
public static int[][] resize(int[][] source, int newWidth, int newHeight) {
    int[][] newImage = new int[newWidth][newHeight];
    double scale = newWidth / (source.length);
    for (int i = 0; i < newWidth / scale; i++)
        for (int j = 0; j < newHeight / scale; j++)
            for (int s1 = 0; s1 < scale; s1++)
                for (int s2 = 0; s2 < scale; s2++)
                    newImage[(int) (i * scale + s1)][(int) (j * scale + s2)] = source[i][j];
    return newImage;
}
/**
* Half the size of the image. This method should be just one line! Just
* delegate the work to resize()!
*/
public static int[][] half(int[][] source) {
    int[][] newImage = new int[source.length / 2][source[0].length / 2];
    newImage = resize(source, source.length / 2, source[0].length / 2);
    return newImage;
}
So one scheme for changing the size of an image is to resample it (technically this is really the only way, every variation is really just a different kind of resampling function).
Cutting an image in half is super easy: you want to read every other pixel in each direction and load each of those pixels into the new half-sized array. The hard part is making sure your bookkeeping is sound.
static int[][] halfImage(int[][] orig){
    int[][] hi = new int[orig.length/2][orig[0].length/2];
    for(int r = 0, newr = 0; r < orig.length; r += 2, newr++){
        for(int c = 0, newc = 0; c < orig[0].length; c += 2, newc++){
            hi[newr][newc] = orig[r][c];
        }
    }
    return hi;
}
In the code above I'm indexing into the original image reading every other pixel in every other row starting at the 0th row and 0th column (assuming images are row major, here). Thus, r tells us which row in the original image we're looking at, and c tells us which column in the original image we're looking at. orig[r][c] gives us the "current" pixel.
Similarly, newr and newc index into the "half-image" matrix designated hi. For each increment in newr or newc we increment r and c by 2, respectively. By doing this, we skip every other pixel as we iterate through the image.
Writing a generalized resize routine that doesn't operate on nice fractional quantities (like 1/2, 1/4, 1/8, etc.) is really pretty hard. You'd need to define a way to determine the value of a sub-pixel -- a point between pixels -- for more complicated factors, like 0.13243, for example. This is, of course, easy to do, and you can develop a very naive linear interpolation principle, where when you need the value between two pixels you simply take the surrounding pixels, construct a line between their values, then read the sub-pixel point from the line. More complicated versions of interpolation might be a sinc based interpolation...or one of many others in widely published literature.
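As a tiny sketch of that naive linear interpolation for a single row (assuming the int values are plain intensities; packed RGB would need each channel interpolated separately):
// Sketch only: linearly interpolate a sub-pixel value along one row of pixels.
static int sampleRowLinear(int[] row, double x) {
    int x0 = (int) Math.floor(x);                  // pixel to the left of the sample point
    int x1 = Math.min(x0 + 1, row.length - 1);     // pixel to the right (clamped at the edge)
    double t = x - x0;                             // position between the two pixels, 0..1
    return (int) Math.round((1 - t) * row[x0] + t * row[x1]);
}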
Blowing up the size of the image involves something a little different than we've done here (and if you do in fact have to write a generalized resize function you might consider splitting your function to handle upscaling and downscaling differently). You need to somehow create more values than you have originally -- those interpolation functions work for that too. A trivial method might simply be to repeat a value between points until you have enough, and slight variations on this as well, where you might take so many values from the left and so many from the right for a particular position.
What I'd encourage you to think about -- and since this is homework I'll stay away from the implementation -- is treating the scaling factor as something that causes you to make observations on one image, and writes on the new image. When the scaling factor is less than 1 you generally sample from the original image to populate the new image and ignore some of the original image's pixels. When the scaling factor is greater than 1, you generally write more often to the new image and might need to read the same value several times from the old image. (I'm doing a poor job highlighting the difference here, hopefully you see the dualism I'm getting at.)
What you have is pretty understandable, and I think it IS an O(n^4) algorithm. Ouchies!
You can improve it slightly by pushing the i*scale and j*scale out of the inner two loops - they are invariant where they are now. The optimizer might be doing it for you, however. There are also some other similar optimizations.
Regarding the error, run it twice, once with an input array that's got an even length (6x6) and another that's odd (7x7). And 6x7 and 7x6 while you're at it.
Based on your other question, it seems like you may be having trouble with mixing of types - with numeric conversions. One way to do this, which can make your code more debuggable and more readable to others not familiar with the problem space, would be to split the problematic line into multiple lines. Each minor operation would be one line, until you reach the final value. For example,
newImage[(int)(i*scale+s1)][(int)(j*scale+s2)] =source[i][j];
would become
int x = (int) (i * scale); // cast needed because scale is a double
x += s1;
int y = (int) (j * scale);
y += s2;
newImage[x][y] = source[i][j];
Now, you can run the code in a debugger and look at the values of each item after each operation is performed. When a value doesn't match what you think it should be, look at it and figure out why.
Now, back to the suspected problem: I expect that you need to use doubles somewhere, not ints - in your other question you talked about scaling factors. Is the factor less than 1? If so, when it's converted to an int, it'll be 0, and you'll get the wrong result.
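As an illustration only (not necessarily what the grader expects), one way around that is to compute the scale as a double and map each destination pixel back to a source pixel, which handles factors both above and below 1:
// Sketch: nearest-neighbour resize with a floating-point scale.
public static int[][] resizeSketch(int[][] source, int newWidth, int newHeight) {
    double scaleX = (double) source.length / newWidth;      // cast so the ratio is computed in floating point
    double scaleY = (double) source[0].length / newHeight;
    int[][] newImage = new int[newWidth][newHeight];
    for (int i = 0; i < newWidth; i++) {
        for (int j = 0; j < newHeight; j++) {
            int srcX = Math.min(source.length - 1, (int) (i * scaleX));
            int srcY = Math.min(source[0].length - 1, (int) (j * scaleY));
            newImage[i][j] = source[srcX][srcY];
        }
    }
    return newImage;
}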