JFreeChart: BarChart total label positioning issue?

I am using the following code to position the total label on each bar of a bar chart. The label is supposed to appear at the top of each rendered bar, but for some reason the label rendering is messing up the entire chart!
UPDATE (total label positioning issue): the total for each bar should appear at the top-center of that bar; instead the labels end up at seemingly random spots near the top and are not aligned with the bars.
The following lines are the ones causing trouble when rendering the chart labels:
ItemLabelAnchor labelAnchor = "O".equals(direction) ? ItemLabelAnchor.OUTSIDE12 : ItemLabelAnchor.INSIDE8;
TextAnchor textAnchor = "O".equals(direction) ? TextAnchor.TOP_CENTER : TextAnchor.CENTER;
TextAnchor rotationAnchor = "O".equals(direction) ? TextAnchor.TOP_CENTER : TextAnchor.BOTTOM_CENTER;
renderer.setSeriesPositiveItemLabelPosition(0,
        new ItemLabelPosition(labelAnchor, textAnchor, rotationAnchor, ((angle * Math.PI) / 180)));
I am using JDK 8 with JFreeChart 1.0.19. The complete code listing follows:
CategoryPlot plot = chart.getCategoryPlot();
if (plot != null) {
    if (log.isDebugEnabled()) {
        log.debug("Total Renderers : " + plot.getRendererCount());
    }
    try {
        JRPropertiesMap propMap = jasperChart.getPropertiesMap();
        if (propMap.containsProperty(APPLY_LABEL_ROTATION)) {
            DecimalFormat dfKey = new DecimalFormat("###,###");
            /* {0} - label would be equal to Series expression,
             * {1} - label would be equal to Category expression,
             * {2} - label would be equal to Value expression
             */
            StandardCategoryItemLabelGenerator labelGenerator = new StandardCategoryItemLabelGenerator("{2}", dfKey);
            LegendItemCollection lic = new LegendItemCollection();
            List<LegendItem> legendList = new ArrayList<LegendItem>();
            LegendItem totalOpenLegend = null;
            for (int i = 0; i <= (plot.getRendererCount() - 1); i++) {
                String prop = propMap.getProperty("Chart_" + (i + 1));
                Double angle = null == prop ? 0D : Double.valueOf(prop.substring(0, prop.length() - 1));
                String direction = null == prop ? "O" : "" + prop.charAt(prop.length() - 1);
                if (log.isDebugEnabled()) {
                    log.debug("Property value for renerer : " + i + " Chart_" + (i + 1) + " angle : " + angle + ", Direction : " + direction);
                }
                CategoryItemRenderer renderer = plot.getRenderer(i);
                renderer.setBaseItemLabelsVisible(true);
                renderer.setBaseItemLabelGenerator(labelGenerator);
                if (i == 0) {
                    Shape shape = ShapeUtilities.createLineRegion(new Line2D.Double(-6, 0, 6, 0), 1);
                    ((LineAndShapeRenderer) renderer).setSeriesShape(0, shape);
                    ((LineAndShapeRenderer) renderer).setBaseItemLabelsVisible(false);
                }
                if (i == 1) {
                    //shape = ShapeUtilities.createLineRegion(new Line2D.Double(0, 0, 1, 1), 2);
                    ((LineAndShapeRenderer) renderer).setBaseShapesFilled(false);
                    ((LineAndShapeRenderer) renderer).setBaseItemLabelPaint(Color.RED);
                }
                if (i == 2) {
                    ((LineAndShapeRenderer) renderer).setBaseItemLabelsVisible(false);
                }
                if (i == 3) {
                    ((LineAndShapeRenderer) renderer).setBaseItemLabelPaint(new Color(139, 90, 43));
                }
                LegendItem item = renderer.getLegendItem(i, 0);
                if ((i >= 0) && (i < 4)) {
                    legendList.add(item);
                } else {
                    totalOpenLegend = item;
                    BarRenderer barRenderer = (BarRenderer) renderer;
                    barRenderer.setMaximumBarWidth(0.3);
                    barRenderer.setItemMargin(0.1);
                }
                ItemLabelAnchor labelAnchor = "O".equals(direction) ? ItemLabelAnchor.OUTSIDE12 : ItemLabelAnchor.INSIDE8;
                TextAnchor textAnchor = "O".equals(direction) ? TextAnchor.TOP_CENTER : TextAnchor.CENTER;
                TextAnchor rotationAnchor = "O".equals(direction) ? TextAnchor.TOP_CENTER : TextAnchor.BOTTOM_CENTER;
                renderer.setSeriesPositiveItemLabelPosition(0,
                        new ItemLabelPosition(labelAnchor, textAnchor, rotationAnchor, ((angle * Math.PI) / 180)));
                //--
                plot.setRenderer(i, renderer);
            }
            lic.add(totalOpenLegend);
            for (LegendItem li : legendList) {
                lic.add(li);
            }
            System.out.println("Setting Legend Items");
            plot.setFixedLegendItems(lic);
            plot.getDomainAxis().setLowerMargin(0);

Related

Java OpenCV detect and crop circular/elliptical shapes

I am trying to detect and crop circular/elliptical shapes of different sizes.
This is an example of an image on which I am trying to do the detection and cropping.
Input Image
The result I am trying to get in the aforementioned image is 3 cropped images
looking like this:
segmented part 1, segmented part 2, segmented part 3
Another image could look like this: different image
Just like the previous image, I am trying to do the same to this one.
The shapes are dramatically smaller than in the first one.
Can this be achieved algorithmically or should I look for a machine learning-like solution?
Note: the following filters have been applied to the final image: Gaussian blur, grayscale, threshold, contour and morphological dilation.
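For reference, a rough sketch of that preprocessing with OpenCV's Java API (org.opencv.core.Mat, org.opencv.core.Size, org.opencv.imgproc.Imgproc) might look like the following; the kernel sizes and the threshold value are placeholders, not the exact settings used for the image above:
// Sketch of the described preprocessing: blur -> grayscale -> threshold -> dilation.
// The contours themselves are found later, in findReference()/findAllEllipses().
Mat work = new Mat();
Imgproc.GaussianBlur(inputImage, work, new Size(5, 5), 0);
Imgproc.cvtColor(work, work, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(work, work, 127, 255, Imgproc.THRESH_BINARY);
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
Imgproc.dilate(work, work, kernel);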
[EDIT]
The code I have written (not working as intended):
findReference() finds a shape in the middle of the image and returns its rectangle.
private Rect findReference(Mat inputImage) {
    // clone the image
    Mat original = inputImage.clone();
    // find the center of the image
    double[] centers = {(double) inputImage.width() / 2, (double) inputImage.height() / 2};
    Point image_center = new Point(centers);
    // finding the contours
    ArrayList<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(inputImage, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    // finding best bounding rectangle for a contour whose distance is closer to the image center that other ones
    double d_min = Double.MAX_VALUE;
    Rect rect_min = new Rect();
    for (MatOfPoint contour : contours) {
        Rect rec = Imgproc.boundingRect(contour);
        // find the best candidates
        if (rec.height > inputImage.height() / 2 & rec.width > inputImage.width() / 2) {
            continue;
        }
        Point pt1 = new Point((double) rec.x, (double) rec.y);
        Point center = new Point(rec.x + (double) (rec.width) / 2, rec.y + (double) (rec.height) / 2);
        double d = Math.sqrt(Math.pow((double) (pt1.x - image_center.x), 2) + Math.pow((double) (pt1.y - image_center.y), 2));
        if (d < d_min) {
            d_min = d;
            rect_min = rec;
        }
    }
    // showReference( rect_min, original);
    return rect_min;
}
I use the rectangle as reference and create a bigger one and a smaller one, so that similar shapes fit in the dimensions of the smaller and bigger rectangle.
findAllEllipses() tries to find similar shapes fitting in the smaller and bigger rectangles. After that it draws ellipses around the found shapes.
private Mat findAllEllipses(Rect referenceRect, Mat inputImage) {
    float per = 0.5f;
    float perSquare = 0.05f;
    Rect biggerRect = new Rect();
    Rect smallerRect = new Rect();
    biggerRect.width = (int) (referenceRect.width / per);
    biggerRect.height = (int) (referenceRect.height / per);
    smallerRect.width = (int) (referenceRect.width * per);
    smallerRect.height = (int) (referenceRect.height * per);
    System.out.println("reference rectangle height: " + referenceRect.height + " width: " + referenceRect.width);
    System.out.println("[" + 0 + "]: biggerRect.height: " + biggerRect.height + " biggerRect.width: " + biggerRect.width);
    System.out.println("[" + 0 + "]: smallerRect.height: " + smallerRect.height + " smallerRect.width: " + smallerRect.width);
    // Finding Contours
    List<MatOfPoint> contours = new ArrayList<>();
    Mat hierarchey = new Mat();
    Imgproc.findContours(inputImage, contours, hierarchey, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE);
    System.out.println("the numbers of found contours is: " + contours.size());
    int sum = 0;
    // Empty rectangle
    RotatedRect[] rec = new RotatedRect[contours.size()];
    for (int i = 0; i < contours.size(); i++) {
        rec[i] = new RotatedRect();
        if (contours.get(i).toArray().length >= 5) {
            Rect foundRect = Imgproc.boundingRect(contours.get(i));
            // Rect foundBigger = new Rect();
            // Rect foundSmaller = new Rect();
            //
            // foundBigger.width = (int) (foundBigger.width + foundBigger.width * per);
            // foundBigger.height = (int) (foundBigger.height + foundBigger.height * per);
            //
            // foundSmaller.width = (int) (foundRect.width - foundRect.width * per);
            // foundSmaller.height = (int) (foundRect.height - foundRect.height * per);
            if (
                    (biggerRect.height >= foundRect.height && biggerRect.width >= foundRect.width)
                    && (smallerRect.height <= foundRect.height && smallerRect.width <= foundRect.width)
                    && (((foundRect.width - foundRect.width * perSquare) <= foundRect.height) && ((foundRect.width + foundRect.width * perSquare) >= foundRect.height))
                    && (((foundRect.height - foundRect.height * perSquare) <= foundRect.width) && ((foundRect.height + foundRect.height * perSquare) >= foundRect.width))
            ) {
                System.out.println("[" + i + "]: foundRect.width: " + foundRect.width + " foundRect.height: " + foundRect.height);
                System.out.println("----------------");
                rec[i] = Imgproc.fitEllipse(new MatOfPoint2f(contours.get(i).toArray()));
                sum++;
            }
        }
        Scalar color_elli = new Scalar(190, 0, 0);
        Imgproc.ellipse(inputImage, rec[i], color_elli, 5);
    }
    System.out.println("found ellipses: " + sum);
    // trytest(ImageUtils.doResizeMat(outputImage),0,0);
    return inputImage;
}
Unfortunately, there are several variables hardcoded into the method.
This one is used to make the smaller and bigger rectangles (used as a percentage):
float per = 0.5f;
perSquare is used to keep only shapes that are close to a square (it bounds how much the width and height may differ):
float perSquare = 0.05f;
This code might work on some images, while on others it will not find a single shape; as I mentioned, the shapes are circular/elliptical and come in different sizes.
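As a tuning aid only, the acceptance test buried inside findAllEllipses() could be pulled out into a helper that takes per and perSquare as parameters; this is just a restatement of the checks already shown above, not a fix for the detection itself:
// Same size/"squareness" filter as in findAllEllipses(), expressed as a helper.
// referenceRect is the rectangle returned by findReference(); Rect is org.opencv.core.Rect.
private boolean isCandidate(Rect foundRect, Rect referenceRect, float per, float perSquare) {
    // size window derived from the reference rectangle
    boolean sizeOk = foundRect.width <= referenceRect.width / per
            && foundRect.height <= referenceRect.height / per
            && foundRect.width >= referenceRect.width * per
            && foundRect.height >= referenceRect.height * per;
    // width and height may differ by at most perSquare (e.g. 5 %) in either direction
    boolean squareOk = foundRect.height >= foundRect.width * (1 - perSquare)
            && foundRect.height <= foundRect.width * (1 + perSquare)
            && foundRect.width >= foundRect.height * (1 - perSquare)
            && foundRect.width <= foundRect.height * (1 + perSquare);
    return sizeOk && squareOk;
}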

What does posenet return?

I'm working on a project which reads an image as input and shows an output image. The output image contains some lines to indicate the human body skeleton. I'm using the pose estimation model from TensorFlow Lite:
https://www.tensorflow.org/lite/models/pose_estimation/overview
I have read the docs, and they show that the output contains a 4-dimensional array. I tried using Netron to visualize my model file, and it looks like this:
I succeeded in getting the resulting heatmap from the input, but the problem is that all the floats are negative. This confuses me, and I'm not sure whether I did something wrong or how to interpret these outputs.
Here's the code for the output:
tfLite = new Interpreter(loadModelFile());
Bitmap inputPhoto = BitmapFactory.decodeResource(getResources(), R.drawable.human2);
inputPhoto = Bitmap.createScaledBitmap(inputPhoto, INPUT_SIZE_X, INPUT_SIZE_Y, false);
inputPhoto = inputPhoto.copy(Bitmap.Config.ARGB_8888, true);
int pixels[] = new int[INPUT_SIZE_X * INPUT_SIZE_Y];
inputPhoto.getPixels(pixels, 0, INPUT_SIZE_X, 0, 0, INPUT_SIZE_X, INPUT_SIZE_Y);
int pixelsIndex = 0;
for (int i = 0; i < INPUT_SIZE_X; i++) {
    for (int j = 0; j < INPUT_SIZE_Y; j++) {
        int p = pixels[pixelsIndex];
        inputData[0][i][j][0] = (p >> 16) & 0xff;
        inputData[0][i][j][1] = (p >> 8) & 0xff;
        inputData[0][i][j][2] = (p) & 0xff;
        pixelsIndex++;
    }
}
float outputData[][][][] = new float[1][23][17][17];
tfLite.run(inputData, outputData);
The output is an array [1][23][17][17] whose values are all negative. Is there anyone who knows about this and can help me? :(
Thanks a lot!
This post showed up as active today, so I am posting a late answer; sorry about that.
You should check the Posenet.kt file. It contains very thoroughly documented code. For example, you can see how it:
/**
 * Initializes an outputMap of 1 * x * y * z FloatArrays for the model processing to populate.
 */
private fun initOutputMap(interpreter: Interpreter): HashMap<Int, Any> {
    val outputMap = HashMap<Int, Any>()
    // 1 * 9 * 9 * 17 contains heatmaps
    val heatmapsShape = interpreter.getOutputTensor(0).shape()
    outputMap[0] = Array(heatmapsShape[0]) {
        Array(heatmapsShape[1]) {
            Array(heatmapsShape[2]) { FloatArray(heatmapsShape[3]) }
        }
    }
    // 1 * 9 * 9 * 34 contains offsets
    val offsetsShape = interpreter.getOutputTensor(1).shape()
    outputMap[1] = Array(offsetsShape[0]) {
        Array(offsetsShape[1]) { Array(offsetsShape[2]) { FloatArray(offsetsShape[3]) } }
    }
    // 1 * 9 * 9 * 32 contains forward displacements
    val displacementsFwdShape = interpreter.getOutputTensor(2).shape()
    outputMap[2] = Array(offsetsShape[0]) {
        Array(displacementsFwdShape[1]) {
            Array(displacementsFwdShape[2]) { FloatArray(displacementsFwdShape[3]) }
        }
    }
    // 1 * 9 * 9 * 32 contains backward displacements
    val displacementsBwdShape = interpreter.getOutputTensor(3).shape()
    outputMap[3] = Array(displacementsBwdShape[0]) {
        Array(displacementsBwdShape[1]) {
            Array(displacementsBwdShape[2]) { FloatArray(displacementsBwdShape[3]) }
        }
    }
    return outputMap
}
and of course how the output is transformed to points on screen:
/**
 * Estimates the pose for a single person.
 * args:
 *      bitmap: image bitmap of frame that should be processed
 * returns:
 *      person: a Person object containing data about keypoint locations and confidence scores
 */
fun estimateSinglePose(bitmap: Bitmap): Person {
    val estimationStartTimeNanos = SystemClock.elapsedRealtimeNanos()
    val inputArray = arrayOf(initInputArray(bitmap))
    Log.i(
        "posenet",
        String.format(
            "Scaling to [-1,1] took %.2f ms",
            1.0f * (SystemClock.elapsedRealtimeNanos() - estimationStartTimeNanos) / 1_000_000
        )
    )
    val outputMap = initOutputMap(getInterpreter())
    val inferenceStartTimeNanos = SystemClock.elapsedRealtimeNanos()
    getInterpreter().runForMultipleInputsOutputs(inputArray, outputMap)
    lastInferenceTimeNanos = SystemClock.elapsedRealtimeNanos() - inferenceStartTimeNanos
    Log.i(
        "posenet",
        String.format("Interpreter took %.2f ms", 1.0f * lastInferenceTimeNanos / 1_000_000)
    )
    val heatmaps = outputMap[0] as Array<Array<Array<FloatArray>>>
    val offsets = outputMap[1] as Array<Array<Array<FloatArray>>>
    val height = heatmaps[0].size
    val width = heatmaps[0][0].size
    val numKeypoints = heatmaps[0][0][0].size
    // Finds the (row, col) locations of where the keypoints are most likely to be.
    val keypointPositions = Array(numKeypoints) { Pair(0, 0) }
    for (keypoint in 0 until numKeypoints) {
        var maxVal = heatmaps[0][0][0][keypoint]
        var maxRow = 0
        var maxCol = 0
        for (row in 0 until height) {
            for (col in 0 until width) {
                if (heatmaps[0][row][col][keypoint] > maxVal) {
                    maxVal = heatmaps[0][row][col][keypoint]
                    maxRow = row
                    maxCol = col
                }
            }
        }
        keypointPositions[keypoint] = Pair(maxRow, maxCol)
    }
    // Calculating the x and y coordinates of the keypoints with offset adjustment.
    val xCoords = IntArray(numKeypoints)
    val yCoords = IntArray(numKeypoints)
    val confidenceScores = FloatArray(numKeypoints)
    keypointPositions.forEachIndexed { idx, position ->
        val positionY = keypointPositions[idx].first
        val positionX = keypointPositions[idx].second
        yCoords[idx] = (
            position.first / (height - 1).toFloat() * bitmap.height +
                offsets[0][positionY][positionX][idx]
            ).toInt()
        xCoords[idx] = (
            position.second / (width - 1).toFloat() * bitmap.width +
                offsets[0][positionY][positionX][idx + numKeypoints]
            ).toInt()
        confidenceScores[idx] = sigmoid(heatmaps[0][positionY][positionX][idx])
    }
    val person = Person()
    val keypointList = Array(numKeypoints) { KeyPoint() }
    var totalScore = 0.0f
    enumValues<BodyPart>().forEachIndexed { idx, it ->
        keypointList[idx].bodyPart = it
        keypointList[idx].position.x = xCoords[idx]
        keypointList[idx].position.y = yCoords[idx]
        keypointList[idx].score = confidenceScores[idx]
        totalScore += confidenceScores[idx]
    }
    person.keyPoints = keypointList.toList()
    person.score = totalScore / numKeypoints
    return person
}
The whole .kt file is the heart of turning a bitmap into points on screen!
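Regarding the negative values from the question: the raw heatmap outputs are effectively logits, so negative numbers are expected; the code above only turns them into 0-1 confidence scores by passing them through sigmoid(). A minimal Java version of that step (purely illustrative) would be:
// Illustrative: map a raw heatmap logit to a confidence score in [0, 1],
// mirroring the sigmoid() call used in Posenet.kt above.
static float sigmoid(float logit) {
    return (float) (1.0 / (1.0 + Math.exp(-logit)));
}
// e.g. a raw value of -2.0f becomes roughly 0.12, i.e. low confidence.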
If you need anything else tag me.
Happy coding

Classes in ArrayList update each other's properties

I am creating a small firework simulation in LibGDX. I have an ArrayList called particles, and this is how I fill it:
for (int i = 0; i < 2; i++) {
    Particle p = new Particle();
    p.position = position;
    p.velocity.x = MathUtils.random(-1f, 1f);
    p.velocity.y = MathUtils.random(-1f, 1f);
    particles.add(p);
}
And then in the update loop:
for (int i = 0; i < particles.size(); i++) {
    System.out.println(i + " " + particles.get(i).position.toString() + " + " + particles.get(i).velocity.toString() + " = ");
    particles.get(i).update();
    System.out.println(" " + particles.get(i).position.toString());
}
Particle update function:
velocity.add(acceleration);
position.add(velocity);
acceleration.set(0, 0);
The velocity is random and every particle has a unique velocity, but the position is the same. Here is the output:
0 (300.0,620.91364) + (-0.94489133,-0.45628428) =
(299.0551,620.45734)
1 (299.0551,620.45734) + (0.3956585,0.5208683) =
(299.45078,620.9782)
0 (299.45078,620.9782) + (-0.94489133,-0.45628428) =
(298.5059,620.5219)
1 (298.5059,620.5219) + (0.3956585,0.5208683) =
(298.90155,621.0428)
0 (298.90155,621.0428) + (-0.94489133,-0.45628428) =
(297.95667,620.5865)
1 (297.95667,620.5865) + (0.3956585,0.5208683) =
(298.35233,621.10736)
The first value is the particle index, then the position, the velocity, and finally the resulting position.
Why is it using the position from another particle? I am trying to figure it out but I can't.
In your for loop where you fill the ArrayList you have the line:
p.position = position;
I don't know where position comes from, but with this line all Particles point to the same object.
You must create a new position object for every Particle:
p.position = new Position(x, y);
If position is the start point for your Particles you can write:
p.position = new Position(position.x, position.y);
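In LibGDX the position and velocity fields are typically com.badlogic.gdx.math.Vector2, which has a copy constructor. Assuming that is the type used here, the fill loop from the question could copy the start position like this:
for (int i = 0; i < 2; i++) {
    Particle p = new Particle();
    // copy the start position instead of sharing one Vector2 instance
    p.position = new Vector2(position);
    p.velocity.x = MathUtils.random(-1f, 1f);
    p.velocity.y = MathUtils.random(-1f, 1f);
    particles.add(p);
}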

Vaadin add Grid in table

I have the following code:
GridLayout grid = new GridLayout(3, 3);
grid.addComponent(btnRemove, 0, 0);
grid.addComponent(lblIstMenge, 1, 0);
grid.addComponent(btnAdd, 2, 0);
int i = 0;
if (vList != null && vList.size() > 0)
{
    for (VTr component : vList)
    {
        String transactionTypeName = component.getTransactionTypeName();
        transaktionTable.addItem(new Object[]{++transaktionTableCounter + "",
                transactionTypeName,
                "123123123123123", grid, "Bemerkung^^^"},
                transaktionTableCounter);
        // System.out.println("Grid: " + grids.get(i));
    }
}
Which gives me something like this:
So the grid is added only in the last column. I have tried creating different grids for each column and keeping them in a list, but this did not work for me.
Any ideas or recommendations would be appreciated.
When I move the instantiation of the buttons and grids inside the for loop, it works as expected. (A Vaadin component instance can only have one parent, so reusing the same GridLayout for every row simply moves it to the last place it was added.)
int i = 0;
if (vList != null && vList.size() > 0)
{
    for (VTr component : vList)
    {
        btnAdd = new Button();
        btnAdd.setIcon(new ThemeResource("images/btnIncrease.png"));
        btnRemove = new Button();
        btnRemove.setIcon(new ThemeResource("images/btnDescrease.png"));
        GridLayout grid = new GridLayout(3, 3);
        grid.addComponent(btnRemove, 0, 0);
        grid.addComponent(lblIstMenge, 1, 0);
        grid.addComponent(btnAdd, 2, 0);
        String transactionTypeName = component.getTransactionTypeName();
        transaktionTable.addItem(new Object[]{++transaktionTableCounter + "", transactionTypeName,
                "123123123123123", grid, "Bemerkung^^^"}, transaktionTableCounter);
    }
}

circular gradient blur filter

I am doing a school assignment in which I need to create a circular gradient fog filter. I did some research on how to blur an image and found that I need to take the color values of the pixels around the current pixel and average them to set the new color for the blurred image. So far, I am just focusing on the blurring aspect, and I have this code:
public void circularGradientBlur()
{
    Pixel regularPixel = null;
    Pixel L_regularPixel = null;
    Pixel R_regularPixel = null;
    Pixel T_regularPixel = null;
    Pixel B_regularPixel = null;
    Pixel blurredPixel = null;
    Pixel pixels[][] = this.getPixels2D();
    for (int row = 2; row < pixels.length; row++)
    {
        for (int col = 2; col < pixels[0].length - 1; col++)
        {
            regularPixel = pixels[row][col];
            if (row != 0 && col != 0 && row != 498 && col != 498)
            {
                L_regularPixel = pixels[row - 1][col];
                R_regularPixel = pixels[row + 1][col];
                T_regularPixel = pixels[row][col - 1];
                B_regularPixel = pixels[row][col + 1];
                blurredPixel.setRed((L_regularPixel.getRed() + R_regularPixel.getRed() + T_regularPixel.getRed() + B_regularPixel.getRed()) / 4);
                blurredPixel.setGreen((L_regularPixel.getGreen() + R_regularPixel.getGreen() + T_regularPixel.getGreen() + B_regularPixel.getGreen()) / 4);
                blurredPixel.setBlue((L_regularPixel.getBlue() + R_regularPixel.getBlue() + T_regularPixel.getBlue() + B_regularPixel.getBlue()) / 4);
            }
        }
    }
}
When I try to use this code, I get a NullPointerException on the lines where I set the new colors: blurredPixel.setRed(), blurredPixel.setGreen(), blurredPixel.setBlue(). This error confuses me, as I have other methods with similar code that don't throw it. I would appreciate any help I can get!
You have to assign an actual Pixel instance to blurredPixel; it is still null when you call the setters, which is what causes the NullPointerException.
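A minimal sketch of one way to do that, assuming the surrounding Picture-style class has a copy constructor like the usual pixel-lab classes (read the neighbours from an untouched copy, write into the current pixel):
// Sketch only: Picture(this) as a copy constructor is an assumption about the asker's class.
Pixel pixels[][] = this.getPixels2D();
Pixel copyPixels[][] = new Picture(this).getPixels2D();
for (int row = 1; row < pixels.length - 1; row++)
{
    for (int col = 1; col < pixels[0].length - 1; col++)
    {
        Pixel blurredPixel = pixels[row][col];   // an actual pixel now, not null
        Pixel left = copyPixels[row][col - 1];
        Pixel right = copyPixels[row][col + 1];
        Pixel top = copyPixels[row - 1][col];
        Pixel bottom = copyPixels[row + 1][col];
        blurredPixel.setRed((left.getRed() + right.getRed() + top.getRed() + bottom.getRed()) / 4);
        blurredPixel.setGreen((left.getGreen() + right.getGreen() + top.getGreen() + bottom.getGreen()) / 4);
        blurredPixel.setBlue((left.getBlue() + right.getBlue() + top.getBlue() + bottom.getBlue()) / 4);
    }
}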
