In Android I would like to draw a flower by adding each petal around a center point. I used setRotation on the ImageView, but the center point is different for each petal (I mean the center point of the flower). Can anybody look at my code and suggest a correction? Thanks.
int angle = 0;
int ypos = 500;
int xpos = 500;
RelativeLayout layout = (RelativeLayout) findViewById(R.id.ln1);
for (int i = 0; i < 10; i++)
{
    ImageView image = new ImageView(this);
    image.setLayoutParams(new android.view.ViewGroup.LayoutParams(150, 400));
    image.setX(xpos);
    image.setY(ypos);
    image.setPadding(-7, -30, -10, 0);
    image.setPivotX(1.0f);
    image.setScaleX(1.5f);
    image.setScaleY(1.5f);
    image.setImageResource(R.drawable.petal);
    image.setRotation(image.getRotation() + angle);
    angle = angle + 36;
    layout.addView(image);
}
The image I get is this:
When you rotate the image, the rotation is performed around the top-left corner of the image, not around the center of the rotated image.
The image below might illustrate this. The black square represents your image. The left-hand side shows the situation you have now; the right-hand side shows the situation you want.
Before rotating, you should subtract half the width from the x position and add half the height to the y position. Then you should get the desired image.
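For illustration, here is a sketch of that adjustment applied to the loop from the question. The values 150 and 400 are the width and height passed to the LayoutParams in the original code; everything else follows the advice above literally.
// Offset each petal by half its size before rotating, per the advice above.
float petalWidth = 150f;   // LayoutParams width from the question
float petalHeight = 400f;  // LayoutParams height from the question
image.setX(xpos - petalWidth / 2f);   // subtract half the width from the x position
image.setY(ypos + petalHeight / 2f);  // add half the height to the y position
image.setRotation(image.getRotation() + angle);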
As user Ralf Renz pointed out in their comment, you could also simply start with an angle of -36. This is a useful workaround.
I accomplished this using the following strategy.
image.setPivotY(-1);
Now the flower was coming out with a different design because the petal was tilted. So I tilted my actual image in the opposite direction using Inkscape, and now I get the desired output.
Is it possible to detect when a Bitmap touches a color on Android?
The screen will look like this:
The black arrow is the Bitmap object that the user can move up and down. It should detect when the black arrow is touching the blue line and add points for each second that it has touched the line.
Maybe it is worth noting that the arrow can only be moved up and down, and the blue lines are what move from right to left.
There is a weird background, and the background may contain blue, since it will be a camera preview screen and the Canvas where the arrow and the blue line are moving will be transparent. Will that be an issue, or is there maybe a better way to detect collision?
The most important part that I need here is to detect the 'collision', or whether the bitmap and the line are touching or not. The second part of the question would be: is there a way to add an animation or something that would show the user that the lines are touching, maybe changing the arrow for a golden one or something?
The line is made with 'GraphView' and thus I cannot really treat it as an 'object'. More about GraphView: http://www.android-graphview.org/
If there is any need for source code then I can provide that, but I'd rather not share it right off the bat.
EDIT: I have tried using pixels and detecting color that way, but I have not gotten it to work.
For a given pixel, x and y:
ImageView imageView = (ImageView)v;
Bitmap bitmap = ((BitmapDrawable)imageView.getDrawable()).getBitmap();
int pixel = bitmap.getPixel(x,y);
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
Reference here.
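Building on that snippet, one way the color test itself is often written is sketched below; the 180/80 thresholds are assumptions for illustration, not values from the question.
// Rough sketch: treat the sampled pixel as part of the blue line when it is
// predominantly blue. The thresholds are arbitrary assumptions.
private boolean isBlueLinePixel(int pixel) {
    int red = Color.red(pixel);
    int green = Color.green(pixel);
    int blue = Color.blue(pixel);
    return blue > 180 && red < 80 && green < 80;
}

// Usage: sample the pixel under the arrow's tip each frame.
// boolean touching = isBlueLinePixel(bitmap.getPixel(arrowTipX, arrowTipY));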
I am currently trying to get an ImageView containing an image to move from the top of the screen downwards. When it reaches the bottom of the screen I'd like it to switch direction and go upwards again.
My problem is I can't find the exact spot where I should make the ImageView change direction. My screen has a very high resolution, and I've therefore set my OS (Windows) to enlarge all components. I therefore cannot use my screen size to calculate how many pixels there are from top to bottom.
Therefore I use the following code to get the height of the screen:
Screen screen = Screen.getPrimary();
Rectangle2D bounds = screen.getVisualBounds();
bounds.getHeight();
My ImageView is initialized at point (0,0), and I can therefore at any given time get the y-coordinate of its top-left corner by using imageView.getY().
Therefore I should move it downwards until the value of imageView.getY() + the height of the ImageView equals the height of the screen.
But this solution seems to make my ImageView switch direction a bit before it reaches the bottom of the screen.
For calculating the height of the image I use the method imageView.getFitHeight().
I suspect imageView.getFitHeight() of delivering the height desired by the ImageView before it is actually determined by the underlying AnchorPane, so I am not sure imageView.getFitHeight() actually delivers the height of the ImageView. I can't seem to find any other method in ImageView that concerns its height.
I don't know how to make the ImageView switch direction exactly at the bottom. Can anybody help?
Regards Martin
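A minimal sketch of the bounce check described in the question, assuming the rendered height is read from getBoundsInParent() rather than getFitHeight(); the goingDown and step variables are illustrative.
// Reverse direction when the bottom of the ImageView reaches the bottom of the
// screen, and again when it returns to the top.
double screenHeight = Screen.getPrimary().getVisualBounds().getHeight();
double renderedHeight = imageView.getBoundsInParent().getHeight(); // actual laid-out height

if (goingDown && imageView.getY() + renderedHeight >= screenHeight) {
    goingDown = false;
} else if (!goingDown && imageView.getY() <= 0) {
    goingDown = true;
}
imageView.setY(imageView.getY() + (goingDown ? step : -step));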
Hello all, and thanks for reading.
I recently started working on a 2D Android/Desktop project and have become stuck trying to display my sprites the way I want.
I have a background sprite that is 144 (w) by 160 (h), and I want to be able to position other sprites on the screen relative to points on that background sprite.
I think I understand that if I create a camera/viewport that is 144 x 160, I would be able to position my sprites on the background sprite using coordinates based on its 144 x 160 size. This works across the different screen resolutions found on mobile devices but stretches the background sprite, despite my experimenting with the different viewport types (FillViewport, FitViewport, etc.).
What I want to achieve is for my background sprite to maintain its ratio across different screen resolutions and to be able to place other sprites over the background sprite. The placing of sprites needs to work across different resolutions.
Apologies if my explanation is confusing or makes no sense. I would add some images to help explain, but I lack the reputation to add any to the post. However, I think the TL;DR question is: "What is the correct way to display sprites on multiple screen resolutions while keeping correct ratios, scaling to the screen size, and positioning sprites in a way that works across multiple resolutions?"
Thanks. All questions welcome.
A FitViewport would do what you described (maintain aspect ratio), but you will have black bars on some devices. Based on the code you posted on the libgdx forum, I see that you forgot to update the viewport in the resize method, so it is not behaving as designed.
However, for a static camera game like what you described, I think the best solution would be to plan your game around a certain area that is always visible on any device, for example, the box from (0,0) to (144,160). Then use an ExtendViewport with width and height of 144 and 160. After you update the viewport in resize, you can move the camera to be centered on the rectangle like this:
private static final float GAME_WIDTH = 144;
private static final float GAME_HEIGHT = 160;

public void create(){
    //...
    viewport = new ExtendViewport(GAME_WIDTH, GAME_HEIGHT);
    //...
}

public void resize(int width, int height){
    viewport.update(width, height, false); //centering by putting true here would put (0,0) at bottom left of screen, but then the game rectangle would be off center

    //manually center the center of your game box
    Camera camera = viewport.getCamera();
    camera.position.x = GAME_WIDTH / 2;
    camera.position.y = GAME_HEIGHT / 2;
    camera.update();
}
Now your 144x160 box is centered on the screen as it would be with FitViewport, but you are not locked into having black bars, because you can draw extra background outside the 144x160 area using whatever method you like.
In your case 144:160 is a wider portrait aspect ratio than any screen out there, so you wouldn't need to worry about ever filling in area to the sides of your game rectangle. The narrowest aspect ratio of any phone or tablet seems to be 9:16, so you can do the math to see how much extra background above and below the game rectangle should be drawn to avoid black showing through on any device.
In this case it works out to 48 units above and below the rectangle that you would want to fill in:
144 pixels wide at 9:16 would be 256 tall.
(256 - 160) / 2 = 48
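For illustration, a sketch of drawing that oversized background follows; the SpriteBatch and Texture names are assumptions, not from the question.
// Draw the background 48 world units above and below the 144x160 game box so no
// black bars show on screens as narrow as 9:16.
batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
batch.draw(backgroundTexture, 0, -48, GAME_WIDTH, GAME_HEIGHT + 2 * 48); // 144 x 256
batch.end();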
EDIT: I see from your post on the libgdx forum that you want the game area stuck at the top of the screen and the remainder of the area to be used for game controls. In that case, I would change the resize method like this, since you want to have the game area's top edge aligned with the top edge of the screen. You can also calculate where the bottom of the controls area will be on the Y axis. (The top will be at Y=0.)
public void resize(int width, int height){
    viewport.update(width, height, false);

    //align game box's top edge to top of screen
    Camera camera = viewport.getCamera();
    camera.position.x = GAME_WIDTH / 2;
    camera.position.y = GAME_HEIGHT - viewport.getWorldHeight() / 2;
    camera.update();

    controlsBottomY = GAME_HEIGHT - viewport.getWorldHeight();
}
I'm not sure how you plan to do your controls, but they would need to fit in the box from (0, controlsBottomY) to (GAME_WIDTH, 0). Keep in mind that there are some phones with aspect ratios as small as 3:4 (although rare now). So with your 0.9 aspect ratio, on a 3:4 phone only the bottom 17% of the screen would be available for controls, which might be fine if it's just a couple of buttons, but would probably be problematic if you have a virtual joystick.
I'm using Java Graphics2D to generate this map with some sort of tinted red overlay over it. As you can see, the overlay gets cut off along the image boundary on the left side:-
After demo'ing this to my project stakeholders, what they want is for this overlay to clip along the map boundary with some consistent padding around it. The simple reason for this is to give users the idea that the overlay extends outside the map.
So my initial thought was to perform a "zoom and shift" by creating another, larger map that serves as a "cookie cutter". Here's my simplified code:-
// polygon of the map
Polygon minnesotaPolygon = ...;
// convert polygon to area
Area minnesotaArea = new Area();
minnesotaArea.add(new Area(minnesotaPolygon));
// this represents the whole image
Area wholeImageArea = new Area(new Rectangle(mapWidth, mapHeight));
// zoom in by 8%
double zoom = 1.08;
// performing "zoom and shift"
Rectangle bound = minnesotaArea.getBounds();
AffineTransform affineTransform = new AffineTransform(g.getTransform());
affineTransform.translate(-((bound.getWidth() * zoom) - bound.getWidth()) / 2,
-((bound.getHeight() * zoom) - bound.getHeight()) / 2);
affineTransform.scale(zoom, zoom);
minnesotaArea.transform(affineTransform);
// using it as a cookie cutter
wholeImageArea.subtract(minnesotaArea);
g.setColor(Color.GREEN);
g.fill(wholeImageArea);
The reason I'm filling the outside part with green is to allow me to see if the cookie cutter is implemented properly. Here's the result:-
As you can see, "zoom and shift" doesn't work in this case. There is absolutely no padding at the bottom right. Then I realized that this technique will not work for an irregular shape like the map; it only works on simpler shapes like squares, circles, etc.
What I want is to create consistent padding/margin around the map before clipping the rest off. To make sure you understand what I'm saying here, I photoshopped the image below (albeit poorly) to explain what I'm trying to accomplish:-
I'm not sure how to proceed from here, and I hope you guys can give me some guidance on this.
Thanks.
I'll just explain the logic, as I don't have time to write the code myself. The short answer is that you should step through each pixel of the map image, and if any pixel in the surrounding area (i.e. within a certain distance) is considered "land", then you register the current pixel as part of the padding area.
For the long answer, here are 9 steps to achieve your goal.
1. Decide on the size of the padding. Let's say 6 pixels.
2. Create an image of the map in monochrome (black is "water", white is "land"). Leave a margin of at least 6 pixels around the edge. This is the input image: (it isn't to scale)
3. Create an image of a circle which is 11 pixels in diameter (11 = 6*2-1). Again, black is empty/transparent, white is solid. This is the hit-area image:
4. Create a third picture which is all black (to start with). Make it the same size as the input image. It will be used as the output image.
5. Iterate each pixel of the input image.
6. At that pixel overlay the hit-area image (only do this virtually, via calculation), so that the center of the hit-area (the white circle) is over the current input image pixel.
7. Now iterate each pixel of the hit-area image.
8. If any white pixel of the hit-area image intersects a white pixel of the input image, then draw a white pixel (where the center of the circle is) into the output image.
9. Go to step 5.
Admittedly, from step 6 onward it isn't so simple, but it should be fairly easy to implement. Hopefully you understand the logic. If my explanation is too confusing (sorry), then I could spend some time and write the full solution (in JavaScript, C#, or Haskell).
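A rough Java sketch of steps 4 to 9 follows (this is not the author's code: the BufferedImage types, the disc test, and all names are assumptions for illustration).
import java.awt.image.BufferedImage;

public class MaskDilation {

    static final int PADDING = 6; // step 1: padding radius in pixels

    // Steps 4-9: produce an all-black output image and paint a white pixel wherever
    // any "land" (white) pixel of the input lies within PADDING of that position.
    static BufferedImage dilate(BufferedImage input) {
        int w = input.getWidth(), h = input.getHeight();
        BufferedImage output = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB); // step 4: all black
        for (int y = 0; y < h; y++) {              // step 5: iterate each input pixel
            for (int x = 0; x < w; x++) {
                if (anyLandNearby(input, x, y)) {  // steps 6-8: virtual hit-area overlay
                    output.setRGB(x, y, 0xFFFFFF); // white at the circle's centre
                }
            }
        }
        return output;                             // step 9 is the loop itself
    }

    // True if any white pixel of the input lies inside the circular hit-area
    // centred on (cx, cy).
    static boolean anyLandNearby(BufferedImage img, int cx, int cy) {
        for (int dy = -PADDING; dy <= PADDING; dy++) {
            for (int dx = -PADDING; dx <= PADDING; dx++) {
                if (dx * dx + dy * dy > PADDING * PADDING) continue; // outside the circle
                int x = cx + dx, y = cy + dy;
                if (x < 0 || y < 0 || x >= img.getWidth() || y >= img.getHeight()) continue;
                if ((img.getRGB(x, y) & 0xFFFFFF) != 0) return true; // non-black = land
            }
        }
        return false;
    }
}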
I have a program that needs to take in a photo, taken by an iPhone (or any decent camera), of a 7x10 grid with a thick black border around the edges. This image can be received rotated to the right or to the left (there's no need to worry about skew). I already have an image of the grid in its original state, but I need to take the picture I receive and rotate it back to its "perfect/original" state.
Idea 1: Performance Hog/Bad Results
Threshold the picture that I receive and the perfect grid image I already have. Compare each pixel at 0 rotation, get a total score, and save it. Then repeat this, rotating the image in increments of 1 degree up to 359. The lowest score gives the rotation we need to get the picture back to its original state.
Idea 2: Still Unsure How To Go About Doing This
Threshold the picture that I receive and the perfect grid image I already have. Draw a line through the center of the picture vertically and another horizontally. Find the rotation based on the count of black pixels that the vertical and horizontal lines pass through. This would require some sort of trigonometry that I'm not too great at understanding.
Does anyone have any other ideas for getting this working?
Any help for pointing me in the right direction would be greatly appreciated!
Thanks!
Instead of drawing one horizontal and one vertical line, draw two horizontal lines (say, each at a third of the picture's height). Only look at the left halves of these lines and calculate how many black pixels there are on the path of each (a1 and a2). You also have to keep track of the distance between the two red lines, i.e. the number of pixels d.
Using this notation in the figure above, your desired angle is:
alpha=atan2((a2-a1),d)
and a counterclockwise rotation by alpha will bring the white portion of the picture into proper alignment.
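A minimal sketch of that measurement, assuming the photo has already been thresholded into a BufferedImage where black pixels are 0x000000; the line positions (one third and two thirds of the height) and the helper names are illustrative.
import java.awt.image.BufferedImage;

class RotationEstimator {

    // Returns alpha in radians; rotate the image counterclockwise by alpha.
    static double estimateRotation(BufferedImage bw) {
        int y1 = bw.getHeight() / 3;       // first horizontal line
        int y2 = 2 * bw.getHeight() / 3;   // second horizontal line
        int a1 = countBlackOnLeftHalf(bw, y1);
        int a2 = countBlackOnLeftHalf(bw, y2);
        int d = y2 - y1;                   // distance between the two lines, in pixels
        return Math.atan2(a2 - a1, d);     // alpha = atan2((a2 - a1), d)
    }

    // Counts black pixels along the left half of the horizontal line at row y.
    static int countBlackOnLeftHalf(BufferedImage bw, int y) {
        int count = 0;
        for (int x = 0; x < bw.getWidth() / 2; x++) {
            if ((bw.getRGB(x, y) & 0xFFFFFF) == 0) count++;
        }
        return count;
    }
}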