How to resize a Bitmap while running in Android - java

I have a Panel class, which extends SurfaceView. The panel fills my whole activity. Inside the panel I draw a ball Bitmap (and some other things, like lines and squares).
I add some pseudo depth, something like 2.5D; not really 3D, but close. Now I am facing a problem: when the ball goes into the depth, I don't know how to resize it, because the ball is constantly moving back and forth (and left/right, up/down too).
I don't know where to read about it, but I think if I add the Bitmap to an ImageView, this would solve all my problems.
My first question: does this even solve my problem? And if not, how else could I solve it?
And my second question: how do I add an ImageView on top of a SurfaceView?
I found some good hints here: Implement a ImageView to a SurfaceView
but it only says what to do, not how to do it. Nothing happens in my activity; everything happens in my Panel.

I think the answer to one of my own questions may help you: Unable to draw on top of a custom SurfaceView class. It's not possible to add an ImageView into a SurfaceView, because SurfaceView doesn't extend the ViewGroup class, but as that answer shows, it is straightforward to draw on top of a SurfaceView.
In that question, I had a custom view called DrawOnTop which inherited directly from View, and I drew resized and rotated bitmaps onto that view's canvas without any problems. My guess is that you'll be able to do the same with your ball object. The code I used was as follows:
Bitmap bitmapOrg = BitmapFactory.decodeResource(getResources(), R.drawable.bitmap);
int bitmapOrg_width = bitmapOrg.getWidth();
int bitmapOrg_height = bitmapOrg.getHeight();
float bitmapScaleFactor = 1.5f; // to create a resized bitmap 50% bigger
float angle = 90; // to rotate by 90 degrees
Matrix matrix = new Matrix();
// resize the bit map
matrix.postScale(bitmapScaleFactor, bitmapScaleFactor);
// rotate the Bitmap
matrix.postRotate(angle);
// create the resized bitmap
Bitmap resizedBitmap = Bitmap.createBitmap(bitmapOrg, 0, 0, bitmapOrg_width, bitmapOrg_height, matrix, true);
canvas.drawBitmap(resizedBitmap, left, top, null);
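Since the ball's apparent size changes every frame, creating a new Bitmap per frame would churn memory. A minimal per-frame sketch of an alternative (the depth field and its 0-1 range are my assumptions, not from the question): let the canvas scale at draw time by passing a destination rectangle.
// "depth" is a hypothetical value in [0, 1] tracked by the ball:
// 0 = nearest, 1 = farthest.
float scale = 1.0f - 0.5f * depth;             // shrink up to 50% at max depth
float w = bitmapOrg_width * scale;
float h = bitmapOrg_height * scale;
RectF dst = new RectF(left, top, left + w, top + h);
canvas.drawBitmap(bitmapOrg, null, dst, null); // null src rect = whole bitmap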

Related

How To Draw A Flower In Android Each Petal By Petal

In Android I would like to draw a flower by adding each petal from a center point. I used setRotation on the ImageView, but the rotation center point is different for each petal (I mean the center point of the flower). Can anybody look at my code and suggest a correction? Thanks.
int angle = 0;
int ypos = 500;
int xpos = 500;
RelativeLayout layout = (RelativeLayout) findViewById(R.id.ln1);
for (int i = 0; i < 10; i++) {
    ImageView image = new ImageView(this);
    image.setLayoutParams(new android.view.ViewGroup.LayoutParams(150, 400));
    image.setX(xpos);
    image.setY(ypos);
    image.setPadding(-7, -30, -10, 0);
    image.setPivotX(1.0f);
    image.setScaleX(1.5f);
    image.setScaleY(1.5f);
    image.setImageResource(R.drawable.petal);
    image.setRotation(image.getRotation() + angle);
    angle = angle + 36;
    layout.addView(image);
}
The image I get is this:
When you rotate the image, the rotation is done around the top-left corner of the image, not around the center of the rotated image.
The image below might illustrate this. The black square represents your image. The left-hand side shows the situation you have now; the right-hand side shows the situation you want.
Before rotating, you should subtract half the width from the x position and add half the height to the y position. Then you should get the desired image.
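An alternative sketch of the same idea using View pivots (the 150x400 size and variable names come from the question's code, and it assumes the petal drawable grows downward from its base at the top edge): put the rotation pivot at the petal's base and place that base on the flower's center before rotating.
int petalW = 150, petalH = 400;
ImageView petal = new ImageView(this);
petal.setLayoutParams(new android.view.ViewGroup.LayoutParams(petalW, petalH));
petal.setImageResource(R.drawable.petal);
petal.setX(xpos - petalW / 2f); // the base's center lands on the flower center
petal.setY(ypos);
petal.setPivotX(petalW / 2f);   // pivot: horizontal center of the top edge
petal.setPivotY(0f);            // pivot: the petal's base
petal.setRotation(angle);       // angle advances by 36 degrees per petal
layout.addView(petal);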
As user Ralf Renz pointed out in their comment, you could also simply start with an angle of -36. This is a useful workaround.
I accomplished this using the following strategy:
image.setPivotY(-1);
Now the flower came out with a different design, because the petal was tilted. So I tilted my actual image in the opposite direction using Inkscape, and now I get the desired output.

How to create a proper orthographic camera in libgdx

So I am having a bit of a hard time understanding how orthographic cameras work in libgdx.
What I want is a camera that renders things only within a square, while another camera sets the bounds for my whole screen.
So here, I was able to do what I wanted on the whole screen for the game pad. But the thing you see at the top right is the background map of the game, and I want to render only the parts that fall within the red square you see here. How do I achieve that?
Are cameras supposed to do that, or do I need to figure out a way to do it manually? I am really confused as to how cameras and projection matrices work.
On this screen, the red square and the green pad on the left are drawn using the projection matrix of my screen camera. The map (top right) is drawn using my map cam.
The map cam has a viewport of 400x400, but as you can see, the tiles are rectangular, and that isn't the aspect ratio I want. If someone can briefly explain how cameras work, I'd greatly appreciate it.
The reason I am not posting my code here is that I feel I need to understand how camera mechanics work to even code it properly, so I want to address that issue first.
Following @Tenfour04's advice worked perfectly. In case anyone wonders what I wanted to achieve, here's a picture.
A camera alone cannot crop off part of the screen. For that you need to use glViewport. There is already a Viewport class in Libgdx that can do that for you. You will need two orthographic cameras (one for the map and one for the GUI), but the viewport can create its own.
private Viewport viewport;
//in create:
viewport = new FitViewport(400, 400);
//in resize:
viewport.update(width, height);
//in render:
viewport.getCamera().position.set(/*...move your map camera as needed*/);
viewport.apply(); //viewport cropped to your red square
batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
//draw map
batch.end();
//return to full screen viewport
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setProjectionMatrix(yourGUICamera.combined);
batch.begin();
//draw gui
batch.end();
What happens is that the camera fits itself to the size of the screen. To change this, you can use a FrameBuffer. The frame buffer constrains the rendering to the desired size, and the result can then be drawn as a texture.
Create the frame buffer with the dimensions being in pixels.
//Initialize the buffer
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGB565, width, height, false);
Render the world within the buffer.
fbo.begin();
//Draw the world here
fbo.end();
Draw the buffer to the screen with a batch.
batch.begin();
batch.draw(fbo.getColorBufferTexture(), x, y);
batch.end();
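One caveat worth adding (not part of the original answer): in libgdx, a FrameBuffer's color texture is rendered upside down relative to screen coordinates, so drawing it directly flips the map vertically. A common fix is to wrap it in a TextureRegion and flip that:
TextureRegion region = new TextureRegion(fbo.getColorBufferTexture());
region.flip(false, true); // undo the vertical flip of FBO textures
batch.begin();
batch.draw(region, x, y);
batch.end();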

Magnetic sensor and image rotation in Android

Trying to build a magnetic compass, I have the following code.
public void onSensorChanged(SensorEvent event) {
    imageCompass = (ImageView) findViewById(R.id.imageMapDrawView);
    Bitmap myImg = BitmapFactory.decodeResource(getResources(), R.drawable.compass);
    Matrix matrix = new Matrix();
    matrix.postRotate(event.values[0]);
    Bitmap rotated = Bitmap.createBitmap(myImg, 0, 0, myImg.getWidth(), myImg.getHeight(),
            matrix, true);
    imageCompass.setImageBitmap(rotated);
}
I have the following questions:
A. I guess event.values[0] is in degrees, not in radians?
B. I want to get the image from the ImageView and rotate it around the center of the image. Where do I specify that?
C. I want to draw another image (an indicator) on top of that image. Can I do this? I already have a compass image in the ImageView and want to draw another image on top, without redrawing the whole view. How can I achieve this?
A. Check http://developer.android.com/reference/android/hardware/SensorEvent.html.
The length and contents of the values array depends on which sensor type is being monitored.
Sensor.TYPE_ACCELEROMETER is in m/s²
Sensor.TYPE_MAGNETIC_FIELD is in uT (micro Tesla)
Sensor.TYPE_GYROSCOPE is in rad/s
and go on for all sensors (check the link).
Also, the SensorListener interface has been deprecated since API level 3; you should use SensorEventListener instead.
B and C. You should consider using OpenGL ES for drawing your images. OpenGL ES 2.0 in particular gives you enough control over your images that it would be easy to do just about anything you need.
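For B and C, a lighter-weight alternative sketch (my addition, not part of the original answer): a View rotation pivots around the view's center by default, and the indicator can simply be a second ImageView stacked above the compass in a FrameLayout, so neither requires OpenGL.
@Override
public void onSensorChanged(SensorEvent event) {
    // Rotate the existing view around its own center; no Bitmap is
    // allocated per sensor event. The negative sign assumes the dial
    // should turn opposite to the device's azimuth.
    imageCompass.setRotation(-event.values[0]);
    // indicatorView (hypothetical) is a second ImageView declared after
    // imageCompass in a FrameLayout, so it is drawn on top and stays fixed.
}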

Double buffering in Java on Android with canvas and surfaceview

How does one go about doing this? Could somebody give me an outline?
From what I've found online, it seems like in my run() function:
create a bitmap
create a canvas and attach it to the bitmap
lockCanvas()
call draw(canvas) and draw bitmap into back buffer (how??)
unlockCanvasAndPost()
Is this correct? If so, could I get a bit of an explanation; what do these steps mean and how do I implement them? I've never programmed for Android before so I'm a real noob. And if it isn't correct, how DO I do this?
It's already double buffered, that's what the unlockCanvasAndPost() call does. There is no need to create a bitmap.
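To make that concrete, here is a minimal run() sketch under the usual assumptions (a running flag, a SurfaceHolder obtained from the SurfaceView, and your own draw(Canvas) method); the swap to the screen happens inside unlockCanvasAndPost():
@Override
public void run() {
    while (running) {
        Canvas canvas = holder.lockCanvas(); // back buffer to draw into
        if (canvas == null) continue;        // surface not ready yet
        try {
            draw(canvas);                    // your rendering code
        } finally {
            holder.unlockCanvasAndPost(canvas); // posts (swaps) the buffer
        }
    }
}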
The steps from the Android Developers Group say that you need a buffer canvas, onto which all rendering is drawn.
Bitmap buffCanvasBitmap;
Canvas buffCanvas;
// Create the bitmap and attach it to the buffer canvas: all changes
// made through the canvas are captured in the attached bitmap.
buffCanvasBitmap = Bitmap.createBitmap(getWidth(), getHeight(), Bitmap.Config.ARGB_8888);
buffCanvas = new Canvas();
buffCanvas.setBitmap(buffCanvasBitmap);
// Then you lock the main canvas
canvas = getHolder().lockCanvas();
// Draw everything you need into the buffer
buffCanvas.drawRect(/* ... */); // etc.
// Then draw the attached bitmap onto the main canvas
canvas.drawBitmap(buffCanvasBitmap, 0, 0, null);
// Then unlock the canvas to let it be drawn by the main mechanisms
getHolder().unlockCanvasAndPost(canvas);
This way you draw into your own buffer bitmap and just copy it onto the main canvas, rather than getting a different double-buffered canvas from the holder on each lock.

Attempting to get Bitmap pixel from an ImageView's onTouchListener

I am displaying a Bitmap on an ImageView. When the ImageView is tapped, I would like to get the pixel x/y coordinates for the Bitmap where it was tapped. I have registered the ImageView's onTouchListener, and within the onTouch method, I use getX and getY to get the location of the touch. The problem is that the image within the ImageView may be larger than the view itself, so it is scaled down to fit the screen. The x and y coordinates returned, then, are the coordinates of the view, but not necessarily the coordinates of the corresponding pixel on the Bitmap.
Can I get some sort of scale factor to know how much it was resized? Or if not, could someone suggest how I could go about getting the information I need? What's most important is that I can get the pixel coordinates - if I have to change the view type, that's alright.
Also, sometimes the bitmap is smaller than the screen, so it scales it up. In this scenario, it is possible that the x and y received from the MotionEvent are outside the bounds of the actual Bitmap.
Check into the ScaleType property. You'll have to do the math on the original image, based on the results of getScaleType(), but it's worth checking out.
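A sketch of the math that answer hints at (my addition; it assumes a matrix-based scale type such as FIT_CENTER, and that imageView, bitmap, and event refer to your ImageView, its Bitmap, and the MotionEvent): invert the view's image matrix to map view coordinates back to bitmap pixels.
Matrix inverse = new Matrix();
imageView.getImageMatrix().invert(inverse);
float[] p = { event.getX(), event.getY() };
inverse.mapPoints(p); // view coordinates -> bitmap pixel coordinates
// When the bitmap is scaled up, a touch can land outside it, so clamp.
int pixelX = Math.max(0, Math.min((int) p[0], bitmap.getWidth() - 1));
int pixelY = Math.max(0, Math.min((int) p[1], bitmap.getHeight() - 1));
int color = bitmap.getPixel(pixelX, pixelY);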
I came across the same case. The thread is more than a year old, but I thought it might be useful for someone. I had to scale the bitmap before setting it on the ImageView.
Here is the code:
// Convert the Uri to a Bitmap
Bitmap inputBitmap = MediaStore.Images.Media.getBitmap(this.getContentResolver(), selectedImage);
// Scale the bitmap to fit in the ImageView
Bitmap bmpScaled = Bitmap.createScaledBitmap(inputBitmap, img.getWidth(), img.getHeight(), true);
// Set the scaled bitmap on the ImageView
img.setImageBitmap(bmpScaled);
After this, the touched coordinates are the same as the coordinates of the bitmap.
