How to make Android views a shape other than rectangular - Java

I'm dealing with collision detection between some animated views, and some of them are ImageViews with transparent (alpha) regions. The problem is that the collision is triggered even when the second object is over a transparent part of the ImageView, so it isn't visually touching the image, because the ImageView's container is a rectangular box that extends beyond the drawn image.
How can I detect when it touches only the drawn part of the image, or make the container a triangle?
Here's how I'm detecting collisions between two views:
public boolean checkCollision(View v1, View v2) {
    if (v1 == null || v2 == null) {
        Log.e("checkCollision", "Views must not be null");
        throw new IllegalArgumentException("Views must not be null");
    }
    Rect r1 = new Rect();
    v1.getHitRect(r1);
    Rect r2 = new Rect();
    v2.getHitRect(r2);
    return Rect.intersects(r1, r2);
}

What I would recommend is doing an initial bounding-box check just to see whether you need to perform a more accurate test at all. This step is optional if you only have a few objects colliding, but with many objects it saves a lot of performance.
If you do need a further check, pick points on the image where you know the texture is solid and then test those points for collision. I can try to get you some code for this if you would like, but check out this question, which explains things in depth:
https://gamedev.stackexchange.com/questions/30866/collision-detection-with-non-rectangular-images
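For reference, here is a minimal sketch of that idea, assuming both views are ImageViews backed by unscaled BitmapDrawables and share the same parent (so their hit rects are in one coordinate space); the method name pixelCollision is just for illustration, and the classes used are the standard android.graphics / android.widget ones:
public boolean pixelCollision(ImageView v1, ImageView v2) {
    Rect r1 = new Rect();
    v1.getHitRect(r1);
    Rect r2 = new Rect();
    v2.getHitRect(r2);
    Rect overlap = new Rect();
    if (!overlap.setIntersect(r1, r2)) {
        return false; // bounding boxes don't even touch
    }
    // assumes the drawables are BitmapDrawables drawn 1:1 at the view's position
    Bitmap b1 = ((BitmapDrawable) v1.getDrawable()).getBitmap();
    Bitmap b2 = ((BitmapDrawable) v2.getDrawable()).getBitmap();
    for (int x = overlap.left; x < overlap.right; x++) {
        for (int y = overlap.top; y < overlap.bottom; y++) {
            // map the shared parent coordinate into each bitmap's local coordinates
            int a1 = Color.alpha(b1.getPixel(x - r1.left, y - r1.top));
            int a2 = Color.alpha(b2.getPixel(x - r2.left, y - r2.top));
            if (a1 != 0 && a2 != 0) {
                return true; // both images are opaque here, so they really collide
            }
        }
    }
    return false;
}
Scanning the whole overlap region is fine for a handful of views; with many objects you would keep the cheap Rect.intersects() check as the first filter, exactly as suggested above.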

Related

Bounding Camera to Content in JavaFX

I am trying to restrict the movement of a camera in JavaFX so that, when it moves, the user can only ever see the content of the SubScene. Currently my movement code looks as follows and has no checks to prevent this. I have tried limiting the camera's movement by checking its coordinates and approximating whether it will still show the content of the SubScene, but that has problems: it is purely an approximation, and it breaks down when zooming. TL;DR, the problem involves (1) detecting when the camera moves away from its content, and (2) preventing a transformation from occurring if it would result in the camera moving away from the content.
mapView.addEventFilter(MouseEvent.MOUSE_PRESSED, e -> {
    startX = e.getX();
    startY = e.getY();
});
mapView.addEventFilter(MouseEvent.MOUSE_DRAGGED, e -> {
    camera.setTranslateX(camera.getTranslateX() + (startX - e.getX()));
    camera.setTranslateY(camera.getTranslateY() + (startY - e.getY()));
});
mapView is a MeshView if that is relevant.
If you would like me to clarify anything or need further information I will provide it. Thanks for the help and good day.
The camera has a viewport that you can imagine as a movable overlay above the contents (with some background being displayed in areas where no contents are placed). For the sake of simplicity, I would separate scrolling (i.e. moving the viewport) from content transformations (e.g. zooming).
Based on this mental model, you can define the scrollable bounds as the bounds of your contents plus a possibly empty portion of the current viewport (e.g. when the contents are smaller than the viewport). The scrollable bounds need to be recomputed after every scroll operation (which increases or reduces the empty space within the current viewport) and after every content manipulation (transformations and bounds changes). If you restrict scrolling to the scrollable bounds, you can ensure that a scroll operation never increases the empty space within the viewport.
You can create an ObjectBinding scrollableBoundsBinding that is bound to the contents' bounds-in-local and local-to-parent-transform properties, as well as the viewport bounds. Then you can create a scrollableBoundsProperty that is bound to the binding. That property can be accessed when scrolling, to restrict the translation before applying it and thus prevent an increase of empty space within the viewport.
ObjectBinding<Bounds> scrollableBoundsBinding = new ObjectBinding<>() {
    {
        // TODO: bind to dependencies: viewport bounds and content bounds
        // TODO: (transformed to the same coordinate system)
        bind(camera.boundsInParentProperty(),
             contentPane.boundsInLocalProperty(),
             contentPane.localToParentTransformProperty());
    }

    @Override
    protected Bounds computeValue() {
        // TODO: compute union of viewport and content bounds
        return unionBounds(viewportBounds, contentBounds);
    }
};
ObjectProperty<Bounds> scrollableBoundsProperty = new SimpleObjectProperty<>();
scrollableBoundsProperty.bind(scrollableBoundsBinding);
// ...
// on mouse drag:
// dx, dy: relative mouse movement
// tx, ty: current scrolling
// mintx, maxtx, minty, maxty: translation range
// (taken from scrollable bounds and viewport size)
if (dx < 0) { tx = max(mintx, tx + dx); }
else { tx = min(maxtx, tx + dx); }
if (dy < 0) { ty = max(minty, ty + dy); }
else { ty = min(maxty, ty + dy); }
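Wired into the drag handler from the question, the clamping could look roughly like the sketch below. How mintx/maxtx/minty/maxty are derived from scrollableBoundsProperty.get() and the viewport size depends on your coordinate setup, so treat them as assumptions here; the startX/startY handling mirrors the question's handler.
mapView.addEventFilter(MouseEvent.MOUSE_DRAGGED, e -> {
    double dx = startX - e.getX();
    double dy = startY - e.getY();
    double tx = camera.getTranslateX();
    double ty = camera.getTranslateY();
    // clamp the new translation to the allowed range
    tx = dx < 0 ? Math.max(mintx, tx + dx) : Math.min(maxtx, tx + dx);
    ty = dy < 0 ? Math.max(minty, ty + dy) : Math.min(maxty, ty + dy);
    camera.setTranslateX(tx);
    camera.setTranslateY(ty);
});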
You might want to further restrict scrolling when the contents fully fit within the viewport, e.g. by placing the contents at the top left corner. You could also restrict the minimal zoom level in that case so that the contents are displayed as big as possible.
Note on usability: As already pointed out by another answer, you might want to consider allowing the user to drag slightly past the contents, possibly with decreasing effect the further away they try to scroll from the contents, comparable to the behavior of touchpad scrolling in Safari. Then, when the interaction finishes, you could transition back instead of snapping, in order to restrict the viewport to the contents again.
That's pretty common: just move, and after you have moved, check if you're out of bounds; in that case, go back into the scene. This usually feels natural, like when you pan an image on your phone: it doesn't just block, it appears to resist, and when you end the gesture it springs back. That's the simplest thing to do.

LibGDX — How to detect if a 3D object was clicked on?

I'm trying to write a simple bit of code that will detect whether a model was clicked on. So far the best method I've seen is to create some sort of rectangle around the mesh, use Gdx.input.justTouched() to get the x,y coordinates, and then check whether the rectangle contains those coordinates.
I have no idea if there's a better way to do this, some kind of mesh onClick listener or something that LibGDX has in place that I'm unaware of (I've been scouring Google and the javadocs but I can't seem to find anything). I don't really need to deal with the z-axis coordinate, at least I don't think so. I only have the one PerspectiveCamera and it's not going to be moving around that much (not sure if this matters?)
Anyways, in my render() method I have:
if (Gdx.input.justTouched()) {
    // this returns the correct values relative to the screen size
    Vector2 pos = new Vector2(Gdx.input.getX(), Gdx.input.getY());
    // I'm not sure how to get the correct rectangle to see what the
    // width and height are for the model relative to the screen?
    Rectangle modelBounds = new Rectangle(/* not sure what to put here */);
    if (modelBounds.contains(pos.x, pos.y)) {
        System.out.println("Model is being touched at: " + pos.x + ", " + pos.y);
    }
}
I'm really not sure if this is the correct way to do this. I can get the position of the model with:
modelInstance.getNode("Node1").globalTransform.getTranslation(new Vector3());
but I'm not sure how to get the width and height as a rectangle relative to the screen size, if it's even possible.
I'm also unsure whether this would cause massive lag, as I'm going to have about 7 nodes in total whose clicks I need to detect.
Is there a better way to do this? If not, is there a way to get the model width & height relative to the screensize (or camera, maybe)?
EDIT: I read about using bounding boxes, which seems like what I need, but I'm not quite sure how to implement it properly. I've changed my code to this:
public ModelInstance modelInstance;
public BoundingBox modelBounds;

@Override
public void create() {
    // ... omitted irrelevant bits of code
    modelInstance = new ModelInstance(heatExchangerModel);
    modelBounds = modelInstance.calculateBoundingBox(new BoundingBox());
}

@Override
public void render() {
    // ...
    if (Gdx.input.justTouched()) {
        Vector3 pos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
        System.out.println(pos);
        if (modelBounds.contains(pos)) {
            System.out.println("Touching the model");
        }
    }
}
I'm not really sure what the output of BoundingBox is supposed to be, or how the numbers it gives me correlate to a position in 2D space. Hmm..
EDIT2: I think I'm getting closer. I read about Rays and the .getPickRay method of my PerspectiveCamera. .getPickRay seems to return completely unusable numbers, though, like really tiny ones. I think I need to do something like:
if (Gdx.input.justTouched()) {
    Vector3 intersection = new Vector3();
    Ray pickRay = perspectiveCamera.getPickRay(Gdx.input.getX(), Gdx.input.getY());
    Intersector.intersectRayBounds(pickRay, modelBounds, intersection);
}
and then intersection should give me the point where they overlap. It doesn't appear to be working, however; it gives me really small numbers like (4.8066642E-5, 2.9180354E-5, 1.0) .. hmmm..
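For reference, a sketch of how that check could be made to work, assuming modelBounds comes from calculateBoundingBox as in the first edit (those bounds are in model space, so the instance transform has to be applied first) and using the boolean that Intersector.intersectRayBounds returns to decide the hit:
if (Gdx.input.justTouched()) {
    Ray pickRay = perspectiveCamera.getPickRay(Gdx.input.getX(), Gdx.input.getY());
    // calculateBoundingBox() gives model-space bounds; move them into world space
    BoundingBox worldBounds = new BoundingBox(modelBounds);
    worldBounds.mul(modelInstance.transform);
    Vector3 intersection = new Vector3();
    if (Intersector.intersectRayBounds(pickRay, worldBounds, intersection)) {
        System.out.println("Model clicked at " + intersection);
    }
}
The intersection vector is only filled in when the ray actually hits the box, which is why checking the return value matters; with only a handful of models, doing this for each of the 7 nodes per touch is cheap.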

Drawing objects behind circle except the ones behind 'background'

Situation: I have a canvas in an Android game and some objects (I will keep it as simple as possible): World (where all Laser and Block objects are stored), Block, and Laser. I can draw all these objects on the canvas.
I would like to 'hide' them behind a black 'background' and then draw a blurry 'transparent' circle, so that all objects are hidden behind the black background except the objects behind the circle.
I have thought about it, but I can't think of an approach to do this.
Images:
This is my current situation:
This is the expected result:
Do something like this:
public void drawBitmapsInCanvas(Canvas c) {
    c.drawBitmap(block, new Rect(/*coordinates here*/), new Rect(/*More coordinates*/), null);
    c.drawBitmap(block2, new Rect(/*coordinates here*/), new Rect(/*More coordinates*/), null);
    c.drawBitmap(laser, new Rect(/*coordinates here*/), new Rect(/*More coordinates*/), null);
    c.drawColor(Color.BLACK); // this hides everything under your black background
    c.drawBitmap(circle, new Rect(/*coordinates here*/), new Rect(/*More coordinates*/), null);
}
If you want transparency:
Paint paint = new Paint();
paint.setARGB(120, 0, 0, 0); // for the first ("120") parameter, 0 is completely transparent, 255 is completely opaque
paint.setAntiAlias(true);
c.drawBitmap(bmp, srcRect, dstRect, paint); // srcRect/dstRect: source and destination Rects as above
or if you are trying to change the opacity of individual pixels, the approach is a bit more complicated (I have not tested the code, but you get the gist of it):
public static final Bitmap getNewBitmap(Bitmap bmp, int circleCenterX,
                                        int circleCenterY, int circleRadius) {
    // The circle coordinates are measured from (0,0) of the bitmap,
    // not (0,0) of the canvas itself. circleRadius is the circle's radius.
    Bitmap temp = bmp.copy(Bitmap.Config.ARGB_8888, true);
    int[] pixels = new int[temp.getWidth() * temp.getHeight()];
    temp.getPixels(pixels, 0, temp.getWidth(), 0, 0, temp.getWidth(), temp.getHeight());
    int counter = 0;
    for (int i = 0; i < pixels.length; i++) {
        int alpha = Color.alpha(pixels[i]);
        if (alpha != 0 && !((Math.pow(counter / temp.getWidth() - circleCenterY, 2.0)
                + Math.pow(counter % temp.getWidth() - circleCenterX, 2.0)) < Math.pow(circleRadius, 2.0))) {
            // if the pixel is not already completely transparent and is NOT within the circle,
            // set the pixel's alpha value to 0
            pixels[i] = Color.argb(0, Color.red(pixels[i]), Color.green(pixels[i]), Color.blue(pixels[i]));
        }
        counter++;
    }
    temp.setPixels(pixels, 0, temp.getWidth(), 0, 0, temp.getWidth(), temp.getHeight());
    return temp;
}
and then draw temp.
I'm not completely sure what you are trying to ask, so you may have to modify as necessary.
If you try the second approach from qwertyuiop5040's answer, you will get very low performance when you apply it to a large image. Let's say a 1000*800 pixel image; then you will have a loop:
for (int i = 0; i < 1000 * 800; i++)
You could create an image that's a black rectangle with a transparent hole in it. The hole would be the circle that you can see through, and the image would be rendered over the spot you want to be visible. Then, you can draw four black rectangles around the image to cover the rest of the screen.
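A rough, untested sketch of that idea, assuming hole is such a pre-made bitmap (a black square with a transparent circle cut out of its middle), c is the Canvas from the earlier snippet, and circleCenterX/circleCenterY are the circle's position on screen; the Block/Laser bitmaps would be drawn before this:
Paint black = new Paint();
black.setColor(Color.BLACK);
float left = circleCenterX - hole.getWidth() / 2f;
float top = circleCenterY - hole.getHeight() / 2f;
c.drawBitmap(hole, left, top, null);
// cover the rest of the screen with four black rectangles around the hole bitmap
c.drawRect(0, 0, c.getWidth(), top, black);                                            // above
c.drawRect(0, top + hole.getHeight(), c.getWidth(), c.getHeight(), black);             // below
c.drawRect(0, top, left, top + hole.getHeight(), black);                               // left of it
c.drawRect(left + hole.getWidth(), top, c.getWidth(), top + hole.getHeight(), black);  // right of it
This avoids touching individual pixels every frame, which is why it scales much better than the per-pixel approach above.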

Android draw multiple Rect as background color

I used a canvas to draw multiple textures on it. These textures are rectangles, and now I want to use them with parts made invisible, so I can draw background colors behind the textures and get the same texture in different colors without adding the same picture several times in different colors.
I tried to add Rects like this:
for (Coordinate c : ch.getVisibleCoords()) {
    ShapeDrawable sD = new ShapeDrawable();
    Rect r = new Rect(c.getxS(),
                      c.getyS(),
                      (sh.getScreenWidth() - c.getxS() - sh.getTSize()),
                      (sh.getScreenHeight() - c.getyS() - sh.getTSize()));
    sD.setBounds(r);
    textureColorRects.add(sD);
}
Each coordinate represents a texture; the xS and yS values are the positions on the screen. For example, coordinate 1|1 could have xS=0 | yS=0, and 2|1 has xS=48 (48 = texture size) | yS=0. I tried this with ShapeDrawable and with plain Rectangles: in the first case it draws everything in the same color except one y-line, and in the other case it just draws garbage.
Is there another way to do this, or maybe I didn't understand how to set up those rectangles? I can't figure out how that left, top, right, bottom stuff works.
The rest of the code is here for you so you can see how I draw the ShapeDrawables:
int i = 0;
for (Coordinate c : ch.getVisibleCoords()) {
    ShapeDrawable sD = textureColorRects.get(i);
    Paint color = new Paint();
    color.setColor(c.getLandscape().getType().getColor());
    color.setStyle(Paint.Style.FILL);
    sD.getPaint().set(color);
    sD.draw(canvas);
    i++; // advance to the next drawable
}
The textureColorRects is a list containing all ShapeDrawables.
Thank you very much for reading.
I found a solution; it's a problem other people have had too (it was just hard to find). It is a bit tricky to understand how Rect works: the values for left, top, right, and bottom are the start and end points. For example, if I want a rectangle of size 16*16 at the point x=5 | y=18 on the screen, I need to set right to x+size (5+16) and bottom to y+size (18+16). Left and top can be set to the upper-left corner of the rect (the start position).
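Applied to the loop from the question, the fix would look roughly like this (identifiers taken from the question, untested):
for (Coordinate c : ch.getVisibleCoords()) {
    int size = sh.getTSize();
    Rect r = new Rect(c.getxS(),          // left   = start x
                      c.getyS(),          // top    = start y
                      c.getxS() + size,   // right  = left + size
                      c.getyS() + size);  // bottom = top + size
    ShapeDrawable sD = new ShapeDrawable();
    sD.setBounds(r);
    textureColorRects.add(sD);
}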

Detect circles in an image?

The program should detect circles and colour them in red. The symmetry method was suggested, where I assume each pixel is the center of a circle and check the four points at distance r (the radius) from it. If they are the same, I draw a circle. However, with the code below I get way too many unnecessary circles.
static boolean isCenterOfCircle(int row, int col, int r, BufferedImage image) {
    // getPixel gets the color of the given pixel
    if (getPixel(row, col, image) == getPixel(row + r, col, image)
            || getPixel(row, col, image) == getPixel(row - r, col, image)
            || getPixel(row, col, image) == getPixel(row, col + r, image)
            || getPixel(row, col, image) == getPixel(row, col - r, image)) {
        return true;
    } else {
        return false;
    }
}
This can be done using the Hough transform for circles.
See algorithm for detecting a circle in an image
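To illustrate the idea, here is a very small, untested sketch of a Hough transform for circles of one known radius r. It assumes a precomputed boolean edge map edges[row][col] of size height x width; edge detection and picking the peaks out of the accumulator are left out:
int[][] accumulator = new int[height][width];
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        if (!edges[row][col]) continue;
        // every edge pixel votes for all candidate centers lying at distance r from it
        for (int t = 0; t < 360; t += 2) {
            int a = row + (int) Math.round(r * Math.cos(Math.toRadians(t)));
            int b = col + (int) Math.round(r * Math.sin(Math.toRadians(t)));
            if (a >= 0 && a < height && b >= 0 && b < width) {
                accumulator[a][b]++;
            }
        }
    }
}
// cells with many votes are likely circle centers for radius r
Scanning over several radii (one accumulator per r) extends this to circles of unknown size, at the cost of more memory and time.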
You should check more than 4 points to detect the circle; what about 16 or more, perhaps depending on the radius? For a bigger radius you should check more points.
Or search the web for circle-detection algorithms. There are other approaches than checking a few pixels.
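As an illustration of the first suggestion, a sketch that samples n points on the circumference and requires all of them to match the center pixel (it keeps the question's convention of comparing against the center, reuses the getPixel helper from the question, and omits bounds checking):
static boolean isCenterOfCircle(int row, int col, int r, int n, BufferedImage image) {
    int center = getPixel(row, col, image);
    for (int i = 0; i < n; i++) {
        double angle = 2 * Math.PI * i / n;
        int pr = row + (int) Math.round(r * Math.cos(angle));
        int pc = col + (int) Math.round(r * Math.sin(angle));
        if (getPixel(pr, pc, image) != center) {
            return false; // one mismatching sample is enough to reject this center
        }
    }
    return true;
}
Requiring all samples to match (instead of any one of them, as the || chain in the question does) is what cuts down the number of false positives.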
