I'm trying to implement lighting estimation with Google's new Environmental HDR API. I'm following the instructions in the developer guide, but I don't know how to implement the app-specific code.
I have configured the session like this:
config.setLightEstimationMode(Config.LightEstimationMode.ENVIRONMENTAL_HDR);
session.configure(config);
And I put this code in my update callback:
private void onSceneUpdate(FrameTime frameTime) {
if (fragment instanceof ArFragment && loadedRenderable) {
// get the current ARCore frame from the fragment's scene view
Frame frame = ((ArFragment) fragment).getArSceneView().getArFrame();
if (frame == null)
return;
LightEstimate lightEstimate = frame.getLightEstimate();
// note - currently only out param.
float[] intensity = lightEstimate.getEnvironmentalHdrMainLightIntensity();
float[] direction = lightEstimate.getEnvironmentalHdrMainLightDirection();
//app.setDirectionalLightValues(intensity, direction);
float[] harmonics = lightEstimate.getEnvironmentalHdrAmbientSphericalHarmonics();
//app.setAmbientSphericalHarmonicsLightValues(harmonics); // app-specific code.
// Get HDR environmental lighting as a cubemap in linear color space.
Image[] lightmaps = lightEstimate.acquireEnvironmentalHdrCubeMap();
for (int i = 0; i < lightmaps.length /*should be 6*/; ++i) {
//app.UploadToTexture(i, lightmaps[i]);
}
}
}
I can't figure out what to do with the parameters provided by those methods.
I just want to light the 3D model with the same lighting conditions as the scene. Can anyone help me achieve this, or provide an example?
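For reference, here is my rough attempt at applying just the main directional light to Sceneform's sun node (the LUX_SCALE conversion factor is a guess on my part, not something from the docs, and I still don't know what to do with the spherical harmonics or the cubemap):
// uses com.google.ar.sceneform.Node, com.google.ar.sceneform.math.Vector3 and com.google.ar.sceneform.rendering.Color
private static final float LUX_SCALE = 1000f; // assumed conversion from relative HDR intensity to lux

private void applyMainLight(float[] intensity, float[] direction) {
    Node sun = ((ArFragment) fragment).getArSceneView().getScene().getSunlight();
    if (sun == null || sun.getLight() == null) {
        return;
    }
    // ARCore reports the direction the light arrives from; the node should look
    // along the direction it shines, hence the negation.
    sun.setLookDirection(new Vector3(-direction[0], -direction[1], -direction[2]));
    float max = Math.max(intensity[0], Math.max(intensity[1], intensity[2]));
    if (max > 0f) {
        sun.getLight().setColor(new Color(intensity[0] / max, intensity[1] / max, intensity[2] / max));
        sun.getLight().setIntensity(max * LUX_SCALE);
    }
}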
I have three global variables:
private PhysicsActor blade;
private PhysicsActor blades;
private ArrayList<PhysicsActor> bladesList;
I created an actor object from a class I created for my game.
blade = new PhysicsActor();
blade.storeAnimation( "", exTex );
blade.setOriginCenter();
blade.setEllipseBoundary();
blade.setMaxSpeed(50);
blade.setDeceleration(50);
bladesList = new ArrayList<PhysicsActor>();
for (int i = 0; i < 3 ; i++)
{
float xCoord = randomFloatGenerator(425, 50);
float yCoord = randomFloatGenerator(mapHeight - 200, 275);
blades = blade.clone();
blades.setPosition(xCoord, yCoord);
mainStage.addActor(blades);
bladesList.add(blades);
}
The problem is not that they do not spawn. It is that when I rotate them in my update(float dt) method while the game is running, only one of them rotates:
public void update(float dt)
{
// rotate the blade 70 degrees
blades.rotateBy(70);
// rest of code etc
}
Here is an image to help visualize
I know this is happening because I am only rotating the blades actor. What I want is to have all of them rotate, using the ArrayList. However, I do not know how to get them from the list. I have tried bladesList.get(i) in a for loop and a couple of other approaches I saw online, but it did not work. Any tips or instructions for me?
Also, I will post more code to clarify anything confusing if requested.
You can try this:
for (PhysicsActor blade : bladesList) {
blade.rotateBy(70);
}
This will make every blade in your list rotate by 70 degrees, provided you can access the list from where you are calling it.
I am working on a module in which I have to make the background of a bitmap image transparent. Actually, I am making an app like "Stick it", through which we can make a sticker out of any image. I don't know where to begin.
Can someone give me a link or a hint for it?
Original Image-
After making background transparent-
This is what I want.
I can only provide some hints on how to approach your problem. You need to do image segmentation. This can be achieved via the k-means algorithm or similar clustering algorithms; see this for algorithms on image segmentation via clustering and this for a Java code example. The computation of the clustering can be very time consuming on a mobile device.
Once you have the clustering, you can use this approach to distinguish between the background and the foreground. In general, all your pictures should have a background colour which differs strongly from the foreground, otherwise it is not possible for the clustering to distinguish between them. It can also happen that a pixel inside your foreground is assigned to the background cluster because it has a similar colour to your background. To prevent this from happening you could use this approach or a region-growing algorithm. Afterwards you can let your user select the clusters via touch and remove them.
I also had the same problems with my Android app. This will give you a good start, and once you have implemented the clustering you just need to tweak the k parameter to get good results.
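As a very rough illustration of the idea (this is not the code from the links above; the choice of the top-left pixel as the background seed, the fixed iteration count and the fixed random seed are simplifications of mine), a plain k-means over the pixel colours could look like this:
import android.graphics.Bitmap;
import android.graphics.Color;
import java.util.Random;

public final class BackgroundRemover {

    // Clusters the pixel colours into k groups and makes every pixel that falls into
    // the same cluster as the top-left pixel transparent (assumed to be background).
    public static Bitmap removeBackground(Bitmap src, int k, int iterations) {
        int w = src.getWidth(), h = src.getHeight();
        int[] pixels = new int[w * h];
        src.getPixels(pixels, 0, w, 0, 0, w, h);

        // initialise the centroids with randomly chosen pixels
        float[][] centroids = new float[k][3];
        Random rnd = new Random(42);
        for (int c = 0; c < k; c++) {
            int p = pixels[rnd.nextInt(pixels.length)];
            centroids[c][0] = Color.red(p);
            centroids[c][1] = Color.green(p);
            centroids[c][2] = Color.blue(p);
        }

        int[] assignment = new int[pixels.length];
        for (int it = 0; it < iterations; it++) {
            float[][] sums = new float[k][3];
            int[] counts = new int[k];
            // assignment step: nearest centroid in RGB space
            for (int i = 0; i < pixels.length; i++) {
                int best = nearestCentroid(pixels[i], centroids);
                assignment[i] = best;
                sums[best][0] += Color.red(pixels[i]);
                sums[best][1] += Color.green(pixels[i]);
                sums[best][2] += Color.blue(pixels[i]);
                counts[best]++;
            }
            // update step: move each centroid to the mean of its assigned pixels
            for (int c = 0; c < k; c++) {
                if (counts[c] == 0) continue;
                centroids[c][0] = sums[c][0] / counts[c];
                centroids[c][1] = sums[c][1] / counts[c];
                centroids[c][2] = sums[c][2] / counts[c];
            }
        }

        // treat the cluster of the top-left pixel as the background and clear it
        int backgroundCluster = assignment[0];
        for (int i = 0; i < pixels.length; i++) {
            if (assignment[i] == backgroundCluster) {
                pixels[i] = Color.TRANSPARENT;
            }
        }
        Bitmap out = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        out.setPixels(pixels, 0, w, 0, 0, w, h);
        return out;
    }

    private static int nearestCentroid(int pixel, float[][] centroids) {
        int best = 0;
        float bestDist = Float.MAX_VALUE;
        for (int c = 0; c < centroids.length; c++) {
            float dr = Color.red(pixel) - centroids[c][0];
            float dg = Color.green(pixel) - centroids[c][1];
            float db = Color.blue(pixel) - centroids[c][2];
            float d = dr * dr + dg * dg + db * db;
            if (d < bestDist) {
                bestDist = d;
                best = c;
            }
        }
        return best;
    }
}
On a phone you would run this off the UI thread and keep the image small; for the typical sticker use case a k between 2 and 5 and a handful of iterations is often enough to separate a reasonably uniform background.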
Seems like a daunting task. If you are talking about image processing, as I understand it, then you can try https://developers.google.com/appengine/docs/java/images/
Also, if you want to mask the entire background (I have not tried Stick it), the application needs to understand the background image map. Please provide some examples so that I can come up with a more definitive answer.
One possibility would be to utilize the floodFill operation in the OpenCV library. There are lots of examples and tutorials on how to do things similar to what you want, and OpenCV has been ported to Android. The relevant terms to Google are of course "OpenCV" and "floodFill".
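Very roughly, such a fill could look like this on Android with the OpenCV Java bindings (the corner seed point and the colour tolerances are arbitrary choices of mine, so treat this as a sketch rather than tested code): flood-fill the mask from a corner and then use that mask as the alpha channel.
import android.graphics.Bitmap;
import java.util.ArrayList;
import java.util.List;
import org.opencv.android.Utils;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

// assumes OpenCV has already been initialised (e.g. via OpenCVLoader)
Bitmap makeBackgroundTransparent(Bitmap src) {
    Mat rgba = new Mat();
    Utils.bitmapToMat(src, rgba);                         // CV_8UC4
    Mat rgb = new Mat();
    Imgproc.cvtColor(rgba, rgb, Imgproc.COLOR_RGBA2RGB);  // floodFill wants a 1- or 3-channel image
    // the mask must be 2 pixels wider and taller than the image
    Mat mask = Mat.zeros(rgb.rows() + 2, rgb.cols() + 2, CvType.CV_8UC1);
    // seed at the top-left corner (assumed background); the Scalars are the colour tolerances
    Imgproc.floodFill(rgb, mask, new Point(0, 0), new Scalar(255, 0, 255),
            new Rect(), new Scalar(20, 20, 20), new Scalar(20, 20, 20), 4);
    // the filled region is marked with 1s in the mask; turn it into an alpha channel
    // (0 = transparent where the fill reached, 255 everywhere else)
    Mat filled = mask.submat(1, mask.rows() - 1, 1, mask.cols() - 1);
    Core.multiply(filled, new Scalar(255), filled);
    Mat alpha = new Mat();
    Core.bitwise_not(filled, alpha);
    List<Mat> channels = new ArrayList<>();
    Core.split(rgba, channels);
    channels.set(3, alpha);
    Core.merge(channels, rgba);
    Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(rgba, out);
    return out;
}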
For this kind of task (and app) you'll have to use OpenGL. Usually when working with OpenGL you base your fragment shader on modules you build in MATLAB. Once you have the fragment shader it's quite easy to apply it to an image; check this guide on how to do it.
Here's a link to removing the background from an image in MATLAB.
I'm not fully familiar with MATLAB or whether it can generate GLSL code (the fragment shader) by itself. But even if it doesn't, you might want to learn GLSL yourself because, frankly, you are trying to build a graphics app, and the Android SDK is somewhat limited when it comes to image manipulation; most importantly, without a strong hardware-acceleration engine behind it, I cannot see it working smoothly enough.
Once you have the figure image, you can put it on a transparent background easily like this:
Canvas canvas = new Canvas(canvasBitmap);
canvas.drawColor(Color.TRANSPARENT);
BitmapDrawable bd = (BitmapDrawable) getResources().getDrawable(R.drawable.loading);
Bitmap yourBitmap = bd.getBitmap();
Paint paint = new Paint();
canvas.drawBitmap(yourBitmap, 0, 0, paint);
or, if you want to build a fresh bitmap from an existing image:
Bitmap newBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), image.getConfig());
Canvas canvas = new Canvas(newBitmap);
canvas.drawColor(Color.TRANSPARENT);
canvas.drawBitmap(image, 0, 0, null);
Or see this.
I hope this helps you.
If you are working on Android you might want a Buffer to get the pixels from the image; it's an IntBuffer (backed here by a memory-mapped file) and it reduces the memory usage enormously. To get data out of, and store data into, the Buffer you have three methods (you can skip this part if you don't have 'large' images):
private IntBuffer buffer;
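// note: mapBuffer(...) below must be called first, so that the IntBuffer is actually backed by the temp file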
public void copyImageIntoBuffer(File imgSource) {
final Bitmap temp = BitmapFactory.decodeFile(imgSource
.getAbsolutePath());
buffer.rewind();
temp.copyPixelsToBuffer(buffer);
}
protected void copyBufferIntoImage(File tempFile) throws IOException {
buffer.rewind();
Bitmap temp = Bitmap.createBitmap(imgWidth, imgHeight,
Config.ARGB_8888);
temp.copyPixelsFromBuffer(buffer);
FileOutputStream out = new FileOutputStream(tempFile);
temp.compress(Bitmap.CompressFormat.JPEG, 90, out);
out.flush();
out.close();
}
public void mapBuffer(final File tempFile, long size) throws IOException {
RandomAccessFile aFile = new RandomAccessFile(tempFile, "rw");
aFile.setLength(4 * size); // 4 bytes per int
FileChannel fc = aFile.getChannel();
buffer = fc.map(FileChannel.MapMode.READ_WRITE, 0, fc.size())
.asIntBuffer();
}
Now you can use the Buffer to get the pixels and modify them as desired. (I've copied a code snippet that used a progress bar on my UI and therefore needs a Handler/ProgressBar. When I did this I was working on bigger images and implemented image filters (Gaussian filter, grey filter, etc.); just delete what is not needed.)
public void run(final ProgressBar bar, IntBuffer buffer, Handler mHandler, int imgWidth, int imgHeight, int transparentColor ) {
for (int dy = 0; dy < imgHeight; dy ++){
final int progress = (dy*100)/imgHeight;
for (int dx = 0; dx < imgWidth; dx ++ ){
int px = buffer.get();
//int a = (0xFF000000 & px);
//int r = (0x00FF0000 & px) >> 16;
//int g = (0x0000FF00 & px) >> 8;
//int b = (0x000000FF & px);
//clear the alpha bits to make this colour fully transparent
if (px == transparentColor) {
px = px & 0x00FFFFFF;
}
//r = mid << 16;
//g = mid << 8;
//b = mid;
//int col = a | r | g | b;
int pos = buffer.position();
buffer.put(pos-1, px);
}
// Update the progress bar
mHandler.post(new Runnable() {
public void run() {
bar.setProgress(progress);
}
});
}
}
If you have really small images, you can get the pixels directly during onCreate(), or better yet build a buffer (maybe a HashMap or a List) before you start the Activity.
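For the small-image case, a minimal sketch (my own, assuming a mutable Bitmap called bitmap) would be:
int w = bitmap.getWidth(), h = bitmap.getHeight();
int[] pixels = new int[w * h];
bitmap.getPixels(pixels, 0, w, 0, 0, w, h);
// ...modify the pixels array here...
bitmap.setPixels(pixels, 0, w, 0, 0, w, h); // requires a mutable bitmap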
I have this code in my DoLogic method. I'm trying to do an intersection test between the shots and the obstacles, but I really can't think of anything, because they are different objects. I tried a few things, but nothing was detected at all.
for(int i=0; i<shots.length; i++)
{
if(shots[i] != null)
{
shots[i].moveShot(SHOTSPEED);
if(shots[i].getXPos() > 1280)
{
shots[i] = null;
}
}
}
for(int i=0; i<obstacles.length; i++)
{
if(obstacles[i] == null)
{
obstacles[i] = generateObstacle();
break;
}
if(obstacles[i] != null)
{
obstacles[i].moveObstacle();
if(obstacles[i].getXPos() < 10)
{
obstacles[i] = null;
}
else if(obstacles[i].intersects(Player1.character))
{
obstacles[i] = null;
GameSounds.hit("/resources/8bit_bomb_explosion.wav");
lives--;
}
}
}
Can you give me an example, or at least some advice, on how to do an intersection test between an obstacle and a shot?
Do these classes implement Shape? If not, they should. See the answer to Collision detection with complex shapes for an SSCCE.
..and I should implement the Rectangle in Obstacle and the Oval in Shot?
That seems logical to me, from your description of both objects.
..I just type implements Shape?
I would tend to use a Rectangle2D or Rectangle2D.Double for the obstacle & an Ellipse2D or Ellipse2D.Double for the shot. Rather than extend them, just hold them as an instance variable.
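For example, a minimal self-contained sketch (the sizes and positions are made up, just to show the shape of the test):
import java.awt.geom.Ellipse2D;
import java.awt.geom.Rectangle2D;

public class CollisionSketch {

    public static void main(String[] args) {
        // obstacle held as a rectangle, shot held as an ellipse
        Rectangle2D obstacleBounds = new Rectangle2D.Double(100, 50, 40, 120);
        Ellipse2D shotBounds = new Ellipse2D.Double(130, 100, 10, 10);

        // Shape.intersects(Rectangle2D) does the actual test
        if (shotBounds.intersects(obstacleBounds)) {
            System.out.println("Shot hit the obstacle");
        }
    }
}
In the game loop you would keep each shape in sync with its sprite every frame, e.g. shotBounds.setFrame(newX, newY, 10, 10), and then null out the shot and obstacle on a hit, just as you already do for the player collision.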
Give it a go & let us know how you go. If you get stuck, post an SSCCE of your best attempt.
You might need to hot-link to some small images.
I'm making a sliding puzzle (3x3 / 4x4 / 5x5, with the lower-right corner cut out). However, I can't figure out where to start with programmatically cutting images (which will be loaded from the gallery on the SD card or from the app's database) into the puzzle pieces.
I've been looking around the internet and nothing has really helped me.
What is the best way to cut up this image and store it in a new database (while still being able to slide the pieces)? Just a push in the right direction would be appreciated.
Check out the PhotoGaffe app; it's available on Google Code here.
It allows the user to choose between 3x3, 4x4, 5x5, and 6x6 puzzles.
This may help you with your task.
Straight from something I'm working on at the moment!
// assumes fields such as screenWidth, screenHeight, rows, column, image[][] and the root layout already exist
Bitmap main = BitmapFactory.decodeResource(getResources(), R.drawable.puzzle);
float rescalefactor;
if (main.getHeight() > main.getWidth()) {
rescalefactor = ((float) screenHeight) / main.getHeight();
} else {
rescalefactor = ((float) screenWidth) / main.getWidth();
}
main = Bitmap.createScaledBitmap(main, (int) (main.getWidth() * rescalefactor), (int) (main.getHeight() * rescalefactor), false);
Bitmap cropped;
LinearLayout[] layout = new LinearLayout[rows];
int x = 0, y = 0, i = 0, j = 0, width = main.getWidth() / column, height = main.getHeight() / rows;
int count = 1;
for(i=0;i<rows;++i)
{
layout[i] = new LinearLayout(this);
for(j=0;j<column;++j)
{
cropped = Bitmap.createBitmap(main,x,y,width,height);
image[i][j] = new Tile(this);
image[i][j].setImageBitmap(cropped);
image[i][j].row =i;image[i][j].column =j;
image[i][j].setPadding(1, 1, 1, 1);
image[i][j].setOnClickListener(this);
image[i][j].setDrawingCacheEnabled(true);
image[i][j].setId(count); count++;
layout[i].addView(image[i][j]);
x += width;
}
x = 0; y += height;
root.addView(layout[i]);
}
This is the line where the work is really done:
cropped = Bitmap.createBitmap(main,x,y,width,height);
The Tile class is super simple. Just an extended ImageView with row and column fields:
public class Tile extends ImageView {
public int row, column;
public Tile(Context context)
{ super(context);}
}
So I have a path generator which currently works like this:
http://www.openprocessing.org/visuals/?visualID=2615 (the source is there; WARNING - Java applet)
I want to create a 3D object from the paths I generate, so that in one of the perspectives it looks similar to what I get now in 2D.
So how do I dynamically construct a 3D object by adding paths?
BTW: I actually meant an algorithm like this: http://www.derschmale.com/2009/07/20/slice-based-volume-rendering-using-pixel-bender/
So from such paths (I do not want to use images and I do not want to use Flash; I want to use Java + OpenGL)
I want to create such a 3D image (but note, I want OpenGL, Java, and paths).
I'm not sure I understand what you're after.
The example you supplied draws 2D paths, but merely uses z; scaling would have worked in a similar way.
"So how do I dynamically construct a 3D object by adding paths?"
Do you mean extruding/lathing an object, or replicating the scrunch sketch ?
Drawing a path is easy in Processing: you just place vertex() calls, in a for loop, between beginShape() and endShape().
Here is the bit of code that does that in the example you've sent:
beginShape();
for (int p=0; p<pcount; p++){
vertex(Ring[p].position().x(),Ring[p].position().y());
}
endShape(CLOSE);
You can also call vertex(x, y, z).
I wanted to extrude a path a while back, here is my question in case it helps.
Basic sketch is uploaded here.
EDIT:
If you have an array of 2D polygons, you can just loop through them and draw them using something similar to beginShape() and endShape(); GL_POLYGON might be handy.
e.g.
import processing.opengl.*;
import javax.media.opengl.*;
int zSpacing = 10;
PVector[][] slices;
void setup() {
size(600, 500, OPENGL);
slices = new PVector[3][3];
//dummy slice 1
slices[0][0] = new PVector(400, 200,-200);
slices[0][1] = new PVector(300, 400,-200);
slices[0][2] = new PVector(500, 400,-200);
//dummy slice 2
slices[1][0] = new PVector(410, 210,-200);
slices[1][1] = new PVector(310, 410,-200);
slices[1][2] = new PVector(510, 410,-200);
//dummy slice 3
slices[2][0] = new PVector(420, 220,-200);
slices[2][1] = new PVector(320, 420,-200);
slices[2][2] = new PVector(520, 420,-200);
}
void draw() {
background(255);
PGraphicsOpenGL pgl = (PGraphicsOpenGL) g; // g may change
GL gl = pgl.beginGL(); // always use the GL object returned by beginGL
for(int i = 0 ; i < slices.length; i ++){
gl.glColor3f(0, 0.15f * i, 0);
gl.glBegin(GL.GL_POLYGON);
for(int j = 0; j < slices[i].length; j++){
gl.glVertex3f(slices[i][j].x, slices[i][j].y,slices[i][j].z + (zSpacing * i));
}
gl.glEnd();
}
pgl.endGL();
}
The idea is that you loop through each slice, and for each slice you loop through all of its points. Obviously, the slices and the number of 3D vectors inside each slice depend on your data. Speaking of which, where does your data come from?
If slices are not what you're after, volTron could come in handy:
volTron (image: http://dm.ncl.ac.uk/joescully/voltronlib/images/s2.jpg)
HTH,
George