I don't understand how to simply clear the screen in Java while using OpenGL. I have searched all over the internet and there doesn't seem to be a good resource for this. Basically I just want to clear the screen and re-draw a circle: when I press "e", the screen should clear and a new circle should be drawn. Instead, my code never clears the screen, and it most definitely doesn't draw anything else. I have two Java files. I will only post the relevant code for the sake of anyone who can help, but I can post more if needed.
At the beginning of my JOGLEventListener.java file I'm also declaring a global variable:
// Test
GLAutoDrawable test = null;
JOGLEventListener.java
@Override
public void display(GLAutoDrawable gLDrawable)
{
    // Set a global variable to hold the gLDrawable
    // May not need this?
    test = gLDrawable;
    GL2 gl = gLDrawable.getGL().getGL2();
    gl.glClearColor(backrgb[0], 0, 1, 1);
    gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
    backrgb[0] += 0.0005;
    if (backrgb[0] > 1) backrgb[0] = 0;
    // =============================================
    // Draw my circle here
    // =============================================
    System.out.println("Drawing Circle..");
    drawCircle(5.0f, 5.0f, 10.0f);
}
// Draw Circle
void drawCircle(float x, float y, float radius)
{
    System.out.println("IN DRAWCIRCLE");
    int i;
    GL2 gl = test.getGL().getGL2();
    int lineAmount = 100; // # of segments used to draw the circle
    //GLfloat radius = 0.8f; //radius
    final float twicePi = (float) (2.0f * Math.PI);
    gl.glBegin(gl.GL_LINE_LOOP);
    for (i = 0; i <= lineAmount; i++) {
        gl.glVertex2f(
            x + (radius * (float) Math.cos(i * twicePi / lineAmount)),
            y + (radius * (float) Math.sin(i * twicePi / lineAmount))
        );
    }
    gl.glEnd();
}
@Override
public void keyTyped(KeyEvent e)
{
    char key = e.getKeyChar();
    System.out.printf("Key typed: %c\n", key);
    GL2 gl = test.getGL().getGL2();
    if (key == 'e')
    {
        // WHY ISN'T THIS WORKING?
        // CLEAR THE SCREEN AND DRAW ME A NEW CIRCLE
        gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT);
        gl.glLoadIdentity();
        // test
        float x = 100.0f;
        float y = 100.0f;
        float twicePi = (float) (2.0f * Math.PI);
        float radius = 100f;
        System.out.println("Draw Another Circle...");
        gl.glBegin(gl.GL_LINE_LOOP);
        for (int i = 0; i <= 360; i++)
        {
            gl.glVertex2f(
                x + (radius * (float) Math.cos(i * twicePi / 360)),
                y + (radius * (float) Math.sin(i * twicePi / 360))
            );
        }
        gl.glEnd();
    }
}
1) That's deprecated OpenGL, don't use it.
2) Don't save the gl object in a global variable; always get it from the drawable or the GLContext.
3) Use a shader program to render and a vertex buffer to hold the vertex positions. But first, I'd suggest you start with a tutorial to learn the basics of OpenGL. Or, if you want to get something working asap, clone this hello triangle of mine and start experimenting with that.
The problem is apparently that you don't swap the front and back buffers.
I'm not familiar with the OpenGL bindings for Java, but I guess that the library already does that for you after it calls the display() function. It doesn't do that after keyTyped().
The way you are supposed to do this is to always draw the scene from scratch inside the display() function, based on some internal state. Then in keyTyped() you should modify that internal state and invalidate the window, which will cause display() to be called again and redraw the scene properly.
EDIT: Calling display() yourself won't be enough. I can't find how to invalidate the window in Java (in C this would be so much easier). As a dirty hack you can try calling temp.swapBuffers() manually in display, setting setAutoSwapBufferMode(false) and calling display from keyTyped().
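For what it's worth, here is a minimal sketch of that pattern (all scene state in fields, display() redraws everything from it, and the key handler only changes state and requests a repaint). The canvas field and the circle state variables are my own placeholders, not names from the question, and drawCircle is assumed to have been changed to take the GL it should draw with:

// Sketch only: fields assumed to exist on the listener class.
private GLCanvas canvas;                 // the canvas this listener is registered on
private float circleX = 5f, circleY = 5f, circleRadius = 10f;

@Override
public void display(GLAutoDrawable drawable) {
    GL2 gl = drawable.getGL().getGL2();  // always fetch GL from the drawable, don't cache it
    gl.glClearColor(0f, 0f, 1f, 1f);
    gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
    drawCircle(gl, circleX, circleY, circleRadius);   // redraw the whole scene from state
}

@Override
public void keyTyped(KeyEvent e) {
    if (e.getKeyChar() == 'e') {
        circleX = 100f;                  // only change the state here...
        circleY = 100f;
        circleRadius = 100f;
        canvas.display();                // ...then ask for a redraw (an Animator would also pick it up)
    }
}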
I have tried to create an NPC character that can "see" the player by using cones of vision.
The NPC rotates back and forth at all times.
My problem is that the arc used for collision keeps a generic, unchanging position, even though when it's drawn to the screen it looks correct.
[Screenshots of the collisions in action][1]
[GitHub link for java files][2]
I'm using Arc2D to draw the shape like this in my NPC class
// Update the shapes used in the npc
rect.setRect(x, y, w, h);
ellipse.setFrame(rect);
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle, visionAngle * 2, Arc2D.PIE);
// centerX, centerY (of the npc),
// the distance from the arc to the npc,
// a constant value around 45 degrees and a constant value around 90 degrees (to make a pie shape)
I've tried multiplying the position and the angles by the sin and cosine of the NPC's current angle
something like these
visionArc.setArcByCenter(cx * Math.cos(Math.toRadians(angle)), cy * Math.sin(Math.toRadians(angle)), visionDistance, visionAngle, visionAngle * 2, Arc2D.PIE);
or
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle - angle, (visionAngle + angle) * 2, Arc2D.PIE);
or
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle * (Math.cos(Math.toRadians(angle))), visionAngle * 2, Arc2D.PIE);
I've tried a lot but can't seem to find what works. Making the vision angles not constant makes an arc that expands and contracts, and multiplying the position by the sin or cosine of the angle will make the arc fly around the screen, which doesn't really work either.
This is the function that draws the given NPC
public void drawNPC(NPC npc, Graphics2D g2, AffineTransform old) {
    // translate to the position of the npc and rotate
    AffineTransform npcTransform = AffineTransform.getRotateInstance(Math.toRadians(npc.angle), npc.x, npc.y);
    // Translate back a few units to keep the npc rotating about its own center point
    npcTransform.translate(-npc.halfWidth, -npc.halfHeight);
    g2.setTransform(npcTransform);
    // g2.draw(npc.rect); //<-- show bounding box if you want
    g2.setColor(npc.outlineColor);
    g2.draw(npc.visionArc);
    g2.setColor(Color.BLACK);
    g2.draw(npc.ellipse);
    g2.setTransform(old);
}
This is my collision detection algorithm. NPC is a superclass of Ninja (shorter range, higher peripheral vision):
public void checkNinjas(Level level) {
    for (int i = 0; i < level.ninjas.size(); i++) {
        Ninja ninja = level.ninjas.get(i);
        playerRect = level.player.rect;
        // Check collision
        if (playerRect.getBounds2D().intersects(ninja.visionArc.getBounds2D())) {
            // Create an area of the object for greater precision
            Area area = new Area(playerRect);
            area.intersect(new Area(ninja.visionArc));
            // After checking if the area intersects a second time, make the NPC "see" the player
            if (!area.isEmpty()) {
                ninja.seesPlayer = true;
            } else {
                ninja.seesPlayer = false;
            }
        }
    }
}
Can you help me correct the actual positions of the arcs for my collision detection? I have tried creating new shapes so I can have one to do math on and one to draw to the screen but I scrapped that and am starting again from here.
[1]: https://i.stack.imgur.com/rUvTM.png
[2]: https://github.com/ShadowDraco/ArcCollisionDetection
After a few days of coding, learning, and testing new ideas, I came back to this program and implemented the collision detection using my original idea (ray casting), creating the equivalent with rays!
Screenshot of the new product
Github link to the project that taught me the solution
Here's the new math
public void setRays() {
    for (int i = 0; i < rays.length; i++) {
        double rayStartAngleX = Math.sin(Math.toRadians((startAngle - angle) + i));
        double rayStartAngleY = Math.cos(Math.toRadians((startAngle - angle) + i));
        rays[i].setLine(cx, cy, cx + visionDistance * rayStartAngleX, cy + visionDistance * rayStartAngleY);
    }
}
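For anyone following along, here is a rough sketch of how the rays can then replace the arc in the collision check; it assumes rays is the Line2D[] filled by setRays() and that the player's bounding rectangle is available, as in the original checkNinjas():

// Sketch: the NPC "sees" the player if any vision ray crosses the player's bounds.
public boolean seesPlayer(Rectangle2D playerBounds) {
    for (Line2D ray : rays) {
        if (playerBounds.intersectsLine(ray)) {
            return true;
        }
    }
    return false;
}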
Here is a link to the program I started after I asked this question and moved on to learn more, and an image of what the new product looks like.
(The original GitHub page has been updated with a new branch :) I'm learning GitHub right now too.)
I do not believe that using Arc2D in the way I intended is possible. However, there is a .setArcByTangent method; it may be possible to use that, but I wasn't going to get into it. Rays are cooler.
I'm a total beginner, so forgive me if this is a silly or improper question.
I'm trying to make my own virtual oscillograph in Processing. I don't really know how to explain it, but the peaks of the waveform reach the edge of the window, and I want to "zoom out" so they fit. I'm not sure what I'm doing wrong here or what's wrong with my code. I've tried changing the buffer size and changing the multiplier for x/y. My sketch is adapted from a Minim example sketch.
All help is greatly appreciated.
import ddf.minim.*;

Minim minim;
AudioInput in;

int frames;
int refresh = 7;
float fade = 32;

void setup()
{
  size(800, 800, P3D);
  minim = new Minim(this);
  ellipseMode(RADIUS);
  // use the getLineIn method of the Minim object to get an AudioInput
  in = minim.getLineIn(Minim.STEREO);
  println(in.bufferSize());
  //in.enableMonitoring();
  frameRate(1000);
  background(0);
}

void draw()
{
  frames++; // same as frames = frames + 1
  if (frames % refresh == 0) {
    fill(0, 32, 0, fade);
    rect(0, 0, width, height);
  }
  float x;
  float y;
  stroke(0, 0);
  fill(0, 255, 0);
  // draw the waveforms so we can see what we are monitoring
  for (int i = 0; i < in.bufferSize() - 1; i++)
  {
    x = width/2 + in.left.get(i) * height/2;
    y = height/2 - in.right.get(i) * height/2;
    ellipse(x, y, .5, .5);
  }
}
Thanks
Edit: you don't need pushMatrix and popMatrix here. I guess my understanding of it is lacking too. You can just use translate.
You can use matrices to create a camera object; there is a ton of material out there that you can read up on to understand the math behind this and implement it anywhere.
However, there might be an easier solution here. You can use pushMatrix and popMatrix in combination with translate. Pushing and popping the matrix manipulates the matrix stack: you create a new "frame" where you can play around with translations, then pop back to the original frame (so you don't get lost by applying new translations on every draw).
Push the matrix, translate the Z coordinate once before drawing everything you want zoomed out, then pop the matrix. You can set up a variable for the translation so that you can control it with your mouse.
Here's a crude example (I don't have all those libraries so couldn't add it to your code):
float scroll = 0;
float scroll_multiplier = 10;

void setup()
{
  size(800, 800, P3D);
  frameRate(1000);
  background(0);
}

void draw()
{
  background(0);

  // draw HUD - things that don't zoom
  fill(255, 0, 0);
  rect(400, 300, 100, 100);

  // We don't want to mess up our coordinate system, so we push a new "frame" onto the matrix stack
  pushMatrix();

  // We can now safely translate the Z axis, creating a zoom effect. In reality, whatever we pass
  // to translate gets added to the coordinates of draw calls.
  translate(0, 0, scroll);

  // Draw zoomed elements
  fill(0, 255, 0);
  rect(400, 400, 100, 100);

  // Pop the matrix - if we don't push and pop around our translation, the translation will be
  // applied every frame, making our drawables disappear in the distance.
  popMatrix();
}

void mouseWheel(MouseEvent event) {
  scroll += scroll_multiplier * event.getCount();
}
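Applied to the sketch in the question, the same idea would wrap just the waveform loop. This is only a sketch that assumes the Minim setup from the question and the scroll/mouseWheel code above (the fade/refresh logic is left out for brevity):

void draw()
{
  background(0);                          // simple clear; keep your fade logic if you prefer
  fill(0, 255, 0);
  noStroke();

  pushMatrix();
  translate(width/2, height/2, scroll);   // move the origin to the centre, then zoom on Z
  for (int i = 0; i < in.bufferSize() - 1; i++) {
    float x = in.left.get(i) * height/2;
    float y = -in.right.get(i) * height/2;
    ellipse(x, y, .5, .5);
  }
  popMatrix();
}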
I am writing a 2D traditional animation program in Java using Swing and JPen. The application works well. However, I am dissatisfied with the results I am getting when I draw lines.
Using the JPen API, the Swing panel listens for stylus input; its code is:
/**
 * This method is called whenever the stylus makes contact with the tablet while inside of the JPanel
 * functioning as the Drawing Canvas.
 * @param evt
 */
public void penLevelEvent(PLevelEvent evt) {
    // Get kind of event: does it come from mouse (CURSOR), STYLUS or ERASER?
    PKind kind = evt.pen.getKind();
    // Discard events from the mouse
    if (kind == PKind.valueOf(PKind.Type.CURSOR)) {
        //System.out.println("returning since this is only a mouse cursor");
        return;
    }
    // Get the current cursor location
    // (the position value is with respect to the entire application window)
    // Get the tilt values (not with a Bamboo... so untested!)
    float curX = evt.pen.getLevelValue(PLevel.Type.X);
    float curY = evt.pen.getLevelValue(PLevel.Type.Y);
    float pressure = evt.pen.getLevelValue(PLevel.Type.PRESSURE); // 0.0 - 1.0
    float xTilt = evt.pen.getLevelValue(PLevel.Type.TILT_X);
    float yTilt = evt.pen.getLevelValue(PLevel.Type.TILT_Y);

    // Set the brush's size and darkness relative to the pressure
    float darkness = 255 * pressure;

    // Transform them to azimuthX and altitude, two angles with the projection of the pen against the X-Y plane.
    // azimuthX is the angle (clockwise direction) between this projection and the X axis. Range: -pi/2 to 3*pi/2.
    // altitude is the angle between this projection and the pen itself. Range: 0 to pi/2.
    // Might be more practical to use than raw x/y tilt values.
    double[] aa = { 0.0, 0.0 };
    PLevel.Type.evalAzimuthXAndAltitude(aa, xTilt, yTilt);
    // or just PLevel.Type.evalAzimuthXAndAltitude(aa, evt.pen);
    double azimuthX = aa[0];
    double altitude = aa[1];

    //-------------------------------------------------------------------------------------
    // If the stylus is being pressed down, we want to draw a black
    // line onto the screen. If it's the eraser, we want to create
    // a white line, effectively "erasing" the black line
    //-------------------------------------------------------------------------------------
    if (kind == PKind.valueOf(PKind.Type.STYLUS)) {
        //System.out.println("Darkness " + darkness);
        int alpha = 255 - (int) darkness;
        color = new Color(0, 0, 255, 255 - alpha);
    }
    else if (kind == PKind.valueOf(PKind.Type.ERASER)) {
        System.out.println("Handle eraser");
    }
    else {
        return; // IGNORE or CUSTOM...
    }

    // If movement of the stylus is occurring
    if (evt.isMovement()) {
        // and buttonIsDown (boolean) is true
        if (buttonIsDown) {
            // drawingCanvas:JPanel -> instruct the JPanel to draw at the following coordinates using the specified pressure
            drawingCanvas.stylusMovementInput(prevXPos, prevYPos, curX, curY, pressure);
        }
        prevXPos = curX;
        prevYPos = curY;
    }
    prevXPos = curX;
    prevYPos = curY;
}
So after the above method is invoked, the JPanel (drawingCanvas) starts to draw on a BufferedImage by obtaining the image's Graphics2D. Here is the code: stylusMovementInput calls performDrawOnBufferImageGraphic2D:
/**
 * Draw on the active frame that is selected. Then call channel refresh, to refresh the composite image derived
 * from changes related to the current frame.
 * @param cX current
 * @param cY current
 * @param oX previous
 * @param oY previous
 * @param pressure pressure 0 - 1f
 */
private void performDrawOnBufferImageGraphic2D(float oX, float oY, float cX, float cY, float pressure) {
    // Obtain the current layer that the user wants to draw on
    // MyImageData is encapsulating a BufferedImage
    MyImageData $activeData = getActiveLayer();
    // Exit if one is not valid
    if ($activeData == null) return;
    // If the layer is valid, get the bufferedImage's graphics
    Graphics2D $activeDataGFX = $activeData.getImageGraphics();

    // Customize the drawing brush (create a BasicStroke)
    Stroke thickness = Sys.makeStroke(getPencilSize(pressure), null);
    // Determine the transparency with respect to the pressure
    int alpha = (int) (255 * pressure * getPencilOpacityPercentage());
    // Get the current color found in the color wheel
    Color cwVal = Sys.getColorFromColorWheel();
    Color drawingColor;
    if (cwVal != null) {
        // add the alpha value to it
        drawingColor = new Color(cwVal.getRed(), cwVal.getGreen(), cwVal.getBlue(), alpha);
    } else throw new RuntimeException("ColorWheel is null during stylus draw");

    // set the brush and drawingColor
    $activeDataGFX.setStroke(thickness);
    // Save a reference to the current bufferedImage graphics composite
    Composite originalComposite = $activeDataGFX.getComposite();
    if (getCurrentTool() == DrawingCanvasTool.ERASER) {
        // If eraser, set new composite information to allow erasing to transparency
        $activeDataGFX.setPaint(new Color(255, 255, 255, 0));
        $activeDataGFX.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_IN, 0.0f));
    } else {
        // set the drawing color
        $activeDataGFX.setPaint(drawingColor);
    }

    //---------------------------------------------------------------
    // Rotate, Translate, Zoom the image according to the panning, zoom, and rotation
    // set by the user
    //---------------------------------------------------------------
    // Figure out the canvas center, as it is used for rotating
    float theta = (float) Math.toRadians(canvasRotation);
    Dimension drawingAreaComponentSize = getSize();
    float centerX = drawingAreaComponentSize.width / 2;
    float centerY = drawingAreaComponentSize.height / 2;
    AffineTransform transform = new AffineTransform();
    transform.rotate(-theta, centerX, centerY);
    transform.scale(1.0f / canvasZoom, 1.0f / canvasZoom); // erase
    transform.translate(-canvasPan.x, -canvasPan.y); // erase
    $activeDataGFX.setTransform(transform);

    // Now draw inside of the active data graphics object
    Shape line = new Line2D.Float(cX, cY, oX, oY);
    Path2D.Float t = new Path2D.Float(line);
    $activeDataGFX.draw(t);

    // drawing is complete
    if (getCurrentTool() == DrawingCanvasTool.ERASER) {
        // Restore the old composite object
        $activeDataGFX.setComposite(originalComposite);
    }

    // Refresh basically merges frames along a frame column into a single preview buffered image
    // which will later be used to view the animation when the user clicks the "play" button
    channelPannel.refreshFrameOut(channelPannel.getCurrentFrame());
}
I commented a lot of the code and provided the critical points related to the question. Any help is much appreciated. And again, the problem is: how do I draw a smooth line worthy of a decent drawing program?
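Since the question is about smoothing, here is a generic Java2D sketch of one common approach: collect the sampled points of a stroke and render them as a single Path2D with quadratic segments through the midpoints, instead of one independent Line2D per pen event. This is not code from the post, just an illustration of the idea:

import java.awt.geom.Path2D;
import java.util.List;

// Sketch: each element of 'points' is an {x, y} pair sampled from the pen events.
static Path2D.Float smoothPath(List<float[]> points) {
    Path2D.Float path = new Path2D.Float();
    if (points.size() < 2) return path;
    float[] first = points.get(0);
    path.moveTo(first[0], first[1]);
    for (int i = 1; i < points.size() - 1; i++) {
        float[] ctrl = points.get(i);                 // sampled point acts as the control point
        float[] next = points.get(i + 1);
        float midX = (ctrl[0] + next[0]) / 2f;        // midpoint becomes the on-curve point
        float midY = (ctrl[1] + next[1]) / 2f;
        path.quadTo(ctrl[0], ctrl[1], midX, midY);
    }
    float[] last = points.get(points.size() - 1);
    path.lineTo(last[0], last[1]);
    return path;
}

Drawing the accumulated path once per repaint (with antialiasing enabled via RenderingHints) tends to look much smoother than stitching short Line2D segments together.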
I am trying to create a method which returns a texture modified by an overlay, using libgdx and Pixmap.
Assuming I have 2 images:
A Base Image in FileHandle textureInput
And an overlay image in FileHandle overLay
It should produce this texture:
So it should use the RGB values from the textureInput and the alpha values from the overLay and create the final image. I believe I can do this using the Pixmap class but I just can't seem to find exactly how.
Here is what I gather should be the structure of the method:
public Texture getOverlayTexture(FileHandle overLay, FileHandle textureInput) {
    Pixmap inputPix = new Pixmap(textureInput);
    Pixmap overlayPix = new Pixmap(overLay);
    Pixmap outputPix = new Pixmap(inputPix.getWidth(), inputPix.getHeight(), Format.RGBA8888);
    // go over the inputPix and add each byte to the outputPix
    // but only where the same byte is not alpha in the overlayPix
    Texture outputTexture = new Texture(outputPix, Format.RGBA8888, false);
    inputPix.dispose();
    outputPix.dispose();
    overlayPix.dispose();
    return outputTexture;
}
I am just looking for a bit of direction as to where to go from here. Any help is really appreciated. I apologize if this question is too vague or if my approach is entirely off.
Thanks!
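A rough sketch of what the commented-out loop could look like with Pixmap's getPixel/drawPixel; it assumes both images are the same size and relies on RGBA8888 packing alpha in the lowest byte. This is a guess at the intent, not code from the post:

// Sketch: copy RGB from the input pixmap, take alpha from the overlay pixmap.
for (int x = 0; x < inputPix.getWidth(); x++) {
    for (int y = 0; y < inputPix.getHeight(); y++) {
        int inColor = inputPix.getPixel(x, y);       // RGBA8888
        int overColor = overlayPix.getPixel(x, y);   // RGBA8888
        int alpha = overColor & 0xFF;                // keep only the overlay's alpha
        outputPix.drawPixel(x, y, (inColor & 0xFFFFFF00) | alpha);
    }
}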
I finally found the way to do this.
How my game is set up is that each item draws itself. Each item is handed a SpriteBatch and can do stuff with it. I did it that way for various reasons. There is an item manager containing a list of items. Each item has various attributes, and each item has its own render method along with other independent methods. Here is what finally worked:
A normal item's render method which does not use any alpha masking:
public void render(SpriteBatch batch, int renderLayer) {
    if (renderLayer == Integer.parseInt(render_layer)) { // be in the correct render layer
        batch.draw(item.region,
                item.position.x,                               // position.x
                item.position.y,                               // position.y
                0,                                             // origin x
                0,                                             // origin y
                item.region.getRegionWidth(),                  // w
                item.region.getRegionHeight(),                 // h
                item.t_scale,                                  // scale x
                item.t_scale,                                  // scale y
                item.manager.radiansToDegrees(item.rotation)); // angle
    }
}
So it is handed a spritebatch that it draws to with the correct image, location, scale, and rotation, and that is that.
After playing around with what I found here: https://gist.github.com/mattdesl/6076846 for a while, this finally worked for an item that needs to use alpha masking:
public void render(SpriteBatch batch, int renderLayer) {
    if (renderLayer == Integer.parseInt(render_layer)) {
        batch.enableBlending();
        // draw the alpha mask
        drawAlphaMask(batch, item.position.x, item.position.y, item.region.getRegionWidth(), item.region.getRegionHeight());
        // draw our foreground elements
        drawForeground(batch, item.position.x, item.position.y, item.region.getRegionWidth(), item.region.getRegionHeight());
        batch.disableBlending();
    }
}
There is a TextureRegion named alphaMask which contains a black shape.
It can be any image, but let's say in this instance it's this shape / image:
Here is the function called above that uses that image:
private void drawAlphaMask(SpriteBatch batch, float x, float y, float width, float height) {
    // disable RGB color, only enable ALPHA to the frame buffer
    Gdx.gl.glColorMask(false, false, false, true);

    // Get these values so I can be sure I set them back to how it was
    dst = batch.getBlendDstFunc();
    src = batch.getBlendSrcFunc();

    // change the blending function for our alpha map
    batch.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ZERO);

    // draw alpha mask sprite
    batch.draw(alphaRegion,
            x,                                             // position.x
            y,                                             // position.y
            0,                                             // origin x
            0,                                             // origin y
            alphaRegion.getRegionWidth(),                  // w
            alphaRegion.getRegionHeight(),                 // h
            item.t_scale,                                  // scale x
            item.t_scale,                                  // scale y
            item.manager.radiansToDegrees(item.rotation)); // angle

    // flush the batch to the GPU
    batch.flush();
}
There are a variety of "materials" to apply to any shape. In any instance one of them is assigned to the spriteRegion variable. Let's say right now it is this:
So the drawForeground method called above uses that image like this:
private void drawForeground(SpriteBatch batch, float clipX, float clipY, float clipWidth, float clipHeight) {
    // now that the buffer has our alpha, we simply draw the sprite with the mask applied
    Gdx.gl.glColorMask(true, true, true, true);
    batch.setBlendFunction(GL10.GL_DST_ALPHA, GL10.GL_ONE_MINUS_DST_ALPHA);

    batch.draw(spriteRegion,
            clipX,                                         // corrected center position.x
            clipY,                                         // corrected center position.y
            0,                                             // origin x
            0,                                             // origin y
            spriteRegion.getRegionWidth(),                 // w
            spriteRegion.getRegionHeight(),                // h
            item.t_scale,                                  // scale x
            item.t_scale,                                  // scale y
            item.manager.radiansToDegrees(item.rotation)); // angle

    // remember to flush before changing GL states again
    batch.flush();

    // set it back to however it was before
    batch.setBlendFunction(src, dst);
}
That all worked right away in the desktop build, and can produce "Brick Beams" (or whatever) in the game nicely:
However in Android and GWT builds (because after all, I am using libgdx) it did not incorporate the alpha mask, and instead rendered the full brick square.
After a lot of looking around I found this: https://github.com/libgdx/libgdx/wiki/Integrating-libgdx-and-the-device-camera
And so to fix this in Android I modified the MainActivity.java onCreate method like this:
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
    cfg.useGL20 = false;
    cfg.r = 8;
    cfg.g = 8;
    cfg.b = 8;
    cfg.a = 8;
    initialize(new SuperContraption("android"), cfg);
    if (graphics.getView() instanceof SurfaceView) {
        SurfaceView glView = (SurfaceView) graphics.getView();
        // force alpha channel - I'm not sure we need this as the GL surface
        // is already using an alpha channel
        glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
    }
}
And that fixes it for Android.
I still cannot figure out how to make it work properly in GWT, as I cannot figure out how to tell libgdx to tell GWT to tell WebGL to go ahead and pay attention to the alpha channel. I'm also interested in how to do something like this in an easier or less expensive way (though this seems to work fine).
If anyone knows how to make this work with GWT, please post as another answer.
Here is the non-working GWT build if you want to see the texture issue:
https://supercontraption.com/assets/play/index.html
I came across an issue while working on Processing 2.0 software using Java.
Each time I add an animation, I also add a background to erase the previous frame of this animation.
Unfortunately, this process also erases the rest of my graphics.
Is there a way to animate a PShape without adding a new background?
Or is there a better way to animate shapes in general?
I also would like to mention that I work with the ActionScript language, and my understanding of animation is based around MovieClip.
Thanks.
EDIT: Code added below:
Application Entry point
LineManager lineManager;
Character character;

void setup() {
  size(300, 600);
  background(50);
  rectMode(CENTER);
  frameRate(24);
  lineManager = new LineManager();
  character = new Character();
}

void draw() {
  character.onTick();
}
Character Class
public class Character {

  float MIN_VALUE = 80;
  float value = MIN_VALUE;
  float radius = 50.0;

  int X, Y;
  int nX, nY;
  int delay = 16;

  PShape player;

  public Character() {
    X = width / 2;
    Y = height / 2;
    nX = X;
    nY = Y;
    player = loadShape("player.svg");
  }

  public void onTick() {
    value = value + sin(frameCount / 4);
    X += (nX - X) / delay;
    Y += (nY - Y) / delay;

    /*
     * My issue is the line below, as when adding it to render the animation
     * I end up hiding the rest of my graphics
     */
    background(0);

    ellipse(X, Y, value, value);
    shape(player, -10, 140, 320, 320);
    fill(222, 222, 222, 222);
  }
}
The Processing dialect doesn't support independent graphics layers, but there are plenty of third-party libraries that enable you to do that, like this one (last update: 2011).
Check out the updated list of the main libraries on Processing's site, under the Animation section.
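As an aside, if a full layer library feels heavy, the core API's off-screen buffers (createGraphics()) can act as a very simple layer: draw the static content into a PGraphics once, blit it every frame instead of calling background(), and draw the animated shapes on top. A minimal sketch of that idea (the rectangle stands in for whatever should not be erased):

PGraphics staticLayer;   // drawn once, reused as the "background" every frame

void setup() {
  size(300, 600);
  staticLayer = createGraphics(width, height);
  staticLayer.beginDraw();
  staticLayer.background(50);
  staticLayer.rect(50, 50, 100, 100);    // static graphics that should survive the animation
  staticLayer.endDraw();
}

void draw() {
  image(staticLayer, 0, 0);              // restores the static graphics instead of background()
  float y = height/2 + 50 * sin(frameCount * 0.1);
  ellipse(width/2, y, 40, 40);           // animated part, redrawn on top each frame
}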