I am currently working with LibGDX and Java.
I'm wondering why LibGDX only gives you a render method for both game logic and rendering.
Why does LibGDX do this? Why doesn't it provide another method like update?
And should I handle game logic and rendering in the render method? (Of course I would split it into a render and an update method.)
If both a render and an update method existed, they would simply be called consecutively, so there is not much point in separating them and complicating the API.
Of course, if you like, you can create two different methods and call them from the provided render method (if you want to separate things). You can also call them conditionally if you sometimes want to run only the update or only the render.
The point here is that combining them serves the simple cases, and if you want something more advanced you can always extend the functionality.
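For example, a minimal sketch of that split (assuming an ApplicationAdapter; the update/draw names are just ones I picked):

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.GL20;

    public class MyGame extends ApplicationAdapter {
        @Override
        public void render() {
            // LibGDX only calls render(); the split is done here.
            update(Gdx.graphics.getDeltaTime());
            draw();
        }

        private void update(float delta) {
            // game logic: move entities, check collisions, etc.
        }

        private void draw() {
            Gdx.gl.glClearColor(0, 0, 0, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            // drawing: batch.begin(); ... batch.end();
        }
    }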
We are trying to build an in-game editor for our Java framework, which uses GLFW for rendering the game.
We want to build the editor on top of it using ImGui, which also uses OpenGL, same as GLFW. We cannot touch any code in our Java framework.
Is it technically possible to render the game with ImGui and then let our framework render itself via OpenGL again?
Otherwise, are there any necessary steps we need to take so that it works? Since each render call swaps buffers etc...
You have another very related question here. I'll answer both.
Let me reword your question: "I have a window which has been rendered with OpenGL by another app (or by a part of my app whose code I can't touch). How can I add my own rendering to this window, adding or replacing pixels?"
That other app (let's call it 'appA') has done some tasks: created a window, created a GL context (ctxA), made that context current on some thread (thrA), done its rendering and finally called SwapBuffers to show the result on the screen.
The easy case is when appA's code is part of your appB code, appA has not called SwapBuffers yet, and you have the window handle and ctxA. Then you can follow the same process: make ctxA current on your thread (thrB, or even thrA if you have access to it, e.g. it is the same main thread), do your rendering and call SwapBuffers (see the sketch after the notes below).
==>But you must know which OpenGL version appA uses, and do your OpenGL work with that same version.
==>You must also be sure that appA does not execute any OpenGL command while your appB has ctxA current on its thread.
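A minimal sketch of that easy case, assuming the window was created with GLFW and you have its handle (LWJGL bindings; windowHandle is just a placeholder name):

    import org.lwjgl.glfw.GLFW;
    import org.lwjgl.opengl.GL;
    import org.lwjgl.opengl.GL11;

    public class OverlayRenderer {
        /** windowHandle is the GLFW window that appA created; ctxA travels with it. */
        public void renderOverlay(long windowHandle) {
            // Make appA's context current on this thread (appA must not be using it right now).
            GLFW.glfwMakeContextCurrent(windowHandle);
            GL.createCapabilities();

            // Draw on top of whatever appA rendered; do NOT clear the framebuffer.
            GL11.glViewport(0, 0, 200, 200);
            // ... issue your own GL calls here, using the same GL version as appA ...

            // Present the combined result.
            GLFW.glfwSwapBuffers(windowHandle);
        }
    }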
A more difficult case is when you can't be sure whether appA is doing its rendering at the same time appB renders. You can use a shared context ctxB (so it shares most data with ctxA) which you make current on your own thread thrB.
==>The issue is that there's no way of knowing who renders first, appA or appB.
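With GLFW, a shared context is typically created by passing the existing window as the share parameter when creating a second, hidden window; a sketch under that assumption:

    import org.lwjgl.glfw.GLFW;
    import org.lwjgl.system.MemoryUtil;

    public class SharedContextFactory {
        /** Creates a hidden window whose context (ctxB) shares objects with appA's ctxA. */
        public static long createSharedContext(long appAWindow) {
            GLFW.glfwWindowHint(GLFW.GLFW_VISIBLE, GLFW.GLFW_FALSE);
            // The last parameter is the window whose context we share with (ctxA).
            long sharedWindow = GLFW.glfwCreateWindow(1, 1, "ctxB", MemoryUtil.NULL, appAWindow);
            if (sharedWindow == MemoryUtil.NULL) {
                throw new IllegalStateException("Could not create shared context");
            }
            return sharedWindow; // make it current on thrB with glfwMakeContextCurrent
        }
    }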
If you look at the whole picture, you'll see that your main issue is that you need to prevent appA from calling SwapBuffers. If you have that possibility, then use your own context, thread and OpenGL state with the same window. You should be able to read the default framebuffer with ReadPixels.
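Reading back what appA left in the default framebuffer could look roughly like this (LWJGL again; width and height are placeholders):

    import java.nio.ByteBuffer;
    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL11;

    public class FramebufferReader {
        /** Reads the current back buffer into a CPU-side RGBA byte buffer. */
        public static ByteBuffer readBackBuffer(int width, int height) {
            ByteBuffer pixels = BufferUtils.createByteBuffer(width * height * 4);
            GL11.glReadBuffer(GL11.GL_BACK); // appA's result, before SwapBuffers
            GL11.glReadPixels(0, 0, width, height, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixels);
            return pixels;
        }
    }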
The worst case is when you have the window and nothing else, or when SwapBuffers has already been called. You still have a chance: use code injection. This technique intercepts the call to SwapBuffers in the graphics driver. Then, when it is called, you do your pixel modifications and let SwapBuffers run as it did before being intercepted.
I am trying to get something like this:
I want a UI window, and in the center of that window I want to display my world.
I want to use scene2D and scene2D.ui with an Orthographic camera.
Any advice?
Edit: I know that I must use two stages in order to get a UI window and a world window, but I don't know how to tell the world stage to render its content in that section instead of the whole screen.
There are two ways you can go about this.
Use a custom Viewport for the stage responsible for rendering the world. It is fairly easy to use and an example is given in the wiki (see also the sketch at the end of this answer).
Let it be rendered on the entire screen. This way you could keep the UI background-less, and the space between components would be filled by the world itself. Many might consider that more immersive. It would also allow you to use a semi-transparent UI.
Anyway, it's a matter of taste.
Hope it helps.
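A minimal sketch of the first option: give the world stage its own Viewport and confine it to a region of the screen (the margins and world size below are just examples):

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.ScreenAdapter;
    import com.badlogic.gdx.graphics.GL20;
    import com.badlogic.gdx.scenes.scene2d.Stage;
    import com.badlogic.gdx.utils.viewport.FitViewport;
    import com.badlogic.gdx.utils.viewport.ScreenViewport;

    public class GameScreen extends ScreenAdapter {
        private final Stage uiStage = new Stage(new ScreenViewport());
        private final FitViewport worldViewport = new FitViewport(800, 600);
        private final Stage worldStage = new Stage(worldViewport);

        @Override
        public void resize(int width, int height) {
            uiStage.getViewport().update(width, height, true);
            // Confine the world stage to a region in the centre of the window.
            worldViewport.setScreenBounds(200, 100, width - 400, height - 200);
        }

        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(0, 0, 0, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

            worldViewport.apply(true); // sets glViewport to the world region
            worldStage.act(delta);
            worldStage.draw();

            uiStage.getViewport().apply();
            uiStage.act(delta);
            uiStage.draw();
        }
    }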
Imagine I have a model that I want to use in a LibGDX game project (as described here). Let's say it's the model of a human. Now I want to do several animations with this human: I want him to raise his left/right arm, his left/right leg, raise a single finger, and also all possible combinations of those animations.
My question is: Do I need to create a separate animation for each of those movements outside of my Java code (which would mean I need a file for every single animation and would make my project extremely large), or is it somehow possible to create a model (e.g. by using Blender's armature or something like that) that can be transformed inside my Java code?
Assuming you're asking whether you can include one or more animations in the g3db/g3dj file format: yes, you can. Just create your model, including the skeleton and animations. Export it to FBX (with animation enabled). Next, convert it to g3db or g3dj (fbx-conv -f file.fbx). Load your model as described in the tutorial you referenced. Now you can animate your model using an AnimationController. If you want to combine multiple animations at the same time, you can use multiple AnimationControllers, as long as they don't affect the same bones.
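A rough sketch of that last part (the animation ids such as "RaiseLeftArm" are made up and must match the names you exported from Blender):

    import com.badlogic.gdx.graphics.g3d.ModelInstance;
    import com.badlogic.gdx.graphics.g3d.utils.AnimationController;

    public class HumanAnimator {
        private final AnimationController arms;
        private final AnimationController legs;

        public HumanAnimator(ModelInstance human) {
            // Two controllers on the same instance; this is fine as long as the
            // animations they play never touch the same bones.
            arms = new AnimationController(human);
            legs = new AnimationController(human);
        }

        public void raiseLeftArmAndLeg() {
            arms.setAnimation("RaiseLeftArm", -1); // -1 = loop forever
            legs.setAnimation("RaiseLeftLeg", -1);
        }

        public void update(float delta) {
            arms.update(delta);
            legs.update(delta);
        }
    }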
I have finished writing a Hangman game, but I want to move the hangman out of the canvas when the game is over. I created the hangman from separate parts for his body. When I move him, I can only move one part at a time. How can I group the parts together?
You have to create an object of the class GCompound. This class lets you build a new object that can be manipulated like a GOval and so on. In the Stanford course, there is an example called GFace.
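A minimal sketch of that idea, assuming the Stanford ACM graphics library (acm.graphics); the shapes and coordinates are arbitrary:

    import acm.graphics.GCompound;
    import acm.graphics.GLine;
    import acm.graphics.GOval;

    /** The whole hangman as one object: moving it moves every part together. */
    public class Hangman extends GCompound {
        public Hangman() {
            add(new GOval(40, 40), 0, 0);    // head
            add(new GLine(20, 40, 20, 100)); // body
            add(new GLine(20, 50, 0, 80));   // left arm
            add(new GLine(20, 50, 40, 80));  // right arm
        }
    }

Because a GCompound is itself a GObject, calling move(dx, dy) or setLocation(x, y) on the Hangman then moves all of its parts at once.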
You could probably refactor your code to make the whole hangman a single object throughout the implementation and make the different parts visible whenever needed. When the time comes to remove him, just dispose of the whole thing, either by setting the parts to non-visible or by making a new object, I guess... If you can post the code of your implementation I may be able to give you some more help...
I'm trying to build a new Java Swing component. I realise that I might be able to find one that does what I need on the web, but this is partly an exercise for me to learn how to do this.
I want to build a Swing component that represents a Gantt chart. It would be good (though not essential) for people to be able to interact with it (e.g. slide the tasks around to adjust timings).
It feels like the best approach for this is to subclass JComponent and override paintComponent() to 'draw a picture' of what the chart should look like, as opposed to doing something like trying to jam everything into a custom JTable.
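For concreteness, something like this minimal sketch is what I have in mind (the Task record and its fields are just made up for illustration):

    import java.awt.Color;
    import java.awt.Dimension;
    import java.awt.Graphics;
    import java.util.List;
    import javax.swing.JComponent;

    public class GanttChart extends JComponent {
        /** Hypothetical task: a name plus start and length in days, one row per task. */
        public record Task(String name, int startDay, int lengthDays) {}

        private final List<Task> tasks;

        public GanttChart(List<Task> tasks) {
            this.tasks = tasks;
            setPreferredSize(new Dimension(600, tasks.size() * 30));
        }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            int dayWidth = 20;
            for (int row = 0; row < tasks.size(); row++) {
                Task t = tasks.get(row);
                g.setColor(Color.LIGHT_GRAY);
                g.fillRect(t.startDay() * dayWidth, row * 30 + 5, t.lengthDays() * dayWidth, 20);
                g.setColor(Color.BLACK);
                g.drawString(t.name(), t.startDay() * dayWidth + 4, row * 30 + 20);
            }
        }
    }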
I've read a couple of books on the subject, and also looked at a few examples (most notably things like JXGraph), but I'm curious about a few things:
When do I have to switch to using UI delegates, and when can I stick to just fiddling around in paintComponent() to render what I want?
If I want other Swing components as sub-elements of my component (e.g. I wanted a text box on my Gantt chart):
can I no longer use paintComponent()?
can I arbitrarily position them within my Gantt chart, or do I have to use a normal Swing layout manager?
Many thanks in advance.
-Ace
I think that the article I wrote a few years ago for java.net is still correct today. Doing everything in one monolithic class gets you going faster in the beginning, but becomes a mess quite fast. I highly recommend separating the model (in your main class) from the view (UI delegate). The view is responsible for:
interaction with the user - mouse, keyboard etc.
painting
creating "worker" subcomponents as necessary
In the medium and long run this is the approach that has been validated over and over again in the Flamingo component suite, which you can use as an extra reference point (in addition to how core Swing components are implemented).
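A skeletal version of that separation might look like this (the class names are made up; only the Swing plumbing is shown, with the painting and listeners left as comments):

    import java.awt.Graphics;
    import javax.swing.JComponent;
    import javax.swing.plaf.ComponentUI;

    /** The component itself holds the model/state and delegates the rest. */
    public class JGanttChart extends JComponent {
        public JGanttChart() {
            updateUI();
        }

        @Override
        public void updateUI() {
            // A full implementation would look the delegate up via UIManager,
            // so that different Look and Feels can supply their own.
            setUI(new BasicGanttChartUI());
        }
    }

    /** The UI delegate owns interaction, painting and worker subcomponents. */
    class BasicGanttChartUI extends ComponentUI {
        public static ComponentUI createUI(JComponent c) {
            return new BasicGanttChartUI();
        }

        @Override
        public void installUI(JComponent c) {
            // install listeners (mouse, keyboard), defaults and subcomponents here
        }

        @Override
        public void paint(Graphics g, JComponent c) {
            JGanttChart chart = (JGanttChart) c;
            // all painting lives here, driven by the chart's model
        }
    }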
Using UI delegates is a good idea if you think that your component should look different under different Look and Feels. It is also generally a good idea, from a design point of view, to separate your presentation from your component.
Even when overriding paintComponent you can still put subcomponents on it.
Using a null layout you can position your components arbitrarily. Alternatively, you can use layout managers too.
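For example, a small sketch of that (the coordinates and the text field are arbitrary):

    import javax.swing.JComponent;
    import javax.swing.JTextField;

    public class GanttChartWithEditor extends JComponent {
        public GanttChartWithEditor() {
            setLayout(null);                      // absolute positioning
            JTextField taskName = new JTextField("Task 1");
            taskName.setBounds(120, 35, 100, 25); // x, y, width, height in pixels
            add(taskName);                        // children are painted on top of paintComponent()
        }
    }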
Here is a very good starting point for you.