I'm beginning to write a special-use graphing program, and I'm leaning towards using OpenGL to generate the graphics. The ultimate goal is an architecture that accommodates both 2D and 3D graphs within the same basic framework.
Exporting the generated graphs as images is a critical feature, and eventually I'm going to write the code to generate vector images of the graphs' 2D projections. In the meantime, however, I want to be able to export the graphs as high-resolution images, significantly larger than the application window.
I'm writing this application in Java, using the LWJGL OpenGL wrapper. I've figured out how to take screenshots of the display window, but I haven't been successful at creating larger images. I've tried creating invisible Canvases, but I can't make that work.
The documentation here says that the Canvas's isDisplayable() method must return true, so I've overridden isDisplayable() to always return true, so that it shouldn't matter whether or not the Canvas is in a Frame. This doesn't work; instead, it throws the following error:
java.lang.RuntimeException: No OpenGL context found in the current thread.
at org.lwjgl.opengl.GLContext.getCapabilities(GLContext.java:124)
at org.lwjgl.opengl.GL20.glDeleteProgram(GL20.java:311)
The problem seems to be that the Canvas also needs some properties from its top-level window, but even when I make a dummy Frame class, I get the same error as before until I call setVisible(true) on the frame.
Does anyone know how to trick these graphics components into thinking they have a visible top-level window? Or does anyone know an easier way?
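For reference, this is roughly the shape of what I'm attempting (a reconstructed sketch; the class and variable names are mine, and it fails with the error above):

    import java.awt.Canvas;
    import org.lwjgl.LWJGLException;
    import org.lwjgl.opengl.Display;

    public class OffscreenAttempt {
        public static void main(String[] args) throws LWJGLException {
            // A Canvas that claims to be displayable even though it is not inside a visible Frame.
            Canvas fakeCanvas = new Canvas() {
                @Override
                public boolean isDisplayable() {
                    return true;
                }
            };
            Display.setParent(fakeCanvas); // attach the LWJGL Display to the canvas
            Display.create();              // still fails without a real, visible top-level window
        }
    }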
As an alternative, you could use a framebuffer object (FBO) to render into a texture.
Have a look at this render-to-texture example.
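To make that concrete for the high-resolution export case, here is a minimal, untested sketch of the FBO approach in LWJGL. It assumes an OpenGL 3.0-capable context is already current on the calling thread; the method name, the drawScene callback, and the vertical-flip loop are my own choices rather than anything from a specific tutorial:

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.nio.ByteBuffer;
    import javax.imageio.ImageIO;
    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL11;
    import org.lwjgl.opengl.GL30;

    public class HighResExport {

        /** Renders one frame into an offscreen FBO of the given size and saves it as a PNG. */
        public static void exportFrame(int width, int height, Runnable drawScene, File out) throws Exception {
            // Color texture that the FBO renders into.
            int texture = GL11.glGenTextures();
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture);
            GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA8, width, height, 0,
                    GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);

            // Framebuffer object with the texture as its color attachment.
            int fbo = GL30.glGenFramebuffers();
            GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fbo);
            GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0,
                    GL11.GL_TEXTURE_2D, texture, 0);
            if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE) {
                throw new IllegalStateException("FBO is incomplete");
            }

            // Render at the export resolution, independent of the window size.
            GL11.glViewport(0, 0, width, height);
            drawScene.run();

            // Read the pixels back and copy them into a BufferedImage, flipping vertically
            // because OpenGL rows start at the bottom.
            ByteBuffer pixels = BufferUtils.createByteBuffer(width * height * 4);
            GL11.glReadPixels(0, 0, width, height, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixels);
            BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int i = ((height - 1 - y) * width + x) * 4;
                    int r = pixels.get(i) & 0xFF, g = pixels.get(i + 1) & 0xFF;
                    int b = pixels.get(i + 2) & 0xFF, a = pixels.get(i + 3) & 0xFF;
                    image.setRGB(x, y, (a << 24) | (r << 16) | (g << 8) | b);
                }
            }
            ImageIO.write(image, "PNG", out);

            // Clean up and restore the default framebuffer.
            GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
            GL30.glDeleteFramebuffers(fbo);
            GL11.glDeleteTextures(texture);
        }
    }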
I've recently been working on a Java AWT application. It started out as a very simple render-testing demo but has sort of snowballed, so to make my life easier I have decided to switch to ImGui (with an LWJGL backend). I have never used LWJGL, but I have used ImGui with C++.
In its current form the GUI of the application is incredibly simplistic, since I tweak most variables for the algorithms I'm demoing from within the code, which is precisely why I want to start switching over to ImGui.
Currently it mostly consists of a pair of images: one is a per-pixel visualisation of my 2D algorithms, and the other uses AWT's graphics features to draw some rectangles representing other data, so essentially it produces two dynamic images every frame.
My question, then, is: how would I do this per-pixel rendering (and possibly some of the rectangle drawing) inside LWJGL and get those images displaying as part of an ImGui GUI?
That, or how to get ImGui and my existing AWT renderer to coexist within the same window.
EDIT:
Thought I'd add a summary here:
Literally all I want to know is how I can get ImGui to render an image that I can modify from within Java every frame with some sort of SetPixelAt function.
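For context, this is the sort of thing I'm imagining (a sketch I haven't verified, assuming the SpaiR imgui-java binding, where ImGui.image takes a GL texture id plus a size): keep a CPU-side pixel buffer, upload it to a GL texture each frame, and hand that texture to ImGui.

    import java.nio.ByteBuffer;
    import imgui.ImGui;
    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL11;

    public class PixelCanvas {
        private final int width, height;
        private final int textureId;
        private final ByteBuffer pixels;

        public PixelCanvas(int width, int height) {
            this.width = width;
            this.height = height;
            this.pixels = BufferUtils.createByteBuffer(width * height * 4);
            this.textureId = GL11.glGenTextures();
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA8, width, height, 0,
                    GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
        }

        /** The "SetPixelAt"-style call: writes one RGBA pixel into the CPU-side buffer. */
        public void setPixelAt(int x, int y, int r, int g, int b, int a) {
            int i = (y * width + x) * 4;
            pixels.put(i, (byte) r).put(i + 1, (byte) g).put(i + 2, (byte) b).put(i + 3, (byte) a);
        }

        /** Call once per frame, between ImGui.newFrame() and ImGui.render(), to upload and draw. */
        public void show() {
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
            GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixels);
            ImGui.image(textureId, width, height);
        }
    }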
We need to render the entire game window to a texture. We have only Java SDK jars from our client, and we can access only the OpenGL window context ID of the window they create when the game runs.
My question is: is the window context enough to somehow render it to a texture?
We cannot alter our client's code, but we need to render Editor windows on top of their Java SDK.
They are using LWJGL for rendering. The plan is to render the game into a separate window, similar to this:
I guess this can only be achieved via the render-to-texture approach mentioned above.
If you can use regular OpenGL commands, there might be something you could do.
The issue is that you'd need to change OpenGL's state machine, which might collide with what their game is doing.
Here is one thing that could work, though again it might clash with what they are doing.
Since you want to display their final output, it's a safe bet that they are rendering to the default framebuffer. So you create your own framebuffer with a texture color attachment, and you blit the default framebuffer into your own with glBlitFramebuffer. That way you should get the contents of the default framebuffer into a texture.
For that you need to call glBindFramebuffer(GL_READ_FRAMEBUFFER, 0) and glBindFramebuffer(GL_DRAW_FRAMEBUFFER, **your buffer**) before blitting, to set the source and destination of the operation.
Since I don't know whether you can run code every frame, I'm not sure this would work, but it might be worth a try.
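Here is a rough, untested sketch of that idea in LWJGL (the class and field names are mine, and it assumes a GL 3.0 context is current and that you know the window size):

    import java.nio.ByteBuffer;
    import org.lwjgl.opengl.GL11;
    import org.lwjgl.opengl.GL30;

    public class DefaultFramebufferCapture {
        private final int fbo;
        private final int colorTexture;
        private final int width, height;

        public DefaultFramebufferCapture(int width, int height) {
            this.width = width;
            this.height = height;

            // Texture that will receive a copy of the default framebuffer.
            colorTexture = GL11.glGenTextures();
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, colorTexture);
            GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA8, width, height, 0,
                    GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);

            // Our own framebuffer with that texture as its color attachment.
            fbo = GL30.glGenFramebuffers();
            GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fbo);
            GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0,
                    GL11.GL_TEXTURE_2D, colorTexture, 0);
            GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
        }

        /** Copies the current contents of the default framebuffer into colorTexture. */
        public int capture() {
            GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, 0);   // read from the default framebuffer
            GL30.glBindFramebuffer(GL30.GL_DRAW_FRAMEBUFFER, fbo); // draw into our own FBO
            GL30.glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                    GL11.GL_COLOR_BUFFER_BIT, GL11.GL_NEAREST);
            GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);        // restore, so the game keeps drawing normally
            return colorTexture;
        }
    }

You would call capture() once per frame, after the game has finished drawing, and then draw the returned texture wherever your editor needs it.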
What I am trying to do is create a GUI using Swing and then have a container that displays the actual Slick game inside it, as seen below.
The problem is that AppGameContainer is the only available container (that I know of), but it creates the whole window (including the title bar and so on), so I can't really embed it inside the GUI, can I? I'm open to other solutions as well, so let me know if there is a better way to achieve this.
I am not very experienced with Slick2D, so sorry if this is obvious, but I tried Googling it and didn't come up with anything.
I would recommend rendering your scene to an OpenGL framebuffer object (FBO). An FBO acts like a 2D texture object in OpenGL, so you can read the pixel data back from the FBO, copy it into a BufferedImage, and use that to render to your Java Swing canvas. This is a pretty good tutorial on how to use FBOs if you choose to implement this strategy.
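The Swing side could look something like the following minimal sketch (the class name and setFrame method are just illustrative; the BufferedImage would be filled from the FBO pixel data on each frame, e.g. via glReadPixels):

    import java.awt.Graphics;
    import java.awt.image.BufferedImage;
    import javax.swing.JPanel;

    /** A panel that simply draws whatever BufferedImage the render loop last handed it. */
    public class GameViewPanel extends JPanel {
        private volatile BufferedImage frame;

        /** Called from the render thread after the FBO pixels have been copied into an image. */
        public void setFrame(BufferedImage newFrame) {
            this.frame = newFrame;
            repaint(); // schedules a Swing repaint on the event dispatch thread
        }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            BufferedImage current = frame;
            if (current != null) {
                g.drawImage(current, 0, 0, getWidth(), getHeight(), null);
            }
        }
    }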
I am conducting a learning experiment with Java. I am attempting to create a simple "Megaman"-style game using Java and the third-party API LibGDX. I have gained a rather solid understanding of the relationship between the OrthographicCamera object from LibGDX and the World object from LibGDX's implementation of JBox2d.
However, when I resize the window, the objects inside the World stretch. I have made use of the resize(int width, int height) method of the Screen interface, inside of which I reset the OrthographicCamera's width and height. This does not seem to have any effect on the way the images look or behave in the physics simulation.
So my question is this: how do I properly resize a LibGDX/JBox2d application's window without distorting the objects being simulated?
Here is the code (in the form of a Git repo, because I find GitHub faster, easier, and kinder to the SO servers...):
https://gist.github.com/konnerdroid/8113302
AHA!!!! I wasn't updating my camera...
For any changes made to the camera to take effect, you need to call camera.update(), either in your render loop or right after making the changes.
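In other words, the resize handler ends up looking something like this sketch (the WORLD_HEIGHT constant and the aspect-ratio handling are my own choices; the essential part is the camera.update() call):

    import com.badlogic.gdx.ScreenAdapter;
    import com.badlogic.gdx.graphics.OrthographicCamera;

    public class GameScreen extends ScreenAdapter {
        private static final float WORLD_HEIGHT = 20f; // world units visible vertically (made-up value)
        private final OrthographicCamera camera = new OrthographicCamera();

        @Override
        public void resize(int width, int height) {
            // Keep the vertical extent fixed and derive the horizontal extent from the window's
            // aspect ratio, so Box2D bodies are not stretched when the window changes shape.
            camera.viewportHeight = WORLD_HEIGHT;
            camera.viewportWidth = WORLD_HEIGHT * width / (float) height;
            camera.update(); // without this call the new viewport never takes effect
        }
    }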
Well... at least I learned how to use GitHub =)
I'm currently working on a project where I need to plot the predicted footprint of a satellite on a Mercator-projected world map, with possible scaling/cropping/etc. applied to the map.
I thought Cairo would be a good library for this purpose, and there are Java bindings available for it. However, I just can't find a way to make it render onto a Swing GUI (e.g. onto the surface of a JPanel). I thought about rendering into a byte buffer and plotting it out pixel by pixel using Java2D, but I can't find any API call to make Cairo render into a buffer (which is odd, as this is one of the most fundamental features I'd expect such a library to support).
Is there any way I can achieve this? I know there is Java2D, but it is fairly basic. I'd really appreciate a more powerful, widespread, well-tested, high-quality, free (LGPL) graphics library for this purpose. Cairo would be a perfect fit, if I could get it to work with Swing somehow.
Thank you very much for your suggestions.
One of the fundamentals of Cairo is that any non-abstract image context is bound to one of the supported backends.
I've never tried the Java bindings, but they are likely a thin layer that does not provide a new surface type - you should use the "Image Surface" type.
In the C documentation for the library, there is a cairo_image_surface_get_data() call (here: http://cairographics.org/manual/cairo-Image-Surfaces.html) which gives you access to the buffer.
Maybe the bindings don't expose this to Java because of the low-level memory access required to actually use its contents. If that is the case, I propose the following workaround:
(1) Render your results to a Cairo image surface, (2) write the surface contents to a temporary file, and (3) read and display the temporary file with the Java 2D API.
Here is an example.
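For step (3), a minimal sketch of the Java side, assuming the temporary file was written out as a PNG (the file name here is made up):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;
    import javax.swing.ImageIcon;
    import javax.swing.JFrame;
    import javax.swing.JLabel;

    public class CairoPngViewer {
        public static void main(String[] args) throws Exception {
            // Read the PNG that the Cairo image surface was written to.
            BufferedImage rendered = ImageIO.read(new File("footprint.png"));
            JFrame frame = new JFrame("Cairo output");
            frame.add(new JLabel(new ImageIcon(rendered)));
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }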
I found this example on http://java-gnome.sourceforge.net
It creates a GTK window containing a GTK DrawingArea widget, and the onDraw() event uses Cairo.
I compiled and ran it on Linux, and it works well.
However, java-gnome only seems to have Linux binaries. Maybe somebody could build a Windows binary, but that would take some work.
It is a GTK window, so it has nothing to do with Swing.
Maybe you don't need Swing if GTK (java-gnome) fits your needs.
If you must use Swing, you can use Cairo to render to an image in memory, then show it in a JComponent by doing something like overriding the paintComponent() method. I don't know about the performance.
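A small sketch of the "image in memory" half, assuming you can get the surface pixels into an ARGB int[] somehow (and glossing over Cairo's premultiplied-alpha and endianness details):

    import java.awt.image.BufferedImage;

    public class CairoBuffers {
        /**
         * Wraps raw ARGB pixel data (for example, data pulled out of a Cairo image surface)
         * in a BufferedImage that Swing can draw directly.
         */
        public static BufferedImage toImage(int[] argbPixels, int width, int height) {
            BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
            image.setRGB(0, 0, width, height, argbPixels, 0, width); // bulk copy, row by row
            return image;
        }
    }

You would then draw the returned image from your component's paintComponent() with g.drawImage(image, 0, 0, null).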