I’m interested in generating 3D height maps for a 2D game I am working on. I am using this to create land masses like in Minecraft or Dwarf Fortress.
I've created 2D heightmaps before, but I used a very rudimentary algorithm that just interpolated between points of a fully random noise array to create a fixed-size map. This doesn't tile, however: if I add a new map next to the first, it doesn't account for the heights along the existing map's edge.
I have read about Perlin and Simplex noise, but I'm now confused about how to apply Perlin or Simplex noise to a 2D array of height values.
Any help with this would be greatly appreciated. I have no idea what to do anymore. The term 'octaves' outside of sheet music scares me.
Exactly, you want to look at Perlin/Simplex noise. Think of it as a function f(x, y, ...) (with as many variables as you wish) that outputs random-looking noise. The difference from pure noise is that it operates on gradients, so it looks more natural: it "draws" smooth gradients instead of plain noise with high local variability. Simplex noise is pretty much the same as Perlin's, but it divides space into simplexes instead of operating on n-dimensional grids as Perlin does, which alleviates the computational cost and has some other benefits.
It might seem scary, but it's actually simple. You're scared of octaves, but they're pretty much the same as octaves in music: just higher (or lower) frequency noise mixed with the original output. In sheet-music terms, it's like playing C4 and C5 at the same time. It's still a C, but it has some flavor added (little spikes in the waveform). Don't be afraid and keep researching; it's not that hard.
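To make the C4 + C5 idea concrete, here is a minimal sketch in Java, assuming some hypothetical smooth noise2(x, y) function (Perlin, Simplex, whatever you pick) that returns values in roughly [-1, 1]:

```java
// noise2(x, y) is an assumed smooth 2D noise function in roughly [-1, 1].
double twoOctaves(double x, double y) {
    double c4 = noise2(x, y);          // the base note: low-frequency shape
    double c5 = noise2(x * 2, y * 2);  // one octave up: double the frequency
    return c4 + 0.5 * c5;              // mix, with the higher octave quieter
}
```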
Regarding tiling:
If you mean linear tiling (like Minecraft does), you just have to use the same seed for the noise algorithm. As soon as you approach your current boundaries, generate the new chunk of data and it will tile perfectly (just as it would if you filled space with noise indefinitely).
If you mean torus tiling (repeating tiles; think Pac-Man, for instance), I found the best solution is to generate your noise tile and then interpolate near the borders as if it were tiled. The noise deforms to match on each side, and the result is completely tileable.
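One common way to implement that border interpolation (an assumption on my part, not necessarily the exact scheme intended here) is to cross-fade the tile with copies of itself shifted by the tile size, so opposite edges match exactly:

```java
// Blend noise with copies shifted by the tile size (w, h) so the result
// wraps seamlessly; noise2 is the same hypothetical 2D noise as above.
double tileableNoise(double x, double y, double w, double h) {
    double u = x / w, v = y / h; // blend weights: 0 at one edge, 1 at the other
    return noise2(x,     y    ) * (1 - u) * (1 - v)
         + noise2(x - w, y    ) * u       * (1 - v)
         + noise2(x,     y - h) * (1 - u) * v
         + noise2(x - w, y - h) * u       * v;
}
```

At x = 0 and x = w this evaluates to the same value (and likewise for y), which is exactly the seamless-edge property torus tiling needs.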
I think that your question might be phrased incorrectly. A heightmap is inherently 2D, and you use it to generate 3D terrain (a mesh).
http://en.wikipedia.org/wiki/Heightmap
If that is the case... then you can use the Perlin noise function to create a 2D image and use it as a heightmap. If you are unsure what that looks like, you can use GIMP, Photoshop, or a similar tool to generate Perlin noise on a 2D canvas as an example.
Minecraft uses the Perlin noise function to create a 3D cube of noise. So where a heightmap is 2D Perlin noise, Minecraft is 3D Perlin noise. You can also generate 1D Perlin noise.
What is nice about the Perlin noise function is that you can control the "resolution" and "offsets" of the texture through the math, and hence create seamless environments. I believe that Minecraft uses Perlin noise as a base and then moves on to some cellular automata for the finishing touches.
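As a sketch of how those "resolution" and "offsets" knobs map to code (using a hypothetical perlin2(x, y) function, since the exact library varies):

```java
// scale controls the "resolution" (smaller = smoother, more zoomed-in
// terrain); offsetX/offsetY select which region of the infinite noise
// field this map samples, so adjacent chunks line up seamlessly.
double[][] buildHeightmap(int width, int height,
                          double scale, double offsetX, double offsetY) {
    double[][] map = new double[width][height];
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            map[x][y] = perlin2(offsetX + x * scale, offsetY + y * scale);
        }
    }
    return map;
}
```

A chunk at grid position (cx, cy) would use offsets (cx * width * scale, cy * height * scale), which is what makes neighboring maps tile.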
I am unfamiliar with simplex noise.
EDIT: here is a link where you can test some noise functions (in Processing):
http://processing.org/learning/basics/noise2d.html
Related
I have been drawing 3D graphics using the graphics.fillPolygon() method in Java. It has worked well for me so far. I can rotate the graphics by dragging my mouse across the screen, and I can zoom in and out of my graphics.
My one issue, though, is finding a way to draw the polygons in the correct order so that background polygons are not drawn on top of foreground polygons. I know the answer to my problem is common knowledge among 3D graphics programmers. Some people have told me to use OpenGL, but that is too much for me to learn right now; I just want to create basic 3D graphics. I am looking for a mathematical procedure to sort my polygons into the order they should be drawn in (from back to front).
I have thought about just taking the average distance to all points of each polygon, but that is an unreliable method. I have been using trigonometry for all of my methods, but I am starting to learn some linear algebra concepts; vectors may be helpful in finding which polygons lie in front.
@Raisintoe: In computer science, binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of objects within the space by means of a tree data structure known as a BSP tree.
Binary space partitioning was developed in the context of 3D computer graphics, where the structure of a BSP tree allows spatial information about the objects in a scene that is useful in rendering, such as their ordering from front to back with respect to a viewer at a given location, to be accessed rapidly. Other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D video games, ray tracing, and other computer applications that involve the handling of complex spatial scenes.
See the Wikipedia article here
This approach has been used by mega-hit video games such as Quake. You can find more about it in this excellent article by Michael Abrash, where he explains how BSP trees were used in Quake to determine its visible surfaces.
I hope this helps.
Yes, I do agree: OpenGL is really complex, and modern OpenGL in particular, which forces you to always use shaders, can get in the way of getting things done more than it actually helps. But OpenGL solves this problem for you. It draws each pixel of a polygon with its depth value; when you draw the second polygon, a pixel is only updated if its depth value is closer to the camera than the stored one. You can do the same, and you will have a pixel-perfect result.
Side note: modern game engines even prefer rendering from front to back, because then the expensive per-pixel calculation in the fragment shader can be skipped for pixels that would be overdrawn anyway.
Side note 2: you actually have to enable the depth test and explicitly tell GL that you want the closest pixels kept.
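For concreteness, those calls look like this with LWJGL-style Java bindings (the functions and constants are standard OpenGL; only the import is LWJGL-specific):

```java
import static org.lwjgl.opengl.GL11.*;

public class DepthSetup {
    /** Call once after creating the GL context. */
    public static void enableDepthTest() {
        glEnable(GL_DEPTH_TEST); // turn the depth buffer on
        glDepthFunc(GL_LEQUAL);  // keep fragments at least as close as the stored depth
    }

    /** Call at the start of every frame. */
    public static void clearFrame() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear color and depth together
    }
}
```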
My name is Chris and I'm working on my first Java game.
Thus far, I have created a tile-based 2D game, but my levels are designed so that if I create an image that is all green, that green stands for grass tiles, and if I put in a pixel of blue, the game assigns that as a water tile.
However, that limits the game to whatever levels I design; I'd much rather have an infinite terrain of tiles.
Being a beginner, I looked up different ways to do so. A particularly promising method is something called Perlin noise.
I looked into it but it seemed very complex.
Would somebody mind defining it in simpler terms?
Also, if you have any tutorials that 'dumb' it down a bit and give a brief overview, that'd be fantastic!
Sorry I haven't been too specific; I'm deliberately avoiding that.
I'd suggest skipping Perlin Noise and taking a look at something called OpenSimplex Noise.
It's useful for basically all of the same things as Perlin noise, but it has significantly fewer visible directional artifacts. Basically, the noise takes an input coordinate (in 2D, 3D, or 4D) and returns a value between -1 and 1, and the output varies continuously as the input coordinate changes.
Here are three 256x256 images generated using noise(x / 24.0, y / 24.0):
The first one is the raw noise.
The second one is green where the values are greater than zero, and blue otherwise.
The third one is blue where the values are greater than -0.2 and less than 0.2, and green otherwise.
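For reference, the second image could be generated roughly like this in Java, assuming the eval(x, y) method of the reference OpenSimplexNoise Java implementation (if you use a different port, the class and method names may differ):

```java
import java.awt.image.BufferedImage;

// Paint green where the noise is above zero and blue otherwise, sampling
// at (x / 24.0, y / 24.0) exactly as in the images described above.
BufferedImage landAndWater(OpenSimplexNoise noise) {
    BufferedImage img = new BufferedImage(256, 256, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < 256; y++) {
        for (int x = 0; x < 256; x++) {
            double value = noise.eval(x / 24.0, y / 24.0);     // roughly in [-1, 1]
            img.setRGB(x, y, value > 0 ? 0x2E8B57 : 0x1E5AA8); // green : blue
        }
    }
    return img;
}
```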
Note that there is also Simplex noise (a different algorithm from OpenSimplex) that has reduced directional artifacts compared to Perlin noise, but the 3D and higher implementations of Simplex noise (if you happen to want to use 3D noise to vary anything in 2D over time) are saddled with a patent.
OpenSimplex noise is actually an algorithm I developed for a game of my own (shameless plug, I know), but I think it's the best option for you out of what's available.
Perlin noise is brilliantly covered by Daniel Shiffman in The Nature of Code. It's an online book with awesome JavaScript/ProcessingJS sample code that demonstrates some of the important concepts:
A good random number generator produces numbers that have no relationship and show no discernible pattern. As we are beginning to see, a little bit of randomness can be a good thing when programming organic, lifelike behaviors. However, randomness as the single guiding principle is not necessarily natural. An algorithm known as “Perlin noise”, named for its inventor Ken Perlin, takes this concept into account. Perlin developed the noise function while working on the original Tron movie in the early 1980s; it was designed to create procedural textures for computer-generated effects. In 1997 Perlin won an Academy Award in technical achievement for this work. Perlin noise can be used to generate various effects with natural qualities, such as clouds, landscapes, and patterned textures like marble.
Perlin noise has a more organic appearance because it produces a naturally ordered (“smooth”) sequence of pseudo-random numbers. The graph on the left below shows Perlin noise over time, with the x-axis representing time; note the smoothness of the curve. The graph on the right shows pure random numbers over time.
(The code for generating these graphs is available in the accompanying book downloads.)
Khan Academy dedicated its entire advanced JavaScript lessons to dissecting some of the material from Shiffman's book. They have great lessons on randomness and, of course, one just for Perlin noise.
You are not obliged to fully understand the Perlin or simplex implementation immediately; you can learn gradually while playing with the parameters of the various methods you will find. Just use it by feeding x, y, and possibly z or more dimension arguments with the coordinates of a grid, for example. To keep it simple, you basically mix/superimpose several layers (octaves) of interpolated random images at different scales, as in the sketch below.
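A minimal sketch of that layering in Java, assuming some noise2(x, y) function in roughly [-1, 1] from whatever library you pick:

```java
// Classic octave mixing ("fractal noise"): each layer doubles the frequency
// and halves the amplitude, then the sum is normalized back to [-1, 1].
double fractalNoise(double x, double y, int octaves) {
    double total = 0.0, frequency = 1.0, amplitude = 1.0, maxValue = 0.0;
    for (int i = 0; i < octaves; i++) {
        total += noise2(x * frequency, y * frequency) * amplitude;
        maxValue += amplitude;
        frequency *= 2.0; // next layer: finer detail
        amplitude *= 0.5; // ...contributing less (the "persistence")
    }
    return total / maxValue;
}
```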
You may also want to evaluate and store your noise offline because of the processing cost it can incur at run time (although, depending on the resolution/octaves and your processing budget or testing purposes, you can achieve quite decent real-time frame rates too).
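Baking it offline can be as simple as evaluating into a plain array once at load time (fractalNoise here is the sketch from above):

```java
// Evaluate the (potentially expensive) noise once; run-time lookups are
// then just an array index: baked[x][y].
float[][] bakeNoise(int width, int height, double scale, int octaves) {
    float[][] baked = new float[width][height];
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            baked[x][y] = (float) fractalNoise(x * scale, y * scale, octaves);
        }
    }
    return baked;
}
```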
I am using the JMonkey Engine to create a 3D bounding box and then I'm trying to use smaller boxes to flood fill the bounding box. Unfortunately I can't find a 3D flood fill algorithm.
Does anyone know of a 3D flood fill algorithm, or have any pseudocode or examples of this being done in any language?
I don't think you will find something like that. Flood fill is somewhat bound to pixel-based graphics, and that doesn't go along well with OpenGL/3D.
If you have some kind of pixel concept for 3D, adapting a 2D algorithm shouldn't be rocket science (a sketch follows below); I just doubt anyone has found it useful so far.
Perhaps something like octrees is worth further reading?
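To illustrate how little changes from 2D, here is a hedged sketch of a queue-based 3D flood fill over a boolean voxel grid; the JMonkey-specific part (actually placing the small boxes) is left out:

```java
import java.util.ArrayDeque;

// Standard queue-based flood fill, extended from 4 neighbors to the 6 axis
// neighbors (+/-x, +/-y, +/-z). filled[x][y][z] == true means occupied.
void floodFill3D(boolean[][][] filled, int sx, int sy, int sz) {
    int nx = filled.length, ny = filled[0].length, nz = filled[0][0].length;
    int[][] dirs = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    ArrayDeque<int[]> queue = new ArrayDeque<>();
    queue.add(new int[]{sx, sy, sz});
    while (!queue.isEmpty()) {
        int[] p = queue.poll();
        int x = p[0], y = p[1], z = p[2];
        if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) continue;
        if (filled[x][y][z]) continue; // a wall, or already visited
        filled[x][y][z] = true;        // fill this cell (e.g. spawn a small box)
        for (int[] d : dirs) {
            queue.add(new int[]{x + d[0], y + d[1], z + d[2]});
        }
    }
}
```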
Hey, I'm currently trying to extract information from a 3D array, where each entry represents a coordinate, in order to draw something from it. The problem is that the array is ridiculously large (and there are several of them), meaning I can't actually draw all of it.
What I'm trying to accomplish, then, is to draw a representation of only the outside coordinates, a shell of the array if you like. The array is not full: it can have large empty spaces with only a few pixels set, or large clusters of pixel data grouped together. I do not know what kind of shape to expect (it could be a simple cube or a complex concave mesh), and I am struggling to come up with an algorithm to effectively extract the border. The array effectively stores a set of points in 3D space.
I thought of creating six 2D meshes (one for each side of the 3D array), finding the shallowest point for each position, and drawing them separately. As I said, however, this 3D shape could be concave, which creates problems for that approach. Imagine a cone with a circle on top (the circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle with vertical lines, making me effectively lose the conical shape.
Then I thought of analysing the array slice by slice and creating two meshes from each slice's data. I believe this should work for any type of shape; however, I'm struggling to find an algorithm that accurately gives me the border info for each slice. Once again, if you just try to create height maps from the slices, you run into problems if they have any concavities. I also thought of some sort of edge-tracking algorithm, but the array does not provide continuous data, and there is almost certainly no continuous edge along each slice.
I tried looking into volume rendering, as used in medical imaging and such, since it deals with problems similar to mine, but I couldn't really find anything I could use.
If anyone has any experience with this sort of problem, or any valuable input, please point me in the right direction.
P.S. I would prefer a closed representation of the shell, hence my earlier 2D-mesh approach. However, an approach that simply gives me the shell points, without any connections between them, would still be extremely helpful.
Thank you,
Ze
I would start by reviewing your data structure. As you observed, the array does not maintain any obvious spatial relationships between points. An octree is a pretty good representation for data like you described. Depending upon the complexity of your point set, you may be able to find the crust using just the octree, assuming you have some connectivity between nearby points.
Alternatively, you may then turn to more rigorous algorithms like ray casting or marching cubes.
I guess it's a bit late by now to be truly useful to you, but for reference I'd say this is a perfect scenario for volumetric modeling (as you guessed yourself). As long as you know the bounding box of your point cloud, you can map those coordinates into a voxel space and increase the density (value) of each voxel for each data point. Once your volume is fully defined, you can use the marching cubes algorithm to produce a 3D surface mesh for a given threshold (iso) value. The resulting surface doesn't need to be continuous, but it will wrap all voxels with values above the iso value inside. The 2D equivalent is a heatmap. You can refine the surface quality by adjusting the iso threshold (higher means tighter) and the voxel resolution.
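As a sketch of that voxel-mapping step (assuming, for simplicity, a cubic bounding box given by min/max; marching cubes would then run over the resulting grid):

```java
// Map each point of the cloud into a res x res x res density grid.
// points[i] = {x, y, z}; min/max describe the assumed cubic bounding box.
float[][][] voxelize(float[][] points, float min, float max, int res) {
    float[][][] density = new float[res][res][res];
    float scale = (res - 1) / (max - min); // world units -> voxel indices
    for (float[] p : points) {
        int x = (int) ((p[0] - min) * scale);
        int y = (int) ((p[1] - min) * scale);
        int z = (int) ((p[2] - min) * scale);
        density[x][y][z] += 1.0f; // each point raises its voxel's density
    }
    return density; // feed to marching cubes with a chosen iso value
}
```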
Since you're using Java, you might like to take a look at my toxiclibs volumeutils library, which also comes with several examples (for Processing) showing the general approach...
Imagine a cone with a circle on top (the circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle with vertical lines, making me effectively lose the conical shape.
Even an example as simple as this would be impossible to reconstruct manually, let alone algorithmically. Your data representing a cylinder with a cone-shaped hole is just as likely as the vertices representing a cone with a disk attached to the top.
I do not know what kind of shape to expect (could be a simple cube...
Again, without further information on how the data was generated, eight vertices arranged in the form of a cube might as well represent two crossed squares. If you knew that the data was generated by, for example, a rotating 3D scanner of some sort, then that would at least be a start.
Due to lack of capital and time, we are having to do our game in 2D, even though I (the developer) would prefer it in 3D. I would hate for users to join the game only to think the graphics look bad and leave. Though I suppose we just have to try and do our best.
That being said, I would like to develop the game in such a way that, should the time come, we can port it to 3D as easily as possible. I know of games that had to be rewritten from scratch for that; I'd hate to do the same, so I'd like some tips/guidelines for programming the game so that, when we move to 3D, it will be as simple as changing the code of roughly 1-5 graphics-rendering classes and the 3D graphics would run.
P.S. It is a multiplayer role-playing game (not an MMORPG; much smaller in scope for now).
The simplest way to achieve this is to write the game in 3D and render the views using a 3D-to-2D projection, e.g. in plan or elevation. If the game's internal mechanics are 2D and you later try to move to a true 3D frame, you would probably be better off with a rewrite. It also depends to an extent on what you mean by 3D, and whether you have an effective mapping option. For example, Microsoft's Age of Empires is rendered in 3D but would work perfectly well as a 2D plan; a first-person shooter such as Half-Life or Quake, on the other hand, would not.
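A toy illustration of that idea (hypothetical names; the point is only that the renderer, not the game state, decides how 3D becomes 2D):

```java
import java.awt.Point;

// Game logic stores full 3D positions; the 2D build projects them to the
// screen by dropping the vertical axis (a plan view). A later 3D build
// replaces only this projection, not the game state.
public class PlanProjector {
    private final double pixelsPerUnit;

    public PlanProjector(double pixelsPerUnit) {
        this.pixelsPerUnit = pixelsPerUnit;
    }

    public Point project(double x, double y, double z) {
        // Plan view: screen x <- world x, screen y <- world z; height (y) is ignored.
        return new Point((int) (x * pixelsPerUnit), (int) (z * pixelsPerUnit));
    }
}
```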
Due to lack of capital and time, we are having to do our game in 2D, even though I (the developer) would prefer it in 3D. I would hate for users to join the game only to think the graphics look bad and leave. Though I suppose we just have to try and do our best.
I don't think everything has to be 3D to look good. Well-done 2D can look many times better than run-of-the-mill 3D graphics; great 3D graphics take serious effort. If your budget doesn't allow for that, I would rather put the effort into gameplay development.
Just think of the (somewhat dated) Diablo II, which is not 3D but still has some nice, good-looking graphics.
It is certainly possible to build an architecture that makes it easier to change the graphical representation, but I think it will almost never be as simple as you described. Of course, if you just want 3D for the sake of 3D, it could be done (instead of bitmaps you now render 3D models), but that is somewhat pointless. If you want to use 3D, the player should be able to make use of it (e.g. by moving the camera, having more degrees of freedom, ...), but this would have to be considered in the whole design of the game and thus seriously affects gameplay.
You can use the GL ortho view. This allows you to draw on screen using only 2D coordinates; if you want 3D later on, switch from the ortho to the perspective view and you have 3D. However, I don't think it will help you reuse the code, since the 2D ortho view is usually done with textures, and you cannot transform a texture into a 3D mesh.
Maybe a better approach is to do everything in 3D and set up your camera to look from above. If you do that, you can later switch to 3D just by relocating the camera and making better models and textures. This option sounds nicer but gives you more work up front, with the trade-off of 2D-to-3D portability without code changes.
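In legacy fixed-function OpenGL (LWJGL 2-style bindings, as one example), the switch between those two views is just which projection matrix you load:

```java
import static org.lwjgl.opengl.GL11.*;
import org.lwjgl.util.glu.GLU;

public class Projection {
    // Load an orthographic projection for the 2D build, or a perspective
    // projection for the 3D build; everything else can stay the same.
    public static void setup(boolean use3D, int width, int height) {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        if (use3D) {
            GLU.gluPerspective(60f, (float) width / height, 0.1f, 1000f);
        } else {
            glOrtho(0, width, height, 0, -1, 1); // plain 2D pixel coordinates
        }
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }
}
```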
An MVC pattern should help.
And I guess you're aware already, but have you looked at Java3D? Perhaps having a plug-replaceable 3D rendering view based on it (even one that is not polished or production-ready) will keep you honest, so that you don't wind up tied to 2D in some horrible way. It could be as simple as rendering your 2D assets with some Z positioning added.