I'm making a UHC plugin and need to pregenerate the world before teleporting a player, so I load all the chunks. I found this ineffective, because the chunks unload again as soon as no one is in them. One of my friends suggested I use this:
LoadingCache<Pair<Integer, Integer>, Object> map = CacheBuilder.newBuilder()
        .expireAfterWrite(500, TimeUnit.MILLISECONDS)
        .build(new CacheLoader<Pair<Integer, Integer>, Object>() {
            @Override
            public Object load(Pair<Integer, Integer> key) { // no checked exception
                return createExpensiveGraph(key); // copied from the Guava docs
            }
        });

void teleport(Location location, Player player) {
    map.put(new Pair<>(location.getChunk().getX(), location.getChunk().getZ()), new Object());
    location.getChunk().load();
    Bukkit.getScheduler().runTaskLater(plugin, () -> player.teleport(location), 5L);
}
Elsewhere I would have an event handler handling the chunk unload event for the time that the chunk is in the map. My problem is that I have little to no experience with the Google API I'm referencing here, CacheBuilder, and I'm not completely sure what I am doing wrong. I realize that createExpensiveGraph is a method, but I don't know whether it is part of the Guava API or I need to write my own. I was wondering if someone might have a better way to solve my problem of pregenerating the world, or could help explain what I'm doing wrong when creating a new instance of CacheLoader. Any input would be great, thanks!
You could keep the chunks in a list during the time that they should not be unloaded (before the player has been teleported there) and listen to the ChunkUnloadEvent, canceling it if it tries to unload one of the chunks in the list (and of course removing the chunks from the list again after the player has been teleported). A BukkitRunnable that loads all the chunks, waits for them to be finished and then teleports the player might be a good approach for this.
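A server-agnostic sketch of that bookkeeping (the Bukkit listener and teleport wiring are omitted, and all names here are hypothetical): pack the chunk coordinates into a long, keep them in a set while the teleport is pending, and have the ChunkUnloadEvent handler cancel the event whenever the set says so.

```java
import java.util.HashSet;
import java.util.Set;

/** Tracks chunk coordinates that must not unload yet (hypothetical helper). */
public class ChunkKeeper {
    private final Set<Long> pinned = new HashSet<>();

    /** Pack chunk x/z into one long so the pair can be used as a set key. */
    static long key(int x, int z) {
        return ((long) x << 32) | (z & 0xFFFFFFFFL);
    }

    public void pin(int x, int z)   { pinned.add(key(x, z)); }    // before teleport
    public void unpin(int x, int z) { pinned.remove(key(x, z)); } // after teleport

    /** Call this from the ChunkUnloadEvent handler: cancel the event if true. */
    public boolean shouldCancelUnload(int x, int z) {
        return pinned.contains(key(x, z));
    }
}
```

The packing trick avoids allocating a pair object per lookup; a `Set<Pair<Integer, Integer>>` would work just as well.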
I assume you want this to work mostly for chunks that haven't been generated yet, because teleporting a player to a chunk that has been generated but simply isn't loaded into memory yet shouldn't cause too much trouble since loading of generated chunks often takes less than a tick (the delay seen on clients is just because of the rendering of chunks/sending of packets).
The ChunkLoadEvent is called after the chunk has finished generating the basic terrain (and thus cannot be cancelled), which fortunately means that the isLoaded() boolean can be used as an indicator that the simple terrain in a chunk has been generated.
One last thing: loaded chunks do not unload for at least a few seconds even if no player is nearby (although I'm not positive whether this delay decreases when the server has more players or is using more resources). So while delaying the teleport for 5 ticks allows time for new chunks to be generated, I doubt they would start unloading before those 5 ticks have elapsed, especially if you're only pre-loading a single chunk. In fact, when I tested this, my server never unloaded a single pre-loaded chunk; only once I started preloading multiple chunks were they unloaded, after 15-30 seconds. This may be different on a larger or busier server, though.
I have code written for this that I used to test my ideas, although hopefully this is enough information to help you out (posting my code would make this post too long), but let me know if you need any more specific help with this!
I am writing a video game in my spare time and have a question about data consistency when introducing multi-threading.
At the moment my game is single threaded and has a simple game loop as it is taught in many tutorials:
while game window is not closed
{
poll user input
react to user input
update game state
render game objects
flip buffers
}
I now want to add a new feature to my game where the player can automate certain tasks that are long and tedious, like walking long distances (fast travel). I may choose to simply "teleport" the player character to their destination, but I would prefer not to. Instead, the game will be sped up and the player character will actually walk as if the player were doing it manually. The benefit of this is that the game world will interact with the player character as usual, and any special events that might happen will still happen and immediately stop the fast travel.
To implement this feature I was thinking about something like this:
Start a new thread (worker thread) and have that thread update the game state continuously until the player character reaches its destination
Have the main thread no longer update the game state or render the game's objects as usual, and instead display the travel progress in a more simplistic manner
Use a synchronized message queue to have the main thread and the worker thread communicate
When the fast travel is finished or canceled (by player interaction or other reasons) have the worker thread die and resume the standard game loop with the main thread
In pseudo code it may look like this:
[main thread]
while game window is not closed
{
poll user input
if user wants to cancel fast travel
{
write to message queue player input "cancel"
}
poll message queue about fast travel status
if fast travel finished or canceled
{
resume regular game loop
} else {
render travel status
flip buffers
}
}
[worker thread]
while (travel ongoing)
{
poll message queue
if user wants to cancel fast travel
{
write to message queue fast travel status "canceled"
return
}
update game state
if fast travel is interrupted by internal game event
{
write to message queue fast travel status "canceled"
return
}
write to message queue fast travel status "ongoing"
}
if travel was finished
{
write to message queue fast travel status "finished"
}
The message queue will be some kind of two-channeled synchronized data structure. Maybe two ArrayDeques, each with its own Lock. I am fairly certain this will not be too much trouble.
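In code, that two-channel queue might look like this (a sketch leaning on java.util.concurrent rather than hand-rolled locks; the class and method names are made up):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Two-channel message queue: one already-thread-safe queue per direction. */
public class FastTravelChannel {
    private final BlockingQueue<String> toWorker = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> toMain   = new LinkedBlockingQueue<>();

    // Main-thread side: send player input, poll travel status.
    public void sendInput(String msg) { toWorker.offer(msg); }
    public String pollStatus()        { return toMain.poll(); } // null if nothing yet

    // Worker-thread side: send travel status, poll player input.
    public void sendStatus(String msg) { toMain.offer(msg); }
    public String pollInput()          { return toWorker.poll(); }
}
```

Using BlockingQueue means the locking and the memory-visibility guarantees come with the class rather than having to be built by hand.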
What I am more concerned about are caching problems with the game data:
1.a) Could it be that the worker thread, after being started, may see old game data because the main thread may run on a different core which has cached some of its results?
1.b) If the above is true: Would I need to declare every single field in the game data as volatile to protect myself with absolute guarantee against inconsistent data?
2) Am I right to assume that performance would take a non-trivial hit if all fields are volatile?
3) Since I only need to pass the data between threads at few and well controlled points in time, would it be possible to force all caches to write back to main memory instead of using volatile fields?
4) Is there a better approach? Is my concept perhaps ill conceived?
Thanks for any help and sorry for the big chunk of text. I thought it would be easier to answer the question if you know the intended use.
Since I only need to pass the data between threads at few and well controlled points in time, would it be possible to force all caches to write back to main memory instead of using volatile fields?
No. That's not how any of this works. Let me give you very short answers to explain why you are thinking about this the wrong way:
1.a) Could it be that the worker thread, after being started, may see old game data because the main thread may run on a different core which has cached some of its results?
Sure. Or it might see stale data for some other reason. Memory visibility is not guaranteed, so you can't rely on it unless you use something guaranteed to provide memory visibility.
1.b) If the above is true: Would I need to declare every single field in the game data as volatile to protect myself with absolute guarantee against inconsistent data?
No. Any method of assuring memory visibility will work. You don't have to do it any particular way.
2) Am I right to assume that performance would take a non trivial hit if all fields are volatile?
Probably. This would probably be the worst possible way to do it.
3) Since I only need to pass the data between threads at few and well controlled points in time, would it be possible to force all caches to write back to main memory instead of using volatile fields?
No, since there is no "write caches back to main memory" operation that assures memory visibility. Your platform may not even have such caches, and the issue might be something else entirely. You're writing Java code; you don't have to think about how your particular CPU works, what cores or caches it has, or anything like that. That's one of the big advantages of using a language whose semantics are guaranteed and don't talk about cores, caches, or anything like this.
4) Is there a better approach? Is my concept perhaps ill conceived?
Absolutely. You are writing Java code. Use the various Java synchronization classes and functions and rely on them to deliver the semantics they're documented to provide. Don't even think about cores, caches, flushing to memory, or anything like that. Those are hardware details that, as a Java programmer, you never have to think about.
Any Java documentation you see that talks about cores, caches, or flushes to memory is not actually talking about real cores, caches, or flushes to memory. It's just giving you some ways to think about hypothetical hardware so you can wrap your brain around why memory visibility and total ordering don't always work perfectly just by themselves. Your real CPU or platform may have completely different issues that bear no resemblance to this hypothetical hardware. (And real-world CPUs and systems have cache coherency guaranteed by hardware and their visibility/ordering issues in fact are completely different!)
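As a concrete illustration of relying on guaranteed semantics rather than caches: Thread.start() and Thread.join() already establish the happens-before edges the question worries about, so a plain non-volatile field written before start() and read after join() is safe. A minimal sketch (names hypothetical):

```java
/** Shows start()/join() giving full visibility without any volatile fields. */
public class HandOff {
    int playerX; // deliberately NOT volatile

    public int runFastTravel(int destination) {
        Thread worker = new Thread(() -> {
            // Everything the main thread wrote before start() is visible here.
            while (playerX < destination) {
                playerX++; // simulate one step of walking
            }
        });
        worker.start(); // happens-before: main's prior writes -> worker's reads
        try {
            worker.join(); // happens-before: worker's writes -> main's later reads
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return playerX; // guaranteed to see the worker's final value
    }
}
```

The same guarantees come for free from ExecutorService.submit()/Future.get() and from BlockingQueue hand-offs, which is why "use the synchronization classes" is the whole answer.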
I am creating a voxel game in Java. Currently, I am using Perlin noise to generate data for 3D chunks (16x16x16 short arrays), which are contained in a HashMap. This all works correctly. When the player moves, I want to render the chunks near the player (right now, 5 chunks in any direction). If a chunk does not exist, it should be generated.
The problem is that it takes about half a second to generate a chunk so when the player moves out of the generated area, the game loop freezes for a couple seconds while it generates the necessary chunks and then resumes.
I am using lwjgl for OpenGL and my game loop looks something like this:
while (!Display.isCloseRequested()){
update(); //my update method
render(); //my render method
Display.update(); //refresh the screen
Display.sync(60); //sync to 60 fps
}
I have tried, unsuccessfully, to use a second thread to generate data while updating and rendering but I could not figure out how to do it without freezing the game loop. I think there should be a way to queue chunks to generate in a second thread and then run that thread in short bursts but I have little to no experience with multithreading in Java so any help with that would be appreciated.
If the player can see the five chunks around him, you could already generate the chunks in the sixth row, before he moves in that direction. That way the work is done early enough and you can display the chunks directly.
You can do this chunk generation in a separate thread; it doesn't have to be called in the game loop. Start the generator thread before entering the game loop.
I'd have a background thread that's notified when the player moves, then pregenerates chunks adjacent to where the player moved. It's backed by a priority queue so that backlogs of never-visited cells aren't at the top of the queue, and old queue entries are removed once they're a certain distance from the player.
The key to leveraging threading is that you generate the chunks before the player moves there, using the downtime between movements to generate chunks, some of which may turn out to be unneeded.
And the big caveat: if you offload all generation to that thread, you're going to need to use Futures so you always get back a generated chunk.
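A sketch of that idea with a single worker thread and Futures, as suggested (a plain executor queue stands in for the priority queue, and generate() is a placeholder for the noise pass; all names are made up):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Off-loads chunk generation to a background thread and hands back Futures,
 *  so the game loop can check isDone() instead of blocking for half a second. */
public class ChunkGenerator {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final Map<Long, Future<short[]>> pending = new ConcurrentHashMap<>();

    static long key(int x, int z) { return ((long) x << 32) | (z & 0xFFFFFFFFL); }

    /** Queue generation if not already queued; returns the Future either way. */
    public Future<short[]> request(int x, int z) {
        return pending.computeIfAbsent(key(x, z),
                k -> worker.submit(() -> generate(x, z)));
    }

    /** Blocking convenience wrapper for callers that must have the chunk now. */
    public short[] awaitChunk(int x, int z) {
        try { return request(x, z).get(); }
        catch (Exception e) { throw new RuntimeException(e); }
    }

    private short[] generate(int x, int z) {
        short[] chunk = new short[16 * 16 * 16]; // placeholder for the noise pass
        java.util.Arrays.fill(chunk, (short) 1);
        return chunk;
    }

    public void shutdown() { worker.shutdown(); }
}
```

In the game loop you would call request() for nearby chunks and only render those whose Future reports isDone(), falling back to awaitChunk() for the chunk the player is standing in.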
Okay, I solved my problem using only one thread. I created a TaskManager that holds an ArrayList of tasks to be done. Every time I need to generate a chunk, I pass in a task object containing the information needed to actually perform the task. Then, every update, I call TaskManager.next(), which performs the next task.
Now, instead of generating 49 new chunks in one update which froze the framerate, it generates one chunk per update until they are all generated.
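A minimal sketch of such a TaskManager, assuming tasks are plain Runnables and next() is called once per game update (the real version would carry chunk coordinates in the task objects):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Queues work and runs at most one task per game update, so no single
 *  frame pays for all 49 chunks at once. */
public class TaskManager {
    private final Deque<Runnable> tasks = new ArrayDeque<>();

    public void add(Runnable task) { tasks.add(task); }

    /** Call once per update; returns false when the queue is empty. */
    public boolean next() {
        Runnable task = tasks.poll();
        if (task == null) return false; // nothing to do this frame
        task.run();
        return true;
    }

    public int remaining() { return tasks.size(); }
}
```

This amortizes the cost across frames at the price of chunks appearing one update at a time, which is exactly the trade-off described above.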
This question is semi-theory, semi-how to properly code.
I am thinking about making an app in Java that will accept streaming data and, as the data comes in, update a GUI.
So, what I am thinking of doing is just spawning off threads in Java that will:
collect data for X-milliseconds,
Take new data and update GUI with it
At the same time, start a new thread, collecting data for X milliseconds
This new thread must pick up right where the first thread left off
And, at the same time, all other parts of the program around going on in their own threads too.
So I need to make sure the threads don't collide, no data is lost in the mix, and I need to have an understanding of the speed limits. Say if the data is coming in at 1 Gbs vs 1 Mbs, what programming difference does that make?
The specific application includes data coming in from bluetooth and also data coming in from the Internet via an HTTPS rest API
If anyone has examples, either online or something quick and dirty right here, that'd be great. My Google searches came up dry.
The question is rather broad, but from an architectural point of view, I think the complexity decreases greatly if you change it to one thread reading from your device and putting the data into a buffer, and one thread reading from that buffer and updating the UI. This reduces the code that needs to handle multiple threads accessing it at the same time (ideally it reduces it to the buffer you use) and makes synchronization much easier. It also decouples the fetching of the data from displaying it.
For the buffer you can start off with PipedInputStream and PipedOutputStream; however, in one of my projects this turned out not to be fast enough for real-time processing and display, so you might end up writing your own low-latency buffer class.
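Before writing a custom buffer, a BlockingQueue can play that role between the reader thread and the UI thread; a minimal sketch (the byte[] packets are a stand-in for whatever the Bluetooth/HTTPS readers produce):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Buffer between one reader thread and the UI thread. */
public class StreamBuffer {
    // Bounded so a slow UI applies backpressure instead of exhausting memory.
    private final BlockingQueue<byte[]> buffer = new ArrayBlockingQueue<>(1024);

    /** Reader-thread side: blocks when the UI falls behind. */
    public void put(byte[] packet) throws InterruptedException { buffer.put(packet); }

    /** Reader-thread side: drops the packet instead of blocking when full. */
    public boolean offer(byte[] packet) { return buffer.offer(packet); }

    /** UI-thread side: null means nothing has arrived yet. */
    public byte[] poll() { return buffer.poll(); }
}
```

Whether the reader should block (put) or drop (offer) when the buffer fills is exactly the 1 Gb/s vs 1 Mb/s question: at high rates you must either process fast enough, buffer more, or accept loss.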
I'm writing a game in which players write AI agents that compete against one another, on the JVM. Right now the architecture looks like this:
A core server module that handles the physics simulations, and takes messages from the players as input to alter the world. The core also determines what the world looks like from the perspective of each of the players, based on various rules (think fog of war).
Player modules receive updated versions of the world from the core, process them, and stream messages to the core as inputs based on that processing.
The idea is that the core is compiled along with two player modules, and then the simulation is run producing an output stream that can be played back to generate visualization of the match.
My question is, if each of the players runs on a single Java thread, is it possible to ensure that the two player threads get equal amounts of resources (CPU time, primarily, I think)? Because I don't control the nature of the processing that each AI is doing, it's possible that one of the players might be extremely inefficient but written in such a way that its thread consumes so many resources the other player's AI is resource starved and can't compete fairly.
I get the feeling that this isn't possible without a hard realtime OS, which the JVM isn't even close to being, but if there's even a way to get reasonably close I'd love to explore it.
"Player modules receive updated versions of the world from the core, process them, and stream messages to the core as inputs based on that processing." This means that the player module has a loop inside it which receives update messages and sends result messages to the core. Then I would use a lightweight actor model, each player being an actor, with all actors using the same ExecutorService. Since activated actors go through the same executor task queue, they get roughly the same access to the CPU.
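A minimal sketch of that shared-executor arrangement (class and method names are hypothetical): both player actors submit their per-update work to the same pool, whose task queue arbitrates access to the CPU.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

/** One "actor" per player; all actors share the same executor. */
public class PlayerActor {
    private final ExecutorService shared;
    final AtomicInteger movesProcessed = new AtomicInteger();

    public PlayerActor(ExecutorService shared) { this.shared = shared; }

    /** The core calls this with each world update; the AI work runs on the
     *  shared pool, interleaved with the other player's tasks. */
    public Future<?> onWorldUpdate() {
        return shared.submit(() -> {
            movesProcessed.incrementAndGet(); // stand-in for the AI's processing
        });
    }
}
```

Note this only gives rough fairness per task: a single long-running task still hogs its pool thread, so the core should also cap how long it waits on each actor's Future.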
Your intuition is right that this isn't really possible in Java. Even if you had a real-time OS, someone could still write a very resource intensive AI thread.
There are a couple of approaches you could take to at least help here. First be sure to give the two player module threads the same priority. If you are running on a machine that has more than 2 processors, and you set each of the player module threads to have the highest priority, then theoretically they should both run whenever they have something to do. But if there's nothing to stop the player modules from spawning new threads themselves, then you can't guarantee a player won't do that.
So short answer is no, you can't make these guarantees in java.
Depending on how your simulation works, maybe you can have a concept of "turns". So the simulation instructs player 1 to make a move, then player 2 makes its move, and back and forth, so they can each only make one "move" at a time. Not sure if this will work in your situation though.
If you have any knobs to turn regarding how much work the threads have to do (or just set their priority), you can set up another thread that periodically monitors the player threads using ThreadMXBean and finds their CPU usage via ThreadMXBean.getThreadCpuTime. You can then compare each player's CPU time and react accordingly.
Not sure if this is timely and accurate enough for you, but over time you could balance the CPU usage.
However, splitting the work into packets and using Executors as suggested before should be the better and more Java-like way.
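A sketch of that monitoring thread's core, using ThreadMXBean (how you obtain and react to the two player thread IDs is left to your own bookkeeping):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

/** Samples per-thread CPU time so the two player threads can be compared. */
public class CpuMonitor {
    private final ThreadMXBean mx = ManagementFactory.getThreadMXBean();

    /** Nanoseconds of CPU consumed by the given thread, or -1 if the JVM
     *  doesn't support the measurement (it's optional per the spec). */
    public long cpuTimeNanos(long threadId) {
        if (!mx.isThreadCpuTimeSupported()) return -1;
        return mx.getThreadCpuTime(threadId); // also -1 if the thread has died
    }

    /** Positive when thread a has used more CPU than thread b. */
    public long imbalance(long a, long b) {
        return cpuTimeNanos(a) - cpuTimeNanos(b);
    }
}
```

The monitor could lower the priority of (or delay work packets for) whichever player's imbalance grows too large, balancing CPU usage over time rather than instantaneously.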
If I read a file until it hits a null line and want to create a somewhat accurate JProgressBar for when the reading will be finished, how would I do this?
The reason I ask is because some files are read within 200ms where as others take between 2500ms and 10000ms.
I'm doing something simple for reading the file
boolean completed = false;
while (!completed) {
    String line = reader.readLine(); // read the next line of the file
    if (line == null) {              // null means end of file
        completed = true;
    } else {
        // process the line
    }
}
So is there a way to make a progress bar accurate for a situation like this? (by the way, the size of the file doesn't necessarily mean it will take less time to read in this case)
Use the size of the file -- it's your best measure.
What you display has to have meaning that the users can understand. That's the most important part. If the progress bar means the percent of the file that has been processed, that's very easy to understand, easy to quantify, and easy to calculate and display. This makes your progress bar transparent, and that is what's important.
Think about how this is to be viewed by users. You don't need to program some sort of predictive intelligence, because the users have brains and can draw their own conclusions. So if they see the progress bar run fast for the first half and then slow down, they know that it has processed half of the file but the second half is taking longer. That's all they need to know! They can draw their own conclusion that since the progress bar slowed down, it may take longer to finish the file than the time indicated by the first half.
In fact, the users may already know something about the file being processed. For this particular file, they may know that the useful (or long-processing) data is all in the second half. So when they see the progress bar slow down, it is expected to them. This gives the transparency needed for the users to know that the software is working like it should, and they can draw their own conclusions about how long they need to wait.
Of course, it could also still be useful to show a generic estimation "25% of the file processed in 3 seconds, so we estimate an additional 9 seconds to completion." This is also a very general basic equation that is transparent, so the users can know how to use it and adjust it for the specifics of the file they are processing.
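A sketch of driving that percentage from bytes read rather than elapsed time, via a counting stream wrapper (the JProgressBar.setValue() call itself belongs on the Swing event dispatch thread):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Counts bytes as they are read so progress can be shown as read/total. */
public class ProgressInputStream extends FilterInputStream {
    private final long total; // e.g. File.length() for a file on disk
    private long read;

    public ProgressInputStream(InputStream in, long totalBytes) {
        super(in);
        this.total = totalBytes;
    }

    @Override public int read() throws IOException {
        int b = super.read();
        if (b != -1) read++;
        return b;
    }

    @Override public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) read += n;
        return n;
    }

    /** 0..100, suitable for JProgressBar.setValue(). */
    public int percent() {
        return total == 0 ? 100 : (int) (100 * read / total);
    }
}
```

Wrap the file stream in this before handing it to the reader, and poll percent() from a Swing timer; the bar then tracks bytes processed, not wall-clock guesses.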