NOTE: This isn't specific to Minecraft Fabric. I'm just new to rigid pre-runtime optimization.
I'm writing an API hook for Minecraft mods that allows the mapping of various tasks to a Villager's "profession" attribute, allowing other mods to add custom tasks for custom professions. I have all of the backend code done, so now I'm worried about optimization.
I have an ImmutableMap.Builder<VillagerProfession, VillagerTask> that I'm using to store the other mods' added tasks. Problem is, while I know that the "put" method will never be called at runtime, I don't know if the compiler does. Obviously, since this is a game and startup times in modpacks are already long, I'd like to optimize this as much as possible: it will be used by every mod that wishes to add a new villager task.
Here's my current source code for the "task registry":
private static final ImmutableMap.Builder<VillagerProfession, ImmutableList<Pair<Task<? super VillagerEntity>, Integer>>> professionToVillagerTaskBuilder = ImmutableMap.builder();
private static final ImmutableMap<VillagerProfession, ImmutableList<Pair<Task<? super VillagerEntity>, Integer>>> professionToVillagerTaskMap;
// The hook that any mods will use in their source code
public static void addVillagerTasks(VillagerProfession executingProfession, ImmutableList<Pair<Task<? super VillagerEntity>, Integer>> task)
{
professionToVillagerTaskBuilder.put(executingProfession, task);
}
//The tasklist retrieval method used at runtime
static ImmutableList<Pair<Task<? super VillagerEntity>, Integer>> getVillagerRandomTasks(VillagerProfession profession)
{
return professionToVillagerTaskMap.get(profession);
}
static { // probably not the correct way to do this, but it lets me mark the map as final
professionToVillagerTaskMap = professionToVillagerTaskBuilder.build();
}
Thanks!
The brief answer is: you can't do what you want to do.
Problem is, while I know that the "put" method will never be called at runtime, I don't know if the compiler does.
The put method has to be called at runtime for your mod to be useful. By the time your code is being loaded in a form that it can be executed -- that's runtime. It may be the setup phase for your mod, but it's running in a JVM.
If the source code doesn't contain the registry itself, then the compiler can't translate it to executable code; it can't optimize something it doesn't know exists. You (the developer) can't know what mods will be loading, hence the compiler can't know, hence it can't optimize or pre-calculate it. That's the price you pay for dynamic loading of code.
As for the code you put up: it won't work.
The static block is executed when the class is loaded. Think of it as a constructor for your class instead of the objects. By the time a mod can call any of its methods, the class has to be loaded, and its static blocks will already have been executed. Your map will be set and empty before any method is called from the outside. All tasks added will forever linger in the builder, unused, unseen, unloved.
Keep the builder. Let mods add their entries to it. Then, when all mod-loading is done and the game starts, call build() and use the result as a registry. (Use whichever 'game is starting' hook your modding framework provides.)
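For illustration, here is a minimal sketch of that approach, reusing the generics from the question. The class name and freezeRegistry() are hypothetical; wire freezeRegistry() to whatever "all mods loaded / game is starting" event your loader exposes.
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;

public final class VillagerTaskRegistry {
    private static final ImmutableMap.Builder<VillagerProfession, ImmutableList<Pair<Task<? super VillagerEntity>, Integer>>> builder = ImmutableMap.builder();
    private static volatile ImmutableMap<VillagerProfession, ImmutableList<Pair<Task<? super VillagerEntity>, Integer>>> map;

    // Called by other mods during their init phase.
    public static void addVillagerTasks(VillagerProfession profession, ImmutableList<Pair<Task<? super VillagerEntity>, Integer>> tasks) {
        builder.put(profession, tasks);
    }

    // Call this exactly once from your loader's "game is starting" hook.
    public static void freezeRegistry() {
        map = builder.build();
    }

    // Used at runtime, after freezeRegistry() has been called.
    static ImmutableList<Pair<Task<? super VillagerEntity>, Integer>> getVillagerRandomTasks(VillagerProfession profession) {
        return map.get(profession);
    }
}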
I use OpenJDK Java 15 and also tried Java 14.
I'm making an RMI system in order to make instances of any object synchronisable between computers, so that multiple engines can work on the same object. With my system, when I want to synchronise an object, I generate a class that extends the object's class and overrides every method, in order to control whether a method call should be delegated to the object or turned into an RMI request instead.
The class generation is divided into two parts:
I generate source code in which every non-final method is overridden in order to add my delegating system. The code is generated in Scala, and at this stage the generated class does not yet extend the class of the object to synchronise, because Scala doesn't let me override some methods even when they are not final (it's a quirk of Scala's setters and getters). I then compile the code using the Scala compiler.
I use Javassist to modify the generated class and make it extend the expected class, and I also add some methods and modify the anonfun methods in order to perform super calls.
What is happening when I see the exception?
I have a Server module and a Client module. They both run the same code, except that they have different implementations of the Engine module, which is where I define all the features of my framework (my RMI system is one such feature). For this RMI system, absolutely no code is run in the implementation modules.
In the Engine module, I've made a player command in my program that creates a synchronised list (of type scala.collection.mutable.ListBuffer), so with this command I can add player objects to the list. For example, if I add a player to the list, it will be added to the local list of the program that executes the command, and an RMI request will be sent to the other computers that host the list, making them add the same object to their lists.
Now, if I enter something like player add id=7 name=testPlayer x=78 y=23, things start to get completely weird:
First of all, this exception occurs only when the server program handles the RMI request, which is completely nonsensical because, as I said, nothing is run in the implementations.
For example, if I enter the command on the server, the player will be added to its local list, and an RMI request for the add method will be sent from the server to the client; the client, as it handles the request, does not crash at all (I can spam the command, nothing breaks). But if the server handles the RMI request, it throws me this error:
java.lang.NegativeArraySizeException: -531627648
at java.base/java.lang.Class.copyFields(Class.java:3538)
at java.base/java.lang.Class.getDeclaredFields(Class.java:2341)
at fr.linkit.engine.connection.packet.serialization.tree.ClassDescription.listAllSerialFields$1(ClassDescription.scala:45)
at fr.linkit.engine.connection.packet.serialization.tree.ClassDescription.listSerializableFields(ClassDescription.scala:52)
at fr.linkit.engine.connection.packet.serialization.tree.ClassDescription.<init>(ClassDescription.scala:23)
at fr.linkit.engine.connection.packet.serialization.tree.DefaultClassProfile.<init>(DefaultClassProfile.scala:23)
at fr.linkit.engine.connection.packet.serialization.tree.DefaultSerialContext.$anonfun$getClassProfile$1(DefaultSerialContext.scala:61)
at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:454)
at fr.linkit.engine.connection.packet.serialization.tree.DefaultSerialContext.getClassProfile(DefaultSerialContext.scala:61)
at fr.linkit.engine.connection.packet.serialization.tree.DefaultNodeFinder.getClassProfile(DefaultNodeFinder.scala:49)
at fr.linkit.engine.connection.packet.serialization.tree.DefaultNodeFinder.getSerialNodeForType(DefaultNodeFinder.scala:36)
at fr.linkit.engine.connection.packet.serialization.tree.DefaultNodeFinder.getSerialNodeForRef(DefaultNodeFinder.scala:44)
at fr.linkit.engine.connection.packet.serialization.tree.DefaultNodeFinder.$anonfun$listNodes$1(DefaultNodeFinder.scala:55)
at scala.collection.immutable.List.map(List.scala:246)
at fr.linkit.engine.connection.packet.serialization.tree.DefaultNodeFinder.listNodes(DefaultNodeFinder.scala:53)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ObjectNode$ObjectSerialNode.serialize(ObjectNode.scala:58)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ArrayNode$ArraySerialNode.serializeItem$1(ArrayNode.scala:84)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ArrayNode$ArraySerialNode.$anonfun$serialize$1(ArrayNode.scala:68)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ArrayNode$ArraySerialNode.$anonfun$serialize$1$adapted(ArrayNode.scala:67)
at scala.collection.ArrayOps$.foreach$extension(ArrayOps.scala:1323)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ArrayNode$ArraySerialNode.serialize(ArrayNode.scala:67)
at fr.linkit.engine.connection.packet.serialization.tree.LengthSign$.$anonfun$of$2(LengthSign.scala:62)
at fr.linkit.engine.connection.packet.serialization.tree.LengthSign$.$anonfun$of$2$adapted(LengthSign.scala:54)
at scala.collection.immutable.List.foreach(List.scala:333)
at fr.linkit.engine.connection.packet.serialization.tree.LengthSign$.of(LengthSign.scala:54)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ObjectNode$ObjectSerialNode.serialize(ObjectNode.scala:63)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ArrayNode$ArraySerialNode.serializeItem$1(ArrayNode.scala:84)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ArrayNode$ArraySerialNode.$anonfun$serialize$1(ArrayNode.scala:68)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ArrayNode$ArraySerialNode.$anonfun$serialize$1$adapted(ArrayNode.scala:67)
at scala.collection.ArrayOps$.foreach$extension(ArrayOps.scala:1323)
at fr.linkit.engine.connection.packet.serialization.tree.nodes.ArrayNode$ArraySerialNode.serialize(ArrayNode.scala:67)
at fr.linkit.engine.connection.packet.serialization.DefaultSerializer.serialize(DefaultSerializer.scala:34)
at fr.linkit.engine.connection.packet.serialization.SimpleTransferInfo.makeSerial(SimpleTransferInfo.scala:38)
at fr.linkit.engine.connection.packet.serialization.LazyPacketSerializationResult.bytes$lzycompute(LazyPacketSerializationResult.scala:27)
at fr.linkit.engine.connection.packet.serialization.LazyPacketSerializationResult.bytes(LazyPacketSerializationResult.scala:27)
at fr.linkit.engine.connection.packet.serialization.LazyPacketSerializationResult.writableBytes$lzycompute(LazyPacketSerializationResult.scala:30)
at fr.linkit.engine.connection.packet.serialization.LazyPacketSerializationResult.writableBytes(LazyPacketSerializationResult.scala:29)
at fr.linkit.server.connection.ExternalConnectionSession.send(ExternalConnectionSession.scala:53)
at fr.linkit.server.connection.ServerExternalConnection.$anonfun$sendPacket$1(ServerExternalConnection.scala:100)
at fr.linkit.engine.local.concurrency.pool.BusyWorkerPool.$anonfun$runLater$1(BusyWorkerPool.scala:351)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at scala.util.Try$.apply(Try.scala:210)
at fr.linkit.engine.local.concurrency.pool.BusyWorkerPool.$anonfun$runLaterControl$1(BusyWorkerPool.scala:122)
at fr.linkit.engine.local.concurrency.SimpleAsyncTask.runTask(SimpleAsyncTask.scala:75)
at fr.linkit.engine.local.concurrency.pool.BusyWorkerThread.runTask(BusyWorkerThread.scala:67)
at fr.linkit.engine.local.concurrency.pool.BusyWorkerPool.$anonfun$runLaterControl$2(BusyWorkerPool.scala:132)
The exception occurs during the serialization of the response packet (as we are performing a Remote Method Invocation, we have to send back the return value of the method). In this case, the add method returns the instance of the list, so the packet will contain the instance of the list as its result value (sounds useless, but I have to deal with this kind of situation). When the list gets serialized, it crashes here:
def listAllSerialFields(cl: Class[_]): Seq[Field] = {
if (cl == null)
return Seq.empty
val fields = cl.getDeclaredFields //Line 45, Here, the cl value is the generated class
fields
.filterNot(p => Modifier.isTransient(p.getModifiers) || Modifier.isStatic(p.getModifiers))
.tapEach(_.setAccessible(true))
.toList ++ listAllSerialFields(cl.getSuperclass)
}
Then, further in the method it crashes here :
private static Field[] copyFields(Field[] arg) {
Field[] out = new Field[arg.length]; //arg.length is -500 millions !
ReflectionFactory fact = getReflectionFactory();
for (int i = 0; i < arg.length; i++) {
out[i] = fact.copyField(arg[i]);
}
return out;
}
I suspect that it's the reflection data that causes this, because when I used the debugger to follow the thread execution, the JVM crashed when the debugger looked at the referent field of the SoftReference<ReflectionData> in the Class.reflectionData field. But I repeat: on the client it does not crash, and my debugger can inspect the reflection data successfully.
EDIT -
If I call getDeclaredFields directly once the class gets loaded (here, for example):
var loader = puppetClass.getClassLoader
if (loader == null)
loader = getClass.getClassLoader //Use the Application's classloader
val classLoader = new GeneratedClassLoader(folderPath, loader, Seq(classOf[LinkitApplication].getClassLoader))
val clazz = Class.forName(wrapperClassName, false, classLoader).asInstanceOf[Class[_ <: PuppetWrapper[AnyRef]]]
println(s"CREATED CLASS ${clazz} IN THREAD = " + Thread.currentThread())
clazz.getDeclaredFields //Invoking a method in order to make the class load its reflectionData (causes fatal error if not made directly)
ClassMappings.putClass(clazz)
clazz
It will never crash or throw the exception. However, it's still weird that I have to do that, because only the server would regularly crash, and when it crashes, it can do so in the exact same thread that loaded the class...
This isn't really an answer to your original question, but it reiterates what was said in the comments. What you're doing is "unsafe", and the behavior is undefined. The fact that the workaround works at all is somewhat incidental and cannot be relied upon. Maybe it works now, but in a future Java version it might fail.
The Unsafe class is going away once safe replacements exist for all of its useful capabilities. This will likely occur after the completion of the Panama project, which provides access to native memory. The VarHandle class is the replacement for direct field access, but it doesn't permit modifying final fields, and it likely never will. Such a backdoor prevents certain optimizations, and new Java features like "records" and "hidden classes" trust that final fields are really final. This behavior might apply to all classes at some point.
There's no planned safe alternative for allocating an instance without running a constructor, so that's a problem too. The built-in Java serialization mechanism will have to continue using a backdoor until it's rewritten to use a different technique.
The safe technique is to generate a hidden constructor which performs deserialization and sets the final fields. It might also need dummy parameters to avoid conflicts with any other constructors. The constructor is added with an instrumentation agent which modifies serializable classes as they're loaded.
Ideally the hidden constructor should be private, but then accessing it becomes tricky. The agent should also define (or augment) a static class initializer which looks up the MethodHandle for the hidden constructor and passes it to some serialization framework layer. The private constructor will still be visible to any code which calls getDeclaredConstructors, but that's a relatively minor problem.
As for serializing the fields out in the first place, a VarHandle for each field can be passed along from the static initializer, or a private method is added which does the serialization. I think that the private method approach is better, and it just needs one MethodHandle for accessing it.
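To make that concrete, here is a rough sketch of what the woven-in code could look like. Everything framework-side here (SerialData, Marker, SerializationRegistry) is hypothetical; in practice the agent would generate this, you would not write it by hand.
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

final class Marker { private Marker() {} }              // dummy type that keeps the hidden signature unique

interface SerialData {                                   // stand-in for the framework's field decoder
    String readString(String field);
    int readInt(String field);
}

final class SerializationRegistry {                      // stand-in for the serialization framework layer
    static void register(Class<?> type, MethodHandle ctor) { /* store the handle for later deserialization */ }
}

class Player {
    private final String name;
    private final int score;

    public Player(String name, int score) {
        this.name = name;
        this.score = score;
    }

    // Hidden deserialization constructor added by the agent; it sets the final fields directly.
    private Player(SerialData data, Marker ignored) {
        this.name = data.readString("name");
        this.score = data.readInt("score");
    }

    // Static initializer (also added by the agent) hands a MethodHandle for the
    // hidden constructor to the serialization framework.
    static {
        try {
            MethodHandle ctor = MethodHandles.lookup().findConstructor(
                    Player.class,
                    MethodType.methodType(void.class, SerialData.class, Marker.class));
            SerializationRegistry.register(Player.class, ctor);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
}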
Thanks a lot to those who tried to help me, and thanks to boneil for his answer, because he made me aware that Unsafe wasn't a great solution; still, I have to deal with it. However, I decided to post an answer because I just made a "discovery" with the Unsafe.allocateInstance method: when Unsafe allocates an instance, every field of the object is null, yet somehow not detected as null by the JVM. I just had a case where I didn't know why I got an exception in the init method of one of my allocated instances. At first I thought it was being called twice, but it turned out I couldn't even debug the method execution with a breakpoint, because the JVM crashed as soon as the debugger stopped in the instance's method. Since I couldn't use the debugger, I fell back on the plain old printf statement, but I was still unable to debug because I got an NPE here:
(in java.lang.String.java)
public static String valueOf(Object obj) {
return (obj == null) ? "null" : obj.toString(); //obj is null.
}
As you can see, obj was null since it threw an NPE, yet it was still being treated as a normal instance. Seeing this, I used reflection to explicitly set the offending field to null (field.set(instance, null)), and then I was able to print the entire object, and I was able to use the debugger! So it's an Unsafe thing that it doesn't even set the fields' values to a proper null, which is very annoying, but OK...
Now I think my JVM crashed because, as the debugger tried to introspect my allocated object, it received an NPE even though it had surely checked that the object was not null; this caused an internal error in the debugger, and then the JVM crashed.
EDIT: it seems it's not only memory that Unsafe did not initialise; "cursed null" fields also appear when Unsafe touches an object (putInt, putObject, ...). If an Unsafe method is invoked to put something into an object's memory, any null field of that object has a (high) chance of becoming one of these weird null fields.
Recently I have really focused on design patterns and implementing them to solve different problems. Today I am working on the Command Pattern.
I have ended up creating an interface:
public interface Command {
public void execute();
}
I have several concrete implementations:
public class PullCommand implements Command {
public void execute() {
// logic
}
}
and:
public class PushCommand implements Command {
public void execute() {
// logic
}
}
There are several other commands as well.
Now, the thing is, there's a BlockingQueue<Command> which runs on a different thread and uses .take() to retrieve queued commands and execute them as they come in (I'll call this the Executor class below); the commands are produced by another class which parses the user input and calls .queue(). So far so good...
The hard part for me is parsing the command (it's a CLI application).
I have put all of them in a HashMap:
private HashMap<String, Command> commands = new HashMap<String, Command>();
commands.put("pull", new PullCommand());
commands.put("push", new PushCommand());
//etc..
When the user inputs a command, the syntax guarantees one thing: the "action" (pull / push) comes as the first argument, so I can always do commands.get(arguments[0]) and check whether the result is null. If it is, the command is invalid; if it isn't, I have successfully retrieved an object that represents that command. The tricky part is that there are other arguments that also need to be parsed, and for each command the parsing algorithm is different... Obviously, one thing I could do is pass arguments[] as a parameter to execute(), ending up with execute(String[] args), but that would mean putting the argument parsing inside the execute() method of the command, which I would like to avoid for several reasons:
The execution of a Command happens on a different thread that uses a BlockingQueue; it executes one command, then the next, and so on. The logic inside execute() should be ONLY the execution of the command itself, without parsing or any other heavy work that would slow the execution down (I realize parsing a few args would not hurt performance much, but here I am trying to learn structural designs and build good coding habits and clean solutions; this would not be perfect by any means).
It makes me feel like I am breaking some fundamental principles of the "Command" pattern. (Even if not so, I'd like to think of a better way to solve this)
It is obvious that I cannot use the constructors of the concrete commands, since the HashMap returns already-initialized objects. The next thing that comes to mind is another method on the object that "processes" the arguments (process(String[] args)) and stores the parsing result in private variables. This process(String[] args) method would be called by the producer class before calling queue() on the command, so the parsing would end up OUTSIDE the Executor class (thread), and point 1 above would not be a problem.
But there's another problem. What happens if a user enters a lot of commands? The application does .get(args[0]), retrieves a PullCommand, calls process(String[] args), the private variables are set, and the command is queued for the Executor class, waiting to be executed. Meanwhile, another command is input by the user; .get(args[0]) is called again and retrieves a PullCommand from the HashMap (but it is the same object as the one already queued for execution), and process() is called BEFORE the first command has been executed by the Executor class, which clobbers the private variables. We end up with two PullCommand entries in the BlockingQueue: the second one is correct from the user's point of view (it does what they asked for), but the first one is the same object as the second, so it no longer corresponds to its original arguments.
Another thing I thought of is using a Factory class that implements the parsing for each of the commands and returns the appropriate Command object.
This would mean, though, that I need to change the way the HashMap is used: instead of Command, it has to map to the Factory class:
HashMap<String, CommandFactory> commands = new HashMap<String, CommandFactory>();
commands.put("pull", new CommandFactory("pull"));
commands.put("pull", new CommandFactory("push"));
and based on the String passed to the Factory, its process() method would use the appropriate parsing for that command and return the corresponding Command object. But this would mean the class could get very big, since it would contain the parsing for every command.
Overall, this seems like my only option, but I am hesitant because, from a structural point of view, I don't think I am solving the problem nicely. Is this a good way to deal with this situation? Is there anything I am missing? Is there any way I could refactor part of my code to improve it?
You are overthinking this. The command pattern is basically "keep everything you need to know to do something, and do it later", so it's OK to pass stuff to the execution code.
Just do this:
user inputs a String[]
first string is the command "name" (use it as you are now)
the remaining strings are the parameters to the command (if any)
change your interface to public void execute(String[] parameters);
to execute, pass the parameters to the command object
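Putting those steps together, a minimal sketch could look like this (the Dispatcher class, the whitespace splitting, and the pull/push lambdas are assumptions; the queue holds Runnables so the executor thread only ever calls run()):
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Dispatcher {
    public interface Command {
        void execute(String[] parameters);
    }

    private final Map<String, Command> commands = new HashMap<>();
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public Dispatcher() {
        commands.put("pull", parameters -> { /* pull logic */ });
        commands.put("push", parameters -> { /* push logic */ });
    }

    // Producer side: parse the input line and queue the command together with its arguments.
    public void submit(String line) throws InterruptedException {
        String[] input = line.trim().split("\\s+");
        Command command = commands.get(input[0]);
        if (command == null) {
            System.err.println("Unknown command: " + input[0]);
            return;
        }
        String[] parameters = Arrays.copyOfRange(input, 1, input.length);
        queue.put(() -> command.execute(parameters));
    }

    // Executor side, running on its own thread: take the next task and run it.
    public void runLoop() throws InterruptedException {
        while (true) {
            queue.take().run();
        }
    }
}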
Throwing a broad design question like this at SO is in general not a good idea. It's a bit surprising to see only a downvote and no close request.
In any case, without understanding your problem thoroughly, it's hard to say what the "best" design is, and even if I did understand it, I wouldn't call anything "the best". So I will stick with my suggestion of using the Builder pattern.
In general, the builder pattern is used whenever the construction logic is too complicated to fit in a constructor, or is necessarily divided into phases. In this case, if you want your commands to vary widely depending on the action, then you will want a builder like this:
interface CommandBuilder<T extends Command> {
void parseArgs(String[] args);
T build();
}
The generic here is optional if you don't plan to use these builders beyond your current architecture; otherwise it's beneficial to be more precise with the types. parseArgs is responsible for the parsing you were referring to. build should spit out an instance of Command based on the current arguments.
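For illustration, a builder for the pull command might look like this (the PullCommand constructor and the remote/branch arguments are assumptions, not part of your code):
class PullBuilder implements CommandBuilder<PullCommand> {
    private String remote;
    private String branch;

    @Override
    public void parseArgs(String[] args) {
        // args[0] is the action name ("pull"); everything after it is command-specific.
        remote = args.length > 1 ? args[1] : "origin";
        branch = args.length > 2 ? args[2] : "master";
    }

    @Override
    public PullCommand build() {
        return new PullCommand(remote, branch);
    }
}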
Then you want your dispatcher map to look like this:
HashMap<String, Supplier<? extends CommandBuilder<? extends Command>>> builders = new HashMap<>();
builders.put("pull", () -> new PullBuilder());
builders.put("push", () -> new PushBuilder());
// etc
Any of these builders can potentially have extremely complicated logic, as you would desire. Then you can do
CommandBuilder builder = builders.get(args[0]).get();
builder.parseArgs(args);
queue.add(builder.build());
In this way, your Command interface can focus on what exactly it's supposed to do.
Notice that after your builders map is built, everything is wired up statically and mutation stays localized. I don't fully understand exactly what your concern is, but it should be addressed by doing this.
However, it could be an overkill design, depending on what you want to do.
I am building a game simulator that has hundreds of micro steps like the following. They each perform a unique task, but I left out the implementation details for the sake of brevity.
public class Sim {
static void phase() {
phaseIn();
phaseOut();
}
static void untap() {
}
static void upkeep() {
}
static void draw() {
}
...
}
A Turn usually involves executing steps sequentially, but there are times when a special effect may cause the sequence to change. For example, I may be required to repeat a step twice, skip a step, or rearrange the order of the steps. These actions are all special cases, as the turn typically just occurs in order from start to finish.
For example, the following series of events represents my normal turn.
... > upkeep() > draw() > preCombatMain() > ...
Now, I play something that requires me to repeat my draw step. I need my turn to look like this:
... > upkeep() > draw() > draw() > preCombatMain() > ...
The steps of a turn are methods, and I do not know how to enqueue or dequeue methods. I know that Java 8 has method references, but the feature is relatively new. I have been unable to apply existing tutorials to what I am trying to accomplish. I got as far as Sim::untap, but I have no idea how to assign it, invoke it, etc. How do I queue methods in Java 8, or otherwise call methods in an order determined at run-time by the choices a player makes?
Note: I realize that my inability to understand may be due to a fundamental design flaw. I have never taken a game design course, I am completely open to criticism, and I will change my design if it is flawed. That said, the question is not to be misconstrued as "Please recommend a design pattern." I considered an alternate design, where I "number" each step in a massive switch statement, queue "numbers", and repeatedly switch on the front of the queue, but that seemed like a poor plan (in my opinion).
If you simply want them to run sequentially, you can of course call them one after the other. If the order can change, an alternative is to use a queue of method references:
LinkedList<Runnable> queue = new LinkedList<>();
queue.add(Sim::upkeep);
queue.add(Sim::draw);
queue.add(Sim::preCombatMain);
queue.forEach(Runnable::run);
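For the "repeat the draw step" case from your example, you simply enqueue the same method reference again before draining the queue (a sketch; the repeatDraw flag is an assumption standing in for whatever effect triggers the repeat):
LinkedList<Runnable> queue = new LinkedList<>();
queue.add(Sim::upkeep);
queue.add(Sim::draw);
if (repeatDraw) {
    queue.add(Sim::draw);   // the extra draw caused by the special effect
}
queue.add(Sim::preCombatMain);

while (!queue.isEmpty()) {
    queue.poll().run();     // dequeue the next step and invoke it
}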
I was able to use a LinkedList<Runnable> because the signature of your methods is void m(). For other signatures you can use other types, for example:
void m() use Runnable
T m() use Supplier<T>
void m(T o) use Consumer<T>
R m(T o) use Function<T, R>
The solution is to use polymorphism. Define an interface for the step:
interface Step {
void process();
}
Then define each step by implementing it:
class UpkeepStep implements Step {
public void process() { ... }
}
Now you can put all your steps in an array, shuffle it, if needed, and execute all steps, like this:
for (Step step : steps) {
step.process();
}
An alternative approach that may run faster is to generate code that contains the method calls, compile it, and load the class. However, this only gives better performance if each step takes little time compared to the method-call overhead, and if you execute each generated piece of code often enough for the JIT to optimize it.
I haven't used a lot of static methods before, but recently I have tended to use more of them, for example when I want to set a boolean flag in a class, or access one without needing to pass the actual object between classes.
For example:
public class MainLoop
{
private static volatile boolean finished = false;
public void run()
{
while ( !finished )
{
// Do stuff
}
}
// Can be used to shut the application down from other classes, without having the actual object
public static void endApplication()
{
MainLoop.finished = true;
}
}
Is this something I should avoid? Is it better to pass an object around so you can use the object's methods? Does the boolean finished count as a global now, or is it just as safe?
A problem with using a static variable in this case is that if you create two (or more) instances of MainLoop, code that looks like it is shutting down only one of the instances will actually shut down both of them:
MainLoop mainLoop1 = new MainLoop();
MainLoop mainLoop2 = new MainLoop();
new Thread(mainLoop1).start();
new Thread(mainLoop2).start();
mainLoop1.finished = true; // static variable also shuts down mainLoop2
This is just one reason (amongst many) for choosing to not use static variables. Even if your program today only creates one MainLoop, it is possible that in the future you may have reason to create many of them: for unit testing, or to implement a cool new feature.
You may think "if that ever happens, I'll just refactor the program to use member variables instead of static variables." But it's generally more efficient to pay the cost up front, and bake modular design into the program from the start.
There's no question that statics often make a quick and dirty program easier to write. But for important / complex code that you intend to test, maintain, grow, share, and use for years to come, static variables are generally recommended against.
As other answers to this question have noted, a static variable is a kind of global variable. And there's lots of information about why (generally) global variables are bad.
Yes, passing objects around is better. Using a singleton or static methods makes OO programming look like procedural programming. A singleton is somewhat better because you can at least make it implement interfaces or extend an abstract class, but it's usually a design smell.
And mixing instance methods with static variables like you're doing is even more confusing: you could have several objects looping, but you stop all of them at once because they all stop when a static variable changes.
Is this something I should avoid?
In general, yes. Statics represent global state. Global state is hard to reason about, hard to test in isolation, and generally has higher thread-safety requirements.
If I want to test what happens to an object in a certain state, I can just create the object, put it into that state, perform my tests, and let it get garbage collected.
If I want to test what happens to global state, I need to make sure I reset it all at the end of my test (or possibly at the start of every test). The tests will now interfere with each other if I'm not careful about doing that.
Of course, if the static method doesn't need to affect any state - i.e. if it's pure - then it becomes somewhat better. At that point all you're losing is the ability to replace that method implementation, e.g. when testing something that calls it.
In general, by making finished static like that, you create a situation where every instance of your MainLoop class shares the same flag. If there is more than one instance, then setting finished will end them all -- not what is usually desired.
However, in this particular scenario, where you want to "end application", presumably meaning you want to end all instances of MainLoop, the approach may be justified.
However, the situations where this approach is merited are few. A "cleaner" way to handle this scenario would be to keep a static list of instances and work through that list, setting the instance variable finished on each one. This also lets you end individual instances, gives you a natural count of existing instances, and so on.
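A minimal sketch of that instance-list approach (the names end() and endAll() are assumptions):
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class MainLoop implements Runnable {
    private static final List<MainLoop> INSTANCES = new CopyOnWriteArrayList<>();

    private volatile boolean finished = false;

    public MainLoop() {
        INSTANCES.add(this);
    }

    @Override
    public void run() {
        while (!finished) {
            // Do stuff
        }
        INSTANCES.remove(this);
    }

    // End just this loop.
    public void end() {
        finished = true;
    }

    // "End application": end every loop that is still running.
    public static void endAll() {
        for (MainLoop loop : INSTANCES) {
            loop.end();
        }
    }

    // Natural count of existing instances.
    public static int runningCount() {
        return INSTANCES.size();
    }
}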
I'm trying to write a construct which allows me to run computations in a given time window. Something like:
def expensiveComputation(): Double = //... some intensive math
val result: Option[Double] = timeLimited( 45 ) { expensiveComputation() }
Here timeLimited will run expensiveComputation with a timeout of 45 minutes. If it hits the timeout it returns None; otherwise it wraps the result in Some.
I am looking for a solution which:
Is pretty cheap in performance and memory;
Will run the time-limited task in the current thread.
Any suggestion ?
EDIT
I understand my original problem has no solution. Say I can create a thread for the calculation (though I'd prefer not to use a thread pool/executor/dispatcher). What's the fastest, safest and cleanest way to do it?
Runs the given code block or throws an exception on timeout:
@throws(classOf[java.util.concurrent.TimeoutException])
def timedRun[F](timeout: Long)(f: => F): F = {
import java.util.concurrent.{Callable, FutureTask, TimeUnit}
val task = new FutureTask(new Callable[F]() {
def call() = f
})
new Thread(task).start()
task.get(timeout, TimeUnit.MILLISECONDS)
}
Only an idea: I am not so familiar with Akka futures, but perhaps it's possible to pin the future-executing thread to the current thread and use Akka futures with timeouts?
To the best of my knowledge, either you yield (the computation calls back into some scheduler) or you use a thread, which gets manipulated from the "outside".
If you want to run the task in the current thread and if there should be no other threads involved, you would have to check whether the time limit is over inside of expensiveComputation. For example, if expensiveComputation is a loop, you could check for the time after each iteration.
If you are OK with the code of expensiveComputation checking Thread.interrupted() frequently, it's pretty easy. But I suppose you are not.
I don't think there is any solution that will work for arbitrary expensiveComputation code.
The question is what are you prepared to have as constraint on expensiveComputation.
There is also the deprecated and quite unsafe Thread.stop(Throwable). If your code does not modify any objects except those it created itself, it might work.
I saw a pattern like this work well for time-limited tasks (Java code):
try {
setTimeout(45*60*1000); // 45 min in ms
while (!done) {
checkTimeout();
// do some stuff
// if the stuff can take long, again:
checkTimeout();
// do some more stuff
}
return Some(result);
}
catch (TimeoutException ex) {
return None;
}
The checkTimeout() function is cheap to call; you add it to code so that it is called reasonably often, but not too often. All it does is check current time against timer value set by setTimeout() plus the timeout value. If current time exceeds that value, checkTimeout() raises a TimeoutException.
I hope this logic can be reproduced in Scala, too.
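One way the setTimeout()/checkTimeout() helpers could be implemented (a sketch in Java, assuming a thread-local deadline and an unchecked TimeoutException so the checks can be sprinkled anywhere):
final class Timeouts {
    static final class TimeoutException extends RuntimeException {
        TimeoutException(String message) { super(message); }
    }

    private static final ThreadLocal<Long> DEADLINE = new ThreadLocal<>();

    // Remember the absolute deadline for the current thread.
    static void setTimeout(long millis) {
        DEADLINE.set(System.currentTimeMillis() + millis);
    }

    // Cheap check; throws if the deadline for this thread has passed.
    static void checkTimeout() {
        Long deadline = DEADLINE.get();
        if (deadline != null && System.currentTimeMillis() > deadline) {
            throw new TimeoutException("time budget exceeded");
        }
    }
}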
For a generic solution (without having to go litter each of your expensiveComputations with checkTimeout() code) perhaps use Javassist.
http://www.csg.is.titech.ac.jp/~chiba/javassist/
You can then insert various checkTimeout() methods dynamically.
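As a rough sketch (hedged: the class and method names are placeholders, and Timeouts.checkTimeout() is the kind of helper sketched above), Javassist's ExprEditor can wrap every method call inside the expensive method with a timeout check:
import javassist.CannotCompileException;
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.expr.ExprEditor;
import javassist.expr.MethodCall;

public final class TimeoutWeaver {
    // Instruments a (placeholder) class so every call made inside compute() checks the timeout first.
    public static Class<?> instrumentWithTimeouts() throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass cc = pool.get("com.example.ExpensiveComputation");   // placeholder class name
        CtMethod m = cc.getDeclaredMethod("compute");                // placeholder method name
        m.instrument(new ExprEditor() {
            @Override
            public void edit(MethodCall call) throws CannotCompileException {
                call.replace("{ com.example.Timeouts.checkTimeout(); $_ = $proceed($$); }");
            }
        });
        return cc.toClass();   // load the modified class
    }
}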
Here is the intro text on their website:
Javassist (Java Programming Assistant) makes Java bytecode manipulation simple. It is a class library for editing bytecodes in Java; it enables Java programs to define a new class at runtime and to modify a class file when the JVM loads it. Unlike other similar bytecode editors, Javassist provides two levels of API: source level and bytecode level. If the users use the source-level API, they can edit a class file without knowledge of the specifications of the Java bytecode. The whole API is designed with only the vocabulary of the Java language. You can even specify inserted bytecode in the form of source text; Javassist compiles it on the fly. On the other hand, the bytecode-level API allows the users to directly edit a class file as other editors.
Aspect Oriented Programming: Javassist can be a good tool for adding new methods into a class and for inserting before/after/around advice at the both caller and callee sides.
Reflection: One of applications of Javassist is runtime reflection; Javassist enables Java programs to use a metaobject that controls method calls on base-level objects. No specialized compiler or virtual machine are needed.
In the currentThread?? Phhhew...
Check after each step in computation
Well, if your "expensive computation" can be broken up into multiple steps or has iterative logic, you could capture the time when you start and then check it periodically between your steps. This is by no means a generic solution, but it will work.
For a more generic solution you might make use of aspects or annotation processing that automatically litters your code with these checks. If the "check" tells you that your time is up, return None.
I'll quickly sketch a solution in Java below using annotations and an annotation processor...
public abstract class Answer {}
public class Some extends Answer { Double answer = null; public Some(double answer) { this.answer = answer; } }
public class None extends Answer {}
//This is the method before annotation processing
@TimeLimit(45)
public Answer calculateQuestionToAnswerOf42() {
    double fairydust = Math.PI * 1.618;
    double moonshadowdrops = Math.pow(222.21, 5);
    double thedevil = 222 * 3;
    return new Some(fairydust + moonshadowdrops + thedevil);
}
//After annotation processing
public Answer calculateQuestionToAnswerOf42() {
    Date start = new Date(); // added via annotation processing
    double fairydust = Math.PI * 1.618;
    if (checkTimeout(start, 45)) return new None(); // added via annotation processing
    double moonshadowdrops = Math.pow(222.21, 5);
    if (checkTimeout(start, 45)) return new None(); // added via annotation processing
    double thedevil = 222 * 3;
    if (checkTimeout(start, 45)) return new None(); // added via annotation processing
    return new Some(fairydust + moonshadowdrops + thedevil);
}
If you're very seriously in need of this you could create a compiler plugin that inserts check blocks in loops and conditions. These check blocks can then check Thread.isInterrupted() and throw an Exception to escape.
You could possibly use an annotation, e.g. @Interruptible, to mark the methods to enhance.