I'm new to OptaPlanner, and I'm trying to solve a fairly simple problem (for now; I will add more constraints eventually).
My model is the following: I have tasks (MarkerNesting) that must run one at a time on a VirtualMachine; the goal is to assign a list of MarkerNestings to VirtualMachines so that all machines are used (we can assume there are more tasks than machines as a first approximation). As a result, I expect each task to have a start and an end date (as shadow variables - not implemented yet).
I think I must use a chained variable, with the VirtualMachine as the anchor (the chained-through-time pattern) - am I right?
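Here is a minimal sketch of the model I have in mind (the NestingChainElement interface name is mine; the annotations follow the OptaPlanner examples I based this on):

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariableGraphType;

// Implemented by both VirtualMachine (the anchor) and MarkerNesting (the chain entity).
public interface NestingChainElement {
}

@PlanningEntity
public class MarkerNesting implements NestingChainElement {

    // Chained variable: points at the previous element in the chain, which is
    // either the VirtualMachine anchor or another MarkerNesting.
    @PlanningVariable(valueRangeProviderRefs = {"virtualMachineRange", "markerNestingRange"},
            graphType = PlanningVariableGraphType.CHAINED)
    private NestingChainElement previous;

    public NestingChainElement getPrevious() { return previous; }
    public void setPrevious(NestingChainElement previous) { this.previous = previous; }
}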
So I wrote a program inspired by some of the examples (TSP, and coach and shuttle) with 4 machines and 4 tasks, and I expect each machine to have one task once solving is done. When running it, though, I get strange results: not all machines are used, and worse, I have duplicate MarkerNesting instances (output example):
[VM 1/56861999]~~~>[Nesting(155/2143571436)/[Marker m4/60s]]~~~>[Nesting(816/767511741)/[Marker m2/300s]]~~~>[Nesting(816/418304857)/[Marker m2/300s]]~~~>[Nesting(980/1292472219)/[Marker m1/300s]]~~~>[Nesting(980/1926764753)/[Marker m1/300s]]
[VM 2/1376400422]~~~>[Nesting(155/1815546035)/[Marker m4/60s]]
[VM 3/1619356001]
[VM 4/802771878]~~~>[Nesting(111/548795052)/[Marker m3/180s]]
The instances are different (to read the log: [Nesting(id/hashcode)]), but they have the same id, so they are the same entity in the end. If I understand correctly, OptaPlanner clones the solution whenever it finds a new best one, but I don't know why it mixes instances like that.
Is there anything wrong in my code? Is this normal behavior?
Thank you in advance!
Duplicate MarkerNesting instances that you didn't create, which have the same content but a different memory address (and so are != from each other): that means something went wrong in the default solution cloner, which is based on reflection. It's been a while since anyone ran into an issue there. See the docs section on "planning clone". The complex model of chained variables (which will be improved) doesn't help here at all.
Sometimes a well-placed @DeepPlanningClone fixes it, but in this case it might as well be due to the @InverseRelationShadowVariable not being picked up.
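If the shadow variable is the culprit, its declaration would typically look like this in a chained model (a sketch; names follow the question's model):

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.InverseRelationShadowVariable;

@PlanningEntity // a shadow entity: it holds no genuine variable, only a shadow one
public class VirtualMachine implements NestingChainElement {

    // OptaPlanner keeps this in sync with MarkerNesting.previous; if it is missing
    // or not picked up, the chained model misbehaves, for example when cloning.
    @InverseRelationShadowVariable(sourceVariableName = "previous")
    private MarkerNesting nextNesting;

    public MarkerNesting getNextNesting() { return nextNesting; }
    public void setNextNesting(MarkerNesting nextNesting) { this.nextNesting = nextNesting; }
}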
In any case, those System.out calls in the setter method are misleading - they can be triggered both by the solution cloner and by the moves, so without the solution hash (= memory address) they tell you nothing. Try a similar System.out in either your best solution change events, or in the BestSolutionRecaller call to cloneWorkingSolution(), for both the original and the clone.
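For the first option, a sketch (with Schedule standing in for your solution class):

// Log the identity hash of every new best solution the solver hands back.
solver.addEventListener(event -> {
    Schedule best = event.getNewBestSolution();
    System.out.println("new best solution, identity hash = "
            + Integer.toHexString(System.identityHashCode(best)));
});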
As expected, I was doing something wrong: in Schedule (the PlanningSolution), I had a getter for a collection of VirtualMachines, which was calculated from another field (pools: each Pool holds VirtualMachines). As a result, there was no setter, and the solution cloner was probably not able to clone the solution properly (maybe because pools was not annotated as a problem fact or a planning entity?).
To fix the problem, I removed the Pool class (it was not really needed), leaving a plain collection of VirtualMachines in Schedule.
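The fixed solution class now looks roughly like this (a sketch; the exact annotations may vary with your OptaPlanner version):

import java.util.List;
import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty;
import org.optaplanner.core.api.domain.solution.PlanningScore;
import org.optaplanner.core.api.domain.solution.PlanningSolution;
import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider;
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;

@PlanningSolution
public class Schedule {

    // A plain field with both getter and setter, so the reflection-based solution
    // cloner can copy it; VirtualMachine is registered as a (shadow) planning
    // entity here because it carries the inverse relation shadow variable.
    @PlanningEntityCollectionProperty
    @ValueRangeProvider(id = "virtualMachineRange")
    private List<VirtualMachine> virtualMachines;

    @PlanningEntityCollectionProperty
    @ValueRangeProvider(id = "markerNestingRange")
    private List<MarkerNesting> markerNestings;

    @PlanningScore
    private HardSoftScore score;

    // getters and setters omitted for brevity
}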
To sum up, never introduce too many classes before you need them ^_^'
I pushed the corrected version of my code to GitHub.
Related
I'm creating a schedule generator for a school and I am facing two challenges:
1: User feedback during construction phase
During the construction heuristic phase I'm not getting any callbacks to the bestSolutionConsumer passed into SolverManager.solveAndListen, which means that I'm not able to give the user any feedback during this phase. (It's only about 10 seconds or so as of today, but still annoying.)
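For reference, my call looks roughly like this (loadSchedule and updateUi are placeholders for my own code):

SolverManager<Schedule, Long> solverManager = SolverManager.create(solverConfig);
solverManager.solveAndListen(
        problemId,                      // my problem id
        id -> loadSchedule(id),         // fetches the problem instance to solve
        schedule -> updateUi(schedule)  // bestSolutionConsumer: silent during construction
);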
I suspect that this is by design (judging from this question), but please correct me if I'm wrong.
(I suspect that the idea is that the construction heuristic phase should be quick anyway, and that 99% of a long running solve will be spent in the local search phase, and thus that's the only phase that actually matters. Correct?)
2: Manual placement of lectures
This scheduling program will only be semi-automatic. I'd like the user to be able to pin lectures, move lectures around manually, and even remove lectures from the schedule by putting them in a pile on the side for later placement (where the later placement could perhaps be done by OptaPlanner).
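If I read the docs correctly, the pinning part by itself is covered by @PlanningPin - a minimal sketch of what I picture:

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.entity.PlanningPin;

@PlanningEntity
public class Lecture {

    // When true, the solver leaves this lecture exactly where the user put it.
    @PlanningPin
    private boolean pinned;

    // planning variable(s) and getters/setters omitted
}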
Rethink definition of initialized?
This led me to rethink what I consider an initialized solution. If I...
want to have progress feedback even during initial placement of lectures, and
want to allow the user to interact with a schedule where only half of the lectures have been scheduled
...then maybe I should make the timeslot nullable or have a sentinel timeslot value for unscheduled lectures, and simply penalize such solutions.
In this scenario, I'm imagining that a solution is immediately and trivially initialized (all lectures start out in the unscheduled state, but formally speaking the solution is still initialized) and that the construction heuristic phase is basically skipped.
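In code, I imagine the nullable variant would look something like this (a sketch; names match my model):

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

@PlanningEntity
public class Lecture {

    // nullable = true makes "unscheduled" a legal state, so the solution is never
    // formally uninitialized; every constraint must then tolerate a null timeslot.
    @PlanningVariable(valueRangeProviderRefs = "timeslotRange", nullable = true)
    private Timeslot timeslot;

    // getters/setters omitted
}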
Questions
Is this stupid for some reason?! It feels like I'm throwing out a large part of OptaPlanner capabilities.
Am I overlooking any downsides of this approach?
Is it even possible to skip construction phase by doing this?
Also, repeated planning is of importance to me, and the docs say:
Repeated planning (especially real-time planning) does not mix well with a nullable planning variable.
Does the same apply also to the approach of using an unscheduled sentinel value?
1/ No, this is not stupid. It is, in fact, an example of over-constrained planning.
2/ Well, now that variables are nullable, you need to write your constraints such that nulls are acceptable. And you may run into situations where the solver will find it easier to just leave some variables null, unless there is a pretty substantial penalty for that. You may have to design special constraints to work around that, or in the worst case even custom moves.
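For instance, a sketch of such a penalty with constraint streams (assuming a HardMediumSoftScore and the Lecture/timeslot names from the question; adjust to your model and OptaPlanner version):

Constraint unassignedLecture(ConstraintFactory factory) {
    // forEachIncludingNullVars also matches lectures whose timeslot is still null
    return factory.forEachIncludingNullVars(Lecture.class)
            .filter(lecture -> lecture.getTimeslot() == null)
            .penalize("Unassigned lecture", HardMediumSoftScore.ONE_MEDIUM);
}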
3/ Construction heuristics are not mandatory, but they can still be useful. Even if they leave some variables null, they can still give you a decent initial solution. You may also want to try a custom phase.
4/ If you worry about some of the things above, indeed introducing a dummy value instead of making a variable nullable could solve some of those worries. (And introduce others, as every constraint now has to deal with this dummy value.)
My advice is to do a quick proof of concept. See how each of the approaches behaves. Pick the one that you prefer to deal with. There are no silver bullets.
In this Spring method, why does the priorityOrderedPostProcessors list store BeanFactoryPostProcessor instances, while orderedPostProcessorNames and nonOrderedPostProcessorNames store Strings? What is the reason for this? I tried replacing them with the same type and everything seemed to work fine too.
Looking at the code from lines 150–189, I don't see any reason why there's a two-step process for orderedPostProcessors and nonOrderedPostProcessors but not for priorityOrderedPostProcessors. This is the case in the earliest commit of the file on GitHub, so any reasoning behind it is lost in the mists of time. I'd guess that the two cases were added by different developers with different styles. Interestingly, the instantiations of ArrayList<BeanFactoryPostProcessor> for orderedPostProcessors and nonOrderedPostProcessors were later updated with the small optimization of sizing them immediately from orderedPostProcessorNames and nonOrderedPostProcessorNames, but that developer apparently never questioned the need for the two-step process.
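For reference, the pattern in question, condensed (a paraphrase, not the verbatim Spring source):

// Simplified from PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors().
List<BeanFactoryPostProcessor> priorityOrderedPostProcessors = new ArrayList<>();
List<String> orderedPostProcessorNames = new ArrayList<>();
List<String> nonOrderedPostProcessorNames = new ArrayList<>();
for (String ppName : postProcessorNames) {
    if (beanFactory.isTypeMatch(ppName, PriorityOrdered.class)) {
        // one step: instantiate the bean immediately
        priorityOrderedPostProcessors.add(
                beanFactory.getBean(ppName, BeanFactoryPostProcessor.class));
    }
    else if (beanFactory.isTypeMatch(ppName, Ordered.class)) {
        // two steps: remember only the name here ...
        orderedPostProcessorNames.add(ppName);
    }
    else {
        nonOrderedPostProcessorNames.add(ppName);
    }
}
// ... and instantiate later, after the priority-ordered ones have run.
List<BeanFactoryPostProcessor> orderedPostProcessors =
        new ArrayList<>(orderedPostProcessorNames.size());
for (String ppName : orderedPostProcessorNames) {
    orderedPostProcessors.add(beanFactory.getBean(ppName, BeanFactoryPostProcessor.class));
}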
This is one of those questions that involves crossing what I call the "Hello World Gulf". I'm on the "Hello World" side: I can use SQLite and Content Providers (and resolvers), but I now need to cross to the other side, where I cannot make the assumption that onUpgrade will be quick.
Now my go-to book (Wrox, Professional Android 4 Development - I didn't choose it because of "Professional", I chose it because Wrox are like the O'Reilly of guides - O'Reilly suck at guides, they are reference books) only touches briefly on using Loaders, so I've done some searching, some more reading and so forth.
I've basically concluded that a Loader is little more than a wrapper: it just does things on a different thread and gives you a callback to process the results in. It gives you three steps: initiating the query, using the results of the query, and resetting the query.
This seems like quite a thin wrapper, so question 1:
Why would I want to use Loaders?
I sense I may be missing something, you see: most "utilities" like this in Android are really useful if you go with the grain, so to speak, and as I said, Loaders seem like a pretty thin wrapper, and they force me to have callback names which could become tedious if there are multiple queries going on.
http://developer.android.com/reference/android/content/Loader.html
Reading that points out that they ought to "monitor the data and act upon changes" - this sounds great, but it isn't obvious how that is actually done (I am thinking about database tables here).
Presentation
How should this alter the look of my application? Should I put up a loading spinner (I'm not sure of the name - I've never needed one before) after a certain amount of time post activity creation? So the fragment is blank, but if X time elapses without the loader reporting back, I show the spinner?
Other operations
Loaders are clearly useless for updates and such - their name alone tells one this much - so any nasty updates would have to be wrapped by my own system for shunting work to a worker thread. This further leads me to wonder: why would I want Loaders?
What I think my answer is
Some sort of wrapper (at some level - content provider or otherwise) to do stuff on a worker thread will mean that the upgrade takes place on that thread, which solves the problem because... well, that's not on the main thread.
If I do write my own, I can then (if I want to) ensure queries happen in a certain order and use my own data structures (rather than Bundles); it seems that I get better control.
What I am really looking for
Discussion. I find that when one knows why things are the way they are, one makes fewer mistakes and just generally has more confidence. I am sure there's a reason Loaders exist, and there will be some pattern that all of Android lends itself towards; I want to know why this is.
Example:
Adapters (for ListViews): it's not immediately obvious how one keeps track of rows (insert), why one must specify a default style, and why ArrayAdapter uses toString, when most of the time (in my experience, dare I say) it is subclassed. Reading the source code gives one an understanding of what the Adapter must actually do; then I challenge myself: "Can I think of a (better) system that meets these requirements?" Usually (and hopefully) my answer converges on how it's actually done.
Thus the "Hello World Gulf" is crossed.
I look forward to reading answers and any linked text-walls on the matter.
You shouldn't use Loaders directly, but rather LoaderManager.
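A minimal sketch of the usual LoaderManager pattern (support library; the content URI and adapter are placeholders) - note that CursorLoader registers a ContentObserver on the URI, which is how "monitor the data and act upon changes" actually happens:

import android.database.Cursor;
import android.net.Uri;
import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.support.v4.app.LoaderManager;
import android.support.v4.content.CursorLoader;
import android.support.v4.content.Loader;

public class ItemListFragment extends Fragment
        implements LoaderManager.LoaderCallbacks<Cursor> {

    private static final int LOADER_ID = 0;

    @Override
    public void onActivityCreated(Bundle savedInstanceState) {
        super.onActivityCreated(savedInstanceState);
        // Starts the loader, or reconnects to an existing one after e.g. rotation.
        getLoaderManager().initLoader(LOADER_ID, null, this);
    }

    @Override
    public Loader<Cursor> onCreateLoader(int id, Bundle args) {
        // Runs the query on a worker thread; reloads automatically when the
        // provider calls notifyChange() on this URI.
        return new CursorLoader(getActivity(),
                Uri.parse("content://com.example.provider/items"), // placeholder URI
                null, null, null, null);
    }

    @Override
    public void onLoadFinished(Loader<Cursor> loader, Cursor data) {
        // Delivered on the main thread: hand the cursor to the adapter here.
    }

    @Override
    public void onLoaderReset(Loader<Cursor> loader) {
        // Release the old cursor, e.g. adapter.swapCursor(null).
    }
}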
Lost a bunch of time just trying to figure out what was going on here, but I think I'm finally onto something.
We have some fairly normal PicoContainer code which simply turns on caching, which I thought was supposed to result in singleton behaviour:
container.as(Characteristics.CACHE).addComponent(Service.class, ServiceImpl.class);
However as we found today, we have a component which is apparently being constructed not once, but four times. It's not something I can reproduce on my own computer, just on some other developer machines.
We investigated further, and it turns out that multiple threads were hitting PicoContainer to look up the same component at the same time, and instead of instantiating one copy and making the other three threads wait, it appears that it just instantiates four copies (and then only remembers to keep around one of them.)
Is there some relatively simple way to get true singular behaviour in PicoContainer?
It seems PicoContainer needs an explicit synchronization mechanism for the case you are dealing with. Here is a link which documents this behavior and suggests solutions for it.
To quote this link:
When components are created by two threads concurrently, with the intention of the instance being cached, it is possible in a small percentage of cases for the first instance into the cache to be replaced with a second instance.
The other link worth visiting is the one regarding caching.
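Based on that documentation, a sketch of the workaround (assuming PicoContainer 2.x, where the SYNCHRONIZE characteristic can be combined with CACHE):

import org.picocontainer.Characteristics;
import org.picocontainer.DefaultPicoContainer;
import org.picocontainer.MutablePicoContainer;

MutablePicoContainer container = new DefaultPicoContainer();
// SYNCHRONIZE wraps the cached adapter so concurrent first lookups block,
// instead of racing to construct separate instances.
container.as(Characteristics.SYNCHRONIZE, Characteristics.CACHE)
         .addComponent(Service.class, ServiceImpl.class);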
The project I am working on requires a whole bunch of queries against a database. In principle there are two types of queries I am using:
(I) Read from an Excel file, check a couple of parameters, and query the database for hits. These hits are then registered as a series of custom classes. Any hit may (and most likely will) occur more than once, so this part of the code checks and updates the occurrence count in a custom list implementation that extends ArrayList.
(II) For each hit found, do a detail query and parse the output, so that the classes created in (I) get detailed info.
I figured I would use multiple threads to save time. However, I can't really come up with a good way to solve the problem that occurs with the collection these items are stored in. To elaborate a little: throughout the execution, objects are supposed to be modified by both (I) and (II).
I deliberately didn't copy/paste any code, as it would take big chunks of code to make any sense. I hope the description above makes some sense.
Thanks,
In Java 5 and above, you may either use CopyOnWriteArrayList or a synchronized wrapper around your list. In earlier Java versions, only the latter choice is available. The same is true if you absolutely want to stick to the custom ArrayList implementation you mention.
CopyOnWriteArrayList is feasible if the container is read much more often than written (changed), which seems to be true based on your explanation. Its atomic addIfAbsent() method may even help simplify your code.
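A minimal sketch of the addIfAbsent() idea (Hit stands in for your custom class and must implement equals()):

import java.util.concurrent.CopyOnWriteArrayList;

CopyOnWriteArrayList<Hit> hits = new CopyOnWriteArrayList<Hit>();

void register(Hit hit) {
    // Atomically adds the hit only if no equal element is already in the list.
    if (!hits.addIfAbsent(hit)) {
        // An equal hit already exists: update its occurrence count instead.
    }
}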
[Update] On second thought, a map sounds more fitting to the use case you describe. So if changing from a list to e.g. a map is an option, you should consider ConcurrentHashMap. [/Update]
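A sketch of the map-based variant, restricted to Java 5 APIs (the String key and recordOccurrence name are stand-ins for your own types):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

ConcurrentMap<String, AtomicInteger> occurrences =
        new ConcurrentHashMap<String, AtomicInteger>();

void recordOccurrence(String hitKey) {
    AtomicInteger counter = occurrences.get(hitKey);
    if (counter == null) {
        AtomicInteger fresh = new AtomicInteger(0);
        // putIfAbsent is atomic: only one thread's counter wins.
        counter = occurrences.putIfAbsent(hitKey, fresh);
        if (counter == null) {
            counter = fresh;
        }
    }
    counter.incrementAndGet();
}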
Changing the objects within the container does not affect the container itself, however you need to ensure that the objects themselves are thread-safe.
Just use the new java.util.concurrent packages.
Classes like ConcurrentLinkedQueue and ConcurrentHashMap are already there for you to use and are all thread-safe.