I have a Directed Acyclic Graph; the arcs are the Entities, and the Weights associated to each Arc are the PlanningVariables. I use:
@ValueRangeProvider(id = "bufferRange")
public CountableValueRange<Integer> getDelayRange() {
    return ValueRangeFactory.createIntValueRange(1, 1000);
}
to assign values to my variables. Also, I've come across this issue: Exhaustive Search in OptaPlanner does not work on very simple example, which I have since solved by changing the variables from int to Integer and checking for null values in the score calculation.
Now the problem is that the solver does not seem to be backtracking when assigning values. I used a print statement to check the values being assigned to each arc. At the beginning of the solving process I can see values being set on different arcs, but after a while the solver gets stuck assigning values to the same arc. Checking the prints, I see the assignments going from 1 to 1000 and then starting over. Since every value in the domain has already been tried once, why does the solver not backtrack instead of assigning the same values again?
I tested all the <nodeExplorationType> options and created a class to use with the <entitySorterManner> setting, with the same results.
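For reference, the exhaustive search was configured roughly like this (solver XML sketch; the two elements are the ones mentioned above, and the enum values shown are examples rather than my exact setup):

<!-- sketch, not my exact config -->
<solver>
  <exhaustiveSearch>
    <exhaustiveSearchType>BRANCH_AND_BOUND</exhaustiveSearchType>
    <nodeExplorationType>DEPTH_FIRST</nodeExplorationType>
    <entitySorterManner>DECREASING_DIFFICULTY_IF_AVAILABLE</entitySorterManner>
  </exhaustiveSearch>
</solver>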
Thanks in advance.
I suppose you are right, Geoffrey. I deactivated the log and let the program run for almost 48h, and it came up with an answer. The way the logs are printed misled the analysis. As a remark: with the logger deactivated, performance is considerably better.
I have been coding a recursive algorithm to go through the nodes and analyze all the paths in a directed acyclic graph. The thing is that, after some new data was introduced to the algorithm, I get this message: Exception in thread "AWT-EventQueue-0" java.lang.StackOverflowError. I have looked through different questions about this, and it seems the error is caused by running out of stack memory. Could anyone help me solve this problem?
Here is the recursive algorithm:
public boolean checkduration(Node node, double dur, int freq) {
    boolean outcome = true;
    currentPath.add(node);
    // stop condition: record the failing path and backtrack
    if ((dur > minduration) && (node.getRep() <= node.getMaxRep())) {
        ArrayList<Node> clone = (ArrayList<Node>) currentPath.clone();
        currentPath2 = clone;
        failingPaths.add(new ImmutableTriple<Double, Integer, ArrayList<Node>>(dur, freq, currentPath2));
        currentPath.remove(node);
        return false;
    }
    node.setRep(node.getRep() + 1);
    // recurse into every incoming edge, accumulating duration and frequency
    for (int i = 0; i < node.getEdge().size(); i++) {
        if (!checkduration(node.getEdge().get(i).previousNode,
                           dur + node.getEdge().get(i).timeRelation,
                           freq + node.getEdge().get(i).frequency)) {
            outcome = false;
        }
    }
    currentPath.remove(node);
    node.setRep(node.getRep() - 1);
    return outcome;
}
The error seems to be raised at the recursive call, if(!checkduration(node.getEdge().get(i).previousNode, dur+node.getEdge().get(i).timeRelation, freq+node.getEdge().get(i).frequency)), but I do not understand why it works with some data and not always, since not much information has changed.
Any comments or suggestions would be truly helpful. Thanks to everyone.
The StackOverflowError occurs because you are recursing too many times. And if there are too many recursive calls, that means there is a flaw or an incorrect assumption in your terminating condition. Here is your terminating condition:
(dur > minduration) && (node.getRep() <= node.getMaxRep())
Since you have not provided enough information in your question for us to analyze your graph or nodes further, I would suggest you take a closer look at this terminating condition and make sure that every traversal of the graph will eventually fulfill it.
Using a debugger can also help you step through the traversal, see where the cycle occurs, and find out why it fails to satisfy the terminating condition.
Your graph traversal seems inefficient: you count repetitions per node (how many times you have visited it) and limit the effort with a maxRep bound.
All you need is a set of visited nodes, rather than incrementing a counter on each node when you visit it; see the sketch after these points.
Does your DAG have many roots? Is the absence of cycles guaranteed, or do you need to detect cycles?
Please also note that you don't need to clone the paths.
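A minimal sketch of that visited-set traversal (Node and getEdge() are assumed to match the question's code; Edge is a placeholder for whatever getEdge() actually returns):

import java.util.HashSet;
import java.util.Set;

// Cycle-safe DFS: each node is entered at most once.
boolean traverse(Node node, Set<Node> visited) {
    if (!visited.add(node)) {
        return true; // already seen: stop instead of recursing again
    }
    boolean outcome = true;
    for (Edge e : node.getEdge()) { // Edge is a placeholder type
        if (!traverse(e.previousNode, visited)) {
            outcome = false;
        }
    }
    return outcome;
}

Start it with traverse(root, new HashSet<Node>()); the set also doubles as cycle protection, since re-entering a node simply stops the recursion.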
I do not know how to approach the problem of finding out if two numbers are "similar"/close to each other, based on a dataset.
For example, to find out whether the value 1 is similar to the value 3, based on the dataset {1,2,3,4,5}.
What are some statistical approaches to solve this problem?
If I understand your question correctly, you can write a for loop over the sorted dataset. When you pass the first number, you set a boolean to true. While this boolean is true, you increase the 'similar-factor'. You end the loop with break when you pass the second number; a sketch follows.
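A rough sketch of that loop (assuming the dataset is sorted ascending and the first number does not come after the second; all names are illustrative):

// Counts how many dataset values you pass between the two numbers;
// a smaller 'similar-factor' means the numbers are closer in the data.
int similarFactor(int[] sortedData, int first, int second) {
    boolean passedFirst = false;
    int factor = 0;
    for (int v : sortedData) {
        if (v == first) passedFirst = true; // start counting here
        if (passedFirst) factor++;          // increase the similar-factor
        if (v == second) break;             // stop at the second number
    }
    return factor;
}

For the dataset {1,2,3,4,5} this gives similarFactor(data, 1, 3) == 3, while 1 vs 5 gives 5, so a larger value means less similar.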
In a scenario of re-solving a previously solved problem (with some new data, of course), it's typically impossible to re-assign a vehicle's very first assignment once it has been given. The driver is already on his way, and any new solution has to take into account that:
the job must remain his (can't be assigned to another vehicle)
the activity that's been assigned to him as the very first must remain so in future solutions
For the sake of simplicity, I'm using a single-vehicle scenario and only trying to impose the second bullet (i.e. ensure that a certain activity will be the first in the solution).
This is how I defined the constraint:
new HardActivityConstraint()
{
    @Override
    public ConstraintsStatus fulfilled(JobInsertionContext iFacts, TourActivity prevAct,
                                       TourActivity newAct, TourActivity nextAct,
                                       double prevActDepTime)
    {
        String locationId = newAct.getLocation().getId();
        // we want to make sure that any solution will have "C1" as its first activity
        boolean activityShouldBeFirst = locationId.equals("C1");
        boolean attemptingToInsertFirst = (prevAct instanceof Start);

        if (activityShouldBeFirst && !attemptingToInsertFirst)
            return ConstraintsStatus.NOT_FULFILLED_BREAK;
        if (!activityShouldBeFirst && attemptingToInsertFirst)
            return ConstraintsStatus.NOT_FULFILLED;
        return ConstraintsStatus.FULFILLED;
    }
}
This is how I build the algorithm:
VehicleRoutingAlgorithmBuilder vraBuilder;
vraBuilder = new VehicleRoutingAlgorithmBuilder(vrpProblem, "schrimpf.xml");
vraBuilder.addCoreConstraints();
vraBuilder.addDefaultCostCalculators();
StateManager stateManager = new StateManager(vrpProblem);
ConstraintManager constraintManager = new ConstraintManager(vrpProblem, stateManager);
constraintManager.addConstraint(new HardActivityConstraint() { ... }, Priority.HIGH);
vraBuilder.setStateAndConstraintManager(stateManager, constraintManager);
VehicleRoutingAlgorithm algorithm = vraBuilder.build();
The results are not good: I'm only getting solutions with a single job assigned (the one with the required activity). In debug it's clear that the job-insertion iterations consider many viable options that appear to solve the problem entirely, but in the end the best solution returned by the algorithm doesn't include the other jobs.
UPDATE: even more surprising is that when I use the constraint in scenarios with over 5 vehicles, it works fine (the worst results are with 1 vehicle).
I'll gladly attach more information if needed.
Thanks
Zach
First, you can use initial routes to ensure that certain jobs are assigned to specific vehicles right from the beginning (see example).
Second, to ensure that no activity will be inserted between the start and your initial job location (e.g. "C1" in your example), you need to prohibit it the way you defined your HardActivityConstraint; just modify it so that a newAct can never sit between prevAct = Start and nextAct = act(C1), as sketched below.
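A sketch of that modification, reusing the fulfilled signature from your own constraint (the null check is merely defensive, not something jsprit requires):

new HardActivityConstraint()
{
    @Override
    public ConstraintsStatus fulfilled(JobInsertionContext iFacts, TourActivity prevAct,
                                       TourActivity newAct, TourActivity nextAct,
                                       double prevActDepTime)
    {
        boolean nextIsC1 = nextAct.getLocation() != null
                && "C1".equals(nextAct.getLocation().getId());
        // nothing may be squeezed in between Start and the activity at "C1"
        if (prevAct instanceof Start && nextIsC1)
            return ConstraintsStatus.NOT_FULFILLED;
        return ConstraintsStatus.FULFILLED;
    }
}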
Third, with regard to your update, keep in mind that the essence of the algorithm is to ruin part of the solution (remove a number of jobs) and recreate the solution again (insert the unassigned jobs). Currently, the schrimpf algorithm ruins a number of jobs relative to the total number of jobs, i.e. noJobs = 0.5 * totalNoJobs for the random ruin and 0.3 * totalNoJobs for the radial ruin. If your problem is very small, the share of jobs to be removed might not be sufficient. This is going to change with the next release, where you can use an out-of-the-box algorithm that defines an absolute minimum number of jobs to be removed. For the time being, modify the shares in your algorithmConfig.xml.
1. What data structure to use for electric circuit representation
    for Kirchhoff Rules computation purposes
    how to differentiate between different types of electric components
    how to 'recognize' wire inter-connections between them
2. How to implement Kirchhoff Rules
    how to obtain current and voltage loops
    how to store and evaluate Kirchhoff equations
[original question text]
Specifically, how would the program recognize that something is in series or in parallel, and how would it differentiate between a battery, resistor, capacitor, inductor, etc.?
Java's an object-oriented language. Start thinking about how you'd model your system as objects.
You have a few object candidates already:
Battery
Resistor
Capacitor
Inductor
These would have input and output nodes. The output from one is the input to the next.
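For instance, a minimal sketch of that object model might look like this (every name here is illustrative, not from any library):

// A node is a connection point; each two-terminal component links two of them.
class Node {
    final String id;
    Node(String id) { this.id = id; }
}

abstract class Component {
    Node input, output; // the output of one is the input to the next
    abstract double currentThrough(double voltageAcross);
}

class Resistor extends Component {
    final double ohms;
    Resistor(double ohms) { this.ohms = ohms; }
    @Override
    double currentThrough(double v) { return v / ohms; } // Ohm's law: I = V / R
}

Capacitors and inductors would need time (or frequency) in that signature, which is exactly where the transient questions below come in.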
What about transistors? You'll have more than one input. What then? Those are non-linear. How do you model those?
You'll build in the proper behavior for each one and wire them together.
You'll have some kind of transient forcing function here. Input current or voltage waveforms. Output is current and voltage at each node versus time.
This is the electrical engineer's equivalent of finite element analysis.
These are really transient ODEs, right? How do you plan to solve them? Numerical integration?
I agree with duffymo's answer; just some things to add (I am C++ friendly, so I'll stick to it).
First, some data structures to represent components:
struct pin
{
    char name[8];        // name id for pin ("C","B","E", ...)
    int part_ix, pin_ix; // connected to parts[part_ix].pins[pin_ix]
    double i, u;         // actual current, voltage
    int direction;       // in, out, bidirectional
};

struct part
{
    char name[32];       // name id for part ("resistor","diode", ...)
    pin pins[n];         // the part's n pins (resistor has 2, transistor has 3, ...)
    // here add all the values you need for simulation, like:
    double R, H21E;      // ... and so on
    // or even better use a matrix, so that multiplying it by the input currents
    // and voltages of every pin yields the correct currents and voltages:
    double m[n][n + n];
};
You can also store a list of pin connections instead of part_ix, pin_ix to save some processing time.
circuit
    part parts[];
This is a simple dynamic list of components; the interconnections are contained inside it.
loops
You have to extract the closed circuit loops from the interconnections for the current equations, and get the nodes that connect current loops for the voltage equations. This leads you to a system of equations. Nodes have more than 2 connections, and closed current loops are just sequences of connections that lead back to themselves. Look here:
https://stackoverflow.com/a/21884021/2521214
It is one of my answers, where part of the code finds closed loops.
evaluation
You can use Gauss elimination for that. Problematic are the non-linear components like diodes and transistors, so you may need to add more matrices (approximate with a polynomial of higher degree), which you then multiply by all the currents and voltages raised to the powers (0,1,2,3,...). I think ^3 will be enough for most components, and do not forget that some non-linear components also need to remember their state (like the last current, voltage, ...).
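For the linear part, the elimination itself is straightforward; here is a plain Gauss elimination sketch for solving the assembled Kirchhoff system A x = b (shown in Java to match the rest of this page; transliterating it to C++ is mechanical):

// Solves a * x = b in place by Gaussian elimination with partial pivoting.
static double[] solve(double[][] a, double[] b) {
    int n = b.length;
    for (int col = 0; col < n; col++) {
        // partial pivoting: bring the row with the largest pivot up
        int pivot = col;
        for (int r = col + 1; r < n; r++)
            if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
        double[] tmpRow = a[col]; a[col] = a[pivot]; a[pivot] = tmpRow;
        double tmp = b[col]; b[col] = b[pivot]; b[pivot] = tmp;
        // eliminate the column below the pivot
        for (int r = col + 1; r < n; r++) {
            double f = a[r][col] / a[col][col];
            for (int c = col; c < n; c++) a[r][c] -= f * a[col][c];
            b[r] -= f * b[col];
        }
    }
    // back substitution
    double[] x = new double[n];
    for (int r = n - 1; r >= 0; r--) {
        double s = b[r];
        for (int c = r + 1; c < n; c++) s -= a[r][c] * x[c];
        x[r] = s / a[r][r];
    }
    return x;
}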
Also, sometimes it is better to use symbolic expressions instead of the matrix approach, but for that you will need an expression evaluation engine. I use this approach a lot for self-resizing geometry in CAD/CAM meshes.
Suppose I input some dataset to WEKA and set a normalization filter for the attributes so that the values lie between 0 and 1. Then suppose the normalization is done by dividing by the maximum value, and the model is built. What happens if I deploy the model and one of the new instances to be classified has a feature value larger than the maximum in the training set? How is such a situation handled? Does it just take 1, does it take more than 1, or does it throw an exception?
The documentation doesn't specify this for filters in general, so it must depend on the filter. I looked at the source code of weka.filters.unsupervised.attribute.Normalize, which I assume you are using, and I don't see any bounds checking in it.
The actual scaling code is in the Normalize.convertInstance() method:
value = (vals[j] - m_MinArray[j]) / (m_MaxArray[j] - m_MinArray[j])
* m_Scale + m_Translation;
Barring any (unlikely) additional checks outside this method, I'd say that it will scale to a value greater than 1 in the situation you describe. To be 100% sure, your best bet is to write a test case, invoke the filter yourself, and find out; a sketch follows. With libraries that haven't specified their behaviour in the Javadoc, you never know what the next release will do. So if you depend heavily on a particular behaviour, it's not a bad idea to write an automated test that regression-tests the behaviour of the library.
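Such a test case might look like this (weka 3.7+ API; the training values are arbitrary and only serve to pin min = 0 and max = 10):

import java.util.ArrayList;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Normalize;

public class NormalizeOutOfRangeTest {
    public static void main(String[] args) throws Exception {
        // one numeric attribute; the training batch fixes min = 0, max = 10
        ArrayList<Attribute> attrs = new ArrayList<Attribute>();
        attrs.add(new Attribute("x"));
        Instances train = new Instances("train", attrs, 2);
        train.add(new DenseInstance(1.0, new double[] { 0 }));
        train.add(new DenseInstance(1.0, new double[] { 10 }));

        Normalize norm = new Normalize();
        norm.setInputFormat(train);
        Filter.useFilter(train, norm); // the first batch determines min/max

        // now feed a value above the training maximum and see what comes out
        Instance big = new DenseInstance(1.0, new double[] { 20 });
        big.setDataset(train);
        norm.input(big);
        System.out.println(norm.output()); // the formula above predicts 2.0
    }
}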
I had the same question as you. I did as follows, and maybe this method can help you:
I suppose you use weka.filters.unsupervised.attribute.Normalize to normalize your data.
As Erwin Bolwidt said, WEKA uses
value = (vals[j] - m_MinArray[j]) / (m_MaxArray[j] - m_MinArray[j])
* m_Scale + m_Translation;
to normalize your attribute.
Don't forget that the Normalize class has these two methods:
public double[] getMinArray()
public double[] getMaxArray()
which return the calculated minimum/maximum values for the attributes in the data.
So you can store the minimum/maximum values, and then use the formula to normalize your data yourself, as sketched below.
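A sketch of that manual scaling (assuming the filter's defaults of scale = 1 and translation = 0; norm is a Normalize that has already been applied to the training data, and newInstance is a hypothetical instance to scale):

// Normalize a fresh instance by hand with the stored training min/max.
double[] min = norm.getMinArray();
double[] max = norm.getMaxArray();
double[] vals = newInstance.toDoubleArray();
for (int j = 0; j < vals.length; j++) {
    // skip attributes the filter left untouched (non-numeric or constant)
    if (!Double.isNaN(min[j]) && max[j] > min[j]) {
        vals[j] = (vals[j] - min[j]) / (max[j] - min[j]);
    }
}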
Remember that you can set the attributes on the Instance class, and you can classify your result with Evaluation.evaluationForSingleInstance.
I'll give you the link later; may this help you.
Thank you