Using OptaPlanner to solve VRPTWPD - Java

I'm new to OptaPlanner, and am hoping to use it to solve the VRPTW problem with pickups and deliveries (VRPTWPD).
I started by taking the VRPTW code from the examples repo. I am trying to add to it to solve my problem. However, I'm unable to return a solution that honors the precedence/vehicle constraints (pickups must be done before deliveries, and both must be done by the same vehicle).
Instead, I consistently get back a solution that violates these constraints, with a hard score that matches what I would expect for such a solution (i.e. I can add up all the violations in a small sample problem and see that the hard score matches the penalties I assigned for those violations).
The first approach I tried was following the steps outlined by Geoffrey De Smet here - https://stackoverflow.com/a/19087210/351400
Each Customer has a variable customerType that describes whether it is a pickup (PU) or a delivery (DO). It also has a variable called parcelId that indicates which parcel is either being picked up or delivered.
I added a shadow variable to the Customer named parcelIdsOnboard. This is a HashSet that holds all the parcelIds that the driver has with him when he visits a given Customer.
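For reference, the shadow variable declaration on the planning entity would look roughly like this (a sketch assuming the OptaPlanner 6.x @CustomShadowVariable API; the listener class name is just an illustrative name for the listener described above):
// On TimeWindowedCustomer (sketch): recomputed by the listener whenever the chain changes
@CustomShadowVariable(variableListenerClass = ParcelIdsOnboardUpdatingVariableListener.class,
        sources = {@CustomShadowVariable.Source(variableName = "previousStandstill")})
public Set<Integer> getParcelIdsOnboard() {
    return parcelIdsOnboard;
}

public void setParcelIdsOnboard(Set<Integer> parcelIdsOnboard) {
    this.parcelIdsOnboard = parcelIdsOnboard;
}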
My VariableListener that keeps parcelIdsOnboard updated looks like this:
public void afterEntityAdded(ScoreDirector scoreDirector, Customer customer) {
    if (customer instanceof TimeWindowedCustomer) {
        updateParcelsOnboard(scoreDirector, (TimeWindowedCustomer) customer);
    }
}

public void afterVariableChanged(ScoreDirector scoreDirector, Customer customer) {
    if (customer instanceof TimeWindowedCustomer) {
        updateParcelsOnboard(scoreDirector, (TimeWindowedCustomer) customer);
    }
}

protected void updateParcelsOnboard(ScoreDirector scoreDirector, TimeWindowedCustomer sourceCustomer) {
    Standstill previousStandstill = sourceCustomer.getPreviousStandstill();
    Set<Integer> parcelIdsOnboard = (previousStandstill instanceof TimeWindowedCustomer)
            ? new HashSet<Integer>(((TimeWindowedCustomer) previousStandstill).getParcelIdsOnboard())
            : new HashSet<Integer>();
    TimeWindowedCustomer shadowCustomer = sourceCustomer;
    while (shadowCustomer != null) {
        updateParcelIdsOnboard(parcelIdsOnboard, shadowCustomer);
        scoreDirector.beforeVariableChanged(shadowCustomer, "parcelIdsOnboard");
        shadowCustomer.setParcelIdsOnboard(parcelIdsOnboard);
        scoreDirector.afterVariableChanged(shadowCustomer, "parcelIdsOnboard");
        shadowCustomer = shadowCustomer.getNextCustomer();
    }
}

private void updateParcelIdsOnboard(Set<Integer> parcelIdsOnboard, TimeWindowedCustomer customer) {
    if (customer.getCustomerType() == Customer.PICKUP) {
        parcelIdsOnboard.add(customer.getParcelId());
    } else if (customer.getCustomerType() == Customer.DELIVERY) {
        parcelIdsOnboard.remove(customer.getParcelId());
    } else {
        // TODO: throw an assertion
    }
}
I then added the following Drools rule:
rule "pickupBeforeDropoff"
when
TimeWindowedCustomer((customerType == Customer.DELIVERY) && !(parcelIdsOnboard.contains(parcelId)));
then
System.out.println("precedence violated");
scoreHolder.addHardConstraintMatch(kcontext, -1000);
end
For my example problem I create a total of 6 Customer objects (3 PICKUPS and 3 DELIVERIES). My fleet size is 12 vehicles.
When I run this I consistently get a hard score of -3000 which matches my output where I see two vehicles being used. One vehicle does all the PICKUPS and one vehicle does all the DELIVERIES.
The second approach I used was to give each Customer a reference to its counterpart Customer object (e.g. the PICKUP Customer for parcel 1 has a reference to the DELIVERY Customer for parcel 1 and vice versa).
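Concretely, that cross-reference is just an extra (non-planning) field on the customer; a minimal sketch, with the getter named as it is used in the rules and filters below:
// On TimeWindowedCustomer (sketch): links the PICKUP and DELIVERY of the same parcelId
private TimeWindowedCustomer counterpartCustomer;

public TimeWindowedCustomer getCounterpartCustomer() {
    return counterpartCustomer;
}

public void setCounterpartCustomer(TimeWindowedCustomer counterpartCustomer) {
    this.counterpartCustomer = counterpartCustomer;
}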
I then implemented the following rule to enforce that the parcels be in the same vehicle (note: does not fully implement precedence constraint).
rule "pudoInSameVehicle"
when
TimeWindowedCustomer(vehicle != null && counterpartCustomer.getVehicle() != null && (vehicle != counterpartCustomer.getVehicle()));
then
scoreHolder.addHardConstraintMatch(kcontext, -1000);
end
For the same sample problem this consistently gives a score of -3000 and an identical solution to the one above.
I've tried running both rules in FULL_ASSERT mode. The rule using parcelIdsOnboard does not trigger any exceptions. However, the rule "pudoInSameVehicle" does trigger the following exception (which is not triggered in FAST_ASSERT mode).
The corrupted scoreDirector has no ConstraintMatch(s) which are in excess.
The corrupted scoreDirector has 1 ConstraintMatch(s) which are missing:
I'm not sure why this is corrupted, any suggestions would be much appreciated.
It's interesting that both of these methodologies are producing the same (incorrect) solution. I'm hoping someone will have some suggestions on what to try next. Thanks!
UPDATE:
After diving into the asserts that were being triggered in FULL_ASSERT mode I realized that the problem was with the dependent nature of the PICKUP and DELIVERY Customers. That is, if you make a move that removes the hard penalty on a DELIVERY Customer you also have to remove the penalty associated with the PICKUP Customer. In order to keep these in sync I updated my VehicleUpdatingVariableListener and my ArrivalTimeUpdatingVariableListener to trigger score calculation callbacks on both Customer objects. Here's the updateVehicle method after updating it to trigger score calculation on both the Customer that was just moved and the counterpart Customer.
protected void updateVehicle(ScoreDirector scoreDirector, TimeWindowedCustomer sourceCustomer) {
    Standstill previousStandstill = sourceCustomer.getPreviousStandstill();
    Integer departureTime = (previousStandstill instanceof TimeWindowedCustomer)
            ? ((TimeWindowedCustomer) previousStandstill).getDepartureTime() : null;
    TimeWindowedCustomer shadowCustomer = sourceCustomer;
    Integer arrivalTime = calculateArrivalTime(shadowCustomer, departureTime);
    while (shadowCustomer != null && ObjectUtils.notEqual(shadowCustomer.getArrivalTime(), arrivalTime)) {
        scoreDirector.beforeVariableChanged(shadowCustomer, "arrivalTime");
        scoreDirector.beforeVariableChanged(shadowCustomer.getCounterpartCustomer(), "arrivalTime");
        shadowCustomer.setArrivalTime(arrivalTime);
        scoreDirector.afterVariableChanged(shadowCustomer, "arrivalTime");
        scoreDirector.afterVariableChanged(shadowCustomer.getCounterpartCustomer(), "arrivalTime");
        departureTime = shadowCustomer.getDepartureTime();
        shadowCustomer = shadowCustomer.getNextCustomer();
        arrivalTime = calculateArrivalTime(shadowCustomer, departureTime);
    }
}
This solved the score corruption issue I had with my second approach, and, on a small sample problem, produced a solution that satisfied all the hard constraints (i.e. the solution had a hard score of 0).
I next tried to run a larger problem (~380 Customers), but the solutions are returning very poor hard scores. I tried searching for a solution for 1 min, 5 mins, and 15 mins. The score seems to improve linearly with runtime. But, at 15 minutes, the solution is still so bad that it seems like it would need to run for at least an hour to produce a viable solution.
I need this to run in 5-10 minutes at the most.
I learned about Filter Selection. My understanding is that you can run a function to check whether the move you are about to make would result in breaking a built in hard constraint, and if it would, then this move is skipped.
This means that you don't have to re-run score calculations or explore branches that you know will not be fruitful. For example, in my problem I don't want you to ever be able to move a Customer to a Vehicle unless its counterpart is assigned to that Vehicle or not assigned a Vehicle at all.
Here is the filter I implemented to check for that. It only runs for ChangeMoves, but I suspect I need to implement a similar function for SwapMoves as well.
public class PrecedenceFilterChangeMove implements SelectionFilter<ChangeMove> {
    @Override
    public boolean accept(ScoreDirector scoreDirector, ChangeMove selection) {
        TimeWindowedCustomer customer = (TimeWindowedCustomer) selection.getEntity();
        if (customer.getCustomerType() == Customer.DELIVERY) {
            if (customer.getCounterpartCustomer().getVehicle() == null) {
                return true;
            }
            return customer.getVehicle() == customer.getCounterpartCustomer().getVehicle();
        }
        return true;
    }
}
Adding this filter immediately led to worse scores. That makes me think I have implemented the function incorrectly, though it's not clear to me why it is incorrect.
Update 2:
A co-worker pointed out the problem with my PrecedenceFilterChangeMove. The correct version is below. I've also included PrecedenceFilterSwapMove implementation. Together, these have enabled me to find a solution to the problem where no hard constraints are violated in ~10 minutes. There are a couple of other optimizations I think I might be able to make to reduce this even further.
I will post another update if those changes are fruitful. I'd still love to hear from someone in the optaplanner community about my approach and whether they think there are better ways to model this problem!
PrecedenceFilterChangeMove
@Override
public boolean accept(ScoreDirector scoreDirector, ChangeMove selection) {
    TimeWindowedCustomer customer = (TimeWindowedCustomer) selection.getEntity();
    if (customer.getCustomerType() == Customer.DELIVERY) {
        if (customer.getCounterpartCustomer().getVehicle() == null) {
            return true;
        }
        return selection.getToPlanningValue() == customer.getCounterpartCustomer().getVehicle();
    }
    return true;
}
PrecedenceFilterSwapMove
@Override
public boolean accept(ScoreDirector scoreDirector, SwapMove selection) {
    TimeWindowedCustomer leftCustomer = (TimeWindowedCustomer) selection.getLeftEntity();
    TimeWindowedCustomer rightCustomer = (TimeWindowedCustomer) selection.getRightEntity();
    if (rightCustomer.getCustomerType() == Customer.DELIVERY || leftCustomer.getCustomerType() == Customer.DELIVERY) {
        return rightCustomer.getVehicle() == leftCustomer.getCounterpartCustomer().getVehicle() ||
                leftCustomer.getVehicle() == rightCustomer.getCounterpartCustomer().getVehicle();
    }
    return true;
}

There's mixed pickup and delivery VRP experimental code here, which works. We don't have a polished out-of-the-box example yet, but it's on the long-term roadmap.

Related

Should I use Java String Pool for synchronization based on unique customer id?

We have server APIs supporting clients running on ten million devices. Normally a client calls the server once a day, which works out to about 116 clients per second. Each client (each with a unique ID) may make several API calls concurrently, and the server then needs to sequence the API calls coming from the same client, because those calls update the same document in the MongoDB database (for example: last-seen time and other embedded documents).
Therefore, I need a synchronization mechanism based on the client's unique ID. After some research, I found that the String pool is appealing and easy to implement. But someone commented that locking on String pool instances may conflict with other libraries/modules that also use the pool, and that the String pool should therefore never be used for synchronization. Is that statement true? Or should I implement my own "String pool" backed by a WeakHashMap, as mentioned in the link below?
Good explanation of String Pool implementation in Java:
http://java-performance.info/string-intern-in-java-6-7-8/
Article stating the String pool should not be used for synchronization:
http://www.journaldev.com/1061/thread-safety-in-java
==================================
Thanks for BeeOnRope's suggestion; I will use Guava's Interner to explain the solution. This way, clients that don't send multiple requests at the same time will not be blocked. In addition, it guarantees that only one API request from a given client is processed at a time. By the way, we need to use a wrapper class, since it's a bad idea to lock on a String object, as explained by BeeOnRope and the link he provided in his answer.
public class Key {
    private String id;

    public Key(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((id == null) ? 0 : id.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        Key other = (Key) obj;
        if (id == null) {
            if (other.id != null) return false;
        } else if (!id.equals(other.id)) return false;
        return true;
    }
}
Interner<Key> myIdInterner = Interners.newWeakInterner();

public void processApi1(String clientUniqueId, RequestType1 request) {
    synchronized (myIdInterner.intern(new Key(clientUniqueId))) {
        // code to process request
    }
}

public void processApi2(String clientUniqueId, RequestType2 request) {
    synchronized (myIdInterner.intern(new Key(clientUniqueId))) {
        // code to process request
    }
}
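As a quick sanity check of the pattern (illustrative only): equal Keys intern to the same canonical instance, so two concurrent requests from the same client contend on one monitor, while different clients never block each other.
Interner<Key> interner = Interners.newWeakInterner();
Object lockA = interner.intern(new Key("client-42"));
Object lockB = interner.intern(new Key("client-42"));
Object lockC = interner.intern(new Key("client-43"));
// lockA == lockB  -> same client, same monitor
// lockA != lockC  -> different clients, independent monitors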
Well, if your strings are unique enough (e.g., generated via a cryptographic hash[1]), synchronizing on client IDs will probably work, as long as you call String.intern() on them first. Since the IDs are unique, you aren't likely to run into conflicts with other modules, unless you happen to pass your IDs in to them and they follow the bad practice of locking on them.
That said, it is probably a bad idea. In addition to the small chance of one day running into unnecessary contention if someone else locks on the same String instance, the main problem is that you have to intern() all your String objects, and this often suffers from poor performance because of the native implementation of the string intern table, its fixed size, etc. If you really need to lock based only on a String, you are better off using Guava's Interners.newWeakInterner() implementation, which is likely to perform much better. Wrap your string in another class to avoid clashing on the built-in String lock. More details on that approach in this answer.
Besides that, there is often another natural object to lock on, such as a lock in a session object, etc.
This is quite similar to this question which has more fleshed out answers.
[1] ... or, at a minimum, have enough bits to make collisions sufficiently unlikely, and provided your client IDs aren't part of your attack surface.

Titan Cassandra - Ghost Vertices and Inconsistent Read Behavior Until Restart

Deleting vertices from Titan leads to inconsistent read behavior. I'm testing this on a single machine running Cassandra, here's my conf.properties:
storage.backend=cassandra
storage.hostname=localhost
storage.cassandra.keyspace=test
The following method deletes the appropriate vertex:
public void deleteProfile(String uuid, String puuid) {
    for (Person person : this.graph.getVertices("uuid", uuid, Person.class)) {
        if (person != null) {
            for (Profile profile : this.graph.getVertices("uuid", puuid, Profile.class)) {
                person.removeProfile(profile);
                graph.removeVertex(profile.asVertex());
            }
        }
    }
    this.graph.getBaseGraph().commit();
}
When the following method gets called it returns two different sets of results:
public Iterable<ProfileImpl> getProfiles(String uuid) {
    List<ProfileImpl> profiles = new ArrayList<>();
    for (Person person : this.graph.getVertices("uuid", uuid, Person.class)) {
        if (person != null) {
            for (Profile profile : person.getProfiles()) {
                profiles.add(profile.toImpl());
            }
        }
    }
    return profiles;
}
One result is as expected - it does not contain the deleted profile. However, when I run it enough times, it sometimes contains one extra profile - the one that was deleted.
Attempting to delete the same vertex again shows that no vertex exists with that uuid; the iterator's hasNext() returns false.
After the program is restarted, however, it never returns the deleted vertex. How can I fix this inconsistent behavior?
The problem is that on some threads, transactions had been opened for the graph already. Reading from the graph opens up a transaction, even if nothing is changed. These transactions need to be closed in order to ensure that the behavior is consistent.
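For example, reusing getProfiles from the question, the read transaction can be ended in a finally block (a sketch; it assumes the framed graph exposes the underlying Titan graph's rollback() the same way it exposes commit()):
public Iterable<ProfileImpl> getProfiles(String uuid) {
    List<ProfileImpl> profiles = new ArrayList<>();
    try {
        for (Person person : this.graph.getVertices("uuid", uuid, Person.class)) {
            if (person != null) {
                for (Profile profile : person.getProfiles()) {
                    profiles.add(profile.toImpl());
                }
            }
        }
    } finally {
        // end the read-only transaction so this thread later sees committed deletes
        // (rollback is enough here, since nothing was changed)
        this.graph.getBaseGraph().rollback();
    }
    return profiles;
}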
According to http://s3.thinkaurelius.com/docs/titan/0.9.0-M2/tx.html#tx-config, you should set checkInternalVertexExistence.

Cyclomatic Complexity, joining conditions and readability

Consider the following method (in Java - and please just ignore the content):
public boolean equals(Object object) {
    if (this == object) {
        return true;
    }
    if (object == null) {
        return false;
    }
    if (getClass() != object.getClass()) {
        return false;
    }
    if (hashCode() != object.hashCode()) {
        return false;
    }
    return true;
}
I have a plugin that calculates eV(g) = 5 and V(g) = 5 - that is, it calculates essential and common CC.
Now, we can write the above method as:
public boolean equals2(Object object) {
    if (this == object) {
        return true;
    }
    if (object == null || getClass() != object.getClass()) {
        return false;
    }
    return hashCode() == object.hashCode();
}
and this plugin calculates eV(g)=3 and V(g)=3.
But as I understand CC, the values should be the same! CC is not about counting lines of code, but about counting independent paths. Therefore, joining two ifs into one line does not really reduce CC. In fact, it can only make things less readable.
Am I right?
EDIT
Forgot to share this small, convenient table for calculating CC quickly (a worked application to the two methods above follows the list). Start with an initial (default) value of one (1). Add one (1) for each occurrence of each of the following:
if statement
while statement
for statement
case statement
catch statement
&& and || boolean operations
?: ternary operator and ?: Elvis operator.
?. null-check operator
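Applying the table to the two methods above (the worked count referenced before the list):
equals():  1 (base) + 4 if statements                 = 5
equals2(): 1 (base) + 2 if statements + 1 || operator = 4
So by this table the reduction would be 5 -> 4; a tool that reports 3 is presumably treating the merged null/class check as a single decision (as one of the answers below suggests).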
EDIT 2
I proved that my plugin is not working well, since when I inline everything in one line:
public boolean equals(Object object) {
return this == object || object != null && getClass() == object.getClass() && hashCode() == object.hashCode();
}
it returns CC == 1, which is clearly wrong. Anyway, the question remains: is CC reduced
[A] 5 -> 4, or
[B] 4 -> 3
?
Long story short...
Your approach is a good way to calculate CC; you just need to decide what you really want to do with it, and modify it accordingly if needed.
For your second example, both CC=3 and CC=5 seem to be good.
The long story...
There are many different ways to calculate CC. You need to decide what is your purpose, and you need to know what are the limitations of your analysis.
The original definition from McCabe is actually the cyclomatic complexity (from graph theory) of the control flow graph. To calculate that one, you need to have a control flow graph, which might require a more precise analysis than your current one.
Static analyzers want to calculate metrics fast, so they do not analyze the control flow, but they calculate a complexity metric that is, say, close to it. As a result, there are several approaches...
For example, you can read a discussion about the CC metric of SonarQube here or another example how SourceMeter calculates McCC here.
What these tools have in common is that they count conditional statements, just like you do.
But these metrics won't always be equal to the number of independent execution paths... at least they give a good estimation.
Two different ways to calculate CC (McCabe and Myers' extension):
V_1(g) = number of decision nodes + 1
V_2(g) = number of simple predicates in decision nodes + 1
If your goal is to estimate the number of test cases, V2 is the one for you. But, if you want to have a measure for code comprehension (e.g. you want to identify methods that are hard to maintain and should be simplified in the code), V1 is easier to calculate and enough for you.
In addition, static analyzers measure a number of additional complexity metrics too (e.g. Nesting Level).
Converting this
if (hashCode() != object.hashCode()) {
    return false;
}
return true;
to this
return hashCode() == object.hashCode();
obviously reduces CC by one, even by your quick table. There is only one path through the second version.
For the other case, while we can't know exactly how your plugin calculates those figures, it is reasonable to guess that it is treating if (object == null || getClass() != object.getClass()) as "if a non-null object's class matches then ...", which is a single check and thus adds just one to CC. I would consider that a reasonable shortcut since null checks can be rolled up into "real" checks very easily, even within the human brain.
My opinion is that the main aim of a CC-calculating IDE plugin should be to encourage you to make your code more maintainable by others. While there is a bug in the plugin (that inlined single-line conditional is not particularly maintainable), the general idea of rewarding a developer by giving them a better score for more readable code is laudable, even if it is slightly incorrect.
As to your final question: CC is 5 if you strictly consider logical paths; 4 if you consider cases you should consider writing unit tests for; and 3 if you consider how easy it is for someone else to quickly read and understand your code.
In the second method
return hashCode() == object.hashCode(); costs 0, so you win 1. It's considered a calculation, not a logical branch.
But for the first method I don't know why it costs 5; I calculate 4.
As far as style is concerned, I consider the following the most readable:
public boolean equals(Object object) {
    return this == object || (object != null && eq(this, object));
}

private static boolean eq(Object x, Object y) {
    return x.getClass() == y.getClass()
            && x.hashCode() == y.hashCode(); // safe because we have perfect hashing
}
In practice, it may not be right to exclude subclasses from being equal, and generally one can not assume that equal hash codes imply equal objects ... therefore, I'd rather write something like:
public boolean equals(Object object) {
    return this == object || (object instanceof MyType && eq(this, (MyType) object));
}

public static boolean eq(MyType x, MyType y) {
    return x.id.equals(y.id);
}
This is shorter, clearer in intent, just as extensible and efficient as your code, and has a lower cyclomatic complexity (logical operators are not commonly considered branches for counting cyclomatic complexity).

Java Server Client, shared variable between threads

I am working on a project to create a simple auction server that multiple clients connect to. The server class implements Runnable and so creates a new thread for each client that connects.
I am trying to have the current highest bid stored in a variable that can be seen by each client. I found answers saying to use AtomicInteger, but when I used it with methods such as atomicVariable.intValue() I got NullPointerException errors.
What ways can I manipulate the AtomicInteger without getting this error, or is there another way to have a shared variable that is relatively simple?
Any help would be appreciated, thanks.
Update
I have the AtomicInteger working. The problem now is that only the most recent client to connect to the server seems to be able to interact with it. The other clients just sort of freeze.
Would I be correct in saying this is a problem with locking?
Well, most likely you forgot to initialize it:
private final AtomicInteger highestBid = new AtomicInteger();
However working with highestBid requires a great deal of knowledge to get it right without any locking. For example if you want to update it with new highest bid:
public boolean saveIfHighest(int bid) {
    int currentBid = highestBid.get();
    while (currentBid < bid) {
        if (highestBid.compareAndSet(currentBid, bid)) {
            return true;
        }
        currentBid = highestBid.get();
    }
    return false;
}
or in a more compact way:
for (int currentBid = highestBid.get(); currentBid < bid; currentBid = highestBid.get()) {
    if (highestBid.compareAndSet(currentBid, bid)) {
        return true;
    }
}
return false;
You might wonder why it is so hard. Imagine two threads (requests) bidding at the same time. The current highest bid is 10. One is bidding 11, the other 12. Both threads compare the current highestBid and see that their bid is bigger. Now the second thread happens to go first and updates it to 12. Unfortunately the first request then steps in and reverts it to 11 (because it already checked the condition).
This is a typical race condition that you can avoid either by explicit synchronization or by using atomic variables with implicit compare-and-set low-level support.
Seeing the complexity introduced by the much more performant lock-free atomic integer, you might want to resort to classic synchronization:
public synchronized boolean saveIfHighest(int bid) {
    if (highestBid < bid) {
        highestBid = bid;
        return true;
    } else {
        return false;
    }
}
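For context, a per-client handler thread would then simply call this method on the shared auction object when a bid arrives (illustrative usage; the variable names are made up):
// inside the per-client handler thread
int bidFromClient = Integer.parseInt(line.trim()); // 'line' was read from the client's socket
if (auction.saveIfHighest(bidFromClient)) {
    out.println("Accepted: you are now the highest bidder");
} else {
    out.println("Rejected: your bid is not higher than the current highest");
}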
I wouldn't look at the problem like that. I would simply store all the bids in a ConcurrentSkipListSet, which is a thread-safe SortedSet. With the correct implementation of compareTo(), which determines the ordering, the first element of the Set will automatically be the highest bid.
Here's some sample code:
public class Bid implements Comparable<Bid> {
    String user;
    int amountInCents;
    Date created;

    @Override
    public int compareTo(Bid o) {
        if (amountInCents == o.amountInCents) {
            return created.compareTo(o.created); // earlier bids sort first
        }
        return o.amountInCents - amountInCents; // larger bids sort first
    }
}
public class Auction {
    private SortedSet<Bid> bids = new ConcurrentSkipListSet<Bid>();

    public Bid getHighestBid() {
        return bids.isEmpty() ? null : bids.first();
    }

    public void addBid(Bid bid) {
        bids.add(bid);
    }
}
Doing this has the following advantages:
Automatically provides a bidding history
Allows a simple way to save any other bid info you need
You could also consider this method:
/**
 * @param bid
 * @return true if the bid was successful
 */
public boolean makeBid(Bid bid) {
    if (bids.isEmpty()) {
        bids.add(bid);
        return true;
    }
    if (bid.compareTo(bids.first()) >= 0) { // not strictly higher than the current highest bid
        return false;
    }
    bids.add(bid);
    return true;
}
Using an AtomicInteger is fine, provided you initialise it as Tomasz has suggested.
What you might like to think about, however, is whether all you will literally ever need to store is just the highest bid as an integer. Will you never need to store associated information, such as the bidding time, user ID of the bidder etc? Because if at a later stage you do, you'll have to start undoing your AtomicInteger code and replacing it.
I would be tempted from the outset to set things up to store arbitrary information associated with the bid. For example, you can define a "Bid" class with the relevant field(s). Then on each bid, use an AtomicReference to store an instance of "Bid" with the relevant information. To be thread-safe, make all the fields on your Bid class final.
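A minimal sketch of that idea (the holder class and method names are made up; it assumes Bid is immutable and, for brevity, reads its amountInCents field directly, i.e. from the same package):
import java.util.concurrent.atomic.AtomicReference;

public final class HighestBidHolder {
    private final AtomicReference<Bid> highest = new AtomicReference<>();

    /** Returns true if candidate became the new highest bid. */
    public boolean offer(Bid candidate) {
        while (true) {
            Bid current = highest.get();
            if (current != null && current.amountInCents >= candidate.amountInCents) {
                return false; // an equal or higher bid is already recorded
            }
            if (highest.compareAndSet(current, candidate)) {
                return true; // we won the race to install the new highest bid
            }
            // lost a race with another thread: re-read and retry
        }
    }

    public Bid current() {
        return highest.get();
    }
}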
You could also consider using an explicit Lock (e.g. see the ReentrantLock class) to control access to the highest bid. As Tomasz mentions, even with an AtomicInteger (or AtomicReference: the logic is essentially the same) you need to be a little careful about how you access it. The atomic classes are really designed for cases where they are very frequently accessed (as in thousands of times per second, not every few minutes as on a typical auction site). They won't really give you any performance benefit here, and an explicit Lock object might be more intuitive to program with.

Java code PMD Complains about Cyclomatic Complexity , of 20

When I ran PMD on my Java code, one of the error messages it shows is:
"The class STWeb has a Cyclomatic Complexity of 20".
Typically my Java class looks like this:
public class STWeb implements STWebService {
    public String getData(RequestData request) {
        validate(request);
    }

    public boolean validate(Data[] formdata) {
        if (formdata.length == 1) {
            // do this
        } else if (formdata.length == 3) {
            // do this
        } else if (formdata.length == 4) {
            // do this
        } else if (formdata.length > 4) {
            // do this
        } else if (formdata.length == 2) {
            if (formdata[0].getName().equals("OIY")) {
            }
            // And many more if/else here
        }
    }
}
As you can see, per my business requirements I need to code the class with many ifs and else-ifs, which is why the cyclomatic complexity has increased. Please tell me what a feasible approach is, per the standard, for this.
Cyclomatic Complexity measurements shouldn't be used for quality control, but rather as an indicator/warning for bad code. You should focus more on the code behind it rather than the value of the CC itself.
Although you can reduce the complexity of the validate method by splitting it into smaller methods through refactoring, the class as a whole will still have the same CC.
As long as the code is readable and makes sense to the next person that has to look at it, then having a higher CC shouldn't matter so much.
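For illustration, splitting validate into smaller methods could look like this (a sketch; the handler method names are made up) - each piece gets a low per-method complexity, even though the class total stays roughly the same:
public boolean validate(Data[] formdata) {
    if (formdata.length == 2) {
        return validatePair(formdata);   // the big nested "OIY" block moves here
    }
    if (formdata.length > 4) {
        return validateLarge(formdata);
    }
    return validateSmall(formdata);      // lengths 0, 1, 3 and 4
}

private boolean validatePair(Data[] formdata) {
    // the many pair-specific if/else branches from the original method go here
    return true;
}

private boolean validateLarge(Data[] formdata) {
    return true;
}

private boolean validateSmall(Data[] formdata) {
    return true;
}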
It helps if you have something like this:
if (a) {
    return true;
} else if (b) {
    return true;
} else if (c) {
    return true;
} else {
    return false;
}
then, you replace it with this:
return a || b || c;
Just wanted to add that sometimes it's possible to resolve such problems with object or structure building. You could declare a wrapper class for the data that is supposed to be returned. But there are always cases where you can't apply this without bloating the code with tons of objects, which in turn also results in unreadable code.
EDIT: this SO-post is a [nice example with ENUMS]
Cyclomatic complexity seems to indicate the number of code paths that exist. So if your requirements say you must use many ifs and else-ifs, then you can ignore that message.
If this is mandatory - yes, this happens, even if it's futile - you can often reduce the class's cyclomatic complexity by introducing base classes and distributing the functions into the base classes until the per-class cyclomatic complexity is acceptable.
Or simpler: add // NOPMD to your class:
public class VeryComplexStuff { // NOPMD
...
