Timeout on Neo4j traversal framework - java

I have a very large graph with hundreds of millions of nodes and relationships, where I need to run a traversal to find whether a specific node is connected to another one containing a particular property.
The data is highly interconnected, and for a pair of nodes there can be multiple relationships linking them.
Given that this operation runs in a real-time system, I have very strict time constraints: no more than 200 ms to find possible results.
So I have created the following TraversalDescription:
TraversalDescription td = graph.traversalDescription()
        .depthFirst()
        .uniqueness(Uniqueness.NODE_GLOBAL)
        .expand(new SpecificRelsPathExpander(requiredEdgeProperty))
        .evaluator(new IncludePathWithTargetPropertyEvaluator(targetNodeProperty));
The Evaluator checks, for each path, whether the end node is my target, including and pruning the path if that's the case, or excluding it and continuing if it's not.
Also, I set a limit on the time spent traversing and the maximum number of results to find.
Everything can be seen in the code below:
private class IncludePathWithTargetPropertyEvaluator implements Evaluator {
    private String targetProperty;
    private int results;
    private long startTime, curTime, elapsed;

    public IncludePathWithTargetPropertyEvaluator(String targetProperty) {
        this.targetProperty = targetProperty;
        this.startTime = System.currentTimeMillis();
        this.results = 0;
    }

    public Evaluation evaluate(Path path) {
        curTime = System.currentTimeMillis();
        elapsed = curTime - startTime;
        if (elapsed >= 200) {
            return Evaluation.EXCLUDE_AND_PRUNE;
        }
        if (results >= 3) {
            return Evaluation.EXCLUDE_AND_PRUNE;
        }
        String property = (String) path.endNode().getProperty("propertyName");
        if (property.equals(targetProperty)) {
            results = results + 1;
            return Evaluation.INCLUDE_AND_PRUNE;
        }
        return Evaluation.EXCLUDE_AND_CONTINUE;
    }
}
Finally, I wrote a custom PathExpander because each time we need to traverse only edges with a specific property value:
private class SpecificRelsPathExpander implements PathExpander {
    private String requiredProperty;

    public SpecificRelsPathExpander(String requiredProperty) {
        this.requiredProperty = requiredProperty;
    }

    public Iterable<Relationship> expand(Path path, BranchState<Object> state) {
        Iterable<Relationship> rels = path.endNode().getRelationships(RelTypes.FOO, Direction.BOTH);
        if (!rels.iterator().hasNext())
            return null;
        List<Relationship> validRels = new LinkedList<Relationship>();
        for (Relationship rel : rels) {
            String property = (String) rel.getProperty("propertyName");
            if (property.equals(requiredProperty)) {
                validRels.add(rel);
            }
        }
        return validRels;
    }

    // not used
    public PathExpander<Object> reverse() {
        return null;
    }
}
The issue is that the traverser keeps going even long after the 200 ms have passed.
From what I understand, the evaluator enqueues all following branches for each path evaluated with EXCLUDE_AND_CONTINUE, and the traverser itself won't stop until it has visited all the queued paths.
So even a few nodes with a very high degree can result in thousands of paths to traverse.
In that case, is there a way to make the traverser stop abruptly when the timeout has been reached and return whatever valid paths have been found in the meantime?

I would go with the following line of thought:
Once the timeout has elapsed, stop expanding the graph.
private class SpecificRelsPathExpander implements PathExpander {
    private String requiredProperty;
    private long startTime, curTime, elapsed;

    public SpecificRelsPathExpander(String requiredProperty) {
        this.requiredProperty = requiredProperty;
        this.startTime = System.currentTimeMillis();
    }

    public Iterable<Relationship> expand(Path path, BranchState<Object> state) {
        curTime = System.currentTimeMillis();
        elapsed = curTime - startTime;
        if (elapsed >= 200) {
            return null;
        }
        Iterable<Relationship> rels = path.endNode().getRelationships(RelTypes.FOO, Direction.BOTH);
        if (!rels.iterator().hasNext())
            return null;
        List<Relationship> validRels = new LinkedList<Relationship>();
        for (Relationship rel : rels) {
            String property = (String) rel.getProperty("propertyName");
            if (property.equals(requiredProperty)) {
                validRels.add(rel);
            }
        }
        return validRels;
    }

    // not used
    public PathExpander<Object> reverse() {
        return null;
    }
}
I think taking a look at Neo4J TraversalDescription Definition might be beneficial for you too.

I would implement the expander to keep the lazy nature of the traversal framework, and also because the code is simpler. This prevents the traversal from eagerly collecting all relationships for a node. Like this:
public class SpecificRelsPathExpander implements PathExpander, Predicate<Relationship>
{
private final String requiredProperty;
public SpecificRelsPathExpander( String requiredProperty )
{
this.requiredProperty = requiredProperty;
}
@Override
public Iterable<Relationship> expand( Path path, BranchState state )
{
Iterable<Relationship> rels = path.endNode().getRelationships( RelTypes.FOO, Direction.BOTH );
return Iterables.filter( this, rels );
}
@Override
public boolean accept( Relationship relationship )
{
return requiredProperty.equals( relationship.getProperty( "propertyName", null ) );
}
// not used
@Override
public PathExpander<Object> reverse()
{
return null;
}
}
Also, the traversal only continues as long as the client, i.e. the one holding the Iterator received from starting the traversal, keeps calling hasNext/next. There is no traversal going on by itself; it all happens inside hasNext/next.
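To make that concrete, here is a minimal sketch on the consuming side (td and a startNode are assumed from the question, not defined here): since paths are produced lazily, you can enforce the deadline and the result cap in the loop that pulls them and simply stop iterating.

long deadline = System.currentTimeMillis() + 200;
List<Path> found = new ArrayList<>();
for (Path path : td.traverse(startNode)) {
    found.add(path);   // only paths the evaluator marked INCLUDE_AND_PRUNE arrive here
    if (found.size() >= 3 || System.currentTimeMillis() >= deadline) {
        break;         // stop pulling from the lazy iterator; traversal ends here
    }
}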

Related

Trouble calling a method of another class without having to instantiate it

Overall, the program should work like this:
Work() of NodeManager is a blocking method that calls doYourWork(counter) on Node[0], and waits until WorkDone() is called by doYourWork().
Node[i] does some work, simulated by Thread.sleep; then, if counter is > 0, it calls doYourWork on the next Node (the node order is 0..9->0), or, if counter is <= 0, the Node calls WorkDone on NodeManager.
My problem is that I can't call WorkDone in the Node class without instantiating a NodeManager object, and I can't declare WorkDone as a static method either, because then I can't call notifyAll.
What should I do?
public class NodeManager implements NodeManager {
private final int N_NODES = 10;
private static Node[] nodes;
private int _counter;
public NodeManager(int counter)
{
nodes = new Node[N_NODES];
this._counter = counter;
for(int i=0; i<N_NODES ;)
{
nodes[i] = new Node(i, ++i);
}
Work(_counter);
}
public static Node[] getNodes() {
return nodes;
}
@Override
public synchronized void Work(int counter) {
nodes[0].doYourWork(counter);
try {
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
@Override
public void WorkDone() {
notifyAll();
}
}
and this is the Node class:
public class Node extends AbstractNode {
private int _id;
private int _next_id;
private Node[] nodes;
public Node(int id, int next_id) {
this._id = id;
this._next_id = next_id;
if(id == 9)
this._next_id = 0;
}
@Override
public void doYourWork(int counter) {
Random rand = new Random();
long millis = rand.nextInt((150-100) + 1) + 100;
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
e.printStackTrace();
}
if(counter > 0) {
nodes = NodeManager.getNodes();
nodes[_next_id].doYourWork(--counter);
}
else {
//NodeManager.WorkDone(); i would do this
}
}
}
I'm going to address a few things about your code first, then see if I can't get to the meat of a solution.
First, please take a peek at Java naming standards. Class names should be UpperCase, as you've done, but method names should be lowerCase, which you seem to do inconsistently.
Second, this is rather odd: public class NodeManager implements NodeManager
You have both a class and an interface named NodeManager. That seems like a really, really bad naming scheme.
Also, this loop makes me squeamish:
for(int i=0; i<N_NODES ;)
{
nodes[i] = new Node(i, ++i);
}
This is a future bug waiting to happen. While the code is doing what you want now, this is a dangerous pattern. Nearly always your for-loops should auto-increment inside the for() part. Doing it embedded like this is going to bite you in the ass if you make a habit of this. Please be careful.
And then you also did this:
public Node(int id, int next_id) {
this._id = id;
this._next_id = next_id;
if(id == 9)
this._next_id = 0;
}
Gross. Pass in the proper next_id. Your node shouldn't magically know you're making 10 of these guys. See previous comment about how you're initializing these guys.
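For example, both of those fixes together could look like this (keeping your N_NODES constant):

for (int i = 0; i < N_NODES; i++) {
    // the modulo wraps the last node back to node 0, so a Node never needs to know
    // how many nodes exist
    nodes[i] = new Node(i, (i + 1) % N_NODES);
}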
As for your solution...
Let's start with NodeManager. You need to decide if you're dealing with static or non-static fields. You're mixing and matching what you're doing. Some of your fields are static, some are not.
Here's what I would do.
1a. I would eliminate the static in the field definitions in NodeManager.
1b. Or, if it's appropriate, I would apply the Singleton pattern to NodeManager, and make it possible to only have 1.
2. I would make it so Node doesn't know a thing about NodeManager.
Now, how do you do #2?
2a. Define an interface called something like NotifyWhenDone. It has a method called workDone.
2b. NodeManager implements NotifyWhenDone.
2c. Pass NodeManager to doYourWork()
2d. doYourWork() expects not a NodeManager object, but a NotifyWhenDone object.
2e. Then instead of NodeManager.workDone(), you call objectToNotify.workDone().
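A minimal sketch of steps 2a-2e (the wiring and names here are illustrative, not a drop-in for your code):

public interface NotifyWhenDone {
    void workDone();
}

public class NodeManager implements NotifyWhenDone {
    @Override
    public synchronized void workDone() {
        notifyAll();   // same body as your WorkDone()
    }
}

public class Node {
    private Node next;   // assumed: each node is wired to its successor when the ring is built

    public void doYourWork(int counter, NotifyWhenDone objectToNotify) {
        // ... simulate the work with Thread.sleep ...
        if (counter > 0) {
            next.doYourWork(counter - 1, objectToNotify);
        } else {
            objectToNotify.workDone();   // Node never references NodeManager directly
        }
    }
}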
Or there's another solution.
Take a look at promises. It's complicated, but it's a GREAT way to do this. Do a Google of Java Promise. It's really probably the right way.
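In modern Java, CompletableFuture is the closest built-in analogue of a promise. A rough, standalone sketch of the same hand-off (names and the single worker thread are illustrative only):

import java.util.concurrent.CompletableFuture;

public class PromiseSketch {
    public static void main(String[] args) {
        // the "promise" the last node completes instead of calling WorkDone()
        CompletableFuture<Void> done = new CompletableFuture<>();

        // a single worker thread stands in for the chain of nodes
        new Thread(() -> {
            // ... do the chained work here ...
            done.complete(null);
        }).start();

        done.join();   // NodeManager-style blocking wait, without wait()/notifyAll()
        System.out.println("all nodes finished");
    }
}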

Inserting a new element in a Java Map or Updating an existing element by its key

The business request is that I construct a List<ResponsePOJO> from a List<RequestPOJO>. This seems simple enough.
The request actually needs some (more) processing, meaning that I need to save a couple of parameters, then for every element make a request to a CassandraMicroservice that returns a List<CassandraPOJO>. Every element of this List<CassandraPOJO> has its own List<DataPOJO> that needs to fall into a category represented by some specific characteristics in the List<ResponsePOJO>. Essentially, for every element in the List<RequestPOJO> I am building a List<List<DataPOJO>> that needs to be dealt with. Unfortunately, everything is stuck as it is, so I cannot change the architecture.
In short, my problem is that I am unable to find a simple createOrUpdate on Map. I tried to make an updateOrCreate of the type BiFunction. I think it should be enough to do something like a BiFunction<ResponsePOJO, Map<Integer, ResponsePOJO>, Map<Integer, ResponsePOJO>> that should look like (pseudocode):
private BiFunction<ResponsePOJO, Map<Integer, ResponsePOJO>, Map<Integer, ResponsePOJO>> updateOrCreate(?*) {
return (newValue, currentResult) -> {
if (currentResult.contains(newValue))
currentResult.updateParams(newValue);
else
currentResult.put(newValue);
return currentResult;
};
}
?* I noticed that the call to a BiFunction is parameterless, how does it know what type its parameters are? (not my main question, but I think one reason for my problem is the lack of truly understanding BiFunction together with Map.compute)
The (almost) complete code snippet is:
// the POJOs, using Lombok (annotations skipped)
public class RequestPOJO {
private Long id;
private Long idEntity;
private Long idInventory;
// some are omitted for brevity
}
@Builder(toBuilder = true)
public class ResponsePOJO {
private Integer id;
private Long noInventory;
private String nameSpecies;
private Double g1;
private Double g2;
private Double g3;
// some are omitted for brevity
public void updateParams(ResponsePOJO resp) {
// only these fields need updating, because of business logic
this.g1 += resp.getG1();
this.g2 += resp.getG2();
this.g3 += resp.getG3();
}
}
public class CassandraPOJO {
private Long id;
private List<DataPOJO> detailsDataPojo;
private Long noInventory;
// some are omitted for brevity
}
public class DataPOJO {
private Long idSpecies;
private Long idQualityClass;
private Double height;
private Double diameter;
private Double noCount;
// some are omitted for brevity
}
// the business logic
public List<ResponsePOJO> compute(List<RequestPOJO> requestPojoList, List<SpeciesPOJO> speciesList) {
List<ResponsePOJO> responseList = new ArrayList<>();
for (RequestPOJO requestPojo : requestPojoList) {
Long idEntity = requestPojo.getIdEntity();
Long noInventory = requestPojo.getIdInventory(); // yes I know this is wrong, stick to the question
List<CassandraPOJO> res = cassandraMicroservice.getByIdEntityFilteredByNoInventory(idEntity, noInventory);
res.stream().forEach(dar -> {
Map<Long, List<DataPOJO>> listDataPojoBySpeciesId =
dar.getDetailsDataPojo().stream().collect(
Collectors.groupingBy(DataPOJO::getIdSpecies, Collectors.toList())
);
responseList.addAll(
classifyDataPojo(listDataPojoBySpeciesId, speciesList, dar.getNoInventory())
);
});
}
Comparator<ResponsePOJO> compareResponsePojo = Comparator.comparing(ResponsePOJO::getNameSpecies);
return responseList.stream()
.sorted(compareResponsePojo).collect(Collectors.toList());
}
private List<ResponsePOJO> classifyDataPojo(Map<Long, List<DataPOJO>> listDataPojoBySpeciesId, List<SpeciesPOJO> speciesList, Long noInventory) {
Map<Integer, ResponsePOJO> result = new HashMap();
for (Long speciesId : listDataPojoBySpeciesId.keySet()) {
String nameSpecies = speciesList.stream().filter(s -> s.getIdSpecies() == speciesId).findFirst().get().getNameSpecies(); // it's guaranteed to be found
ResponsePOJO resp = null;
for (DataPOJO dataP : listDataPojoBySpeciesId.get(speciesId)) {
Double volumeUnit = getVolumeUnit(dataP);
Double equivalenceCoefficient = getEquivalentClass(dataP, speciesList);
CustomTypeEnum customType = getCustomType(speciesList, dataP.getDiameter(), speciesId);
resp = ResponsePOJO.builder()
.noInventory(noInventory)
.nameSpecies(nameSpecies)
.build();
switch (customType) {
case G1:
resp.setG1(volumeUnit * equivalenceCoefficient * dataP.getNoCount());
break;
case G2:
resp.setG2(volumeUnit * equivalenceCoefficient * dataP.getNoCount());
break;
case G3:
resp.setG3(volumeUnit * equivalenceCoefficient * dataP.getNoCount());
break;
default:
break;
}
}
int diameter = ComputeUtil.getDiamenterClass(resp.getDiameter());
// doesn't compile, it says Wrong 2nd argument type. Found: BiFunction<ResponsePOJO, Map<Integer, ResponsePOJO>, Map<Integer,ResponsePOJO>> Required: BiFunction<? super Integer,? super ResponsePOJO,? extends ResponsePOJO>
result.compute(diameter, updateOrCreate());
// I have fiddled with reduce and merge but to no avail
// result.values().stream().reduce(new ArrayList<>(), updateOrCreate(), combiningFunction());
// result.merge(diameter, update())
}
return result.values().stream().collect(Collectors.toList());
}
I chose a Map because I want it to be as fast as possible; this method compute(...) is called pretty often and I don't want to search through the whole response list every time I need something updated. I am reluctant to change the POJOs, especially the CassandraPOJO that holds the DataPOJO.
As you can see this is a mixture of both classical for and java8 stream. I intend to change all the code according to java8 but it took me a considerable amount of time to write this (convoluted, hard-to-follow, easier-to-understand) code.
I am firmly convinced that there is an easier solution, but I can't figure it out on my own.
Amongst that lengthy code, it looks to me like you are simply looking for a merge operation such as:
result.merge(diameter, resp, (a, b) -> {
a.updateParams(b);
return a;
});
return new ArrayList<>(result.values());
which can be abstracted out as
private BiFunction<ResponsePOJO, ResponsePOJO, ResponsePOJO> mergeResponse() {
return (a, b) -> {
a.updateParams(b);
return a;
};
}
and used further as
result.merge(diameter, resp, mergeResponse());
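On the side question about the BiFunction parameters: the generic types come from the declared return type of the factory method, not from the call site, and compute and merge expect differently shaped functions. compute takes a BiFunction<K, V, V> (key plus current value, which may be null), while merge takes the new value plus a BiFunction<V, V, V> that is only invoked when the key already exists, so no contains check is needed. A small, self-contained illustration:

import java.util.HashMap;
import java.util.Map;

public class MergeVsCompute {
    public static void main(String[] args) {
        Map<Integer, Integer> totals = new HashMap<>();

        // merge: BiFunction<V, V, V> applied only when the key is already present,
        // otherwise the given value is simply inserted
        totals.merge(12, 5, Integer::sum);   // key 12 absent -> put 5
        totals.merge(12, 3, Integer::sum);   // key 12 present -> 5 + 3 = 8

        // compute: BiFunction<K, V, V>; the current value is null when the key is absent
        totals.compute(20, (key, current) -> current == null ? 1 : current + 1);

        System.out.println(totals);   // {20=1, 12=8}
    }
}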

How to write a Java sequence generator for UniqueId, which will Auto-Reset to '1' (base seq. number) at midnight everyday

We have a Spring Boot service where we receive files every day; because of some issue (on the producer side) we are receiving multiple files with the same name & date appended.
The new file overwrites the old one; to handle this we want to append a sequence number (starting from 1) to every file name.
But the sequence should auto-reset to '1' at midnight every day.
Can anybody suggest an API or a way to reset the sequence?
To generate the sequence we are using an AtomicSequenceGenerator, but we are unable to implement a simple auto-reset logic.
public class AtomicSequenceGenerator implements SequenceGenerator {
private AtomicLong value = new AtomicLong(1);
@Override
public long getNext() {
return value.getAndIncrement();
}
}
To avoid handing out a 1 twice:
public class AtomicSequenceGenerator implements SequenceGenerator {
private AtomicLong value = new AtomicLong(1);
private volatile LocalDate lastDate = LocalDate.now();
@Override
public long getNext() {
LocalDate today = LocalDate.now();
if (!today.equals(lastDate)) {
synchronized(this) {
if (!today.equals(lastDate)) {
lastDate = today;
value.set(1);
}
}
}
return value.getAndIncrement();
}
}
That is a bit ugly, so try a single counter:
public class AtomicSequenceGenerator implements SequenceGenerator {
private static long countWithDate(long count, LocalDate date) {
return (((long)date.getDayOfYear()) << (63L-9)) | count;
}
private static long countPart(long datedCount) {
return datedCount & ((1L << (63L-9)) - 1);
}
private static boolean dateChanged(long datedCount, LocalDate date) {
return (int)(datedCount >>> (63L-9)) != date.getDayOfYear();
}
private AtomicLong value = new AtomicLong(countWithDate(1, LocalDate.now()));
@Override
public long getNext() {
long datedCount = value.getAndIncrement();
LocalDate today = LocalDate.now();
if (dateChanged(datedCount, today)) {
long next = countWithDate(1L, today);
if (value.compareAndSet(datedCount+1, next)) {
datedCount = next;
} else {
datedCount = getNext();
}
}
return datedCount;
}
}
This uses an AtomicLong with the day-of-year packed into the counter.
One pulls the next counter.
If the date changed, then:
when we can atomically swap in the next day's 1, we return it;
when not, someone else (probably holding an earlier counter) already took the 1, and we need to take the next value again.
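To illustrate the packing, here is a small standalone sketch (not part of the generator itself; the constant names are mine):

import java.time.LocalDate;

public class PackedCounterDemo {
    private static final int SHIFT = 63 - 9;   // 54: low 54 bits hold the count, the next 9 bits hold day-of-year

    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2023, 2, 6);          // day-of-year 37
        long packed = (((long) date.getDayOfYear()) << SHIFT) | 1L;

        System.out.println(packed >>> SHIFT);               // 37 (the day part)
        System.out.println(packed & ((1L << SHIFT) - 1));   // 1  (the count part)
    }
}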
You could create a singleton instance of your generator which resets itself as soon as a new date is passed in.
Something like this:
public class AtomicSequenceGenerator implements SequenceGenerator {
// Private constructor in order to avoid the creation of an instance from outside the class
private AtomicSequenceGenerator(){}
private AtomicLong value = new AtomicLong(1);
@Override
public long getNext() {
return value.getAndIncrement();
}
// This is where the fun starts
// LocalDate is used here to represent the file date
private static LocalDate prev_file_date = null;
private static AtomicSequenceGenerator instance = new AtomicSequenceGenerator();
public static synchronized long getNext(LocalDate file_date)
{
if ((prev_file_date == null) || (!prev_file_date.equals(file_date)))
{
instance.value.set(1);
prev_file_date = file_date;
}
return (instance.getNext());
}
}
As requested by @JoopEggen, my version of his first solution:
public class AtomicSequenceGenerator implements SequenceGenerator {
private final Clock clock;
private final Object lock = new Object();
@GuardedBy("lock")
private long value;
@GuardedBy("lock")
private LocalDate today;
public AtomicSequenceGenerator(Clock clock) {
this.clock = clock;
synchronized (lock) {
value = 1;
today = LocalDate.now(clock);
}
}
@Override
public long getNext() {
synchronized (lock) {
LocalDate date = LocalDate.now(clock);
if (!date.equals(today)) {
today = date;
value = 1;
}
return value++;
}
}
}
The main differences are:
This uses just a private monitor to protect both the LocalDate and the value.
value is now a plain long; since it's guarded by a lock, it doesn't need to be an AtomicLong anymore.
I inject a Clock object (for easier testing).
No double-checked locking. Arguably double-checked locking can be faster, but I don't know if it's really needed until you do some performance testing.
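To show what the injected Clock buys you, here is a rough sketch of a test-style driver that forces a date rollover; the MutableClock helper and the instants are illustrative, not part of the answer:

import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class GeneratorRolloverDemo {

    // a tiny controllable clock for tests: delegates to whichever clock is currently set
    static class MutableClock extends Clock {
        private volatile Clock delegate;
        MutableClock(Clock initial) { this.delegate = initial; }
        void setDelegate(Clock delegate) { this.delegate = delegate; }
        @Override public ZoneId getZone() { return delegate.getZone(); }
        @Override public Clock withZone(ZoneId zone) { return delegate.withZone(zone); }
        @Override public Instant instant() { return delegate.instant(); }
    }

    public static void main(String[] args) {
        Clock beforeMidnight = Clock.fixed(Instant.parse("2023-02-06T23:59:59Z"), ZoneOffset.UTC);
        MutableClock clock = new MutableClock(beforeMidnight);

        AtomicSequenceGenerator gen = new AtomicSequenceGenerator(clock);
        System.out.println(gen.getNext());   // 1
        System.out.println(gen.getNext());   // 2

        // advance past midnight: the next call should reset to 1
        clock.setDelegate(Clock.offset(beforeMidnight, Duration.ofMinutes(2)));
        System.out.println(gen.getNext());   // 1
    }
}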

How can I schedule to recycle the object after several minutes [duplicate]

This question already has answers here:
Java time-based map/cache with expiring keys [closed]
(8 answers)
Closed 8 years ago.
I have defined a class to manage game rooms. When a user creates a new room, I generate a new room with a unique room number and add it to the HashSet.
Now, I would like to remove that Room object from the HashSet and recycle it after some time, say 24 hours, for performance reasons, or the abandoned Room objects will end up consuming most of my memory.
How can I achieve this? Also, any suggestions to improve the performance would be highly appreciated.
My class is as follows:
public class RoomService {
private RoomService(){
super();
}
private HashSet<Room> roomSet =new HashSet<Room>();
private static RoomService instance =new RoomService();
public static RoomService getServiceInstance(){
return instance;
}
private static Integer generateRandom(int length) {
Random random = new Random();
char[] digits = new char[length];
digits[0] = (char) (random.nextInt(9) + '1');
for (int i = 1; i < length; i++) {
digits[i] = (char) (random.nextInt(10) + '0');
}
return Integer.decode(new String(digits));
}
/**
* Generate a new Room with a unique Room number
* @return the newly created Room
*/
public Room newRoom(){
Room newRoom;
do{
newRoom =new Room(generateRandom(4));
}
while(!roomSet.add(newRoom));
return newRoom;
}}
public class Room {
private Integer roomNum;
private Date createTime=new Date();
private String creatorId;
/*
* constructor
*/
public Room(Integer roomNum) {
super();
this.roomNum = roomNum;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((roomNum == null) ? 0 : roomNum.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
Room other = (Room) obj;
if (roomNum == null) {
if (other.roomNum != null)
return false;
} else if (!roomNum.equals(other.roomNum))
return false;
return true;
}
//getter and setter
//
//
public String getCreatorId() {
return creatorId;
}
public void setcreatorId(String creatorId) {
this.creatorId = creatorId;
}
public Integer getRoomNum() {
return roomNum;
}
public Date getCreateTime() {
return createTime;
}
}
You can do this yourself by using a Timer. Every time somebody creates a new room you create a new TimerTask instance that will delete the id again and schedule that task to be executed using public void schedule(TimerTask task, Date time). It could look something like this:
private final Timer timer; // Initialised somewhere
public Integer newRoomNum() {
Integer newRoomNum = ... // Create id
Date date = ... // Create date when the id should be deleted again
timer.schedule(new DeleteKeyTask(newRoomNum), date);
return newRoomNum;
}
Where DeleteKeyTask is a custom subclass of TimerTask that deletes the given id.
private class DeleteKeyTask extends TimerTask {
    private final Integer id;

    public DeleteKeyTask(Integer id) {
        this.id = id;
    }

    @Override
    public void run() {
        // remove id
    }
}
You could use different approaches to save space:
Instead of having a task per key, you can store the date alongside the integer key. For example you can use a HashMap<Integer, Date> (or store milliseconds instead of a Date). The keys of the map form your previous set. The values indicate the time the key was inserted or expires.
You can then schedule the timer to remove the next expiring key; when it fires, you look for the next expiring key and schedule its removal, and so on. This costs O(n) time to compute the next expiring key. Your run method for the task would look something like:
public void run() {
    map.remove(id);
    if (map.isEmpty()) {
        return; // nothing left to schedule
    }
    Integer next = ... // Find next expiring key
    timer.schedule(new DeleteKeyTask(next), map.get(next));
}
And you would need to adapt the creation method:
public Integer create() { // Previously newRoomNum()
Integer newRoomNum = ... // Create id
Date date = ... // Create date when the id should be deleted again
if(map.isEmpty()) // Only schedule when empty
timer.schedule(new DeleteKeyTask(newRoomNum), date);
map.put(newRoomNum, date);
return newRoomNum;
}
This way you just need to store a date per integer. If the O(n) overhead when calculating the next key is too much for you, you can make it faster by using more space: use a Queue and insert new keys as they are created. Since every key lives for the same duration, the head of the queue is always the next to expire, making the lookup of the next expiring key O(1).
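A rough sketch of that queue-based variant (assuming every key lives for the same fixed duration, so insertion order equals expiry order; all names here are illustrative):

import java.util.ArrayDeque;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.Timer;
import java.util.TimerTask;

public class ExpiringKeys {
    private final Map<Integer, Date> expiry = new HashMap<>();
    private final Queue<Integer> insertionOrder = new ArrayDeque<>();   // oldest key = next to expire
    private final Timer timer = new Timer(true);
    private static final long TTL_MILLIS = 24L * 60 * 60 * 1000;        // 24 hours

    public synchronized void add(Integer key) {
        Date expiresAt = new Date(System.currentTimeMillis() + TTL_MILLIS);
        boolean wasEmpty = expiry.isEmpty();
        expiry.put(key, expiresAt);
        insertionOrder.add(key);
        if (wasEmpty) {                       // only the head of the queue has a scheduled task
            timer.schedule(new ExpireTask(key), expiresAt);
        }
    }

    private class ExpireTask extends TimerTask {
        private final Integer key;
        ExpireTask(Integer key) { this.key = key; }

        @Override
        public void run() {
            synchronized (ExpiringKeys.this) {
                expiry.remove(key);
                insertionOrder.remove(key);   // key is always the head here
                Integer next = insertionOrder.peek();
                if (next != null) {
                    timer.schedule(new ExpireTask(next), expiry.get(next));
                }
            }
        }
    }
}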

Populate parent List elements based on child values

Consider the following code:
CLASS AuditProgressReport:
public class AuditProgressReport
{
private List<AuditProgressReport> audit_progress_reports = null;
private String name = null;
private String description = null;
private int compliant;
private int non_compliant;
private int not_completed ;
/**
*
*/
public AuditProgressReport()
{
super();
}
public AuditProgressReport(
String name_param,
int compliant_param,
int non_compliant_param,
int not_completed_param)
{
super();
this.name = name_param;
this.compliant = compliant_param;
this.non_compliant = non_compliant_param;
this.not_completed = not_completed_param;
}
public void addToCompliant(int compl_to_add_param)
{
this.compliant += compl_to_add_param;
}
public void addToNonCompliant(int non_compl_to_add_param)
{
this.non_compliant += non_compl_to_add_param;
}
public void addToNotCompleted(int not_compl_param)
{
this.not_completed += not_compl_param;
}
public void setAuditProgressReports(List<AuditProgressReport> report_category_nodes_param)
{
this.audit_progress_reports = report_category_nodes_param;
}
public List<AuditProgressReport> getAuditProgressReports()
{
return this.audit_progress_reports;
}
public void setCompliant(int compliantParam)
{
this.compliant = compliantParam;
}
public int getCompliant()
{
return this.compliant;
}
public void setNonCompliant(int nonCompliantParam)
{
this.non_compliant = nonCompliantParam;
}
public int getNonCompliant()
{
return this.non_compliant;
}
public void setNotCompleted(int notCompletedParam)
{
this.not_completed = notCompletedParam;
}
public int getNotCompleted()
{
return this.not_completed;
}
public void setName(String name_param)
{
this.name = name_param;
}
public String getName()
{
return this.name;
}
public void setDescription(String description_param)
{
this.description = description_param;
}
public String getDescription()
{
return this.description;
}
@Override
public String toString()
{
return ("Compliant["+this.compliant+
"] Non-Compliant["+this.non_compliant+
"] Not-Completed["+this.not_completed+"]");
}
}
And CLASS Tester:
public class Tester
{
public static void main(String[] args)
{
List<AuditProgressReport> main_level = new ArrayList<AuditProgressReport>();
AuditProgressReport ar_1_1 = new AuditProgressReport("ar_1_1",0,0,0);
AuditProgressReport ar_1_2 = new AuditProgressReport("ar_1_2",0,0,0);
AuditProgressReport ar_1_1_1 = new AuditProgressReport("ar_1_1_1",0,0,0);
AuditProgressReport ar_1_1_2 = new AuditProgressReport("ar_1_1_2",15,65,20);
AuditProgressReport ar_1_1_3 = new AuditProgressReport("ar_1_1_3",20,30,50);
AuditProgressReport ar_1_1_1_1 = new AuditProgressReport("ar_1_1_1_1",5,5,90);
AuditProgressReport ar_1_1_1_2 = new AuditProgressReport("ar_1_1_1_2",55,5,40);
AuditProgressReport ar_1_1_1_3 = new AuditProgressReport("ar_1_1_1_3",35,35,30);
List<AuditProgressReport> arl_1_1_1 = new ArrayList<AuditProgressReport>();
arl_1_1_1.add(ar_1_1_1_1);
arl_1_1_1.add(ar_1_1_1_2);
arl_1_1_1.add(ar_1_1_1_3);
ar_1_1_1.setAuditProgressReports(arl_1_1_1);
List<AuditProgressReport> arl_1_1 = new ArrayList<AuditProgressReport>();
arl_1_1.add(ar_1_1_1);
arl_1_1.add(ar_1_1_2);
arl_1_1.add(ar_1_1_3);
AuditProgressReport ar_1_2_1 = new AuditProgressReport("ar_1_2_1",10,30,60);
AuditProgressReport ar_1_2_2 = new AuditProgressReport("ar_1_2_2",20,20,60);
List<AuditProgressReport> arl_1_2 = new ArrayList<AuditProgressReport>();
arl_1_2.add(ar_1_2_1);
arl_1_2.add(ar_1_2_2);
ar_1_1.setAuditProgressReports(arl_1_1);
ar_1_2.setAuditProgressReports(arl_1_2);
main_level.add(ar_1_1);
main_level.add(ar_1_2);
Tester tester = new Tester();
for(AuditProgressReport prog_rep : main_level)
{
tester.populateParents(prog_rep, null);
}
//TODO Now check the values...
}
private void populateParents(
AuditProgressReport audit_progress_param,
AuditProgressReport parent_param)
{
List<AuditProgressReport> audit_progress =
audit_progress_param.getAuditProgressReports();
System.out.println("name["+audit_progress_param.getName()+"]");
if(parent_param != null)
{
int compl = audit_progress_param.getCompliant();
int nonCompl = audit_progress_param.getNonCompliant();
int notCompleted = audit_progress_param.getNotCompleted();
parent_param.addToCompliant(compl);
parent_param.addToNonCompliant(nonCompl);
parent_param.addToNotCompleted(notCompleted);
}
if(audit_progress != null && ! audit_progress.isEmpty())
{
for(AuditProgressReport prog_rep : audit_progress)
{
this.populateParents(prog_rep,audit_progress_param);
}
}
}
}
When you run this, you will note that the values of the parent elements in the list are updated with the sum of the values in the child list.
The problem I am facing is that I want the values updated all the way up the tree instead of just the immediate parent.
Is there a pattern that would help me achieve this?
See illustration below:
Like others suggested, I would use the Observer pattern. Each parent node listens for changes on its children.
But my solution differs from @zmf's because, if you have a big tree with lots of child nodes and at each update you have to sum every value, you would spend a lot of processing time.
What if you send only the difference between the old value and the new value each time you update a child node? Let's make an example. You start with this tree:
[12]--+--[10]-----[10]
|
+--[ 2]--+--[ ]
|
+--[ 2]
and you update a child like this
[12]--+--[10]-----[10]
|
+--[ 2]--+--[ 3]
|
+--[ 2]
the node that gets updated with the value "3" sends its change to the parent with the method call parent.updateNode(3). The parent only has to sum its current value (in this example "2") with the value it receives from the child node. So it will update to the value "5":
[12]--+--[10]-----[10]
|
+--[ 5]--+--[ 3]
|
+--[ 2]
the node with the new value "5" will call parent.updateNode(3) and the final solution will be
[15]--+--[10]-----[10]
|
+--[ 5]--+--[ 3]
|
+--[ 2]
IMHO this solution is better because each updateNode() method only has to sum its own current value with the change received from its child node and call its parent with the same value. You do not have to get the value from each one of your children and sum all of them. This will save you a lot of time if you have a big tree. So in this example, when you change the value from 0 to 3, you will get 2 calls to parent.updateNode(3) and each parent will get updated.
public void updateNode(int value) {
if (value != this.value) {
this.value = value;
if (getParent() != null) {
int sum = 0;
for (Node n : getParent().getChildren()) {
sum += n.getValue();
}
getParent().updateNode(sum);
}
}
}
Other posters suggested the use of the Observer pattern. The Observer pattern is a subset of the Pub/Sub pattern. I recommend using Pub/Sub over a plain Observer pattern.
The main difference between an Observer pattern and a Pub/Sub pattern is that in an Observer pattern, an Observable is both a publisher of ChangeEvents and a dispatcher of messages. It's essentially making every Observable into an EventDispatcher. In a traditional Pub/Sub pattern, Observables are only publishers of ChangeEvents. ChangeEvents are published to a separate EventDispatchingService which handles which Subscribers the Events need to be sent to.
Attempting to track global changes with an Observer pattern is difficult. For example, if you want to count the number of times the addToCompliant() method was called, you would have to add the Observer to every instance of the Observable. With an Event Pub/Sub, your observer class can just subscribe to the type of ChangeEvent and it will receive all of them. The best (IMHO) Event Pub/Sub library I've used is Google Guava's EventBus. In your particular case, I'd do something like the following.
public class EventBusSingleton {
public static final EventBus INSTANCE = new EventBus("My Event Bus");
}
public class ComplianceChange {
private AuditProgressReport changedReport;
private int delta;
public ComplianceChange(AuditProgressReport changedReport, int delta) {
this.changedReport = changedReport;
this.delta = delta;
}
...
}
public class AuditProgressReport {
...
private AuditProgressReport parent;
public AuditProgressReport getParent() {
return parent;
}
public void addToCompliant(int delta) {
this.compliant += delta;
ComplianceChange change = new ComplianceChange(this, delta);
EventBusSingleton.INSTANCE.post(change);
}
...
}
public class ComplianceChangeHandler {
@Subscribe
public void notifyParent(ComplianceChange event) {
AuditProgressReport parent = event.getChangedReport().getParent();
int delta = event.getDelta();
parent.addToCompliant(delta);
}
@Subscribe
public void somethingElse(ComplianceChange event) {
// Do Something Else
}
}
// Somewhere during initialization
EventBusSingleton.INSTANCE.register(new ComplianceChangeHandler());
Based on your class name, I guess you want to see your audit progression live when running. So my hypothesis:
the tree structure does not change too much, almost fixed after creation
node values change often, and the counters' initial states are 0
Here is an efficient implementation:
each node maintains the full list of its parent nodes
nodes are inserted with 0 value
when a node value is changed or simply increased, the parents' values from the node's list are updated by applying the delta between the new value and the previous one
As a consequence the structure is always up-to-date, node insertion is still possible and does not impact the existing nodes.
If many audit threads run concurrently and report values into the structure, you have to take care of concurrency issues and use AtomicInteger as the counter holders.
This is a pragmatic design and, honestly, I have not found any matching pattern. As with sort algorithms, trying to force patterns into such a context may be counter-productive.
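A rough sketch of that design, assuming a single counter per node for brevity (a real report would keep one AtomicInteger per compliant/non-compliant/not-completed bucket; the class and field names are mine):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ReportNode {
    private final String name;
    private final AtomicInteger compliant = new AtomicInteger();   // starts at 0
    private final List<ReportNode> ancestors = new ArrayList<>();  // full path up to the root

    public ReportNode(String name, ReportNode parent) {
        this.name = name;
        if (parent != null) {
            ancestors.add(parent);
            ancestors.addAll(parent.ancestors);   // copy the parent's ancestor list once, at creation
        }
    }

    // apply the delta locally and to every ancestor, so the whole path stays up to date
    public void addToCompliant(int delta) {
        compliant.addAndGet(delta);
        for (ReportNode ancestor : ancestors) {
            ancestor.compliant.addAndGet(delta);
        }
    }

    public int getCompliant() {
        return compliant.get();
    }

    @Override
    public String toString() {
        return name + "[" + compliant.get() + "]";
    }
}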
