How to get a Hazelcast map inside a Camel route? - java

I'm really new to Camel (and Hazelcast, at that), and I've been playing around with it a bit recently. A seemingly simple operation is causing me a lot of trouble, and I'm struggling to find any pointers anywhere.
I have a listener watching for changes on a Hazelcast map. If said changes match certain criteria, I want to grab the entire map and send it to a processor. Something like this:
from("hazelcast:map:someMap?hazelcastInstance=#hazelcastInstance")
    .filter().method(SomeFilter.class, "filterMethod")
    .???
    // If the filter criteria are met, get the entire map and send it to a processor
But I am really not sure how to, well, get the entire map itself, especially using the Java DSL. The closest thing I've found is to get the map's keySet and then call getAll(keySet) on it, which seems needlessly contrived for such a simple thing? If this really is the preferred method, there is another issue: how do you pass said keySet as the parameter to the getAll operation? I.e.:
<snip>
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.GET_KEYS_OPERATION))
.to("hazelcast:map:someMap?hazelcastInstance=#hazelcastInstance") // Gets the keySet just fine.
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.GET_ALL_OPERATION))
.????
// I've tried .setHeader(HazelcastConstants.OPERATION_PARAM, new
// SimpleExpression("${body}")) here,
// amongst many other things, but I just get an empty object back, so it's
// pretty clear I'm messing up the format or parameter choice here.
.to("hazelcast:map:someMap?hazelcastInstance=#hazelcastInstance")
Using Camel 2.18.0 here, by the way.

Well, after a bit more trial and error, I got it working. I swear it was one of the first things I tried, but I must've bungled it up on the first attempt and got stumped. All you really do is get the keySet and then put it into the OBJECT_ID header. So something like this:
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.GET_KEYS_OPERATION))
.to("hazelcast:map:someMap?hazelcastInstance=#hazelcastInstance")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.GET_ALL_OPERATION))
.setHeader(HazelcastConstants.OBJECT_ID, new SimpleExpression("${body}"))
.to("hazelcast:map:someMap?hazelcastInstance=#hazelcastInstance")
(Note: it's hazelcast-map in newer versions.)
And ta-dah, you've got a HashMap to work with. The alternative seems to be the QUERY operation, but I couldn't figure out the proper syntax for, well, the query. I'd love to hear if anyone has a working example with a QUERY header...
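For reference, the whole thing as a single RouteBuilder sketch. SomeFilter and filterMethod are the ones from above, MapProcessor is just a placeholder for whatever consumes the map, and simple("${body}") is equivalent to the SimpleExpression used above:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.hazelcast.HazelcastConstants;

public class MapDumpRoute extends RouteBuilder {

    @Override
    public void configure() {
        // "hazelcast:map:" is the Camel 2.x scheme; newer versions use "hazelcast-map:"
        from("hazelcast:map:someMap?hazelcastInstance=#hazelcastInstance")
            .filter().method(SomeFilter.class, "filterMethod")
                // 1) fetch the key set of the map
                .setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.GET_KEYS_OPERATION))
                .to("hazelcast:map:someMap?hazelcastInstance=#hazelcastInstance")
                // 2) feed the key set back in as OBJECT_ID and fetch all entries
                .setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.GET_ALL_OPERATION))
                .setHeader(HazelcastConstants.OBJECT_ID, simple("${body}"))
                .to("hazelcast:map:someMap?hazelcastInstance=#hazelcastInstance")
                // the body is now a Map of all matching entries
                .process(new MapProcessor());
    }
}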

Related

updateStateByKey from RDD

I am a bit new to Spark GraphX, so please forgive me if this is a stupid question. I would also prefer to do this in Java rather than Scala, if at all possible.
I need to run a GraphX calculation on the RDDs of a JavaDStream, but I need to roll the results back into my state object.
I am doing the GraphX calculation inside foreachRDD, since I do not know of another way to get at the RDDs of a JavaDStream; updateStateByKey only works on the JavaDStream itself.
Each graph vertex maps 1:1 to a state object, so if there were a way to access the state object inside foreachRDD, that would solve it. Just passing a reference to the state object into the vertex and calling the update function from there strikes me as bad practice, though I could be wrong.
How would you solve this problem in Java? I am ready to restructure the calculations to a different logical flow, if there is a better way to do this.
To make this more visual, the structure looks like this:
JavaDStream<StateObject> stream = inputDataStream.updateStateByKey(function);
stream.foreachRDD(rdd -> {
    Graph<Vertex, EdgeProperty> graph = GraphImpl.apply(/* derive the Vertex and EdgeProperty from the rdd */);
    JavaRDD<Vertex> updatedVertices = graphOperation(graph);
    // How to put the contents of updatedVertices back into stream?
});
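One way to keep the results inside the stream, roughly sketched below, is to move the graph computation into a transform instead of foreachRDD. Here buildGraph and graphOperation are hypothetical stand-ins for the GraphX code above:
// Sketch only: buildGraph and graphOperation stand in for the GraphX code above.
// transform keeps the result inside a DStream instead of losing it in foreachRDD.
JavaDStream<Vertex> updatedVertexStream = stream.transform(rdd -> {
    Graph<Vertex, EdgeProperty> graph = buildGraph(rdd);  // derive vertices/edges from the RDD
    return graphOperation(graph);                         // returns JavaRDD<Vertex>
});
// updatedVertexStream can then be keyed and fed into updateStateByKey in a later step.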
I put my graph calculation in as a transform and got things up and running, up to the point of the job hanging during a fold (in Pregel) and Scala errors from JavaConverters.asScalaIteratorConverter complaining that there was no appropriate iterator...
In short, after reading online that GraphFrames is potentially more stable than GraphX for Java, since it is apparently easier to wrap the Scala DataFrame API from Java, I have abandoned this approach and moved to GraphFrames. For others who have run into similar problems, I apologize that I have no solution to offer, but I am finding the DataFrame approach to work much better with my algorithm.

Duplicate planning entities in the solution

I'm new to OptaPlanner, and I'm trying to solve a quite simple problem (for now; I will add more constraints eventually).
My model is the following: I have tasks (MarkerNesting) that must run one at a time on a VirtualMachine; the goal is to assign a list of MarkerNestings to each VirtualMachine, with all machines used (as a first approximation, we can assume there are more tasks than machines). As a result, I expect each task to have a start and an end date (as shadow variables - not implemented yet).
I think I must use a chained variable, with the VirtualMachine being the anchor (the "chained through time" pattern) - am I right?
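For reference, roughly what such a chained model looks like in annotations - a sketch only, where NestingChainElement is a made-up common supertype for the anchor and the entity, and the value range ids are arbitrary:
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariableGraphType;

// Common supertype of the anchor (VirtualMachine) and the entity (MarkerNesting),
// like Standstill in the vehicle routing example; both classes would implement it.
interface NestingChainElement {
}

@PlanningEntity
public class MarkerNesting implements NestingChainElement {

    // Chained variable: points at the previous element in the chain, which is
    // either the VirtualMachine anchor or another MarkerNesting.
    @PlanningVariable(valueRangeProviderRefs = {"vmRange", "nestingRange"},
            graphType = PlanningVariableGraphType.CHAINED)
    private NestingChainElement previousElement;

    // getters and setters omitted
}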
So I wrote a program inspired by some of the examples (TSP, and coach and shuttle) with 4 machines and 4 tasks, and I expect each machine to have one task when it is solved. When running it, though, I get some strange results: not all machines are used, and worse, I have duplicate MarkerNesting instances (output example):
[VM 1/56861999]~~~>[Nesting(155/2143571436)/[Marker m4/60s]]~~~>[Nesting(816/767511741)/[Marker m2/300s]]~~~>[Nesting(816/418304857)/[Marker m2/300s]]~~~>[Nesting(980/1292472219)/[Marker m1/300s]]~~~>[Nesting(980/1926764753)/[Marker m1/300s]]
[VM 2/1376400422]~~~>[Nesting(155/1815546035)/[Marker m4/60s]]
[VM 3/1619356001]
[VM 4/802771878]~~~>[Nesting(111/548795052)/[Marker m3/180s]]
The instances are different (to read the log: [Nesting(id/hashCode)]), but they have the same id, so they are the same entity in the end. If I understand correctly, OptaPlanner clones the solution whenever it finds a new best one, but I don't know why it mixes instances like that.
Is there anything wrong in my code? Is it a normal behavior?
Thank you in advance!
Duplicate MarkerNesting instances that you didn't create, which have the same content but a different memory address and are therefore != to each other: that means something went wrong in the default solution cloner, which is based on reflection. It's been a while since anyone ran into an issue there. See the docs section on "planning clone". The complex model of chained variables (which will be improved) doesn't help here at all.
Sometimes a well-placed @DeepPlanningClone fixes it, but in this case it might just as well be due to the @InverseRelationShadowVariable not being picked up.
In any case, those System.out's in the setter method are misleading - they can be triggered both by the solution cloner and by the moves, so without the solution hash (= memory address) they tell you nothing. Try doing a similar System.out in either your best solution changed events, or in the BestSolutionRecaller call to cloneWorkingSolution(), for both the original and the clone.
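For example, something along these lines where the Solver is built (a sketch; it assumes a Solver<Schedule> instance named solver and a getScore() getter on Schedule):
// Log the identity hash of each new best solution, so clones can be told apart
// from the working solution.
solver.addEventListener(event -> {
    Schedule best = event.getNewBestSolution();
    System.out.println("new best solution @" + System.identityHashCode(best)
            + " score = " + best.getScore());
});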
As expected, I was doing something wrong: in Schedule (the PlanningSolution), I had a getter for a collection of VirtualMachines which was computed from another field (pools: each Pool holds VirtualMachines). As a result, there was no setter, and the solution cloner was probably not able to clone the solution properly (maybe because pools is not annotated as a problem fact or a planning entity?).
To fix the problem, I removed the Pool class (not really needed), leaving a collection of VirtualMachines in Schedule.
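For illustration, a minimal sketch of what such a Schedule can look like (annotation package names as in recent OptaPlanner releases; in 7.x, ProblemFactCollectionProperty lives under the solution.drools package; the HardSoftScore and field names are assumptions, and getters/setters are omitted):
import java.util.List;

import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty;
import org.optaplanner.core.api.domain.solution.PlanningScore;
import org.optaplanner.core.api.domain.solution.PlanningSolution;
import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty;
import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider;
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;

@PlanningSolution
public class Schedule {

    // A plain field with a getter AND a setter, so the reflection-based solution
    // cloner can copy it; previously this was derived on the fly from Pool objects.
    @ProblemFactCollectionProperty
    @ValueRangeProvider(id = "vmRange")
    private List<VirtualMachine> virtualMachines;

    @PlanningEntityCollectionProperty
    @ValueRangeProvider(id = "nestingRange")
    private List<MarkerNesting> markerNestings;

    @PlanningScore
    private HardSoftScore score;

    // getters and setters omitted
}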
To sum up, never introduce too many classes before you need them ^_^'
I pushed the correct version of my code on github.

How do I call a procedure (IN and OUT parameters) using Java with a properties file and make it more generic?

I have a procedure that has one IN parameter and two OUT parameters.
I would like to make my code more generic, so that if any new procedure comes along in the future,
I just create a properties file for it and update that.
The code will then automatically work accordingly, setting the IN parameters (setString) and registering the OUT parameters.
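Roughly, the kind of thing I have in mind looks like the sketch below with plain JDBC; the property keys (proc.call, proc.param.N) and the file name are just placeholder conventions made up for the example:
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;
import java.util.Properties;

public class GenericProcCaller {

    public static void main(String[] args) throws Exception {
        // Hypothetical properties file, e.g.:
        //   proc.call    = {call MY_PROC(?, ?, ?)}
        //   proc.param.1 = IN:VARCHAR
        //   proc.param.2 = OUT:VARCHAR
        //   proc.param.3 = OUT:INTEGER
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(Paths.get("proc.properties"))) {
            props.load(in);
        }

        try (Connection con = DriverManager.getConnection(
                     props.getProperty("jdbc.url"),
                     props.getProperty("jdbc.user"),
                     props.getProperty("jdbc.password"));
             CallableStatement cs = con.prepareCall(props.getProperty("proc.call"))) {

            // Bind or register each declared parameter generically.
            int argIndex = 0;
            for (int i = 1; props.containsKey("proc.param." + i); i++) {
                String[] spec = props.getProperty("proc.param." + i).split(":");
                if ("IN".equals(spec[0])) {
                    cs.setString(i, args[argIndex++]);      // IN values taken from the command line
                } else {
                    cs.registerOutParameter(i, jdbcType(spec[1]));
                }
            }
            cs.execute();

            for (int i = 1; props.containsKey("proc.param." + i); i++) {
                if (props.getProperty("proc.param." + i).startsWith("OUT")) {
                    System.out.println("OUT parameter " + i + " = " + cs.getObject(i));
                }
            }
        }
    }

    private static int jdbcType(String name) {
        switch (name) {
            case "INTEGER": return Types.INTEGER;
            case "NUMERIC": return Types.NUMERIC;
            default:        return Types.VARCHAR;
        }
    }
}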
Short Answer: Stop reinventing the wheel.
Use an existing tool to solve this problem.
Accept This:
You will never encounter a problem that has never before been encountered.
You will never come up with a totally new and novel technique for solving a problem.
Every problem we will ever encounter was solved in the 1960s;
all we encounter is new variations of problems.
All we will ever do is create variations of solutions that already exist.
Whether or not you believe the statement above,
you are planning to do nothing new.
Everything you will ever want to do with a database has already been solved by Hibernate, MyBatis, and other JDBC tools.
Note that these are more than "just JDBC tools",
but they cover all of the JDBC stuff you will ever need or want to do.
Choose one of those and
read the documentation.
MyBatis is likely to be a good option,
since it is lighter weight than Hibernate.

Java function on following function

Yesterday I asked this question: Java function on function. It was marked as a duplicate, but I don't think I managed to explain what I want there, so I'm trying again.
I want methods that can only be called after certain other methods. For example, we have the class Roads, and on a road we take a Way:
Roads.Way1()
After we choose Way1, we can go to Path1:
Roads.Way1().Path1()
But if we choose Way2
Roads.Way2()
We are not able to go to Path1(), because Way2() goes to Garden1(), so:
Roads.Way2().Garden1()
What I'm trying to say is that you can only use the methods (functions) in a prescribed order; I have seen this in different APIs and libraries. To make it clearer:
Way1 goes to Path1 and isn't able to go to Garden1
Way2 goes to Garden1 and isn't able to go to Path1
So how do I manage to make different roads that each have their own ways, so that I could write something like:
Roads.Way1().
/*
Goes to:
Path1()
Fountain()
Market()
*/
And Way2 can't access them and can only use its own destinations.
I think what you are asking for is: how can I express "control flow" using "language features"? And well, there would be ways to get there: you would need a lot of different classes (or maybe interfaces), so that something that is a "Way2" would simply not offer a "Path1" method.
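A minimal sketch of that idea, using the hypothetical names from the question - each way has its own return type, so the compiler only offers the destinations reachable from it:
public class Roads {

    public Way1 way1() { return new Way1(); }
    public Way2 way2() { return new Way2(); }

    // Way1 only exposes the destinations reachable from it.
    public static class Way1 {
        public void path1()    { System.out.println("path 1"); }
        public void fountain() { System.out.println("fountain"); }
        public void market()   { System.out.println("market"); }
    }

    // Way2 has a different set of destinations; path1() simply does not exist here.
    public static class Way2 {
        public void garden1() { System.out.println("garden 1"); }
    }

    public static void main(String[] args) {
        Roads roads = new Roads();
        roads.way1().path1();    // compiles
        roads.way2().garden1();  // compiles
        // roads.way2().path1(); // does not compile: Way2 has no path1()
    }
}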
But: doing so sounds like a bad idea. It will work fine initially, but as soon as you start extending your system, you will be running into problems all the time:
"Hmm, I need to change Way2; it should now allow going to Path1; but oops, there are some Way2-thingies that should not allow Path1; so I actually need a Way3-thingy" and so on. Chances are extremely high that maintaining this code will turn into a nightmare very soon.
I know this is just an opinion, but my gut feeling is: you are looking for the wrong solution to your problem. Instead, you should spend time on identifying what your actual problem is, and then think about better ways to solve that problem!

Using Stream API for organising application pipeline

As far as I know, the Stream API is intended to be applied to collections. But I like the idea of streams so much that I try to apply them when I can and when I shouldn't.
Originally my app had two threads communicating through a BlockingQueue. The first would populate new elements; the second would transform them and save them to disk. It looked like a perfect stream opportunity to me at the time.
Code I ended up with:
Stream.generate(...).flatMap(...).filter(...).forEach(...)
I'd like to put a few maps in there, but it turns out I have to drag one additional field all the way to the forEach. So I either have to create a meaningless class with two fields and an obscure name, or use AbstractMap.SimpleEntry to carry both fields through, which doesn't look like a great deal to me.
Anyway, I rewrote my app and it even seems to work. There are some caveats, however. As the stream is infinite, "the thing" can't be stopped; for now I'm starting it on a daemon thread, but this is not a solution. Business logic (like handling connection loss and reconnection - which is probably not business logic at all) looks alienated from the rest. Maybe I just need a proxy for this.
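For context, a stripped-down sketch of the kind of pipeline I mean - the queue, the map step and the forEach are placeholders for the real population, transformation and save-to-disk code:
import java.util.Objects;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.stream.Stream;

public class PipelineSketch {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Infinite stream fed from the queue; it never terminates on its own,
        // hence the daemon thread mentioned above.
        Thread pipeline = new Thread(() ->
                Stream.generate(() -> {
                    try {
                        return queue.take();       // block until the producer offers an element
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return null;
                    }
                })
                .filter(Objects::nonNull)
                .map(String::toUpperCase)          // stand-in for the real transformation
                .forEach(System.out::println),     // stand-in for "save to disk"
                "pipeline");
        pipeline.setDaemon(true);
        pipeline.start();

        // Producer side, previously a separate thread writing to the BlockingQueue.
        queue.offer("first element");
        queue.offer("second element");
        Thread.sleep(500);                         // give the daemon thread a moment before the JVM exits
    }
}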
On the other hand, I get laziness in the queue population for free, and one thread instead of two (not sure how good that is). Hopefully it is also a familiar pattern for other developers.
So my question is: how viable is using the Stream API for organising application flow? Are there more hidden pitfalls? If it's not recommended, what are the alternatives?
