Elasticsearch access fields in a document before matching - java

I have the following mappings in my Elasticsearch index:
{
  "test-index" : {
    "mappings" : {
      "properties" : {
        "factor" : {
          "type" : "long"
        },
        "price_range" : {
          "type" : "integer_range",
          "doc_values" : false
        },
        // other_fields
      }
    }
  }
}
The Elasticsearch query is an AND query that combines conditions on the "other_fields" with a condition on price. The price has to be calculated on the fly: I want to use the value of "factor" to compute the price and then match that price against the "price_range" field. The matching conditions are as follows (a plain-Java sketch of the intended logic appears after the list):
If "price_range" is indexed, use "factor" to calculate the price and match against "price_range"
If "price_range" is indexed but "factor" is not, use a default value for factor and match against "price_range"
If "price_range" is not indexed, ignore price calculation
Please let me know how I can go about this. Thanks!

Related

Printf and regex to format Java output

I need to use printf to print data from a text file (the HashMap map is already populated, the Account values aren't null, everything is fine). But how do I use regex to not only maintain the even spacing of 20 characters between my columns, but also round the last column, "Balance", to one decimal place? I've looked at similar threads, but those formatting rules either threw errors or simply had no effect. I tried using "%20s %20s %20s %20s.1f \n" as the format string, but I ended up with this:
ID First Name Last Name Balance.1f
107H Alessandro Good 6058.14
110K Kolby Cain 8628.4
100A Matias Kane 290.99
103D Macie House 631.12
108I Allan Turner 914.89
106G Nancy Avery 5201.38
105F Semaj Olsen 344.63
109J Wilson Hudson 771.65
102C Alana Farmer 2004.5
101B Johnathan Burgess 457.35
104E Andres Rivers 3487.87
The total balance in all the accounts is: $28790.92
So it apparently only worked for a few of the data points?
Here's my code:
public void printData() {
    System.out.printf("%20s %20s %20s %20s%.1f \n", "ID", "First Name", "Last Name", "Balance");
    for (Account a : map.keySet()) {
        // could do a.getID(), but rather get value from ArrayList
        ArrayList<String> myList = map.get(a); // retrieve the VALUE (ArrayList) using the key (Account a)
        for (String s : myList) {
            System.out.printf("%20s", s);
        }
        System.out.println();
    }
}
In your code, the decimals in each row under the "Balance" column are simply being printed as the strings they are. The %.1f in the first line only affects the header row, as formatting for a fifth argument it never receives; it is not applied to any of the following rows.
Try this:
System.out.printf("%20s %20s %20s %20s\n", "ID", "First Name", "Last Name", "Balance");
for (Account a : map.keySet()) {
    ArrayList<String> myList = map.get(a);
    // print the first 3 columns as strings
    for (int i = 0; i < myList.size() - 1; i++) {
        String s = myList.get(i);
        System.out.printf("%20s", s);
    }
    // print the last column as a decimal rounded to 1 place after the decimal point
    float balance = Float.parseFloat(myList.get(myList.size() - 1));
    System.out.printf("%20s%.1f", "", balance);
    System.out.println();
}
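If you also want the balance right-aligned in a 20-character column like the header, a single width-and-precision specifier does both at once:

System.out.printf("%20.1f", balance);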

How to transform a flat map to complex JSONObject?

I want to transform a flat key-value map into a complex JSON object.
The structure is as follows:
the map keys of course represent the JSON keys.
nested elements are separated by a dot.
list elements are signaled by a digit at the end of the key; there may be multiple digits at the end. Unfortunately, the keys stand in reversed order, meaning that the first key fragment maps to the last digit and the innermost key fragment maps to the first digit.
First example:
service.fee.1.1=a
service.fee.2.1=b
service.fee.3.1=c
Here the "service" key maps always to index=1. This means "service" is an array, but with only one element in this case.
The one element has the key "fee", with 3 values inside.
The resulting json should thus be:
{
  "service": [
    {
      "fee": ["a", "b", "c"]
    }
  ]
}
Another example:
service.fee.name.1.1=a
service.fee.age.2.1=b
service.fee.test.2.1=c
{
  "service": [
    {
      "fee": [
        {
          "name": "a"
        },
        {
          "age": "b",
          "test": "c"
        }
      ]
    }
  ]
}
That's what I started with, but I can't get past the point where I probably have to use recursion to handle nested objects and lists:
JSONObject json = new JSONObject();
for (Map.Entry<String, String> entry : map.entrySet()) {
    String key = entry.getKey();
    if (endswithdigit(key)) {
        // TODO: handle list elements
    } else {
        if (key.contains(".")) {
            // complex (nested) object
            JSONObject subjson = new JSONObject();
            json.put(key, subjson);
            // TODO probably have to apply some kind of recursion here with subjson??
        } else {
            // plain value
            json.put(key, entry.getValue());
        }
    }
}
Maybe you could give advice on how to properly build a nested JSONObject with nested lists and recursion?
If you need to tackle this problem yourself (i.e. a library cannot handle it), then I would break it down so that it can be coherently tackled with the Composite pattern.
I'll address this answer in two parts: first, a proposed solution to create the hierarchy; and second, how to utilize the Composite pattern to turn your internal hierarchy into the JSON you want.
Part 1: Creating the Hierarchy
One approach for this would be to iteratively create objects by dividing elements into bins (starting with a common composite root object that contains every element). This will form the composite structure of your data. The flow will be:
For each element in the bin of the object composite:
Strip off the top-level identifier from the left of the element
Create an identifier to be associated with it.
If it is keyed:
Strip off the key from the right
Create a composite array for the identifier (if it does not exist).
If there is further data left of the = of the element:
Create a composite object for the element bin associated with that array index (if it does not exist).
Place the element in a bin for that object.
Otherwise create a leaf node for the value of the index.
Otherwise, if there is further data left of the = of the element:
Create a composite object for the element bin associated with that array index (if it does not exist).
Place the element in a bin for that object.
Otherwise, create a leaf node for the value of the identifier.
Repeat for all new bins.
For example's sake, let's assume we are working with the given dataset:
x.y.z.1.1=A
x.y.z.3.1=B
x.y.w.1.1=C
x.u.1=D
a.b.1=E
a.c.1=F
e.1=G
e.2=H
i=I
m.j=J
m.k=K
The process would then proceed as follows:
ITERATION 0 (initialize):
root // x.y.z.1.1=A, x.y.z.3.1=B, x.y.w.1.1=C, x.u.1=D, a.b.1=E, a.c.1=F, e.1=G, e.2=H, i=I, m.j=J, m.k=K
ITERATION 1:
root :
x[1] // y.z.1=A, y.z.3=B, y.w.1=C, u=D
a[1] // b=E, c=F
e[1] : "G"
e[2] : "H"
i : "I"
m : // j=J, k=K
ITERATION 2:
root :
x[1] :
y[1] // z=A, w=C
y[3] // z=B
u : "D"
a[1] :
b : "E"
c : "F"
e[1] : "G"
e[2] : "H"
i : "I"
m :
j : "J"
k : "K"
ITERATION 3:
root :
x[1] :
y[1] :
z : "A"
w : "C"
y[3] :
z : "B"
u: "D"
a[1] :
b : "E"
c : "F"
e[1] : "G"
e[2] : "H"
i : "I"
m :
j : "J"
k : "K"
Part 2: Composite Pattern
At this point, we've iteratively divided our data into a hierarchical composite structure; now we just need to turn our internal data structure into JSON. This is where the Composite pattern comes in handy; each of your objects will implement the following interface:
// All objects in the composite tree must implement this.
public interface Jsonable {
    // The non-leaf objects will need to have their implementation of this
    // call it for each child object (and handle gaps).
    JsonObject toJsonObject();
}
If following the above, we would likely have three implementations of this interface: ArrayComposite, ObjectComposite, and ValueLeaf.
Calling toJsonObject() on your root element will give you your complete JsonObject. A textual representation of that for the above example is below (notice the added gap in the y array; this needs to be handled in the toJsonObject() call of your array composites):
{
  "x" : [
    {
      "y" : [
        {
          "z" : "A",
          "w" : "C"
        },
        "",
        {
          "z" : "B"
        }
      ],
      "u" : "D"
    }
  ],
  "a" : [
    {
      "b" : "E",
      "c" : "F"
    }
  ],
  "e" : [
    "G",
    "H"
  ],
  "i" : "I",
  "m" : {
    "j" : "J",
    "k" : "K"
  }
}
Which, whitespace aside, seems to be what you're looking for.
Note that this assumes the data set does not contain elements that would result in invalid JSON, i.e. the dataset could not contain the following:
i=I
i.1=I
As it would be saying that i is both an array and a value.
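For illustration only, here is a minimal sketch of the three composites using org.json (which the question already uses). The class names are hypothetical, and the return type is widened to a plain Object so that leaf nodes can yield raw strings and array composites can yield JSONArrays, which is slightly more general than the Jsonable interface above:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;
import org.json.JSONArray;
import org.json.JSONObject;

// Widened variant of Jsonable: leaves return plain values,
// composites return JSONObject/JSONArray.
interface JsonNode {
    Object toJsonValue();
}

// Leaf: a single string value such as "A".
class ValueLeaf implements JsonNode {
    private final String value;
    ValueLeaf(String value) { this.value = value; }
    public Object toJsonValue() { return value; }
}

// Composite for plain nested objects such as m : { j, k }.
class ObjectComposite implements JsonNode {
    private final Map<String, JsonNode> children = new LinkedHashMap<>();
    void put(String key, JsonNode child) { children.put(key, child); }
    public Object toJsonValue() {
        JSONObject obj = new JSONObject();
        children.forEach((k, v) -> obj.put(k, v.toJsonValue()));
        return obj;
    }
}

// Composite for indexed entries such as x[1], y[3]; gaps become "".
class ArrayComposite implements JsonNode {
    private final TreeMap<Integer, JsonNode> items = new TreeMap<>();
    void put(int index, JsonNode child) { items.put(index, child); }
    public Object toJsonValue() {
        JSONArray arr = new JSONArray();
        int max = items.isEmpty() ? 0 : items.lastKey();
        for (int i = 1; i <= max; i++) {
            JsonNode child = items.get(i);
            arr.put(child == null ? "" : child.toJsonValue()); // fill gaps with ""
        }
        return arr;
    }
}

Calling toJsonValue() on the root ObjectComposite built in Part 1 then yields the JSON shown above.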
Please try the Gson library and use new Gson().toJson(yourmap); this will convert your map to JSON format.
Perhaps you can resolve it by splitting the key where the numbers start, using a LIFO for the subkeys and a FIFO for the value and indexes. Instead of splitting, it can also be done by parsing the key and detecting where the numbers start:
For example:
x.y.z.2.1 = val
This is split here to show how it would work, but it can be done by just parsing the string (the : only delimits the separation).
x.y.z : 2.1 : val
Then put the subkeys in a LIFO (x goes in first, z last):
LIFO
head: z
y
x
and put the value and the indexes in a FIFO (val goes in first, then 2, then 1):
FIFO
front: val
2
1
Then you can pop from the LIFO and pair it with the next item taken from the FIFO. The first pairing assigns the value of the map entry; each following pairing attaches the object or array built in the previous step at the given index:
z = val
y[2] = z
x[1] = y
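A rough Java sketch of just this splitting-and-pairing step (the code that then actually builds the JSONObjects/JSONArrays from the pairs is omitted):

import java.util.ArrayDeque;
import java.util.Deque;

public class KeySplitSketch {
    public static void main(String[] args) {
        // One entry of the flat map, e.g. "x.y.z.2.1=val"
        String key = "x.y.z.2.1";
        String value = "val";

        Deque<String> subkeys = new ArrayDeque<>(); // LIFO: push x, y, z -> pop z, y, x
        Deque<String> slots = new ArrayDeque<>();   // FIFO: val goes in first, then the digits

        slots.addLast(value);
        for (String part : key.split("\\.")) {
            if (part.chars().allMatch(Character::isDigit)) {
                slots.addLast(part);   // indexes in key order: 2, then 1
            } else {
                subkeys.push(part);    // subkeys: x, then y, then z
            }
        }

        while (!subkeys.isEmpty()) {
            // Pairs come out as: z <- val, y <- 2, x <- 1
            System.out.println(subkeys.pop() + " <- " + slots.pollFirst());
        }
    }
}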

In Mule/DataWeave, how do I transform a HashMap into an Array when both key & value are numbers

In Mule/DataWeave, how do I convert/transform a HashMap into an Array? I have a HashMap whose keys & values are dynamic numbers.
Example:
{"3.2" : 1, "22" : 8, "2.0" : 1}
I want to transform it to this structure:
[
  {
    "name": "app-a",
    "value1": 3.2,
    "value2": 1
  },
  {
    "name": "app-a",
    "value1": 22,
    "value2": 8
  },
  {
    "name": "app-a",
    "value1": 2,
    "value2": 1
  }
]
Solution (thanks to @Sulthony H):
%dw 1.0
%output application/json
---
payload pluck $$ map {
    value1: ($ as :string) as :number,
    value2: payload[$]
}
In order to transform the HashMap into an Array, do the following steps:
Iterate over the HashMap by its keys using the pluck operator in DataWeave: payload pluck $$ map {}
Transform the key into a number: value1: ($ as :string) as :number
Get the value for that key: value2: payload[$]
Another different solution:
%dw 1.0
%output application/json
---
payload map {
    ($ mapObject {
        name: "app-a",
        value1: $$ as :string,
        value2: $
    })
}
1 - Use the map operator to iterate over the list of elements: payload map
2 - Use mapObject on each element of your array: $ mapObject. With mapObject, $$ refers to the key name and $ refers to the value.
3 - Print out the values with value1: $$ as :string, value2: $
Even simpler:
payload pluck (
    {
        value1: $$,
        value2: $
    }
)

How to replace String value with regex

Morris example:
Morris.Bar({
    element: 'bar-example',
    data: [
        { month: "2006", amount: 100 },
        { month: "2007", amount: 75 }
    ],
    xkey: "month",
    ykeys: ["amount"],
    labels: ["amount"]
});
The amount in ykeys and labels should be written exactly as in the Morris example above; otherwise the graph does not display because of an error in the amount format.
At the moment my amount value looks like this:
String amount = "[amount]";
amount = "[amount]"
and I want the value to look like this:
["amount"]
What would be the easiest and preferred way to replace these values?
Why regex?
If you can achieve your task with simple API calls that don't require regex, stick to them.
amount.substring(1, amount.length() - 1);
You don't need to use regex:
amount = amount.replace("[", "[\"").replace("]", "\"]");
Edited: the OP cleared up what was needed. The code above replaces [ with [" and ] with "].
The " character is escaped in Java with \.

hadoop distinct count of a field

I have a file whose format is like below:
1,5321234567
1,5324564321
1,5324564321
2,1234567643
2,1234567666
2,9875422345
3,5344435345
3,5344435345
3,5344435345
3,5344435345
3,5345345312
3,8767564564
At the end of the reduce process, I want distinct counts of the second field, with the first field as the key, e.g.:
1,2
2,3
3,3
What are the simplest map and reduce functions in Java for this purpose?
Thanks.
If I understand your goal correctly, you'll need to:
Make the values per key unique
Count the number of distinct items per "key"
So the simplest way to get there would be something like the two-job outline below (a Java sketch of both jobs follows the outline):
Assume the input is {A,B}
MAP 1:
Output Key : {A,B}
Output Value: 1
REDUCE 1:
Input Key : {A,B}
Input Values: {1,1,1,...}
Output Key : A
Output Value: B
MAP 2:
Output Key : A
Output Value: 1
REDUCE 2:
Input Key : A
Input Values: {1,1,1,...}
Output Key : A
Output Value: SUM of all the values
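A hedged Java sketch of those two jobs, using the classic org.apache.hadoop.mapreduce API (class names and job wiring are assumptions, and the first job is simplified to de-duplicate the whole {A,B} record rather than splitting it into key and value):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// JOB 1: key = the whole "A,B" line, so duplicate pairs collapse in the shuffle.
class PairMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
        ctx.write(new Text(line.toString().trim()), NullWritable.get());
    }
}

class DedupReducer extends Reducer<Text, NullWritable, Text, NullWritable> {
    @Override
    protected void reduce(Text pair, Iterable<NullWritable> ignored, Context ctx)
            throws IOException, InterruptedException {
        // Each distinct {A,B} pair is emitted exactly once.
        ctx.write(pair, NullWritable.get());
    }
}

// JOB 2: reads the distinct pairs, emits (A, 1) per pair and sums per A.
class CountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    @Override
    protected void map(LongWritable offset, Text pair, Context ctx)
            throws IOException, InterruptedException {
        String a = pair.toString().split(",")[0];
        ctx.write(new Text(a), ONE);
    }
}

class DistinctCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text a, Iterable<IntWritable> ones, Context ctx)
            throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable one : ones) {
            count += one.get();
        }
        ctx.write(a, new IntWritable(count));
    }
}

The output of Job 1 (one distinct A,B pair per line) is used as the input of Job 2.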
As I understand it, you need the count of unique values per key and do not need to preserve the values themselves.
It becomes simple if you create the key from the whole record; the Hadoop framework will then take care of grouping identical records for you.
map(LongWritable key, Text value, Context context) {
    // Emit the whole record ("1,5321234567") as the key so that
    // identical records end up in the same reduce group.
    context.write(value, new IntWritable(1));
}

reduce(Text key, Iterable<IntWritable> values, Context context) {
    long count = 0;
    for (IntWritable value : values) {
        count += value.get();
    }
    context.write(key, new LongWritable(count));
}
The reducer can be used as a combiner as well.
Just do sorting: get all the inputs into an ArrayList and sort it. This would help you.
