Combining parameters from an array field in SolrJ - Java

I have data indexed in Solr. One field is an array and looks like this:
<arr name="sm_vid_Code_of_Federal_Regulations">
<str>Section 1.13</str>
<str>Subpart A</str>
<str>Part 1</str>
<str>Subtitle A</str>
<str>Title 7</str>
</arr>
I need to restrict the results using two or more of these values.
I tried the following, but it does not seem to work correctly:
params.set("fq", "(sm_vid_Code_of_Federal_Regulations:\"Part " + "1" +"\")" + " OR "
+ "(sm_vid_Code_of_Federal_Regulations:\"Title " + "7" +"\")");
Is this the right approach to combine elements from a field array?

It turns out that I should use the SolrQuery class and its addFilterQuery method.
So the correct code is:
SolrQuery query = new SolrQuery();
query.addFilterQuery("(sm_vid_Code_of_Federal_Regulations:\"Title " + "7" +"\")",
"(sm_vid_Code_of_Federal_Regulations:\"Part " + "1" +"\")");
You can apply multiple filter queries using the addFilterQuery method.
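For context, here is a fuller sketch of the same thing (a sketch, not authoritative: it assumes a SolrJ HttpSolrClient pointed at http://localhost:8983/solr/collection1; the URL and collection name are illustrative):
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();
SolrQuery query = new SolrQuery("*:*");
// each string passed to addFilterQuery becomes its own fq parameter
query.addFilterQuery(
        "sm_vid_Code_of_Federal_Regulations:\"Title 7\"",
        "sm_vid_Code_of_Federal_Regulations:\"Part 1\"");
QueryResponse response = client.query(query);
System.out.println(response.getResults().getNumFound());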
I am very grateful to the anonymous person who wrote the fantastic Solr tutorial: http://www.solrtutorial.com/solrj-tutorial.html

Related

How to convert custom Mongo DB update query to Java code

I was searching the internet for how to update all of the document field values to lowercase.
I luckily found a query, which I modified as per my requirement, and it is working correctly:
db.messages.updateMany({},
  [
    {
      $set: {
        recipientEmail: { $toLower: '$recipientEmail' },
        senderEmail: { $toLower: '$senderEmail' }
      }
    }
  ]
)
But now I am trying to convert this query into Java code, and I am not able to.
I again started looking on the internet, but couldn't find any code.
So, can anyone help me convert this query to Java code so that I can use it in my Spring Boot application?
Thanks in advance.
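For reference, the same pipeline update can also be written against the plain MongoDB Java driver, outside Spring Data (a sketch assuming driver 3.11+, where updateMany accepts an aggregation pipeline; the connection string and database name are illustrative):
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.Arrays;

MongoCollection<Document> messages = MongoClients.create("mongodb://localhost:27017")
        .getDatabase("mydb").getCollection("messages");
// empty filter matches every document; the pipeline $set rewrites both fields via $toLower
messages.updateMany(
        new Document(),
        Arrays.asList(new Document("$set",
                new Document("recipientEmail", new Document("$toLower", "$recipientEmail"))
                        .append("senderEmail", new Document("$toLower", "$senderEmail")))));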
You can use the @Query annotation in your repository interface and pass your query as-is (above the method signature).
Here is an example:
#Query("{$and:["
+ " {'id': ?0},"
+ " {$or:["
+ " {'customerId': ?1},"
+ " {'specificCode': ?4}"
+ " ]},"
+ " {'beginDate' : { $gte: ?2}},"
+ " {$or:["
+ " {'endDate' : { $lte: ?2}},"
+ " {'endDate' : {$exists: false}}"
+ " ]},"
+ " {'numberOfTimesUsed': { $lt: ?3}}"
+ "]}")
You can try something like this:
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;
import org.springframework.data.mongodb.core.aggregation.StringOperators;

// mongoTemplate is an injected org.springframework.data.mongodb.core.MongoTemplate
Query query = new Query();
Update update = new Update();
update.set("recipientEmail", StringOperators.valueOf("recipientEmail").toLower());
update.set("senderEmail", StringOperators.valueOf("senderEmail").toLower());
mongoTemplate.updateMulti(query, update, Messages.class);
Since you are using the aggregation pipeline form of update, you can try this:
// requires org.springframework.data.mongodb.core.aggregation.AggregationUpdate
Query query = new Query();
AggregationUpdate update = AggregationUpdate.update()
        .set("recipientEmail").toValue(StringOperators.valueOf("recipientEmail").toLower())
        .set("senderEmail").toValue(StringOperators.valueOf("senderEmail").toLower());
mongoTemplate.updateMulti(query, update, Messages.class);

Group by inside otherwise clause, Spark Java

I have this process in Spark Java (an IntelliJ app) where I have a problem that I don't know how to resolve yet. First I declare the dataset:
private static final String CONTRA1 = "contra1";
query = "select contra1, ..., eadfinal, , ..., data_date" + FROM + dbSchema + TBLNAME " + WHERE fech = '" + fechjmCto2 + "' AND s1emp=49";
Dataset<Row> jmCto2 = sql.sql(query);
Then I have the calculations; I analyze some fields to assign some literal values. My problem is in the aggregate function:
Dataset<Row> contrCapOk1 = contrCapOk.join(jmCto2,
contrCapOk.col(CONTRA1).equalTo(jmCto2.col(CONTRA1)),LEFT)
.select(contrCapOk.col("*"),
jmCto2.col("ind"),
functions.when(jmCto2.col(CONTRA1).isNull(),functions.lit(NUEVES))
.when(jmCto2.col("ind").equalTo("N"),functions.lit(UNOS))
.otherwise(jmCto2.groupBy(CONTRA1).agg(functions.sum(jmCto2.col("eadfinal")))).as("EAD"),
What I want is to compute the sum in the otherwise part. But when I execute it, the cluster gives me this message in the log:
User class threw exception: java.lang.RuntimeException: Unsupported literal type class org.apache.spark.sql.Dataset [contra1: int, sum(eadfinal): decimal(33,6)]
at line 211, the otherwise line.
Do you know what the problem could be?
Thanks.
You cannot use groupBy and an aggregation function inside a column expression. To do what you want, you have to use a window.
For you case, you can define the following window:
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;
...
WindowSpec window = Window
.partitionBy(CONTRA1)
.rangeBetween(Window.unboundedPreceding(), Window.unboundedFollowing());
Where
partitionBy is the equivalent of groupBy for aggregation
rangeBetween determines which rows of the partition will be used by the aggregation function; here we take all the rows
And then you use this window when calling your aggregation function, as follows:
import org.apache.spark.sql.functions;
...
Dataset<Row> contrCapOk1 = contrCapOk.join(
jmCto2,
contrCapOk.col(CONTRA1).equalTo(jmCto2.col(CONTRA1)),
LEFT
)
.select(
contrCapOk.col("*"),
jmCto2.col("ind"),
functions.when(jmCto2.col(CONTRA1).isNull(), functions.lit(NUEVES))
.when(jmCto2.col("ind").equalTo("N"), functions.lit(UNOS))
.otherwise(functions.sum(jmCto2.col("eadfinal")).over(window))
.as("EAD")
);

Spring data elasticsearch to create index dynamically based on request parameter, percolator support and create index via Elasticsearch Operations

I read through https://docs.spring.io/spring-data/elasticsearch/docs/current/reference/html/#reference to begin with.
My requirements:
I want to use a percolator. Is there any support for it in Spring Data Elasticsearch? I don't see any in the link above, although I understand that percolating is the same as indexing (technically, from the perspective of using Spring Data Elasticsearch). So I can use the indexing part of Spring Data Elasticsearch, but I am just checking whether there is anything specific to the percolator.
I want to create an index dynamically. I do understand I can achieve that using a SpEL template expression as mentioned in https://docs.spring.io/spring-data/elasticsearch/docs/current/reference/html/#elasticsearch.mapping.meta-model.annotations, but my case is slightly different: I will get the index name via a request param as part of the API call. So, as far as I know, I cannot use SpEL or try something like https://stackoverflow.com/a/33520421/4068218.
I see I can use ElasticsearchOperations or ElasticsearchRepository to create an index. Because of #2 (i.e. the index name comes via a request parameter), ElasticsearchOperations suits better, but I see IndexOperations facilitating createMapping and createSettings, not both together. I see putMapping too, but I don't see anything that takes both mapping and settings. The reason I want both is that I want to create something like the below to begin with:
"settings" : {
"index" : {
"number_of_shards" : 1,
"number_of_replicas" : 0
}
},
"mappings": {
"properties": {
"message": {
"type": "text"
},
"query": {
"type": "percolator"
}
}
}
Bottom line: how do I create an index (the name of the index will be dynamic, via a request param) with mappings and settings using ElasticsearchOperations?
Any lead/help is much appreciated.
First of all, thank you very much @P.J.Meisch. Upvoted both your comments as a token of gratitude.
The below worked for me and might help others in the future:
Document mapping = Document.create().fromJson("{\n" +
"\n" +
" \"properties\": {\n" +
" \"message\": {\n" +
" \"type\": \"text\"\n" +
" },\n" +
" \"query\": {\n" +
" \"type\": \"percolator\"\n" +
" }\n" +
" }\n" +
"\n" +
"}");
Map<String, Object> settings = ImmutableMap.of( "number_of_shards" ,2,"number_of_replicas",1);
elasticsearchOperations.indexOps(IndexCoordinates.of("whatever-indexname-you-need")).create(settings,mapping);
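To wire the dynamic index name from the API call into this, a minimal controller sketch (my assumption of how the pieces fit together; the endpoint, class name, and method name are illustrative):
import java.util.Map;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.document.Document;
import org.springframework.data.elasticsearch.core.mapping.IndexCoordinates;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class IndexController {

    private final ElasticsearchOperations elasticsearchOperations;

    public IndexController(ElasticsearchOperations elasticsearchOperations) {
        this.elasticsearchOperations = elasticsearchOperations;
    }

    @PostMapping("/indices")
    public boolean createIndex(@RequestParam String indexName) {
        Document mapping = Document.create().fromJson(
                "{ \"properties\": { \"message\": { \"type\": \"text\" },"
                        + " \"query\": { \"type\": \"percolator\" } } }");
        Map<String, Object> settings = Map.of("number_of_shards", 1, "number_of_replicas", 0);
        // create(settings, mapping) builds the index with both settings and mappings in one call
        return elasticsearchOperations.indexOps(IndexCoordinates.of(indexName)).create(settings, mapping);
    }
}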

Impossible to load dataset [country_name] due to the following errors: the parameter has no value [par_country]

Step 1
I have created a dataset which has a parametric query:
select city from country where country=$P{par_country}
I have added the attributes, and on preview it is working fine.
Step 2
Now I have created a LOV (List of Values) with the query
select cust_country from country
and on testing it gives me all the countries.
Step 3
Added that LOV to an AD (Analytical Driver).
Step 4
Created a new cockpit with the data source, then selected a pie chart, and I am getting this error.
You have to set the parameters in the "filters editor" (funnel icon) when you are editing the cockpit. It shows a list with the parameters of the dataset, and you can set the default values, etc.
That's the theory... but my parameter list is empty, so I can't do anything with the dataset parameters...
After spending days on this, the following finally worked for me.
I have made the changes by following this link https://www.spagoworld.org/jforum/posts/list/4272.page with some additional changes, in the files below:
1. SpagoBICockpitEngine/WebContent/js/src/ext4/sbi/cockpit/MainPanel.js
In the first row of the function onShowFilterEditorWizard, uncomment:
config.stores = Sbi.storeManager.getStoreIds();
Sbi.trace("[MainPanel.onShowAssociationEditorWizard]: config.stores is equal to [" + Sbi.toSource(config.stores) + "]");
and uncomment the same lines in the onShowFontEditorWizard method.
2. SpagoBICockpitEngine/js/src/ext4/sbi/widgets/grid/InMemoryPagingGridPanel.js
At row 96, in the function loadStore, comment out this line:
//this.store.loadPage(1);
I have used JavaScript to add a string parameter.
Query: select country, cnt from country_duns PLACEHOLDER_COUNTRY
and in the edit script:
country = parameters.get('par_country');
if (country == null) {
placeholder = " ";
}
else {
placeholder = "where country = '" + country + "'";
}
query = query.replace("PLACEHOLDER_COUNTRY", placeholder);
The parameter is par_country, of string type.

Get Accumulo column family from API?

I am learning Accumulo at the moment and I noticed there wasn't a direct call (that I could find) for figuring out the column family for an entry. I need data from an Accumulo table in the following format, for example:
{key:"XPZ-878-S12",
columns:[{name:"NAME",value:"FOO BAR"},
{name:"JOB",value:"ENGINEER"}
]
}
And these spots are where I am trying to take data from:
{key:"key value from table",
columns:[{name:"name of column family",value:"value from table"},
{name:"name of column family",value:"value from table"}
]
}
So obviously the key and value are easy to get hold of, but what I call the "name" is extremely important to me as well, a.k.a. the column family name.
Yes, it is possible. For example, take a look at this:
for (Entry<Key, Value> entry : scan) {
    Text key = entry.getKey().getRow();  // the row ID (the "key" in your format)
    Value val = entry.getValue();
    returnVal.append("KEY " + key + " " + entry.getKey().getColumnFamily() + ": " + val + "\n");
}
The solution: for whatever entry you are looking at, call entry.getKey().getColumnFamily().
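To build up the row-keyed structure from the question, a rough sketch (assuming scanner is an org.apache.accumulo.core.client.Scanner you have already created; the map-of-lists shape is just one illustrative way to hold the data before serializing it to JSON):
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;

// group (column family, value) pairs under each row key
Map<String, List<String[]>> byRow = new LinkedHashMap<>();
for (Map.Entry<Key, Value> entry : scanner) {
    String row = entry.getKey().getRow().toString();
    String family = entry.getKey().getColumnFamily().toString();
    String value = entry.getValue().toString();
    byRow.computeIfAbsent(row, r -> new ArrayList<>())
            .add(new String[] { family, value });
}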
