Hibernate: multiple named parameters with setString/setParameter in generated HQL (Java)

I'm using Hibernate and trying to do a LIKE on certain fields.
I'm splitting a string and then generating the HQL, with
table.entry LIKE :argsearch_0 OR table.entry LIKE :argsearch_0 OR
table.entry LIKE :argsearch_1 OR table.entry LIKE :argsearch_1
(the 0 and 1 suffixes are in fact generated by a counter).
But I get:
Not all named parameters have been set: [argsearch_0]
First question:
Can I use 2 named parameters and do only 1 setParameter (or setString)?
String nameParam = "argsearch_" + i;
q.setParameter(nameParam, "%" + args[i] + "%");
Second question:
Why are my parameters not working?

It depends on what you mean when you ask "Can I use 2 named parameters and do only 1 setParameter".
In your original query you have 2 named parameters ('argsearch_0' and 'argsearch_1') and each is used twice in the query. So you have to call set for both 'argsearch_0' and 'argsearch_1', but only once per parameter (you can actually call set multiple times for the same parameter if you really want, but only the last call is used).
As for your second question, as someone already pointed out, it's because you have a bug in your code. You are not setting the value for the 'argsearch_0' parameter.
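For reference, here is a minimal sketch of the loop-based approach, assuming the search terms live in a String[] named args and that the entity/field names are placeholders for yours; each distinct named parameter is set exactly once, no matter how often it appears in the HQL:

// Build one LIKE clause per search term, reusing the same counter for the parameter names.
StringBuilder hql = new StringBuilder("from Entry e where ");
for (int i = 0; i < args.length; i++) {
    if (i > 0) {
        hql.append(" or ");
    }
    hql.append("e.entry like :argsearch_").append(i);
}
Query q = session.createQuery(hql.toString());

// Bind every parameter that was generated above.
for (int i = 0; i < args.length; i++) {
    q.setParameter("argsearch_" + i, "%" + args[i] + "%");
}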

You can try this.
**Step 1:** Add however many parameters you need to a HashMap
-------------------------------------------------------------------
HashMap<String, Object> paramList = new HashMap<>();
paramList.put("contactNo", 22);
**Step 2:** Pass your query
-------------------------------------------------------------------
Query query1 = session.createQuery("from EmailTemplate c where c.contactNo = :contactNo");
**Step 3:** Whatever the data type is does not matter; bind every entry of the map as a parameter.
-------------------------------------------------------------------
for (String paramKey : paramList.keySet()) {
    query1.setParameter(paramKey, paramList.get(paramKey));
}
**Step 4:** Read the result.
-------------------------------------------------------------------
String finalResult = query1.getSingleResult().toString();


Rest controller request multiple path variables with multiple query parameters

How can we create a REST API (Spring controller) that allows multiple path variables to have query parameters, where
1) function is a path variable and id=functionname is a query parameter, and
2) subfunction is a path variable and id=subfunctionname is a query parameter?
Request URL: /content/v1/clients/clientname/function?id=functionname&subfunction?id=subfunctionname
Update: I am using the matrix-variable style that was suggested:
/content/v1/clients/clientname/function;id=functionname/subfunction;id=subfunctionname
The method shown below is not working as expected.
What should the method definition look like?
public HashMap<String, List<Model>> getContent(
        @PathVariable String clientname,
        @MatrixVariable(name="id", pathVar="function") List<String> capabilitiesId,
        @MatrixVariable(name="id", pathVar="subfunction") List<String> subcapabilitiesId) {
}
Error : Missing matrix variable 'id' for method parameter of type List
It's not possible.
In a REST controller you have two types of parameters:
Path parameter: useful for selecting a resource (it maps to a method of your class).
Query parameter: useful for sending other information.
In your case I think it is a good idea to send all this information inside the payload, using the POST or PUT HTTP method.
If you can't use a payload, you can use the following solution:
Request URL : /content/v1/clients/clientname/function1/function2?id1=functionnamec&id2=subfunctionaname
In this way you can create your controller with 2 path parameters and 2 query parameters:
@GET
@Path("/basePath/{funct1}/{funct2}")
public Response <methodName>(@PathParam("funct1") String funct1, @PathParam("funct2") String funct2, @QueryParam("id1") String id1, @QueryParam("id2") String id2)
/content/v1/clients/clientname/function?id=functionnamec&subfunction?id=subfunctionaname
The parsing of a URI is defined by RFC 3986. In particular, U+003F QUESTION MARK is a reserved character, the first instance of which serves as the delimiter between the relative-part and the query.
So your example would parse as
path: /content/v1/clients/clientname/function
query: id=functionnamec&subfunction?id=subfunctionaname
And if we were to parse the query, as though it were an application/x-www-form-urlencoded value....
>>> import urllib.parse
>>> urllib.parse.parse_qs("id=functionnamec&subfunction?id=subfunctionaname")
{'id': ['functionnamec'], 'subfunction?id': ['subfunctionaname']}
We see that the second question mark becomes part of the parameter name.
In short, it's a perfectly valid URI, but it isn't likely to produce the results that you are hoping for.
/content/v1/clients/clientname/function/subfunction?id=functionnamec&id=subfunctionaname
This might be usable, but there's likely to be some confusion about the duplicate id query parameters:
>>> urllib.parse.parse_qs("id=functionnamec&id=subfunctionaname")
{'id': ['functionnamec', 'subfunctionaname']}
/content/v1/clients/clientname/function/subfunction?function.id=functionnamec&subfunction.id=subfunctionaname
>>> urllib.parse.parse_qs("function.id=functionnamec&subfunction.id=subfunctionaname")
{'function.id': ['functionnamec'], 'subfunction.id': ['subfunctionaname']}
That might be easier.
I think it would be common to take the data out of the query and put it on the path instead:
/content/v1/clients/clientname/function/functionname/subfunction/subfunctionaname
And then extract the path parameters you need.
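A minimal Spring sketch of that last, path-based variant (assuming Spring 4.3+ for @GetMapping; imports are omitted, and the controller name, trivial body and reuse of the Model type are illustrative assumptions, not the asker's actual code):

@RestController
public class ContentController {

    @GetMapping("/content/v1/clients/{clientname}/function/{functionname}/subfunction/{subfunctionname}")
    public Map<String, List<Model>> getContent(@PathVariable String clientname,
                                               @PathVariable String functionname,
                                               @PathVariable String subfunctionname) {
        // Both identifiers now arrive as ordinary path variables; no matrix or extra query parsing is needed.
        return new HashMap<>();
    }
}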

Spark reduceByKey function seems not to work with a single key

I have 5 rows of records in MySQL, like
sku:001 seller:A stock:UK margin:10
sku:002 seller:B stock:US margin:5
sku:001 seller:A stock:UK margin:10
sku:001 seller:A stock:UK margin:3
sku:001 seller:A stock:UK margin:7
And I've read these rows into Spark and transformed them into
JavaPairRDD<Tuple3<String,String,String>, Map>(<sku,seller,stock>, Map<margin,xxx>).
It seems to work fine up to this point.
However, when I used the reduceByKey function to sum the margins into a structure like:
JavaPairRDD<Tuple3<String,String,String>, Map>(<sku,seller,stock>, Map<marginSummary, xxx>).
the final result has 2 elements:
JavaPairRDD<Tuple3<String,String,String>, Map>(<sku,seller,stock>, Map<margin,xxx>).
JavaPairRDD<Tuple3<String,String,String>, Map>(<sku,seller,stock>, Map<marginSummary, xxx>).
It seems like row 2 didn't enter the reduceByKey function body. I was wondering why?
That is the expected outcome. func is called only when objects for a single key are merged. If there is only one object for a key, there is no reason to call it.
Unfortunately it looks like you have a bigger problem, which can be inferred from your question. You are trying to change the type of the value in reduceByKey. In general it shouldn't even compile, as reduceByKey takes Function2<V,V,V> - the input and output types have to be identical.
If you want to change a type, you should use either combineByKey
public <C> JavaPairRDD<K,C> combineByKey(Function<V,C> createCombiner,
Function2<C,V,C> mergeValue,
Function2<C,C,C> mergeCombiners)
or aggregateByKey
public <U> JavaPairRDD<K,U> aggregateByKey(U zeroValue,
Function2<U,V,U> seqFunc,
Function2<U,U,U> combFunc)
Both can change the types and would fix your current problem. Please refer to the Java test suite for examples: 1 and 2.
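As an illustration only (not the asker's code), here is a sketch with aggregateByKey, assuming the starting point is the pair RDD from the question with the margin kept as a plain Double value; the method and variable names are made up:

import java.util.HashMap;
import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple3;

// Turns (sku, seller, stock) -> margin into (sku, seller, stock) -> {"marginSummary": sum}.
static JavaPairRDD<Tuple3<String, String, String>, HashMap<String, Double>> summarizeMargins(
        JavaPairRDD<Tuple3<String, String, String>, Double> margins) {
    return margins.aggregateByKey(
            new HashMap<String, Double>(),          // zero value: an empty summary map per key
            (summary, margin) -> {                  // seqFunc: fold one margin into the summary
                summary.merge("marginSummary", margin, Double::sum);
                return summary;
            },
            (left, right) -> {                      // combFunc: merge partial summaries across partitions
                right.forEach((k, v) -> left.merge(k, v, Double::sum));
                return left;
            });
}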

CouchDB How to make queries with multiple complex keys

I am trying to make a CouchDB view to obtain documents that are in set 1 and in set 2. For example, when I have a single key I can make a query like:
dbname/_design_doc/viewName?keys=[value1, value2, value3]
and it returns all the documents where it finds either value1, value2 or value3. What I want is something like this but for a complex key.
For example,
dbname/_design_doc/viewName?keys=[[key1, key12, key13],[key21, key22]]
where key1x is a value for the first key and key2x is a value for the second key, meaning I would like to get every document that has key11 and key21, key11 and key22, key12 and key21, key12 and key22 and so on.
My view is this one:
"twokeys": {
"map": "function(doc) {\n if (doc.uid && doc.hid){\n
emit([doc.uid, doc.hid], doc);\n }\n}"
}
Is this possible?
Thanks in advance
You can query with the keys parameter using complex keys if you follow this answer.
Unfortunately, you can't use startkey or endkey together with keys.
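As an illustration (assuming the twokeys view above; the design document path is a placeholder), each entry of keys has to be a complete [uid, hid] pair, so the combinations are listed explicitly:
dbname/_design/ddoc/_view/twokeys?keys=[["key11","key21"],["key11","key22"],["key12","key21"],["key12","key22"]]
To avoid URL-length limits, the same list can be sent as a JSON body in a POST to the view, e.g. {"keys": [["key11","key21"],["key12","key22"]]}.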

How to remove duplicate columns after a JOIN in Pig?

Let's say I JOIN two relations like:
-- part looks like:
-- 1,5.3
-- 2,4.9
-- 3,4.9
-- original looks like:
-- 1,Anju,3.6,IT,A,1.6,0.3
-- 2,Remya,3.3,EEE,B,1.6,0.3
-- 3,akhila,3.3,IT,C,1.3,0.3
jnd = JOIN part BY $0, original BY $0;
The output will be:
1,5.3,1,Anju,3.6,IT,A,1.6,0.3
2,4.9,2,Remya,3.3,EEE,B,1.6,0.3
3,4.9,3,akhila,3.3,IT,C,1.3,0.3
Notice that $0 is shown twice in each tuple. E.g. in
1,5.3,1,Anju,3.6,IT,A,1.6,0.3
the first and third fields are both the join key.
I can remove the duplicate key manually by doing:
jnd = foreach jnd generate $0,$1,$3,$4 ..;
Is there a way to remove this dynamically, i.e. have the duplicate join key dropped automatically?
I have faced the same kind of issue while working on dataset joins and other data processing steps where column names get repeated in the output.
So I wrote a UDF which removes the duplicate columns by using the schema name of each field, retaining the data of the first occurrence of each column.
Prerequisites:
All the fields must have names in the schema.
You need to download the UDF file and build it into a jar in order to use it.
UDF file location from GitHub :
GitHub UDF Java File Location
We will take the above question as an example.
--Data Set A contains this data
-- 1,5.3
-- 2,4.9
-- 3,4.9
--Data Set B contains this data
-- 1,Anju,3.6,IT,A,1.6,0.3
-- 2,Remya,3.3,EEE,B,1.6,0.3
-- 3,Akhila,3.3,IT,C,1.3,0.3
PIG Script:
REGISTER /home/user/
DSA = LOAD '/home/user/DSALOC' AS (ROLLNO:int,CGPA:float);
DSB = LOAD '/home/user/DSBLOC' AS (ROLLNO:int,NAME:chararray,SUB1:float,BRANCH:chararray,GRADE:chararray,SUB2:float);
JOINOP = JOIN DSA BY ROLLNO,DSB BY ROLLNO;
We will get the following column names after joining:
DSA::ROLLNO:int,DSA::CGPA:float,DSB::ROLLNO:int,DSB::NAME:chararray,DSB::SUB1:float,DSB::BRANCH:chararray,DSB::GRADE:chararray,DSB::SUB2:float
To reduce that to
DSA::ROLLNO:int,DSA::CGPA:float,DSB::NAME:chararray,DSB::SUB1:float,DSB::BRANCH:chararray,DSB::GRADE:chararray,DSB::SUB2:float
(i.e. DSB::ROLLNO:int is removed),
we need to use the UDF as follows:
JOINOP_NODUPLICATES = FOREACH JOINOP GENERATE FLATTEN(org.imagine.REMOVEDUPLICATECOLUMNS(*));
Where org.imagine.REMOVEDUPLICATECOLUMNS is the UDF.
This UDF removes duplicate columns by using the name in the schema, so DSA::ROLLNO:int is retained and DSB::ROLLNO:int is removed from the dataset.
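For reference, here is a minimal sketch of how such a UDF can be written. This is an illustrative implementation under the prerequisite above (all fields named), not the exact code in the linked GitHub file; the class name and package are assumptions:

import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataType;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import org.apache.pig.impl.logicalLayer.schema.Schema;
import org.apache.pig.impl.logicalLayer.schema.Schema.FieldSchema;

public class RemoveDuplicateColumns extends EvalFunc<Tuple> {

    // "DSA::ROLLNO" -> "ROLLNO"; after a JOIN every field keeps its relation prefix.
    private static String baseName(String alias) {
        int idx = alias.lastIndexOf("::");
        return idx >= 0 ? alias.substring(idx + 2) : alias;
    }

    @Override
    public Tuple exec(Tuple input) throws IOException {
        Schema inputSchema = getInputSchema();   // requires every field to be named
        Set<String> seen = new LinkedHashSet<String>();
        List<Object> kept = new ArrayList<Object>();
        for (int i = 0; i < input.size(); i++) {
            String name = baseName(inputSchema.getField(i).alias);
            if (seen.add(name)) {                // keep only the first occurrence of each base name
                kept.add(input.get(i));
            }
        }
        return TupleFactory.getInstance().newTuple(kept);
    }

    @Override
    public Schema outputSchema(Schema input) {
        try {
            Schema out = new Schema();
            Set<String> seen = new LinkedHashSet<String>();
            for (FieldSchema fs : input.getFields()) {
                if (seen.add(baseName(fs.alias))) {
                    out.add(fs);
                }
            }
            // Wrap in a tuple schema so the result can be FLATTENed as in the script above.
            return new Schema(new FieldSchema(null, out, DataType.TUPLE));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}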

Faceting using SolrJ and Solr4

I've gone through the related questions on this site but haven't found a relevant solution.
When querying my Solr4 index using an HTTP request of the form
&facet=true&facet.field=country
The response contains all the different countries along with counts per country.
How can I get this information using SolrJ?
I have tried the following but it only returns total counts across all countries, not per country:
solrQuery.setFacet(true);
solrQuery.addFacetField("country");
The following does seem to work, but I do not want to have to explicitly set all the groupings beforehand:
solrQuery.addFacetQuery("country:usa");
solrQuery.addFacetQuery("country:canada");
Secondly, I'm not sure how to extract the facet data from the QueryResponse object.
So two questions:
1) Using SolrJ how can I facet on a field and return the groupings without explicitly specifying the groups?
2) Using SolrJ how can I extract the facet data from the QueryResponse object?
Thanks.
Update:
I also tried something similar to Sergey's response (below).
List<FacetField> ffList = resp.getFacetFields();
log.info("size of ffList:" + ffList.size());
for (FacetField ff : ffList) {
    String ffname = ff.getName();
    int ffcount = ff.getValueCount();
    log.info("ffname:" + ffname + "|ffcount:" + ffcount);
}
The above code shows ffList with size=1 and the loop goes through 1 iteration. In the output ffname="country" and ffcount is the total number of rows that match the original query.
There is no per-country breakdown here.
I should mention that on the same solrQuery object I am also calling addField and addFilterQuery. Not sure if this impacts faceting:
solrQuery.addField("user-name");
solrQuery.addField("user-bio");
solrQuery.addField("country");
solrQuery.addFilterQuery("user-bio:" + "(Apple OR Google OR Facebook)");
Update 2:
I think I got it, again based on what Sergey said below. I extracted the List<Count> object using FacetField.getValues().
List<FacetField> fflist = resp.getFacetFields();
for (FacetField ff : fflist) {
    String ffname = ff.getName();
    int ffcount = ff.getValueCount();
    List<Count> counts = ff.getValues();
    for (Count c : counts) {
        String facetLabel = c.getName();
        long facetCount = c.getCount();
    }
}
In the above code, facetLabel matches each facet group and facetCount is the corresponding count for that grouping.
Actually you only need to set the facet field and faceting will be activated (check the SolrJ source code):
solrQuery.addFacetField("country");
Where did you look for the facet information? It must be in QueryResponse.getFacetFields() (then getValues() and getCount()).
In the Solr response you should use QueryResponse.getFacetFields() to get the List of FacetFields, among which "country" figures, so "country" is identified by QueryResponse.getFacetFields().get(0).
You then iterate over it to get the List of Count objects using
QueryResponse.getFacetFields().get(0).getValues().get(i)
and get the name of each facet value using
QueryResponse.getFacetFields().get(0).getValues().get(i).getName()
and the corresponding count using
QueryResponse.getFacetFields().get(0).getValues().get(i).getCount()
