I'm looking to perform a query on my Couchbase database using the Java client SDK, which will return a list of results that include the document id for each result. Currently I'm using:
Statement stat = select("*").from(i("myBucket"))
.where(x(fieldIwantToGet).eq(s(valueIwantToGet)));
N1qlQueryResult result = bucket.query(stat);
However, N1qlQueryResult seems to only return a list of JsonObjects without any of the associated metadata. Looking at the documentation it seems like I want a method that returns a list of Document objects, but I can't see any bucket methods I can call that do the job.
Anyone know a way of doing this?
You need to use the query below to get the document id:
Statement stat = select("meta(myBucket).id").from(i("myBucket"))
.where(x(fieldIwantToGet).eq(s(valueIwantToGet)));
The above returns an array of document ids.
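If you also need the document body along with its id, one option is to project both in the same statement. This is only a sketch built on the DSL used in the question; the docId alias and the row-iteration style are illustrative assumptions, not part of the original:

Statement stat = select("meta(myBucket).id AS docId", "myBucket.*")
        .from(i("myBucket"))
        .where(x(fieldIwantToGet).eq(s(valueIwantToGet)));
N1qlQueryResult result = bucket.query(stat);
for (N1qlQueryRow row : result.allRows()) {
    JsonObject value = row.value();
    String docId = value.getString("docId"); // the meta id of the matching document
    // the remaining entries in value are the document's own fields
}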
Using the Elasticsearch High Level REST Client for Java v7.3
I have a few fields in the schema that look like this:
{
"document_type" : ["Utility", "Credit"]
}
Basically one field could have an array of strings as the value. I need to not only query for a specific document_type, but also run a general string query.
I've tried the following code:
QueryBuilder query = QueryBuilders.boolQuery()
.must(QueryBuilders.queryStringQuery(terms))
.filter(QueryBuilders.termQuery("document_type", "Utility"));
...which does not return any results. If I remove the .filter() part the query returns fine, so the filter appears to prevent any results from coming back. I suspect it's because document_type is a multi-valued array, but maybe I'm wrong. How would I build a query that searches all documents for specific terms, but also filters by document_type?
I think the reason is the wrong query type. Consider using the terms query instead of the term query. There is also an equivalent builder in the Java API.
Here is a good overview of the Query DSL queries and their equivalents in the high level rest client: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-query-builders.html
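For reference, a minimal sketch of that suggestion with the v7.3 high level client. The index name "documents" is an assumption; client and terms are the variables from the question:

// terms query accepts multiple exact values for the multi-valued field
QueryBuilder query = QueryBuilders.boolQuery()
        .must(QueryBuilders.queryStringQuery(terms))
        .filter(QueryBuilders.termsQuery("document_type", "Utility", "Credit"));

SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(query);
SearchRequest request = new SearchRequest("documents").source(sourceBuilder); // "documents" is assumed
SearchResponse response = client.search(request, RequestOptions.DEFAULT);

Note also that if document_type is mapped as text rather than keyword, an exact-value filter may need to target the document_type.keyword sub-field instead.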
How can I find a document and retrieve it if found, but insert and retrieve it if not found in one command?
I have an outline for the formats I wish my documents to look like for a user's data. Here is what it looks like
{
"username": "HeyAwesomePeople",
"uuid": "0f91ede5-54ed-495c-aa8c-d87bf405d2bb",
"global": {},
"servers": {}
}
When a user first logs in, I want to store the first two values (username and uuid) and create global and servers as empty objects (both will have more information filled into them later, but for now they can be blank). But I also don't want to overwrite any data that already exists for the user.
I would normally use the insertOne or updateOne calls on the collection with the upsert option (new UpdateOptions().upsert(true)) to insert the document if it isn't found, but in this case I also need to retrieve the user's document as well.
So in a case in which the user isn't found in the database, I need to insert the outlined data into the database and return the document saved. In a case where the user is found in the database, I need to just return the document from the database.
How would I go about doing this? I am using the latest version of Mongo, which has deprecated the old BasicDBObject types, so I can't find many places online that use the new Document type. Also, I am using the async driver for Java and would like to keep the calls to a minimum.
You can use the findOneAndUpdate() method to find and update/upsert.
The MongoDB Java driver exposes the same method name, findOneAndUpdate(). For example:
// Example callback method for the async driver
SingleResultCallback<Document> printDocument = new SingleResultCallback<Document>() {
    @Override
    public void onResult(final Document document, final Throwable t) {
        if (t != null) {
            t.printStackTrace();
        } else {
            System.out.println(document.toJson());
        }
    }
};

Document userdata = new Document("username", "HeyAwesomePeople")
        .append("uuid", "0f91ede5")
        .append("global", new Document())
        .append("servers", new Document());

collection.findOneAndUpdate(userdata,
        new Document("$set", userdata),
        new FindOneAndUpdateOptions()
            .upsert(true)
            .returnDocument(ReturnDocument.AFTER),
        printDocument);
The query above will try to find a document matching userdata; if one is found, it is $set to the same values as userdata. If it is not found, the upsert flag inserts it into the collection. The returnDocument option returns the document as it looks after the operation is performed.
The upsert and returnDocument flags are part of FindOneAndUpdateOptions.
See also the MongoDB Async Java Driver v3.4 documentation for tutorials/examples. The snippet above was tested with the current version of MongoDB, v3.4.x.
In AEM 6.2 I created a Java servlet where I use QueryBuilder to query the JCR for relevant content.
I want to limit the search results to nodes that either do not have the sling:resourceType property or, if they do have it, whose value is not equal to 'social/qna/components/hbs/post'. Writing the XPath query out like this works:
//*[jcr:contains(., 'searchTerm') and ((not (#sling:resourceType)) or (#sling:resourceType != 'social/qna/components/hbs/post')) and ((#jcr:primaryType = 'cq:Page') or (#jcr:primaryType = 'social:asiResource'))]
But I can't figure out how to use the QueryBuilder API to create this query. This is what my QueryBuilder code looks like.
group.p.or=true
group.1_path=/content/myproject
group.2_path=/content/usergenerated/asi/jcr/content/myproject
1_group.p.or=true
1_group.1_type=cq:Page
1_group.2_type=social:asiResource
fulltext=searchTerm
property.p.or=true
property=sling:resourceType
property.operation=exists
property.value=false
property.1_operation=unequals
property.1_value=social/qna/components/hbs/post
How can I rewrite the property section so it only returns results where sling:resourceType property doesn't exist or does not equal 'social/qna/components/hbs/post'?
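Not verified against 6.2, but one approach worth trying is to wrap the two property predicates in their own numbered group so that the OR applies only to them rather than at the top level. The 2_group prefix here is just an illustrative choice to avoid clashing with the groups already defined above:

2_group.p.or=true
2_group.1_property=sling:resourceType
2_group.1_property.operation=exists
2_group.1_property.value=false
2_group.2_property=sling:resourceType
2_group.2_property.operation=unequals
2_group.2_property.value=social/qna/components/hbs/post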
I am trying to create a junit test. Scenario:
setUp: I'm adding two JSON documents to the database
Test: I'm getting those documents using a view
tearDown: I'm removing both objects
My view:
function (doc, meta) {
if (doc.type && doc.type == "UserConnection") {
emit([doc.providerId, doc.providerUserId], doc.userId);
}
}
This is how I add those documents to the database and make sure that the add is synchronous:
public boolean add(String key, Object element) throws Exception {
    String json = gson.toJson(element);
    OperationFuture<Boolean> result = couchbaseClient.add(key, 0, json);
    return result.get();
}
JSON Documents that I'm adding are:
{"userId":"1","providerId":"test_pId","providerUserId":"test_pUId","type":"UserConnection"}
{"userId":"2","providerId":"test_pId","providerUserId":"test_pUId","type":"UserConnection"}
This is how I call the view:
View view = couchbaseClient.getView(DESIGN_DOCUMENT_NAME, VIEW_NAME);
Query query = new Query();
query.setKey(ComplexKey.of("test_pId", "test_pUId"));
ViewResponse viewResponse = couchbaseClient.query(view, query);
Problem:
The test fails because the view returns the wrong number of elements.
My observations:
Sometimes the tests pass
The number of elements fetched from the view is not consistent (anywhere from 0 to 2)
When I added those documents to the database manually instead of in setUp, the test passed every time
According to the documentation at http://www.couchbase.com/docs/couchbase-sdk-java-1.1/create-update-docs.html, I'm adding those JSON documents synchronously by calling get() on the returned Future object.
My question:
Is there something wrong with how I've approached fetching data from a view just after that data was inserted into the DB? Is there a good practice for solving this problem? And can someone please explain what I did wrong?
Thanks,
Dariusz
In Couchbase 2.0, documents are required to be written to disk before they will show up in a view. There are three ways you can do an operation with the Java SDK. The first is asynchronous, which means that you just send the data and at a later time check to make sure that the data was received correctly. If you do an asynchronous operation and then immediately call .get() as you did above, then you have created a synchronous operation. When an operation returns success in these two cases you are only guaranteed that the item has been written into memory. Your tests passed sometimes only because you were lucky enough that both items were written to disk before you ran your query.
The third way to do an operation is with durability requirements and this is the one you want to do for your tests. Durability requirements allow you to say that you want an item to be written to disk or replicated before success is returned to the client. Take a look at the following function.
https://github.com/couchbase/couchbase-java-client/blob/1.1.0/src/main/java/com/couchbase/client/CouchbaseClient.java#L1293
You will want to use this function and set the PersistTo parameter to MASTER.
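A sketch of what that could look like for the add() method in the question, assuming the SDK 1.1.x overload linked above that takes a PersistTo argument:

public boolean addDurably(String key, Object element) throws Exception {
    String json = gson.toJson(element);
    // Block until the item has been persisted to disk on the master node,
    // so a view query issued right after this call can see the document.
    OperationFuture<Boolean> result = couchbaseClient.add(key, 0, json, PersistTo.MASTER);
    return result.get();
}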
I am using Solrj to add new documents to a Solr instance. In my document schema the id is a UUID (solr.UUIDField). Each time a document is created the id is filled with the unique id, which is exactly what I want. Sometimes it's necessary in my application that I can retrieve this unique id to add it as a field value when inserting another document. So my question is, how can I retrieve this generated uuid from solr after adding a document?
Solrj returns me this UpdateResponse object after committing, but I don't know how to get the new UUID out of it.
I am adding a document like this
CommonsHttpSolrServer server = new CommonsHttpSolrServer(MY_SERVER_URL);
SolrInputDocument doc = new SolrInputDocument();
// [...] multiple addField calls
server.add(doc);
UpdateResponse ur = server.commit();
AFAIK you aren't going to ever get a UUID from an add or a commit. When you do an add or commit, the update request handler gives you back query time and status, but not much else (assuming it is successful). You can actually see what is in the HTTP response by running a manual add/commit like so:
http://localhost:8983/solr/update?stream.body=<add><doc><field name="id">test</field><field name="title">test title</field></doc></add>
http://localhost:8983/solr/update?stream.body=<commit/>
If you run those queries in a web browser, they will submit a test document and commit it, respectively. You will then be able to see what information is available to SolrJ (not much).
You could write your own (modified) update handler in Java, but that seems like a ton of work. You could also enable the "timestamp" field in your Solr schema so you can query Solr by last modified date and find the items you just committed.
Both of those methods would be major hacks, though. Your best bet is to figure out a unique ID for your documents before you submit them to Solr, then use that unique ID to retrieve them. Using a generated UUID is more of a "fire and forget about this" method. Since you don't want to forget, you will need to generate your own UUID.
Since you're using Java, it should be dead simple to do with java.util.UUID, using some code like this:
CommonsHttpSolrServer server = new CommonsHttpSolrServer(MY_SERVER_URL);
SolrInputDocument doc = new SolrInputDocument();
UUID uuid = UUID.randomUUID();
doc.addField("id", uuid.toString());
// [...] multiple addField calls
server.add(doc);
UpdateResponse ur = server.commit();