Cassandra Saving JSON data in Text Column - java

In Cassandra I have a column named custom_extensions which can contain a List<AppEncoded>, where AppEncoded is a UDT. The UDT has the following fields:
type -> TEXT
code -> TEXT
value -> TEXT
When saving data to the DB, the value field can receive an object as input.
CurrencyTO:
field -> amount
field -> Symbol
field -> formattedAmount
The implementation that saves the column value to the DB is as follows:
JacksonJsonCodec<CurrencyTO> jacksonJsonCodec = new JacksonJsonCodec<>(CurrencyTO.class);
appEncodedValue.setValue(jacksonJsonCodec.format(CurrencyTO.getValue()));
CurrencyTO extends TranserObject, which has some additional attributes of its own.
When I look in the DB I see the following result:
"value": "'{\"serviceResult\":{\"messagesResult\":[]},\"attributeNames\":[\"amount\",\"isoCode\",\"symbol\",\"decimalValue\",\"formattedAmount\"],\"metadata\":null,\"this\":null,\"amount\":\"45\",\"isoCode\":\"USD\",\"symbol\":\"$\",\"decimalValue\":2.0,\"formattedAmount\":null}'"
The stored value includes serviceResult, messagesResult, metadata and a number of escape (\) characters as well.
The expected result in the DB should be similar to the following:
"value": {
"amount": 90,
"Symbol": "$",
"formattedAmount" : "90.00"
}
The reference I followed for the implementation is: custom_codecs
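One way to keep the inherited TranserObject fields out of the stored JSON would be to serialize only the fields you actually want, e.g. via a dedicated DTO and a plain Jackson ObjectMapper, instead of formatting the whole CurrencyTO. (The surrounding single quotes most likely come from format(), which renders a CQL literal rather than the raw JSON string.) A minimal sketch follows; the CurrencyJson class and the getters on CurrencyTO are hypothetical names, not from the original code:

// Slim DTO holding only the fields that should be persisted in the UDT's value column.
// (Field types are assumptions; mirror the real types used by CurrencyTO.)
public class CurrencyJson {
    public String amount;
    public String symbol;
    public String formattedAmount;
}

// com.fasterxml.jackson.databind.ObjectMapper
ObjectMapper mapper = new ObjectMapper();
CurrencyJson slim = new CurrencyJson();
slim.amount = currencyTO.getAmount();                  // hypothetical getters
slim.symbol = currencyTO.getSymbol();
slim.formattedAmount = currencyTO.getFormattedAmount();
// writeValueAsString throws JsonProcessingException; handle or declare it
appEncodedValue.setValue(mapper.writeValueAsString(slim));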

Related

Elasticsearch 7.13 - elastic search response with old data after update api

We are using Elasticsearch 7.13.
We are doing periodic updates to the index using upserts.
The sequence of operations:
1. Create a new index with a dynamic mapping; all strings are mapped as text:
"dynamic_templates": [
{
"strings_as_keywords": {
"match_mapping_type": "string",
"mapping": {
"type": "text",
"analyzer": "autocomplete",
"search_analyzer": "search_term_analyzer",
"copy_to": "_all",
"fields": {
"keyword": {
"type": "keyword",
"normalizer": "lowercase_normalizer"
}
}
}
}
}
]
2. Bulk upsert with the attached code (I don't have an equivalent REST call).
3. Search on a specific field:
localhost:9200/mdsearch-vitaly123/_search
{
  "query": {
    "match": {
      "fullyQualifiedName": "value_test"
    }
  }
}
4. I get 1 result.
5. Upsert again, now with "fullyQualifiedName": "value_test1234" (as in step 2).
6. Search as in step 3.
7. Now I get 2 results: one doc with "fullyQualifiedName": "value_test" and another with "fullyQualifiedName": "value_test1234".
Snippet of the upsert (step 2) below:
@Override
public List<BulkItemStatus> updateDocumentBulk(String indexName, List<JsonObject> indexDocuments) throws MDSearchIndexerException {
    BulkRequest request = new BulkRequest().setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
    ofNullable(indexDocuments).orElseThrow(NullPointerException::new)
        .forEach(x -> {
            var id = x.get("_id").getAsString();
            x.remove("_id");
            request.add(new UpdateRequest(indexName, id)
                    .docAsUpsert(true)
                    .doc(x.toString(), XContentType.JSON)
                    .retryOnConflict(3)
            );
        });
    BulkResponse bulk = elasticsearchRestClient.bulk(request, RequestOptions.DEFAULT);
    return stream(bulk.getItems())
            .map(r -> new BulkItemStatus(r.getId(), isSuccess(r), r.getFailureMessage()))
            .collect(Collectors.toList());
}
I can search by the updated properties.
But the problem is that searches return both the updated document and the previous one.
How can I solve it? Maybe by somehow limiting the number of versions to 1?
I set setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) but it didn't help.
The attached screenshot (not included here) shows the result.
P.S. - both the old and the updated data are retrieved.
Suggestions?
Regards,
What is happening is that the following line must yield null:
var id = x.get("_id").getAsString();
In other words, there is no _id field in the JSON documents you pass in indexDocuments. Fields whose names begin with an underscore are not allowed in source documents; if one were present, you'd get the following error:
Field [_id] is a metadata field and cannot be added inside a document. Use the index API request parameters.
Hence, your update request cannot update any document (since there's no ID identifying the document to update) and will simply insert a new one (which is what docAsUpsert does), which is why you're seeing two different documents.
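If that is what is happening, a defensive check before building the UpdateRequest will surface the problem instead of silently inserting a new document. A minimal sketch, reusing the Gson JsonObject handling from your snippet:

// Inside the forEach from updateDocumentBulk: fail fast when a document
// arrives without an _id instead of letting the upsert create a new one.
var idElement = x.get("_id");
if (idElement == null || idElement.isJsonNull()) {
    throw new IllegalArgumentException("Document is missing _id, cannot upsert: " + x);
}
String id = idElement.getAsString();
x.remove("_id");
request.add(new UpdateRequest(indexName, id)
        .docAsUpsert(true)
        .doc(x.toString(), XContentType.JSON)
        .retryOnConflict(3));

Alternatively, derive a deterministic id from a field that is guaranteed to be present, so repeated upserts always hit the same document.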

Save ServerValue.TIMESTAMP as string in realtime database

I'm trying to save the Firebase timestamp as a string. I'm using the Realtime Database, I want to order my nodes by their post date, and I want to exclude posts that are older than 24 h.
My node structure is something like this:
"posts" : {
"b" : {
"text" : "5",
"timestamp" : "{.sv=timestamp}",
"uid" : "KpZvp0bhOlPJI3KKwe1AF7Apb2U2"
},
"a" : {
"text" : "hey",
"timestamp" : "1559912589250",
"uid" : "KpZvp0bhOlPJI3KKwe1AF7Apb2U2"
}
}
I have no problem ordering the posts by date, but when I want to exclude the posts that are older than 24 h, that's when I start having trouble:
firebaseDatabase.reference.child("posts").orderByChild("timestamp").startAt((System.currentTimeMillis() - 86400000))
the "startAt()" doesn't take long or integers it only takes boolean, strings and double as parameters.
this is how I create the nodes
firebaseDatabase.reference.child("posts").push().setValue(mapOf(
"by" to uid,
"text" to "hey",
"timestamp" to ServerValue.TIMESTAMP)
)
I tried converting ServerValue.TIMESTAMP to a string, but then it gets saved as "{".sv" : "timestamp"}". I looked around and found out that if I send a map with a key named ".sv" and the value "timestamp", the database recognizes it and replaces the value with the server timestamp.
Something like this:
firebaseDatabase.reference.child("posts").push().setValue(mapOf(
    "by" to uid,
    "text" to "hello",
    "timestamp" to mapOf(".sv" to "timestamp")
))
The timestamp value will then be the server timestamp as a long number. So basically ServerValue.TIMESTAMP is just such a map, which is why converting it to a string gave me the "{".sv" : "timestamp"}" value.
So how can I save the server timestamp as a string? Or how can I query the data as a Long without converting it to a string?
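One workaround for the startAt() limitation would be to keep the timestamp numeric and pass the cutoff as a double; epoch milliseconds are far below 2^53, so no precision is lost. A minimal sketch with the Firebase Java API (the snippets above are Kotlin, but the same calls exist there):

// com.google.firebase.database.{FirebaseDatabase, DatabaseReference, Query}
DatabaseReference postsRef = FirebaseDatabase.getInstance().getReference("posts");
// Posts older than 24 hours fall below the cutoff and are excluded.
double cutoff = (double) (System.currentTimeMillis() - 86400000L);
Query recentPosts = postsRef.orderByChild("timestamp").startAt(cutoff);

The server timestamp then stays a long in the database, which also keeps orderByChild working numerically.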

Fetch Paginated view data after applying reduce in couchdb with ektorp

Hi, I want to fetch data from a CouchDB view with both a reduce and pagination applied.
My view's reduce function returns results with a complex key, as follows:
{"rows":[
{"key":{"attribute":"Attribute1"},"value":20},
{"key":{"attribute":"Attribute2"},"value":1}
{"key":{"attribute":"Attribute3"},"value":1}
]}
I am trying to fetch the data from CouchDB using Ektorp; see the following code:
PageRequest pageRequest = PageRequest.firstPage(10);
ViewQuery query = new ViewQuery()
        .designDocId("_design/medesign")
        .viewName("viewname")
        .includeDocs(false)
        .reduce(true)
        .group(true);
Page<ViewResult> rs1 = db.queryForPage(query, pageRequest, ViewResult.class);
rs1.forEach(v -> {
    System.out.println(v.getSize());
});
I am getting the following error:
org.ektorp.DbAccessException: com.fasterxml.jackson.databind.JsonMappingException:
Can not construct instance of org.ektorp.ViewResult:
no int/Int-argument constructor/factory method to deserialize from Number value (20)
at [Source: N/A; line: -1, column: -1]
CouchDB doesn't give pagination details (total_rows and offset) when you ask for reduced data.
Request with include_docs (paginated):
group=false & reduce=false & include_docs=true
URL: http://localhost:5984/dn_anme/_design/design_name/_view/viewname?include_docs=true&reduce=false&skip=0&group=false&limit=2
Response:
{
  "total_rows": 81,
  "offset": 0,
  "rows": [
    {
      "id": "906a74b8019716f1240a7117580ec172",
      "key": {
        "attribute": "BuildArea"
      },
      "value": 1,
      "doc": {
        "_id": "906a74b8019716f1240a7117580ec172",
        "_rev": "3-7e0a1da0c2260040f8a9787636385785",
        "country": "POL",
        "recordStatus": "MATCHED"
      }
    },
    {
      "id": "906a74b8019716f1240a7117580eaefb",
      "key": {
        "attribute": "Area"
      },
      "value": 1,
      "doc": {
        "_id": "906a74b8019716f1240a7117580eaefb",
        "_rev": "3-165ea3a3ed07ad8cce1f3e095cd476b5",
        "country": "POL",
        "recordStatus": "MATCHED"
      }
    }
  ]
}
Request with reduce:
group=true & reduce=true & include_docs=false
URL: http://localhost:5984/dn_anme/_design/design_name/_view/viewname?include_docs=false&reduce=true&group=true&limit=2
Response:
{
  "rows": [
    {
      "key": [
        "BuildArea"
      ],
      "value": 1
    },
    {
      "key": [
        "Area"
      ],
      "value": 1
    }
  ]
}
Difference between the two requests:
The request with include_docs gives paging data: {"total_rows":81, "offset":0, "rows":[{...},{...}]}
AND
The request with reduce gives only {"rows":[{...},{...}]}
How you can get paginated reduced data:
Step 1: Request rows_per_page + 1 rows from the view.
Step 2: If the response contains one record more than the page size, there are more records to fetch.
Step 3: Calculate and update the skip value and go back to step 1 for the next page (see the sketch below).
Note: Using skip is not a good option for lots of records; instead, determine the start key and pass startkey, which performs much better.
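A minimal sketch of that loop with Ektorp, reusing the design document, view name and connector from the question; treat the exact ViewResult.Row accessors as an assumption:

// Page through the reduced, grouped view manually: ask for pageSize + 1 rows
// and use the extra row only to detect whether another page exists.
int pageSize = 10;
int skip = 0;
boolean morePages = true;
while (morePages) {
    ViewQuery query = new ViewQuery()
            .designDocId("_design/medesign")
            .viewName("viewname")
            .reduce(true)
            .group(true)
            .skip(skip)
            .limit(pageSize + 1);
    List<ViewResult.Row> rows = db.queryView(query).getRows();
    morePages = rows.size() > pageSize;
    rows.stream().limit(pageSize)
        .forEach(row -> System.out.println(row.getKey() + " -> " + row.getValue()));
    skip += pageSize;
}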

MongoDB Reading from Nested Documents

I have a document with nested documents inside. I thought that, as with a filter, I would be able to specify something like data.sms.mobileNumber, however that doesn't work.
How would I read the value of the data.sms.mobileNumber field using the standard Document getString request?
Example Document:
{ "_id" : ObjectId("59b850bd81bacd0013d15085"), "data" : { "sms" : { "message" : "Your SMS Code is ABCDEFG", "mobileNumber" : "+447833477560" } }, "id" : "b0a3886d69fc7319dbb4f4cc21a6039b422810cd875956bfd681095aa65f6245" }
Example getString request:
document.getString("data.sms.message")
The 'path' data.sms.message refers to a structure like this:
+- data
|
+- sms
|
+- message
To read this using the Java driver you have to read the data document, then the sms sub-document, and then the message attribute of that sub-document.
For example:
Document doc = collection.find(filter).first();
Document data = (Document) doc.get("data");
Document sms = (Document) data.get("sms");
String message = sms.getString("message");
Or, the same thing with shortcuts:
String message = collection.find(filter).first()
        .get("data", Document.class)
        .get("sms", Document.class)
        .getString("message");
Update 1 in answer to this question: "I have a case where I have an array of documents in a document, how would I go about getting a field from a document in the array?" Let's assume you have a document with an array field named details and each detail has name and age. Something like this:
{"employee_id": "1", "details": [{"name":"A","age":"18"}]}
{"employee_id": "2", "details": [{"name":"B","age":"21"}]}
You could read the array element like so:
Document firstElementInArray = (Document) collection.find(filter).first()
        // read the details as an Array
        .get("details", ArrayList.class)
        // focus on the first element in the details array
        .get(0);
String name = firstElementInArray.getString("name");

Automatic field creation for Bulk Write operations mongodb

Hi, I need to execute a bulk insert with an automatic createdOn [timestamp] field.
As of now I am manually creating the createdOn timestamp, setting it into each document and then calling the insert method.
For inserting a single document this is OK, but it is not efficient when executing a bulk insert with lakhs (hundreds of thousands) of records.
Can anyone show how to execute a bulk insert with an automatic field, or suggest a better solution for creating an automatic field in MongoDB?
Actual Data for BulkWrite insert:
BulkInsertList:
[
  {
    fieldOne: val,
    fieldTwo: val,
    fieldThree: val,
    fieldFour: val
  },
  {
    fieldOne: val,
    fieldTwo: val,
    fieldThree: val,
    fieldFour: val
  },
  {
    ...
  },
  ...
]
The bulk write should insert the data with a createdOn timestamp. Expected result:
BulkInsertList:
[
  {
    fieldOne: val,
    fieldTwo: val,
    fieldThree: val,
    fieldFour: val,
    createdOn: Timestamp
  },
  {
    ...
  },
  {
    ...
  }
]
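MongoDB will not add such a field by itself on a plain insert, but stamping it client-side is a single pass over the list and stays cheap even for very large batches. A minimal sketch with the MongoDB Java driver; the collection and documents variables are hypothetical:

// com.mongodb.client.model.{InsertOneModel, BulkWriteOptions}, org.bson.Document
// Stamp every document with the same createdOn date, then send one bulk write.
Date createdOn = new Date();
List<InsertOneModel<Document>> ops = documents.stream()
        .map(doc -> new InsertOneModel<>(doc.append("createdOn", createdOn)))
        .collect(Collectors.toList());
BulkWriteResult result = collection.bulkWrite(ops, new BulkWriteOptions().ordered(false));

If a true server-side timestamp is required, an alternative is a bulk of UpdateOneModel upserts that use $currentDate, at the cost of a more awkward request; for plain inserts the client-side stamp above is the usual approach.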
