DynamoDB updated only partially - Java

I want to update my DynamoDB tables from Java using the DynamoDBMapper library.
What I do is push messages (the updates I want executed) to an SQS queue, and let my Java code consume these messages and update DynamoDB.
I found that when I push more than 150 messages in a short time using a script, all the messages are consumed, but only some of the records in DynamoDB are updated.
The code that updates DynamoDB looks like this:
@Service
public class PersistenceMessageProcessingServiceImpl implements PersistenceMessageProcessingService {

    @Override
    public void process(TextMessage textMessage) {
        String eventData = textMessage.getText();
        updateEventStatus(eventData);
    }

    /*
     * Each input is a case-detail message for the Event table:
     * get the data, parse it, and partially update the matching records in DynamoDB;
     * finally, check whether any cases are still open, and if not, change the state of the event.
     */
    private void updateEventStatus(String eventData) throws ParseException, IOException {
        RetryUtils retryUtils = new RetryUtils(maxRetries, waitTimeInMilliSeconds, influxService);
        SNowResponse serviceNowResponse = parseData(eventData);
        String sysId = serviceNowResponse.getSysId(); // assumed accessor: sysId comes from the parsed message
        EventCaseMap eventCaseMap = eventCaseMapRepository.findBySysId(sysId);
        if (eventCaseMap != null) {
            Event event = eventRepository.findByEventId(eventCaseMap.getSecurityEventManagerId());
            CaseManagementDetails caseManagementDetails = event.getCaseManagementDetails();
            Case caseDetails = getCaseByCaseSystemId(caseManagementDetails, sysId);
            caseDetails.setCaseStatus("Resolved");
            caseDetails.setResolution(serviceNowResponse.getCloseCode());
            caseDetails.setResolvedBy("A");
            caseDetails.setAssessment(serviceNowResponse.getAssessment());
            caseDetails.setResolutionSource("SEM");

            // if no case is still open, close the whole event before saving
            boolean stillOpen = false;
            for (Case existingCase : caseManagementDetails.getCases()) {
                if ("OPEN".equals(existingCase.getCaseStatus().toString())) {
                    stillOpen = true;
                    break;
                }
            }
            if (!stillOpen) {
                event.setState("CLOSED");
            }

            retryUtils.run(() -> {
                return eventRepository.updateEvent(event);
            }, RETRY_MEASUREMENT);
        }
    }

    private Case getCaseByCaseSystemId(CaseManagementDetails caseManagementDetails, String sysId) {
        Case caseDetails = null;
        if (caseManagementDetails != null) {
            List<Case> caseList = caseManagementDetails.getCases();
            for (Case c : caseList) {
                if (c.getCaseSystemId() != null && c.getCaseSystemId().equalsIgnoreCase(sysId)) {
                    caseDetails = c;
                    break;
                }
            }
        }
        return caseDetails;
    }
}
/* EventCaseMap table in my DynamoDB.
The data model for the EventCaseMap table looks like this:
{
    "caseSystemId": "bb9cc488dbf67b40b3d57709af9619f8",
    "securityEventManagerId": "756813a4-4e48-4abb-b37e-da00e931583b"
}
*/
@Repository
public class EventCaseMapRepositoryImpl implements EventCaseMapRepository {

    @Autowired
    DynamoDBMapper dynamoDBMapper;

    @Override
    public EventCaseMap findBySysId(String sysId) {
        EventCaseMap eventCaseMap = new EventCaseMap();
        eventCaseMap.setCaseSystemId(sysId);
        return dynamoDBMapper.load(eventCaseMap, DynamoDBMapperConfig.ConsistentReads.CONSISTENT.config());
    }
}
/*
The data model for the Event table looks like this:
{
    "caseManagementDetails": {
        "cases": [
            {
                "caseId": "SIR0123456",
                "caseStatus": "OPEN"
            },
            {
                "caseId": "SIR0654321",
                "caseStatus": "OPEN"
            },
            ... many other cases (about two hundred) ...
        ]
    },
    "state": "OPEN",
    "securityEventManagerId": "756813a4-4e48-4abb-b37e-da00e931583b"
}
*/
@Repository
public class EventRepositoryImpl implements EventRepository {

    @Autowired
    DynamoDBMapper dynamoDBMapper;

    @Override
    public Event findByEventId(String eventId) {
        Event event = new Event();
        event.setSecurityEventManagerId(eventId);
        return dynamoDBMapper.load(event, DynamoDBMapperConfig.ConsistentReads.CONSISTENT.config());
    }

    @Override
    public boolean updateEvent(Event event) {
        dynamoDBMapper.save(event, DynamoDBMapperConfig.SaveBehavior.UPDATE_SKIP_NULL_ATTRIBUTES.config());
        return false;
    }
}
I have already tried pushing and consuming the messages one by one, in both 'Run' and 'Debug' mode in IntelliJ. Everything works fine and all the cases are updated.
So I wondered whether there is a consistency problem in DynamoDB, but I am already using strongly consistent reads in my code.
Does anybody know what is happening in my code?
Here are the input, the output, and the expected output:
input:
many JSON files like this:
{
    "number": "SIR0123456",
    "state": "Resolved",
    "sys_id": "bb9cc488dbf67b40b3d57709af9619f8",
    "MessageAttributes": {
        "TransactionGuid": {
            "Type": "String",
            "Value": "093ddb36-626b-4ecc-8943-62e30ffa2e26"
        }
    }
}
{
    "number": "SIR0654321",
    "state": "Resolved",
    "sys_id": "bb9cc488dbf67b40b3d57709af9619f7",
    "MessageAttributes": {
        "TransactionGuid": {
            "Type": "String",
            "Value": "093ddb36-626b-4ecc-8943-62e30ffa2e26"
        }
    }
}
output for the Event table:
{
    "caseManagementDetails": {
        "cases": [
            {
                "caseId": "SIR0123456",
                "caseStatus": "RESOLVED"
            },
            {
                "caseId": "SIR0654321",
                "caseStatus": "OPEN"
            },
            ... many other cases (about two hundred) ...
        ]
    },
    "state": "OPEN",
    "securityEventManagerId": "756813a4-4e48-4abb-b37e-da00e931583b"
}
Expected output for the Event table:
{
    "caseManagementDetails": {
        "cases": [
            {
                "caseId": "SIR0123456",
                "caseStatus": "RESOLVED"
            },
            {
                "caseId": "SIR0654321",
                "caseStatus": "RESOLVED"
            },
            ... many other cases (about two hundred) ...
        ]
    },
    "state": "OPEN",
    "securityEventManagerId": "756813a4-4e48-4abb-b37e-da00e931583b"
}

I think the problem is that the messages are consumed concurrently: several threads read the same Event item at about the same time, each updates its own in-memory copy, and each save overwrites the previous one. So the record we end up seeing reflects only the last thread's write, not the writes of all the threads.
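If that theory is right, one common guard is DynamoDBMapper's optimistic locking. A minimal sketch, assuming a version field can be added to the Event model (the field name here is illustrative):

@DynamoDBTable(tableName = "Event")
public class Event {
    // ... existing attributes unchanged ...

    private Long version;

    // With @DynamoDBVersionAttribute, every save() becomes a conditional write:
    // it only succeeds if the stored version still matches the version that was read.
    @DynamoDBVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }
}

A save that lost a race then throws ConditionalCheckFailedException instead of silently overwriting the other writer's change, and the existing RetryUtils loop could catch that, re-read the event, and re-apply the update.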

Related

How to "Explicitly order highlighted fields" using ElasticSearch Java API client v7.16?

I want to explicitly order highlighted fields using the Elasticsearch Java API Client 7.16.
In other words I want to build the following request
GET /_search
{
    "highlight": {
        "fields": [
            { "a": {} },
            { "b": {} },
            { "c": {} }
        ]
    }
}
Unfortunately the following code ignores the insertion order:
new Highlight.Builder()
    .fields("a", new HighlightField.Builder().build())
    .fields("b", new HighlightField.Builder().build())
    .fields("c", new HighlightField.Builder().build());
All the available fields() methods eventually put the data into an unordered map, so the request that is actually sent is the following:
GET /_search
{
    "highlight": {
        "fields": {
            "b": {},
            "c": {},
            "a": {}
        }
    }
}
Is there any other Java API that allows controlling the order of highlighted fields?
As far as I know this is not possible, and it is not an issue with Elasticsearch but with how JSON works. The JSON documentation states:
An object is an unordered set of name/value pairs
I am not sure why you want to rely on order; you should not rely on the ordering of elements within a JSON object.
You can pass a Map of fields like below (just check the Javadoc); note it needs to be a LinkedHashMap, since a plain HashMap does not preserve insertion order:
Map<String, HighlightField> test = new LinkedHashMap<>();
test.put("a", new HighlightField.Builder().build());
test.put("b", new HighlightField.Builder().build());
test.put("c", new HighlightField.Builder().build());
Highlight.Builder highlight = new Highlight.Builder().fields(test);
There is a class HighlightBuilder in the package org.elasticsearch.search.fetch.subphase.highlight, which has the following member variable:
private boolean useExplicitFieldOrder = false;
Then you can build:
List<HighlightBuilder.Field> fields1 = new ArrayList<>();
fields1.add(new HighlightBuilder.Field("a"));
fields1.add(new HighlightBuilder.Field("b"));
fields1.add(new HighlightBuilder.Field("c"));
HighlightBuilder highlightBuilder = new HighlightBuilder(null, null, fields1).useExplicitFieldOrder(true);
Another way: generate the explicitly ordered JSON yourself:
boolean useExplicitFieldOrder = true;
List<HighlightField> fields = ...; // the highlight fields to render, in the desired order
XContentBuilder builder = XContentFactory.jsonBuilder();
builder.prettyPrint();
builder.startObject();
builder.startObject("highlight");
if (fields != null) {
    if (useExplicitFieldOrder) {
        builder.startArray("fields");
    } else {
        builder.startObject("fields");
    }
    for (HighlightField field : fields) {
        if (useExplicitFieldOrder) {
            builder.startObject();
        }
        builder.startObject(field.field());
        builder.endObject();
        if (useExplicitFieldOrder) {
            builder.endObject();
        }
    }
    if (useExplicitFieldOrder) {
        builder.endArray();
    } else {
        builder.endObject();
    }
}
builder.endObject();
builder.endObject();
String json = Strings.toString(builder);
System.out.println(json);
It will output the following:
{
    "highlight" : {
        "fields" : [
            { "a" : { } },
            { "b" : { } },
            { "c" : { } }
        ]
    }
}

How to do a 'join' lookup using Spring Data MongoDB 1.8.2?

I need to do a "join" between two different collections; my query is this:
db.Summary.aggregate([
    {
        $lookup: {
            from: "OriginFilter",
            localField: "origin",
            foreignField: "code",
            as: "originObject"
        }
    },
    {
        $project: {
            "hub": 1,
            "moment": 1,
            "origin": { $arrayElemAt: [ "$originObject.displayName", 0 ] },
            "_id": 0
        }
    }
]);
It works perfectly, but I have no idea how to do it in code using Java with Spring.
I have the LookupAggregationOperation class:
public class LookupAggregationOperation implements AggregationOperation {

    private final DBObject operation;

    public LookupAggregationOperation(String from, String localField,
                                      String foreignField, String as) {
        this.operation = new BasicDBObject("$lookup",
                new BasicDBObject("from", from)
                        .append("localField", localField)
                        .append("foreignField", foreignField)
                        .append("as", as));
    }

    @Override
    public DBObject toDBObject(AggregationOperationContext context) {
        return context.getMappedObject(operation);
    }
}
and I'm using it this way:
LookupAggregationOperation getOriginDisplayName =
        new LookupAggregationOperation("OriginFilter", "origin", "code", "originObject");
Then I have a GroupOperation:
GroupOperation groupByDay = group("moment.year", "moment.month", "moment.day", "moment.hour", "countType", "origin", "originObject")
        .sum("$count").as("count");
Finally I have a ProjectionOperation:
ProjectionOperation renameFields = project("count", "countType")
        .and("_id.originObject.displayName").as("originDisplayName")
        .and("_id.year").as("moment.year")
        .and("_id.month").as("moment.month")
        .and("_id.day").as("moment.day")
        .and("_id.hour").as("moment.hour");
But and("_id.originObject.displayName").as("originDisplayName") isn't working. How can I do the query above?

java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to java.lang.Comparable exception when trying to sort a List

While trying to sort a list of WlMatch type as in the code below, I am seeing the exception:
java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to java.lang.Comparable
I tried using both Comparable and Comparator sorts, but both resulted in the same error. Can someone with more expertise point out where I am going wrong?
ReadContext readContext = JsonPath.parse(responseJson);
List<WlMatch> wlMatchs = readContext.read("$.response.watchList.searchResult.records.resultRecord[0].watchlist.matches.wlmatch");
if (wlMatchs != null) {
    // wlMatchs.sort(Comparator.comparingInt(WlMatch::id));
    wlMatchs.sort(new Comparator<WlMatch>() {
        @Override
        public int compare(WlMatch w1, WlMatch w2) {
            if (w1.getId() == w2.getId()) {
                return 0;
            }
            return w1.getId() - w2.getId();
        }
    });
}
In case you want to see what is in the list, the wlMatchs value in debug mode is:
[
    {
        addresses=address,
        bestName=null,
        reasonListed=null,
        countryDetails=country,
        matchReAlert=null,
        phones=null,
        resultDate=0,
        acceptListID=acceptlist,
        bestNameScore=0,
        error=null,
        bestCountry=null,
        trueMatch=null,
        doBs=null,
        file={
            published=3123123213,
            build=111,
            name=file1,
            id=456,
            type=txt,
            custom=true
        },
        entityDetails={
            akAs=null,
            addresses={
                entityAddress=[
                    {
                        stateProvinceDistrict=null,
                        country=India,
                        comments=null,
                        city=Hyderabad,
                        postalCode=500001,
                        street1=null,
                        id=0,
                        street2=null,
                        type=null
                    }
                ]
            },
            dateListed=null,
            comments=null,
            gender=MALE,
            listReferenceNumber=null,
            reasonListed=reason,
            entityType=IND,
            additionalInfo={
                entityAdditionalInfo=[
                    {
                        comments=null,
                        id=0,
                        type=DOB,
                        value=12-12-1989
                    }
                ]
            },
            name=null,
            iDs=null,
            phones=null
        },
        entityName=In,
        falsePositive=null,
        gatewayOFACScreeningIndicatorMatch=null,
        previousResultID=null,
        conflicts=null,
        iDs=null,
        entityScore=0,
        id=456,
        addedToAcceptList=true,
        matchXML=null,
        secondaryOFACScreeningIndicatorMatch=null,
        entityUniqueID=null,
        autoFalsePositive=null,
        bestCountryScore=null,
        citizenships=null,
        checkSum=0,
        addressName=true,
        ofacInfo=null,
        bestAddressIsPartial=null,
        bestCountryType=null
    },
    { ... identical to the first entry except file.id=789 and id=789 ... },
    { ... identical to the first entry except file.id=123 and id=123 ... }
]
You can try to do this. JsonPath does not know about your WlMatch class, so it returns the records as a List of LinkedHashMaps; that is why the sort tries to cast a LinkedHashMap to Comparable. Convert the maps to WlMatch objects first, for example with Jackson's ObjectMapper:
ReadContext readContext = JsonPath.parse(responseJson);
List<Map<String, Object>> rawMatches = readContext.read("$.response.watchList.searchResult.records.resultRecord[0].watchlist.matches.wlmatch");
if (rawMatches != null) {
    // Transform to List<WlMatch>: convertValue maps each LinkedHashMap onto the POJO fields
    ObjectMapper mapper = new ObjectMapper();
    List<WlMatch> wlMatchs = rawMatches.stream()
            .map(m -> mapper.convertValue(m, WlMatch.class))
            .collect(Collectors.toList());
    // wlMatchs.sort(Comparator.comparingInt(WlMatch::getId));
    wlMatchs.sort(new Comparator<WlMatch>() {
        @Override
        public int compare(WlMatch w1, WlMatch w2) {
            if (w1.getId() == w2.getId()) {
                return 0;
            }
            return w1.getId() - w2.getId();
        }
    });
}

How to convert list of filepaths to a JSON object in Java

I have a requirement to convert a set of file paths obtained from a DB into a JSON structure.
For example:
From the DB, I get the following path and file attributes:
Record 1:
"path": "/sd/card/camera/pics/selfie.jpg",
"fileName": "selfie.jpg",
"mimeType": "image/jpeg"
Record 2:
"path": "/sd/card/camera/pics/personal/selfie1.jpg",
"fileName": "selfie1.jpg",
"mimeType": "image/jpeg"
and so on...
I need to convert this to a JSON like:
[{
    "sd": [{
        "card": [{
            "camera": [{
                "pics": [{
                        "fileName": "selfie.jpg",
                        "path": "/sd/card/camera/pics/selfie.jpg",
                        "mimeType": "image/jpeg"
                    },
                    {
                        "personal": [{
                            "fileName": "selfie1.jpg",
                            "path": "/sd/card/camera/pics/personal/selfie1.jpg",
                            "mimeType": "image/jpeg"
                        }]
                    }
                ]
            }]
        }]
    }]
}]
I'm going to give you a Jackson solution.
First, build an object (or several; I'll let you deal with the Java inheritance, or use any kind of structure you want). For example, like this one:
@JsonSerialize(using = CustomSerializer.class)
public class Something {

    private String currentFolder; // name of the folder, if this instance of Something is a folder
    private Something[] childs;
    private Map<String, String> currentPicture; // picture properties, if this instance of Something is a picture

    public Something() { currentPicture = new HashMap<String, String>(); }

    public Something[] getChilds() { return childs; }
    public void setContent(Something[] _childs) { this.childs = _childs; }

    public String getCurrentFolder() { return currentFolder; }
    public void setCurrentFolder(String _currentFolder) { this.currentFolder = _currentFolder; }

    public Map<String, String> getCurrentPicture() { return currentPicture; }
    public void setCurrentPicture(Map<String, String> currentPicture) { this.currentPicture = currentPicture; }
}
Then, create the CustomSerializer, which will let you write exactly the shape you want:
public class CustomSerializer extends JsonSerializer<Something> {

    @Override
    public void serialize(Something value, JsonGenerator jgen, SerializerProvider provider)
            throws IOException, JsonProcessingException {
        jgen.writeStartObject();
        // Write the folder into the JSON, only if it exists
        if (value.getCurrentFolder() != null) {
            jgen.writeObjectField(value.getCurrentFolder(), value.getChilds());
        }
        // Write the properties of the picture, only if they exist
        if (value.getCurrentPicture() != null) {
            for (String k : value.getCurrentPicture().keySet()) {
                jgen.writeObjectField(k, value.getCurrentPicture().get(k));
            }
        }
        jgen.writeEndObject();
    }
}
Finally (I've not done this one, but I'm sure you will!), create a mapper from what you read in the DB to the "Something" class; a sketch of that step follows the example output below.
I build the object manually here (quickly, so it's not clean):
public static void main(String[] args) throws JsonGenerationException, JsonMappingException, IOException {
    Something s = new Something();
    s.setCurrentFolder("toto");
    Something s2 = new Something();
    s2.setCurrentFolder("tata");
    Something s2bis = new Something();
    s2bis.setCurrentFolder("tataBis");
    Something[] s2Group = {s2bis};
    s2.setContent(s2Group);
    Something s2bispic = new Something();
    s2bispic.getCurrentPicture().put("fileName", "ThatPictureOfMysSelfILikeSoMuch.jpg");
    s2bispic.getCurrentPicture().put("path", "toto/tata/tataBis/ThatPictureOfMysSelfILikeSoMuch.jpg");
    s2bispic.getCurrentPicture().put("mimeType", "image/jpeg");
    Something s2bispic2 = new Something();
    s2bispic2.getCurrentPicture().put("fileName", "ThatPictureOfMysSelfIDontLike.jpg");
    s2bispic2.getCurrentPicture().put("path", "toto/tata/tataBis/ThatPictureOfMysSelfIDontLike.jpg");
    s2bispic2.getCurrentPicture().put("mimeType", "image/jpeg");
    Something[] s2BisGroup = {s2bispic, s2bispic2};
    s2bis.setContent(s2BisGroup);
    Something s3 = new Something();
    s3.getCurrentPicture().put("fileName", "selfie.jpg");
    s3.getCurrentPicture().put("path", "toto/selfie.jpg");
    s3.getCurrentPicture().put("mimeType", "image/jpeg");
    Something[] sGroup = {s2, s3};
    s.setContent(sGroup);
    ObjectMapper mapper = new ObjectMapper();
    String temp = mapper.writeValueAsString(s);
    System.out.println(temp);
}
And this is what I get:
{
"toto":[
{
"tata":[
{
"tataBis":[
{
"path":"toto/tata/tataBis/ThatPictureOfMysSelfILikeSoMuch.jpg",
"fileName":"ThatPictureOfMysSelfILikeSoMuch.jpg",
"mimeType":"image/jpeg"
},
{
"path":"toto/tata/tataBis/ThatPictureOfMysSelfIDontLike.jpg",
"fileName":"ThatPictureOfMysSelfIDontLike.jpg",
"mimeType":"image/jpeg"
}
]
}
]
},
{
"path":"toto/selfie.jpg",
"fileName":"selfie.jpg",
"mimeType":"image/jpeg"
}
]
}
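To round this out, here is a rough sketch of the mapping step left to the reader, assuming each DB record arrives as a Map with "path", "fileName", and "mimeType" keys (TreeMapper, buildTree, and the helper methods are made-up names):

import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class TreeMapper {

    // Split each record's path into folder segments and grow the Something tree,
    // reusing folder nodes that were already created by earlier records.
    public static Something buildTree(List<Map<String, String>> records) {
        Something root = new Something();
        root.setContent(new Something[0]);
        for (Map<String, String> record : records) {
            String[] segments = record.get("path").split("/");
            Something current = root;
            // segments[0] is empty (leading "/"); the last segment is the file name
            for (int i = 1; i < segments.length - 1; i++) {
                Something child = findFolder(current, segments[i]);
                if (child == null) {
                    child = new Something();
                    child.setCurrentFolder(segments[i]);
                    child.setContent(new Something[0]);
                    addChild(current, child);
                }
                current = child;
            }
            Something picture = new Something();
            picture.getCurrentPicture().putAll(record); // path, fileName, mimeType
            addChild(current, picture);
        }
        return root;
    }

    private static Something findFolder(Something parent, String folderName) {
        for (Something child : parent.getChilds()) {
            if (folderName.equals(child.getCurrentFolder())) {
                return child;
            }
        }
        return null;
    }

    private static void addChild(Something parent, Something child) {
        Something[] childs = Arrays.copyOf(parent.getChilds(), parent.getChilds().length + 1);
        childs[childs.length - 1] = child;
        parent.setContent(childs);
    }
}

Since the root node has no folder name of its own, you would serialize root.getChilds() rather than root to get the top-level array shown in the question.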
Regards,

Why do I get only three of my four YouTube playlists with the YouTube API v3?

The following occurs during my use of the YouTube API v3: I request a search to get all my playlists, but the result contains only 3 of 4; the newest playlist is missing.
But when I try the API on the developer page, I get all four playlists.
Developer page:
https://developers.google.com/youtube/v3/docs/playlists/list?hl=fr#try-it
My Request:
https://www.googleapis.com/youtube/v3/playlists?part=id&channelId=UCtEV5D0j6LwV0MqCnZ7H1AA&key={YOUR_API_KEY}
In my Java application I use the API to do the same; here is my code, which delivers only 3 of 4 playlists:
public int execute(PipelineDictionary dict)
        throws PipeletExecutionException
{
    // Initialize the search
    YouTube.Search.List search = null;
    // Response item
    SearchListResponse response = null;
    // Create the youtube object
    YouTube youtube = new YouTube.Builder(HTTP_TRANSPORT, JSON_FACTORY, new HttpRequestInitializer() {
        public void initialize(HttpRequest request) throws IOException {}
    }).setApplicationName("GartenXXL").build();
    try
    {
        search = youtube.search().list("id,snippet");
    }
    catch (IOException e1)
    {
        throw new PipeletExecutionException("Failure during sending youtube search request");
    }
    // Set API key
    search.setKey(cfg_apiKey);
    search.setChannelId(cfg_channelId);
    // Set request query
    // search.setQ(queryTerm);
    // search.type("playlist"); or search.type("channel");
    search.setType("playlist");
    // Set fields; restricts the response to only the fields that are listed
    search.setFields("items(id/playlistId,snippet(title))");
    search.setMaxResults(50L);
    try
    {
        response = search.execute();
        //System.out.println(response.toPrettyString());
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    dict.put(DN_YOUTUBE_LISTS, response);
    return PIPELET_NEXT;
}
The resulting JSON looks like:
{
    "items": [
        {
            "id": {
                "playlistId": "PLWWeimUpYTkhXubzYoPSqetvhCYZocaVj"
            },
            "snippet": {
                "title": "GartenXXL Ratgeber"
            }
        },
        {
            "id": {
                "playlistId": "PLWWeimUpYTkjCabIwPo0NuvMV_BPkC981"
            },
            "snippet": {
                "title": "Produktvideos - Top-Produkte von GartenXXL"
            }
        },
        {
            "id": {
                "playlistId": "PLWWeimUpYTkgEBjIryCrqms640yy4D_1d"
            },
            "snippet": {
                "title": "Rasenmäher-Videos"
            }
        }
    ]
}
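One thing worth noting: the REST request above hits the playlists.list endpoint, while the Java code goes through search.list, which is a different, index-backed endpoint and can lag behind newly created playlists. A minimal sketch of the equivalent playlists.list call in the Java client, assuming the same youtube, cfg_apiKey, and cfg_channelId as above:

// Query the playlists.list endpoint directly, matching the REST request
YouTube.Playlists.List playlistsRequest = youtube.playlists().list("id,snippet");
playlistsRequest.setKey(cfg_apiKey);
playlistsRequest.setChannelId(cfg_channelId);
playlistsRequest.setMaxResults(50L);
PlaylistListResponse playlistsResponse = playlistsRequest.execute();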
