Getting 400 as status code in Cosmos DB batch operation - Java

I am trying to store multiple types of records in Cosmos DB using a batch operation, but I am getting a 400 status in the CosmosBatchResponse object, and errorMessage is null. Internally, one item returns 400 and all the other operations have a 424 status code (failed dependency). From this document, https://learn.microsoft.com/en-us/rest/api/cosmos-db/http-status-codes-for-cosmosdb, I can see there are many possible reasons for a 400, but if errorMessage is null, how do I find out what went wrong? Also, the same message gets stored fine via a single create call; I am facing the issue with the batch save only.
PartitionKey partitionKey = new PartitionKey("customerNo");
CosmosBatch batch = CosmosBatch.createCosmosBatch(partitionKey);
batch.createItemOperation(customer);
I have tried storing via the create method alone, looping over each CosmosItemOperation, and every item gets stored.
CosmosBatchResponse response = paymentRepository.createBatch(cosmosBatch);
for (CosmosItemOperation itemOp : cosmosBatch.getOperations()) {
    System.out.println(paymentRepository.create(itemOp.getItem(), "")); // Here it is getting stored.
}
public CosmosBatchResponse createBatch(CosmosBatch cosmosBatch) {
    CosmosBatchResponse response = null;
    try {
        response = container.executeCosmosBatch(cosmosBatch);
        System.out.println(response.isSuccessStatusCode()); // returns false
        System.out.println(response.getErrorMessage());     // returns null
        return response;
    } catch (final Exception e) {
        int statusCode = CosmosUtils.getCosmosStatusCode(e);
        if (CONFLICT_RESOURCE == statusCode) {
            // Note: response is still null if executeCosmosBatch threw,
            // so log the exception message rather than response.getErrorMessage().
            log.error(
                "CosmosCreateDocumentException: Resource already exists for Document : {}",
                e.getMessage());
        }
        shouldRetryOnException(e);
        log.error(
            "CosmosCreateDocumentException for Document {} - {}, {}", cosmosBatch, e.getMessage(), e);
        throw new GenericRepositoryException(e.getMessage(), e);
    }
}
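For reference, in a transactional batch the real failure is reported on the individual operation results: the response-level errorMessage can be null while one operation carries the actual 400 and the remaining ones are marked 424 (failed dependency). A minimal sketch of per-operation inspection, assuming the Azure Cosmos Java SDK v4 (CosmosBatchResponse.getResults() and CosmosBatchOperationResult are part of its public API):
import java.util.List;
import com.azure.cosmos.models.CosmosBatchOperationResult;
import com.azure.cosmos.models.CosmosBatchResponse;

// Walk the per-operation results to find the one that actually failed.
CosmosBatchResponse response = container.executeCosmosBatch(cosmosBatch);
if (!response.isSuccessStatusCode()) {
    List<CosmosBatchOperationResult> results = response.getResults();
    for (int i = 0; i < results.size(); i++) {
        CosmosBatchOperationResult result = results.get(i);
        // 424 marks operations skipped because a sibling failed;
        // any other non-2xx status identifies the culprit operation.
        if (result.getStatusCode() >= 400 && result.getStatusCode() != 424) {
            System.out.println("Operation " + i + " failed with status "
                + result.getStatusCode() + ", subStatus " + result.getSubStatusCode());
        }
    }
}
One thing worth double-checking in the batch creation snippet above: new PartitionKey("customerNo") passes the literal string "customerNo" as the partition key value. Every item in a transactional batch must carry exactly that value in its partition key field; a mismatch between the batch's partition key and an item's own value is a common cause of a 400 on a single operation.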

Related

stateStore.delete(key) in Kafka is not working

I have what I thought would be a simple state store use case. We loop through a state store every 10s and try to send each entry to a partner; if we receive a 404, we try again at the next interval.
If we receive a 200, we delete the entry from the state store.
In my test (one entry in the state store) I first let it run a few loops where we receive 404, just to verify that the retry works. When I switch my mock endpoint to return 200, I can see through the logs that both stateStore.delete(key) and stateStore.flush() are called. I even confirm after stateStore.delete(key) that stateStore.get(key) returns a null value (tombstone).
However, the next time the punctuator runs (10s later), the object is still in the state store and the entire block is executed again. It keeps looping like this, without ever deleting the entry from the state store.
@Override
public void punctuate(long l) {
    log.info("PeriodicRetryPunctuator started: " + l);
    try (KeyValueIterator<String, TestEventObject> iter = stateStore.all()) {
        while (iter.hasNext()) {
            KeyValue<String, TestEventObject> keyValue = iter.next();
            String key = keyValue.key;
            TestEventObject event = keyValue.value;
            try {
                log.info("Event: " + event);
                // Sends event over HTTP. Will throw HttpResponseException if 404 is received
                eventService.processEvent(event);
                stateStore.delete(key);
                stateStore.flush();
                // Check that the state store returns null
                log.info("Check: " + stateStore.get(key));
            } catch (HttpResponseException hre) {
                log.info("Periodic retry received 404. Retrying at next interval");
            } catch (Exception e) {
                e.printStackTrace();
                log.error("Exception with periodic retry: {}", e.getMessage());
            }
        }
    }
}
Update:
It seems to be Confluent's encryption libraries that cause these issues. I've done quite an extensive A/B test, and every time the problem occurs it is with Confluent encryption enabled. Without it, I never experience this issue.

500 Internal Server Error instead of 404 in Spring Boot

When I try to look up a value that is not in the database, I get a 500 Internal Server Error. I have already provided logic to throw a ResourceNotFoundException, but it's not working for some reason. What do I need to do to get a 404 with ResourceNotFoundException instead of a 500 Internal Server Error?
Here's my code:
@PostMapping("/start/{id}")
public ResponseEntity<String> startEvent(@PathVariable Long id) {
    Event event = this.eventRepository.findById(id).get();
    if (event == null) {
        throw new ResourceNotFoundException("Event with id " + id + " not found.");
    }
    event.setStarted(true);
    this.eventRepository.save(event);
    return ResponseEntity.ok("Event " + event.getName() + " has started");
}
I guess eventRepository.findById(id) // id = 200 returns a 500 response because a record with id 200 does not exist in the database. What should I do to get a ResourceNotFoundException?
eventRepository.findById returns an Optional (since Spring Data JPA 2.0, see https://docs.spring.io/spring-data/jpa/docs/2.0.6.RELEASE/reference/html/#repositories.core-concepts).
Calling Optional.get() on an empty Optional throws NoSuchElementException (https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html#get--), so your if (event == null) check comes too late.
Checking the stack trace, you should see that the exception comes from the line with this.eventRepository.findById and the actual exception is NoSuchElementException.
To fix that you should change your code to
Optional<Event> optionalEvent = this.eventRepository.findById(id);
if (!optionalEvent.isPresent()) {
    throw new ResourceNotFoundException("Event with id " + id + " not found.");
}
Event event = optionalEvent.get();
// the rest of your logic
You may also write your code in a more functional way:
Event event = this.eventRepository
    .findById(id)
    .orElseThrow(() -> new ResourceNotFoundException("Event with id " + id + " not found."));
Summary
Do not call get() on an Optional without checking that it is present (using the isPresent() method).
eventRepository.findById() returns an Optional, so you have to test for existence before calling get():
Optional<Event> optEvent = eventRepository.findById(id);
if (!optEvent.isPresent()) {
    // throw exception here
}
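Note that throwing ResourceNotFoundException produces a 404 only if Spring knows how to map it to that status; a custom exception without such a mapping still surfaces as a 500. A minimal sketch, assuming a hand-rolled exception class (the @ResponseStatus annotation is standard Spring MVC):
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

// Tells Spring MVC to answer with HTTP 404 whenever a controller throws this exception.
@ResponseStatus(HttpStatus.NOT_FOUND)
public class ResourceNotFoundException extends RuntimeException {
    public ResourceNotFoundException(String message) {
        super(message);
    }
}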

CompletionStage.thenCompose not executing serially

I'm trying to use Java 8 CompletionStages to execute two asynchronous methods serially, so that the second is not executed if the first fails. But when I call thenCompose, the function passed in seems to get started before the previous stage is complete (i.e., the two functions erroneously execute in parallel). Here is the code:
public CompletionStage<Graph> create(Payload payload) {
    CompletionStage<BlobInfo> fileFuture = createFile(payload);
    CompletionStage<Entity> metadataFuture = createMetadata(payload);
    return fileFuture
        .thenCompose(ignore -> metadataFuture)
        .thenApply(entity -> buildFromEntity(objectMapper, entity));
}
public CompletionStage<BlobInfo> createFile(Payload payload) {
    return CompletableFuture.supplyAsync(() -> {
        try {
            return storage.create(
                BlobInfo
                    .newBuilder(payload.bucket, payload.name)
                    .build(),
                payload.data.getBytes());
        } catch (StorageException e) {
            LOG.error("Failed to write to storage: " + e);
            throw new RequestHandlerException(StatusCode.SERVER_ERROR,
                "Failed to write to storage.");
        }
    });
}
public CompletionStage<Entity> createMetadata(Payload payload) {
    return CompletableFuture.supplyAsync(() -> createMetadataSync(payload));
}

private Entity createMetadataSync(Payload payload) {
    Key key = keyFactory.newKey(payload.id);
    Entity.Builder entityBuilder = GraphPayload.buildEntityFromGraph(payload, key);
    Entity entity = entityBuilder.build();
    LOG.error("Metadata.createSync");
    try {
        datastore.add(entity);
    } catch (DatastoreException e) {
        LOG.error("Failed to write initial metadata: " + e);
        throw new RequestHandlerException(StatusCode.SERVER_ERROR,
            "Failed to write initial metadata.");
    }
    return entity;
}
OUTPUT:
16:57:47.530 [ForkJoinPool.commonPool-worker-3] ERROR com.spotify.nfgraphstore.store.FileStore - CreateFile
16:57:47.530 [ForkJoinPool.commonPool-worker-2] ERROR com.spotify.nfgraphstore.store.MetadataStore - Metadata.createSync
16:57:47.530 [ForkJoinPool.commonPool-worker-3] ERROR com.spotify.nfgraphstore.store.FileStore - Failed to write initial graph to storage: com.google.cloud.storage.StorageException: X
The logged output demonstrates that Metadata.createSync is executed before the storage exception gets thrown. This conclusion is also borne out by a test (not shown) which is supposed to show zero interactions with the metadata DB if the write to the file storage DB fails. That test sometimes fails, suggesting a race condition.
So I'm left thinking thenCompose does not guarantee serial execution. But everything I've read in the Java docs suggests execution should be serial: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html#thenCompose-java.util.function.Function-
Does anyone know why execution is not guaranteed to be serial, or can you recommend other functions that might work more as I intended?
The call to createMetadata launches the task immediately, because it is not called as part of the lambda expression passed to thenCompose.
Perhaps you meant to do this:
.thenCompose(ignore -> createMetadata(payload))
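Put together, the corrected create method defers the metadata write until the file write has completed; a sketch reusing the method signatures from the question:
public CompletionStage<Graph> create(Payload payload) {
    // createMetadata now runs inside the lambda, so its supplyAsync task
    // is not submitted until fileFuture has completed successfully.
    return createFile(payload)
        .thenCompose(ignore -> createMetadata(payload))
        .thenApply(entity -> buildFromEntity(objectMapper, entity));
}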

Parallel processing using collection of CompletableFuture supplyAsync then collecting results

// Unit of logic I want to run in parallel
public PagesDTO convertOCRStreamToDTO(String pageId, Integer pageSequence) throws Exception {
    LOG.info("Get OCR begin for pageId [{}] thread name {}", pageId, Thread.currentThread().getName());
    OcrContent ocrContent = getOcrContent(pageId);
    OcrDTO ocrData = populateOCRData(ocrContent.getInputStream());
    PagesDTO pageDTO = new PagesDTO(pageId, pageSequence.toString(), ocrData);
    return pageDTO;
}
Logic to execute convertOCRStreamToDTO(..) in parallel, then collect the results once the individual threads are done:
List<PagesDTO> pageDTOList = new ArrayList<>();
// javadoc: Creates a work-stealing thread pool using all available processors as its target parallelism level.
ExecutorService newWorkStealingPool = Executors.newWorkStealingPool();
Instant start = Instant.now();
List<CompletableFuture<PagesDTO>> pendingTasks = new ArrayList<>();
List<CompletableFuture<PagesDTO>> completedTasks = new ArrayList<>();
CompletableFuture<PagesDTO> task = null;
for (InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
    String pageId = dcInputPageDTO.getPageId();
    task = CompletableFuture
        .supplyAsync(() -> {
            try {
                return convertOCRStreamToDTO(pageId, pageSequence.getAndIncrement());
            } catch (HttpHostConnectException | ConnectTimeoutException e) {
                LOG.error("Error connecting to Redis for pageId [{}]", pageId, e);
                CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.REDIS_CONNECTION_FAILURE),
                    " Connecting to the Redis failed while getting OCR for pageId [" + pageId + "] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
                exceptionMap.put(pageId, e1);
            } catch (CaptureException e) {
                LOG.error("Error in Document Classification Engine Service while getting OCR for pageId [{}]", pageId, e);
                exceptionMap.put(pageId, e);
            } catch (Exception e) {
                LOG.error("Error getting OCR content for the pageId [{}]", pageId, e);
                CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.TECHNICAL_FAILURE),
                    "Error while getting ocr content for pageId : [" + pageId + "] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
                exceptionMap.put(pageId, e1);
            }
            return null;
        }, newWorkStealingPool);
    // collect all async tasks
    pendingTasks.add(task);
}
// TODO: How to avoid the unnecessary looping happening here just for the sake of waiting for the future tasks to complete???
// TODO: Looking for the best solutions
while (pendingTasks.size() > 0) {
    for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
        if (futureTask != null && futureTask.isDone()) {
            completedTasks.add(futureTask);
            pageDTOList.add(futureTask.get());
        }
    }
    pendingTasks.removeAll(completedTasks);
}
// Throw the exception caught while converting the OCR stream to DTO - for any of the pageIds
for (InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
    if (exceptionMap.containsKey(dcInputPageDTO.getPageId())) {
        CaptureException e = exceptionMap.get(dcInputPageDTO.getPageId());
        throw e;
    }
}
LOG.info("Parallel processing time taken for {} pages = {}", dcReqDTO.getPages().size(),
    org.springframework.util.StringUtils.deleteAny(Duration.between(start, Instant.now()).toString().toLowerCase(), "pt-"));
Please look at the TODO items in my code above. I have two concerns for which I am looking for advice:
1) I want to avoid the unnecessary looping happening in the while loop above. What is the best way to wait for all threads to complete their async execution and then collect the results? Does anybody have advice?
2) The ExecutorService instance is created at my service bean class level, thinking that it will be re-used for every request, instead of creating it local to the method and shutting it down in a finally block. Am I doing this right, or is there any correction to my thought process?
Simply remove the while and the if and you are good:
for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
    completedTasks.add(futureTask);
    pageDTOList.add(futureTask.get());
}
get() (as well as join()) will wait for the future to complete before returning a value. Also, there is no need to test for null since your list will never contain any.
You should however probably change the way you handle exceptions. CompletableFuture has a specific mechanism for handling them and rethrowing them when calling get()/join(). You might simply want to wrap your checked exceptions in CompletionException.
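If you would rather block once until every task has finished, instead of blocking on one get() at a time, a common pattern is CompletableFuture.allOf; a minimal sketch using the variables from the question:
// Wait for all tasks to finish; the subsequent join() calls return immediately.
CompletableFuture
    .allOf(pendingTasks.toArray(new CompletableFuture[0]))
    .join();
for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
    pageDTOList.add(futureTask.join());
}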

MongoDB: What does getLastError() return

I thought WriteResult.getLastError() would return null if the delete operation was successful. Instead, it returns this:
{ "n" : 1 , "connectionId" : 200 , "wtime" : 0 , "err" : null , "ok" : 1.0}
The BatchData document was deleted successfully, but getLastError() is not null.
How should I write the code in the following snippet to know whether the delete was unsuccessful?
try {
    Query<BatchData> queryDeleteBatchData = mongo.createQuery(BatchData.class);
    queryDeleteBatchData.field("uuid").equal(theBatch.uuid);
    queryDeleteBatchData.field("senderUuid").equal(on.uuid);
    WriteResult del = mongo.delete(queryDeleteBatchData);
    if (del.getLastError() != null) {
        logger.error("ERROR");
    }
} catch (Exception e) {
    logger.error("ERROR");
}
The getLastError() command is doing the correct thing. It's telling you that the action was successful (ok:1.0) and that no error occurred ("err":null).
For more details check out the recently updated docs.
getLastError() also has some functionality related to journaling and replication that you may want to investigate.
Edit:
In response to the first comment:
...
    if (!del.getLastError().ok()) {
        logger.error("ERROR");
    }
} catch (Exception e) {
    logger.error("ERROR");
}
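If the goal is to detect that nothing was deleted (as opposed to a command error), the n field is the thing to check. A minimal sketch, assuming the legacy Java driver's WriteResult.getN(), which reports the number of documents the operation affected:
WriteResult del = mongo.delete(queryDeleteBatchData);
// n == 0 means the command succeeded but no document matched the query.
if (del.getN() == 0) {
    logger.error("Nothing was deleted for uuid " + theBatch.uuid);
}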
