I defined the following aspect to measure the execution time of some methods:
#Around("execution(#Metrics * *.*(..))")
public Object metrics(ProceedingJoinPoint pointcut) {
Logger log = LoggerFactory.getLogger(pointcut.getSourceLocation().getWithinType());
long ms = System.currentTimeMillis();
try {
Object result = pointcut.proceed();
ms = System.currentTimeMillis() - ms;
log.info(String.format("Execution of method %s finished in %d ms", pointcut.getSignature().getName(), ms));
return result;
}
catch (Throwable e) {
log.error(String.format("Execution of method %s ended with an error", pointcut.getSignature().getName()), e);
}
return null;
}
The problem comes when I use it on the update method of my DAOs, which is @Transactional. The results I'm getting do not match the real times. I guess it is only measuring the execution time of the Java code, but not the database update performed by Hibernate.
Is it possible to measure the complete execution time?
For reference, I am using Spring 3.2.9 and Hibernate 3.5 in my application.
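One thing worth checking (a sketch under assumptions, not verified against this setup): both the metrics aspect and Spring's transaction interceptor are ordered advisors on the same proxy, so if the metrics advice runs inside the transaction advice, proceed() returns before Hibernate flushes and the transaction commits. Giving the metrics aspect a higher precedence (lower order value) than the transaction advice should make proceed() span the whole transaction. The MetricsAspect name and the order values below are illustrative only.

// Sketch: make the metrics aspect the outermost advice so that pointcut.proceed()
// includes Hibernate's flush and the transaction commit.
// XML equivalent for the transaction side: <tx:annotation-driven transaction-manager="txManager" order="200"/>
@Aspect
@Order(100)   // lower value = higher precedence = runs outside the transaction advice
public class MetricsAspect {
    // ... the @Around("execution(@Metrics * *.*(..))") advice shown above ...
}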
I am trying to execute a query over a table in BigQuery using its Java client libraries. I create a Job and then fetch its results with the job.getQueryResults().iterateAll() method.
This works, but for large result sets (around 600k rows) it takes 80-120 seconds. BigQuery fetches the data in batches of 40-45k rows, each taking around 5-7 sec.
I want to get the results faster. I read on the internet that if we can get the temporary table created by BigQuery for the Job and read its data in Avro or some other format, it will be really fast, but I don't see a way to do that in the BigQuery API (version 1.124.7).
Does anyone know how to do that in Java, or how to fetch data faster for a large number of records?
Any help is appreciated.
Code to read the table (takes 20 sec)
Table table = bigQueryHelper.getBigQueryClient().getTable(TableId.of("project","dataset","table"));
String format = "CSV";
String gcsUrl = "gs://name/test.csv";
Job job = table.extract(format, gcsUrl);
// Wait for the job to complete
try {
Job completedJob = job.waitFor(RetryOption.initialRetryDelay(Duration.ofSeconds(1)),
RetryOption.totalTimeout(Duration.ofMinutes(3)));
if (completedJob != null && completedJob.getStatus().getError() == null) {
log.info("job done");
// Job completed successfully
} else {
log.info("job has error");
// Handle error case
}
} catch (InterruptedException e) {
// Handle interrupted wait
}
Code to read the same table using a query (takes 90 sec)
Job job = bigQueryHelper.getBigQueryClient().getJob(JobId.of(jobId));
for (FieldValueList row : job.getQueryResults().iterateAll()) {
System.out.println(row);
}
I tried several approaches and, based on that, found the best way of doing it; posting it here to help someone in the future.
1: If we use job.getQueryResults().iterateAll() on the job or directly on the table, it takes the same time. If we don't specify a batch size, BigQuery uses a batch size of around 35-45k rows. So for 600k rows (180 MB) it takes 70-100 sec.
2: We can take the temp table details from the created job and use the table's extract-job feature to write the result to GCS; this is faster and takes around 30-35 sec. This approach does not download the data locally; for that we would again need to use ...iterateAll() on the temp table, which takes the same time as option 1.
Example pseudo code:
try {
Job job = getBigQueryClient().getJob(JobId.of(jobId));
long start = System.currentTimeMillis();
// FieldList list = getFields(job);
Job completedJob =
job.waitFor(
RetryOption.initialRetryDelay(Duration.ofSeconds(1)),
RetryOption.totalTimeout(Duration.ofMinutes(3)));
if (completedJob != null && completedJob.getStatus().getError() == null) {
log.info("job done");
String gcsUrl = "gs://bucketname/test";
//getting the temp table information of the Job
TableId destinationTableInfo =
((QueryJobConfiguration) job.getConfiguration()).getDestinationTable();
log.info("Total time taken in getting schema ::{}", (System.currentTimeMillis() - start));
Table table = bigQueryHelper.getBigQueryClient().getTable(destinationTableInfo);
//Using extract job to write the data in GCS
Job newJob1 =
table.extract(
CsvOptions.newBuilder().setFieldDelimiter("\t").build().toString(), gcsUrl);
System.out.println("DestinationInfo::" + destinationTableInfo);
Job completedJob1 =
newJob1.waitFor(
RetryOption.initialRetryDelay(Duration.ofSeconds(1)),
RetryOption.totalTimeout(Duration.ofMinutes(3)));
if (completedJob1 != null && completedJob1.getStatus().getError() == null) {
log.info("job done");
} else {
log.info("job has error");
}
} else {
log.info("job has error");
}
} catch (InterruptedException e) {
e.printStackTrace();
}
3: This is the best way, and the one I wanted. It downloads/writes the result to a local file faster, in around 20 sec. It uses the newer Storage Read API that BigQuery provides and can be checked using the links below; a sketch follows them:
https://cloud.google.com/bigquery/docs/reference/storage#background
https://cloud.google.com/bigquery/docs/reference/storage/libraries#client-libraries-install-java
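A minimal sketch of option 3, reading the job's temp table through the Storage Read API. This uses the separate google-cloud-bigquerystorage client, not the BigQuery client from bigQueryHelper; projectId is a placeholder, and the Avro decoding is omitted, so this only counts rows:

import java.io.IOException;
import com.google.api.gax.rpc.ServerStream;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.storage.v1.*;

static long readTempTable(String projectId, TableId destinationTableInfo) throws IOException {
    try (BigQueryReadClient readClient = BigQueryReadClient.create()) {
        String table = String.format("projects/%s/datasets/%s/tables/%s",
                destinationTableInfo.getProject(),          // temp table taken from the job, as above
                destinationTableInfo.getDataset(),
                destinationTableInfo.getTable());
        ReadSession session = readClient.createReadSession(
                CreateReadSessionRequest.newBuilder()
                        .setParent("projects/" + projectId)
                        .setReadSession(ReadSession.newBuilder()
                                .setTable(table)
                                .setDataFormat(DataFormat.AVRO))
                        .setMaxStreamCount(1)
                        .build());
        ReadRowsRequest request = ReadRowsRequest.newBuilder()
                .setReadStream(session.getStreams(0).getName())
                .build();
        long rows = 0;
        ServerStream<ReadRowsResponse> stream = readClient.readRowsCallable().call(request);
        for (ReadRowsResponse response : stream) {
            rows += response.getRowCount();                 // decode response.getAvroRows() with
        }                                                   // session.getAvroSchema() for real data
        return rows;
    }
}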
I have to iterate over 130 Data Transfer Objects, and each one will generate a JSON to be uploaded to AWS S3.
With no improvements, it takes around 90 seconds to complete the whole process. I tried using a lambda and not using one, with the same results for both.
for(AbstractDTO dto: dtos) {
try {
processDTO(dealerCode, yearPeriod, monthPeriod, dto);
} catch (FileAlreadyExistsInS3Exception e) {
failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
}
}
dtos.stream().forEach(dto -> {
try {
processDTO(dealerCode, yearPeriod, monthPeriod, dto);
} catch (FileAlreadyExistsInS3Exception e) {
failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
}
});
After some investigation, I concluded that the method processDTO takes around 0.650 seconds (650 ms) per item to run.
My first attempt was to use parallel streams, and the results were pretty good, taking around 15 seconds to complete the whole process:
dtos.parallelStream().forEach(dto -> {
try {
processDTO(dealerCode, yearPeriod, monthPeriod, dto);
} catch (FileAlreadyExistsInS3Exception e) {
failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
}
});
But I still need to decrease that time.
I researched how to improve parallel streams and discovered the ForkJoinPool trick:
ForkJoinPool forkJoinPool = new ForkJoinPool(PARALLELISM_NUMBER);
forkJoinPool.submit(() ->
dtos.parallelStream().forEach(dto -> {
try {
processDTO(dealerCode, yearPeriod, monthPeriod, dto);
} catch (FileAlreadyExistsInS3Exception e) {
failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
}
})).get();
forkJoinPool.shutdown();
Unfortunately, the results were a bit confusing for me.
When PARALLELISM_NUMBER is 8, it takes around 13 seconds to complete the whole process. Not a big improvement.
When PARALLELISM_NUMBER is 16, it takes around 8 seconds to complete the whole process.
When PARALLELISM_NUMBER is 32, it takes around 5 seconds to complete the whole process.
All tests were done with Postman requests calling the controller method, which ends up iterating over the 130 items.
I'm satisfied with 5 seconds, using 32 as PARALLELISM_NUMBER, but I'm worried about the consequences.
Is it ok to keep 32?
What is the ideal PARALLELISM_NUMBER?
What do I have to keep in mind when deciding its value?
I'm running on a 2.2 GHz i7 Mac:
sysctl hw.physicalcpu hw.logicalcpu
hw.physicalcpu: 4
hw.logicalcpu: 8
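As a quick cross-check from Java (just a sketch; the values in the comments are what those numbers would imply, not measured here):

// Logical cores visible to the JVM (8 on this machine) and the common pool's default
// parallelism (typically cores - 1, i.e. 7), which is what a plain parallelStream() uses.
System.out.println(Runtime.getRuntime().availableProcessors());
System.out.println(java.util.concurrent.ForkJoinPool.commonPool().getParallelism());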
Here's what processDTO does:
private void processDTO(int dealerCode, int yearPeriod, int monthPeriod, AbstractDTO dto) throws FileAlreadyExistsInS3Exception {
String flatJson = JsonFlattener.flatten(new JSONObject(dto).toString());
String jsonFileName = dto.fileName() + JSON_TYPE;
String jsonFilePath = buildFilePathNew(dto.endpoint(), dealerCode, yearPeriod, monthPeriod, AWS_S3_JSON_ROOT_FOLDER);
uploadFileToS3(jsonFilePath + jsonFileName, flatJson);
}
public void uploadFileToS3(String fileName, String fileContent) throws FileAlreadyExistsInS3Exception {
if (s3client.doesObjectExist(bucketName, fileName)) {
throw new FileAlreadyExistsInS3Exception(ErrorMessages.FILE_ALREADY_EXISTS_IN_S3.getMessage());
}
s3client.putObject(bucketName, fileName, fileContent);
}
The parallelism parameter decides how many threads the ForkJoinPool will use. That's why the default parallelism value is the available CPU core count:
Math.min(MAX_CAP, Runtime.getRuntime().availableProcessors())
In your case the bottleneck should be checking whether a file exists and uploading it to S3. The time here depends on at least a few factors: CPU, network card and driver, operating system, and others. The S3 network operations are apparently not CPU bound in your case, since you see an improvement by creating more simultaneous worker threads; perhaps the network requests are queued by the operating system.
The right value for parallelism varies from one workload type to another. A CPU-bound workload does best with the default parallelism equal to the number of CPU cores, because of the negative impact of context switching. A non-CPU-bound workload like yours can be sped up with more worker threads, assuming the workload doesn't block the CPU, e.g. by busy waiting.
There is no single ideal value for parallelism in ForkJoinPool.
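If you'd rather not tune ForkJoinPool at all, one common alternative for I/O-bound work is an explicitly sized executor. A minimal sketch, reusing the fields from the question (dtos, dealerCode, failedToUploadDTOs, etc.); the pool size of 32 simply mirrors the value that worked in your tests, it is not a recommendation:

ExecutorService ioPool = Executors.newFixedThreadPool(32);   // sized for I/O, not for CPU cores
List<CompletableFuture<Void>> futures = dtos.stream()
        .map(dto -> CompletableFuture.runAsync(() -> {
            try {
                processDTO(dealerCode, yearPeriod, monthPeriod, dto);
            } catch (FileAlreadyExistsInS3Exception e) {
                failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
            }
        }, ioPool))
        .collect(Collectors.toList());
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();   // wait for all uploads
ioPool.shutdown();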
I managed to reduce it to 8 seconds thanks to all your helpful advice and explanations.
Since the bottleneck was the upload to AWS S3, and you mentioned a non-blocking AWS API, after some research I found that the TransferManager class provides a non-blocking upload.
TransferManager class
So instead of using ForkJoinPool to increase the number of threads, I kept the simple parallelStream:
dtos.parallelStream().forEach(dto -> {
try {
processDTO(dealerCode, yearPeriod, monthPeriod, dto);
} catch (FileAlreadyExistsInS3Exception e) {
failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
}
});
And the upload method changed a bit: instead of using the AmazonS3 client, I used the TransferManager:
public Upload uploadAsyncFileToS3(String fileName, String fileContent) throws FileAlreadyExistsInS3Exception {
if (s3client.doesObjectExist(bucketName, fileName)) {
throw new FileAlreadyExistsInS3Exception(ErrorMessages.FILE_ALREADY_EXISTS_IN_S3.getMessage());
}
InputStream targetStream = new ByteArrayInputStream(fileContent.getBytes());
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(fileContent.getBytes().length);
return transferManager.upload(bucketName, fileName, targetStream, metadata);
}
This way, when the upload is called, it doesn't wait for it to finish, letting another DTO be processed. When all DTOs are processed, I check their upload status for possible errors (outside the first forEach), as sketched below.
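A rough sketch of that final check (the pendingUploads list and how it is filled are assumptions; the post only says the Upload handles are inspected after the forEach):

List<Upload> pendingUploads = Collections.synchronizedList(new ArrayList<>());
// inside the parallel stream: pendingUploads.add(uploadAsyncFileToS3(jsonFilePath + jsonFileName, flatJson));

for (Upload upload : pendingUploads) {
    try {
        upload.waitForCompletion();                       // blocks until this transfer finishes
    } catch (AmazonClientException | InterruptedException e) {
        failedToUploadDTOs.add(upload.getDescription() + ": " + e.getMessage());
    }
}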
The DropwizardMetricServices#submit() I'm using doesn't submit the gauge metric a second time.
My use case is to remove the gauge metric from JMX after reading it, and my application can then send the same metric again (with a different value).
The first time, the gauge metric is submitted successfully (then my application removes it once it has read the metric). But the same metric is not submitted the second time.
So I'm a bit confused: what would be the reason for DropwizardMetricServices#submit() not working the second time?
Below is the code:
Submit metric:
private void submitNonSparseMetric(final String metricName, final long value) {
validateMetricName(metricName);
metricService.submit(metricName, value); // metricService is the DropwizardMetricServices
log(metricName, value);
LOGGER.debug("Submitted the metric {} to JMX", metricName);
}
Code that reads and removes the metric:
protected void collectMetrics() {
// Create the connection
Long currTime = System.currentTimeMillis()/1000; // Graphite needs
Socket connection = createConnection();
if (connection == null){
return;
}
// Get the output stream
DataOutputStream outputStream = getDataOutputStream(connection);
if (outputStream == null){
closeConnection();
return;
}
// Get metrics from JMX
Map<String, Gauge> g = metricRegistry.getGauges(); // metricRegistry is com.codahale.metrics.MetricRegistry
for(Entry<String, Gauge> e : g.entrySet()){
String key = e.getKey();
if(p2cMetric(key)){
String metricName = convertToMetricStandard(key);
String metricValue = String.valueOf(e.getValue().getValue());
String metricToSend = String.format("%s %s %s\n", metricName, metricValue, currTime);
try {
writeToStream(outputStream, metricToSend);
// Remove the metric from JMX after successfully sending metric to graphite
removeMetricFromJMX(key);
} catch (IOException e1) {
LOGGER.error("Unable to send metric to Graphite - {}", e1.getMessage());
}
}
}
closeOutputStream();
closeConnection();
}
I think I found the issue.
As per the DropwizardMetricServices doc - https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/actuate/metrics/dropwizard/DropwizardMetricServices.html#submit-java.lang.String-double- ,
the submit() method "Set[s] the specified gauge value".
So I think the DropwizardMetricServices#submit() method is only meant to set the value of an existing gauge metric in JMX, not to add a new metric to JMX.
Once I replaced DropwizardMetricServices#submit() with the MetricRegistry#register() method (com.codahale.metrics.MetricRegistry) to submit all my metrics, it worked as expected and my metrics are re-added to JMX (after my application removes them).
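For reference, a minimal sketch of that register-based replacement (metricRegistry, metricName and value are the same objects used in the code above; the lambda form of Gauge is just one way to write it):

// Registers a fresh Gauge; unlike DropwizardMetricServices#submit(), this works again
// after the metric has been removed from the registry. Note that register() throws
// IllegalArgumentException if a metric with the same name is still present.
metricRegistry.register(metricName, (Gauge<Long>) () -> value);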
But I'm still wondering what makes DropwizardMetricServices#submit() only add new metrics to JMX and not re-add a metric that has already been removed (from JMX). Does DropwizardMetricServices cache (in memory) all the metrics submitted to JMX, so that submit() does not resubmit the metric?
I'm trying to use Java 8 CompletionStages to execute 2 asynchronous methods serially, so that the second is not executed if the first fails. But when I call thenCompose, the function passed in seems to get started before the previous function is complete (i.e. the two functions erroneously execute in parallel). Here is the code:
public CompletionStage<Graph> create(Payload payload) {
CompletionStage<BlobInfo> fileFuture = createFile(payload);
CompletionStage<Entity> metadataFuture = createMetadata(payload);
return fileFuture
.thenCompose(ignore -> metadataFuture)
.thenApply(entity ->
buildFromEntity(objectMapper, entity));
}
public CompletionStage<BlobInfo> createFile(Payload payload) {
return CompletableFuture.supplyAsync(() -> {
try {
return
storage.create(
BlobInfo
.newBuilder(payload.bucket, payload.name)
.build(),
payload.data.getBytes());
} catch (StorageException e) {
LOG.error("Failed to write to storage: " + e);
throw new RequestHandlerException(StatusCode.SERVER_ERROR,
"Failed to write to storage.");
}
});
}
public CompletionStage<Entity> createMetadata(Payload payload) {
return CompletableFuture.supplyAsync(() -> createMetadataSync(payload));
}
private Entity createMetadataSync(Payload payload) {
Key key = keyFactory.newKey(payload.id);
Entity.Builder entityBuilder = GraphPayload.buildEntityFromGraph(payload, key);
Entity entity = entityBuilder.build();
LOG.error("Metadata.createSync");
try {
datastore.add(entity);
} catch (DatastoreException e) {
LOG.error("Failed to write initial metadata: " + e);
throw new RequestHandlerException(StatusCode.SERVER_ERROR,
"Failed to write initial metadata.");
}
return entity;
}
OUTPUT:
16:57:47.530 [ForkJoinPool.commonPool-worker-3] ERROR com.spotify.nfgraphstore.store.FileStore - CreateFile
16:57:47.530 [ForkJoinPool.commonPool-worker-2] ERROR com.spotify.nfgraphstore.store.MetadataStore - Metadata.createSync
16:57:47.530 [ForkJoinPool.commonPool-worker-3] ERROR com.spotify.nfgraphstore.store.FileStore - Failed to write initial graph to storage: com.google.cloud.storage.StorageException: X
The logged output demonstrates that Metadata.createSync is executed before the StorageException gets thrown. This conclusion is also borne out by a test (not shown) that is supposed to show zero interactions with the metadata DB if the write to file storage fails. That test sometimes fails, suggesting a race condition.
So I'm left thinking thenCompose does not guarantee serial execution. But everything I've read in the Javadocs suggests execution should be serial: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html#thenCompose-java.util.function.Function-
Does anyone know why execution is not guaranteed to be serial, or recommend other functions that might work more as I've intended?
The call to createMetadata launches the task immediately, because it is not called as part of the lambda expression passed to thenCompose.
Perhaps you meant to do this:
.thenCompose(ignore -> createMetadata(payload))
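For completeness, a sketch of the whole create() method with that change (same names as in the question); createMetadata now runs only after createFile has completed successfully, so a StorageException short-circuits the chain:

public CompletionStage<Graph> create(Payload payload) {
    return createFile(payload)
        .thenCompose(ignore -> createMetadata(payload))              // starts only after createFile succeeds
        .thenApply(entity -> buildFromEntity(objectMapper, entity));
}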
//Unit of logic I want to run in parallel
public PagesDTO convertOCRStreamToDTO(String pageId, Integer pageSequence) throws Exception {
LOG.info("Get OCR begin for pageId [{}] thread name {}",pageId, Thread.currentThread().getName());
OcrContent ocrContent = getOcrContent(pageId);
OcrDTO ocrData = populateOCRData(ocrContent.getInputStream());
PagesDTO pageDTO = new PagesDTO(pageId, pageSequence.toString(), ocrData);
return pageDTO;
}
Logic to execute convertOCRStreamToDTO(..) in parallel and then collect its results when the individual thread executions are done:
List<PagesDTO> pageDTOList = new ArrayList<>();
//javadoc: Creates a work-stealing thread pool using all available processors as its target parallelism level.
ExecutorService newWorkStealingPool = Executors.newWorkStealingPool();
Instant start = Instant.now();
List<CompletableFuture<PagesDTO>> pendingTasks = new ArrayList<>();
List<CompletableFuture<PagesDTO>> completedTasks = new ArrayList<>();
CompletableFuture<PagesDTO> task = null;
for (InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
String pageId = dcInputPageDTO.getPageId();
task = CompletableFuture
.supplyAsync(() -> {
try {
return convertOCRStreamToDTO(pageId, pageSequence.getAndIncrement());
} catch (HttpHostConnectException | ConnectTimeoutException e) {
LOG.error("Error connecting to Redis for pageId [{}]", pageId, e);
CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.REDIS_CONNECTION_FAILURE),
" Connecting to the Redis failed while getting OCR for pageId ["+pageId +"] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
exceptionMap.put(pageId,e1);
} catch (CaptureException e) {
LOG.error("Error in Document Classification Engine Service while getting OCR for pageId [{}]",pageId,e);
exceptionMap.put(pageId,e);
} catch (Exception e) {
LOG.error("Error getting OCR content for the pageId [{}]", pageId,e);
CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.TECHNICAL_FAILURE),
"Error while getting ocr content for pageId : ["+pageId +"] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
exceptionMap.put(pageId,e1);
}
return null;
}, newWorkStealingPool);
//collect all async tasks
pendingTasks.add(task);
}
//TODO: How to avoid unnecessary loops which is happening here just for the sake of waiting for the future tasks to complete???
//TODO: Looking for the best solutions
while(pendingTasks.size() > 0) {
for(CompletableFuture<PagesDTO> futureTask: pendingTasks) {
if(futureTask != null && futureTask.isDone()){
completedTasks.add(futureTask);
pageDTOList.add(futureTask.get());
}
}
pendingTasks.removeAll(completedTasks);
}
//Throw the exception caught while converting the OCR stream to a DTO - for any of the pageIds
for(InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
if(exceptionMap.containsKey(dcInputPageDTO.getPageId())) {
CaptureException e = exceptionMap.get(dcInputPageDTO.getPageId());
throw e;
}
}
LOG.info("Parallel processing time taken for {} pages = {}", dcReqDTO.getPages().size(),
org.springframework.util.StringUtils.deleteAny(Duration.between(Instant.now(), start).toString().toLowerCase(), "pt-"));
Please look at the TODO items in my code above. I have two concerns I'm looking for advice on:
1) I want to avoid the unnecessary looping (happening in the while loop above). What is the best way to wait for all threads to complete their async execution and then collect the results? Does anybody have advice?
2) The ExecutorService instance is created at the service bean class level, on the assumption that it will be re-used for every request, instead of creating it locally in the method and shutting it down in a finally block. Am I doing the right thing here, or does my thought process need correcting?
Simply remove the while and the if and you are good:
for(CompletableFuture<PagesDTO> futureTask: pendingTasks) {
completedTasks.add(futureTask);
pageDTOList.add(futureTask.get());
}
get() (as well as join()) will wait for the future to complete before returning a value. Also, there is no need to test for null since your list will never contain any.
You should, however, probably change the way you handle exceptions. CompletableFuture has a specific mechanism for handling them and rethrowing them when get()/join() is called. You might simply want to wrap your checked exceptions in CompletionException, as sketched below.
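A rough sketch of that change, reusing the names from the question (the CaptureException bookkeeping is left out here, which is a simplification, not a suggestion to drop it):

// Inside the loop that builds the tasks: wrap checked exceptions so the caller sees them.
task = CompletableFuture.supplyAsync(() -> {
    try {
        return convertOCRStreamToDTO(pageId, pageSequence.getAndIncrement());
    } catch (Exception e) {
        throw new CompletionException(e);        // rethrown by get()/join() on the collecting side
    }
}, newWorkStealingPool);
pendingTasks.add(task);

// After all tasks are submitted: join() waits for completion, no polling loop needed.
for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
    try {
        pageDTOList.add(futureTask.join());
    } catch (CompletionException e) {
        LOG.error("OCR conversion failed", e.getCause());
    }
}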