@Transactional drastically slows down Rest API - java

The service loads data from the DB in its constructor, stores it in a HashSet, and then serves lookups from that in-memory set. Please take a look:
@RestController
@RequestMapping("/scheduler/api")
@Transactional(readOnly = true, transactionManager = "cnmdbTm")
public class RestApiController {

    private final Set<String> cache;

    @Autowired
    public RestApiController(CNMDBFqdnRepository cnmdbRepository, CNMTSFqdnRepository cnmtsRepository) {
        cache = new HashSet<>();
        cache.addAll(getAllFqdn(cnmdbRepository.findAllFqdn()));
        cache.addAll(getAllFqdn(cnmtsRepository.findAllFqdn()));
    }

    @RequestMapping(value = "/fqdn", method = RequestMethod.POST, produces = MediaType.APPLICATION_JSON_VALUE)
    public List<SchedulerRestDto> checkFqdn(@RequestBody List<SchedulerRestDto> queryList) throws ExecutionException {
        for (SchedulerRestDto item : queryList) {
            item.setFound(1);
            if (!cache.contains(item.getFqdn())) {
                item.setFound(0);
            }
        }
        return queryList;
    }

    private Set<String> getAllFqdn(List<String> fqdnList) {
        Set<String> result = new HashSet<>();
        for (String fqdn : fqdnList) {
            result.add(fqdn);
        }
        return result;
    }
}
But I always get a result in about 2 seconds, which seems slow for the ~35K strings I load from the DB.
To find out where the problem was, I serialized the HashSet to a file and modified the constructor to:
@Autowired
public RestApiController(CNMDBFqdnRepository cnmdbRepository, CNMTSFqdnRepository cnmtsRepository) {
    try (final InputStream fis = getResourceAsStream("cache-hashset.ser");
         final ObjectInputStream ois = new ObjectInputStream(fis)) {
        cache = (Set<String>) ois.readObject();
    } catch (IOException | ClassNotFoundException e) {
        throw new IllegalStateException(e);
    }
}
After that change, the service returned a result in less than 100 ms.
I think it's related to the DB, but I don't know exactly how to fix it.
Any ideas?
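For reference, the serialization experiment described above can be reproduced with a small round-trip helper; this is a sketch with hypothetical names (the original loads the file as a classpath resource), using unchecked exceptions so callers need no throws clauses:

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

public class CacheSnapshot {

    // Write the populated set to disk once, so later runs can load it
    // back instead of hitting the database.
    static void save(Set<String> cache, Path file) {
        try (ObjectOutputStream oos = new ObjectOutputStream(Files.newOutputStream(file))) {
            oos.writeObject(new HashSet<>(cache));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @SuppressWarnings("unchecked")
    static Set<String> load(Path file) {
        try (ObjectInputStream ois = new ObjectInputStream(Files.newInputStream(file))) {
            return (Set<String>) ois.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Loading the snapshot isolates the lookup cost from the DB cost, which is exactly how the question narrowed the problem down.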

After several hours of experiments I realized that the main cause is the @Transactional annotation on the class.
When I moved this annotation onto a method, the service responded more quickly. In the end I moved the annotation to another class entirely. The new constructor is:
@Autowired
public RestApiController(FqdnService fqdnService, SqsService sqsService) {
    Objects.requireNonNull(fqdnService);
    cache = fqdnService.getCache();
}
Code is cleaner and there aren't any performance issues.
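A minimal sketch of the kind of extraction described here, with the repository types stubbed as a plain interface and the Spring annotations indicated in comments (the real service and repository names are assumptions):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Stub for the Spring Data repositories used in the question.
interface FqdnRepository {
    List<String> findAllFqdn();
}

// In the real application this would be a @Service carrying
// @Transactional(readOnly = true, transactionManager = "cnmdbTm"),
// so that only this bean is wrapped in a transactional proxy and the
// controller stays a plain bean.
class FqdnService {

    private final Set<String> cache = new HashSet<>();

    FqdnService(FqdnRepository cnmdbRepository, FqdnRepository cnmtsRepository) {
        cache.addAll(cnmdbRepository.findAllFqdn());
        cache.addAll(cnmtsRepository.findAllFqdn());
    }

    Set<String> getCache() {
        return cache;
    }

    // The per-item check in the controller is then an O(1) set lookup.
    boolean contains(String fqdn) {
        return cache.contains(fqdn);
    }
}
```

The controller simply receives the prebuilt cache once, as in the constructor above, so no request ever passes through the transaction machinery.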


Sequential processing of multi-threaded results

I am setting up a Spring Boot application (DAO pattern with @Repositories) where I am attempting to write a @Service to asynchronously pull data from a database in multiple threads and merge-process the incoming payloads sequentially, preferably on arrival.
The goal is to utilize parallel database access for requests where multiple non-overlapping sets of filter conditions need to be queried individually, but post-processed (transformed, e.g. aggregated) into a combined result.
Being rather new to Java, and coming from Golang and its comparably trivial syntax for multi-threading and task-communication, I struggle to identify a preferable API in Java and Spring Boot - or determine if this approach is even favorable to begin with.
Question:
Given
a Controller:
@RestController
@RequestMapping("/api")
public class MyController {

    private final MyService myService;

    @Autowired
    public MyController(MyService myService) {
        this.myService = myService;
    }

    @PostMapping("/processing")
    public DeferredResult<MyResult> myHandler(@RequestBody MyRequest myRequest) {
        DeferredResult<MyResult> myDeferredResult = new DeferredResult<>();
        myService.myProcessing(myRequest, myDeferredResult);
        return myDeferredResult;
    }
}
a Service:
import com.acme.parallel.util.MyDataTransformer;

@Service
public class MyServiceImpl implements MyService {

    private final MyRepository myRepository;

    @Autowired
    public MyServiceImpl(MyRepository myRepository) {
        this.myRepository = myRepository;
    }

    public void myProcessing(MyRequest myRequest, DeferredResult<MyResult> myDeferredResult) {
        MyDataTransformer myDataTransformer = new MyDataTransformer();
        /* PLACEHOLDER CODE
        for (MyFilter myFilter : myRequest.getMyFilterList()) {
            MyPartialResult myPartialResult = myRepository.myAsyncQuery(myFilter);
            myDataTransformer.transformMyPartialResult(myPartialResult);
        }
        */
        myDeferredResult.setResult(myDataTransformer.getMyResult());
    }
}
a Repository:
@Repository
public class MyRepository {

    public MyPartialResult myAsyncQuery(MyFilter myFilter) {
        // for the sake of an example
        return new MyPartialResult(myFilter, TakesSomeAmountOfTimeToQuery.TRUE);
    }
}
as well as a MyDataTransformer helper class:
public class MyDataTransformer {

    private final MyResult myResult = new MyResult(); // e.g. a Map

    public void transformMyPartialResult(MyPartialResult myPartialResult) {
        /* PLACEHOLDER CODE
        this.myResult.transformAndMergeIntoMe(myPartialResult);
        */
    }

    public MyResult getMyResult() {
        return this.myResult;
    }
}
how can I implement
the MyService.myProcessing method asynchronously and multi-threaded, and
the MyDataTransformer.transformMyPartialResult method sequential/thread-safe
(or redesign the above)
most performantly, to merge incoming MyPartialResult into one single MyResult?
Attempts:
The easiest solution seems to be to skip the "on arrival" part, and a commonly preferred implementation might e.g. be:
public void myProcessing(MyRequest myRequest, DeferredResult<MyResult> myDeferredResult) {
    MyDataTransformer myDataTransformer = new MyDataTransformer();
    List<CompletableFuture<MyPartialResult>> myPartialResultFutures = new ArrayList<>();
    for (MyFilter myFilter : myRequest.getMyFilterList()) { // Stream is the way they say, but I like for
        myPartialResultFutures.add(CompletableFuture.supplyAsync(() -> myRepository.myAsyncQuery(myFilter)));
    }
    myPartialResultFutures.stream()
            .map(CompletableFuture::join)
            .forEach(myDataTransformer::transformMyPartialResult);
    myDeferredResult.setResult(myDataTransformer.getMyResult());
}
However, if feasible I'd like to benefit from sequentially processing incoming payloads when they arrive, so I am currently experimenting with something like this:
public void myProcessing(MyRequest myRequest, DeferredResult<MyResult> myDeferredResult) {
    MyDataTransformer myDataTransformer = new MyDataTransformer();
    List<CompletableFuture<Void>> myPartialResultFutures = new ArrayList<>();
    for (MyFilter myFilter : myRequest.getMyFilterList()) {
        myPartialResultFutures.add(CompletableFuture.supplyAsync(() -> myRepository.myAsyncQuery(myFilter))
                .thenAccept(myDataTransformer::transformMyPartialResult));
    }
    myPartialResultFutures.forEach(CompletableFuture::join);
    myDeferredResult.setResult(myDataTransformer.getMyResult());
}
but I don't understand if I need to implement any thread-safety protocols when calling myDataTransformer.transformMyPartialResult, and how - or if this even makes sense, performance-wise.
Update:
Based on the assumption that
myRepository.myAsyncQuery takes slightly varying amounts of time, and
myDataTransformer.transformMyPartialResult takes an ever-increasing amount of time per call,
implementing a thread-safe/atomic type/Object, e.g. a ConcurrentHashMap:
public class MyDataTransformer {

    private final ConcurrentMap<K, V> myResult = new ConcurrentHashMap<>();

    public void transformMyPartialResult(MyPartialResult myPartialResult) {
        myPartialResult.myRows
                .forEach(row -> this.myResult.merge(row[0], row[1], Integer::sum));
    }
}
into the latter Attempt (processing "on arrival"):
public void myProcessing(MyRequest myRequest, DeferredResult<MyResult> myDeferredResult) {
    MyDataTransformer myDataTransformer = new MyDataTransformer();
    List<CompletableFuture<Void>> myPartialResultFutures = new ArrayList<>();
    for (MyFilter myFilter : myRequest.getMyFilterList()) {
        myPartialResultFutures.add(CompletableFuture.supplyAsync(() -> myRepository.myAsyncQuery(myFilter))
                .thenAccept(myDataTransformer::transformMyPartialResult));
    }
    myPartialResultFutures.forEach(CompletableFuture::join);
    myDeferredResult.setResult(myDataTransformer.getMyResult());
}
is up to one order of magnitude faster than waiting on all threads first, even with atomicity protocol overhead.
Now, this may have been obvious (though not universally so, as async/multi-threaded processing is by no means always the better choice), and I am glad this approach is a valid choice.
What remains looks to me like a hacky, inflexible solution - or at least an ugly one. Is there a better approach?
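Stripped of the Spring plumbing, the "merge on arrival" pattern from the update can be sketched as follows; the filter/row shapes here are invented stand-ins for MyFilter and MyPartialResult, and the repository query is simulated:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.stream.Collectors;

public class MergeOnArrival {

    // Simulated repository query: each filter yields some (key, count) rows.
    static Map<String, Integer> query(String filter) {
        return Map.of(filter, 1, "total", 1);
    }

    // Fan out one async query per filter; each payload is merged into the
    // shared ConcurrentHashMap as soon as it arrives (thenAccept), and we
    // only join at the end to wait for overall completion.
    static Map<String, Integer> process(List<String> filters) {
        ConcurrentMap<String, Integer> result = new ConcurrentHashMap<>();
        List<CompletableFuture<Void>> futures = filters.stream()
                .map(f -> CompletableFuture.supplyAsync(() -> query(f))
                        .thenAccept(rows -> rows.forEach(
                                (k, v) -> result.merge(k, v, Integer::sum))))
                .collect(Collectors.toList());
        futures.forEach(CompletableFuture::join);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(process(List.of("a", "b", "c")));
    }
}
```

Because ConcurrentHashMap.merge is atomic per key, the thenAccept callbacks need no extra locking even though they run on different pool threads.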

How can I make an integration flow start when I pass a file directory to a message channel, with an inbound file adapter reading the files from it?

I have an integration flow that reads files from a specific dir, transforms them to POJOs and saves them in a list.
Config class:
@Configuration
@ComponentScan
@EnableIntegration
@IntegrationComponentScan
public class IntegrationConfig {

    @Bean
    public MessageChannel fileChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageSource<File> fileMessageSource() {
        FileReadingMessageSource readingMessageSource = new FileReadingMessageSource();
        CompositeFileListFilter<File> compositeFileListFilter = new CompositeFileListFilter<>();
        compositeFileListFilter.addFilter(new SimplePatternFileListFilter("*.csv"));
        compositeFileListFilter.addFilter(new AcceptOnceFileListFilter<>());
        readingMessageSource.setFilter(compositeFileListFilter);
        readingMessageSource.setDirectory(new File("myFiles"));
        return readingMessageSource;
    }

    @Bean
    public CSVToOrderTransformer csvToOrderTransformer() {
        return new CSVToOrderTransformer();
    }

    @Bean
    public IntegrationFlow convert() {
        return IntegrationFlows.from(fileMessageSource(), source -> source.poller(Pollers.fixedDelay(500)))
                .channel(fileChannel())
                .transform(csvToOrderTransformer())
                .handle("loggerOrderList", "processOrders")
                .channel(MessageChannels.queue())
                .get();
    }
}
Transformer:
public class CSVToOrderTransformer {

    @Transformer
    public List<Order> transform(File file) {
        List<Order> orders = new ArrayList<>();
        Pattern pattern = Pattern.compile("(?m)^(\\d*);(WAITING_FOR_PAYMENT|PAYMENT_COMPLETED);(\\d*)$");
        Matcher matcher;
        try {
            matcher = pattern.matcher(new String(Files.readAllBytes(file.toPath()), StandardCharsets.UTF_8));
        } catch (IOException e) {
            // fail fast instead of continuing with a null matcher
            throw new UncheckedIOException(e);
        }
        while (!matcher.hitEnd()) {
            if (matcher.find()) {
                Order order = new Order();
                order.setOrderId(Integer.parseInt(matcher.group(1)));
                order.setOrderState(matcher.group(2).equals("WAITING_FOR_PAYMENT") ? OrderState.WAITING_FOR_PAYMENT : OrderState.PAYMENT_COMPLETED);
                order.setOrderCost(Integer.parseInt(matcher.group(3)));
                orders.add(order);
            }
        }
        return orders;
    }
}
OrderState enum:
public enum OrderState {
    CANCELED,
    WAITING_FOR_PAYMENT,
    PAYMENT_COMPLETED
}
Order:
public class Order {
    private int orderId;
    private OrderState orderState;
    private int orderCost;
}
LoggerOrderList service:
@Service
public class LoggerOrderList {

    private static final Logger LOGGER = LogManager.getLogger(LoggerOrderList.class);

    public List<Order> processOrders(List<Order> orderList) {
        orderList.forEach(LOGGER::info);
        return orderList;
    }
}
1) How can I make the flow start when I invoke a gateway method?
2) How can I read the passed message in the inbound channel adapter (in my case the FileReadingMessageSource)?
The FileReadingMessageSource is based on polling the directory provided in the configuration. It is the beginning of the flow and cannot be used in the middle of some logic.
You didn't explain what your gateway is, but you probably want similar logic to get the content of a dir passed as the payload of a sent message. However, such logic doesn't really fit this message source anyway: its goal is to continuously poll the dir for new content. If you want something similar for several dirs, you may consider dynamically registered flows for the provided dirs: https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-runtime-flows.
Otherwise you need to consider a plain service activator that calls listFiles() on the provided dir, simply because without the "wait for new content" feature it does not make sense to abuse FileReadingMessageSource.
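A plain service activator along the lines the answer suggests could look like the sketch below; the class and method names are hypothetical, and wiring it into a flow (e.g. via .handle(...) behind a gateway) is omitted:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

// A one-shot alternative to the polling FileReadingMessageSource:
// whatever directory arrives as the message payload is listed once.
public class DirListingService {

    public List<File> listCsvFiles(File dir) {
        File[] files = dir.listFiles((d, name) -> name.endsWith(".csv"));
        if (files == null) {
            return List.of(); // not a directory, or an I/O error
        }
        return Arrays.asList(files);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("orders");
        Files.createFile(dir.resolve("orders1.csv"));
        Files.createFile(dir.resolve("notes.txt"));
        System.out.println(new DirListingService().listCsvFiles(dir.toFile()));
    }
}
```

Each resulting File could then be fed to the existing CSVToOrderTransformer, since the flow no longer depends on the adapter's own polling.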

Spring web NullPointer exception for optional

I've followed an open course on Spring web and written some code to list all orders from a database and return them through a REST API. This works perfectly. Now I'm writing code that takes the ID of an order from the request, finds 0 or 1 orders and returns them. However, when no order is found with the given ID, a NullPointerException is thrown. I can't find out what is causing this. I'm assuming the .orElse(null) statement. Please advise.
Controller:
@RequestMapping("api/V1/order")
@RestController
public class OrderController {

    private final OrderService orderService;

    @Autowired
    public OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    @GetMapping(path = "{id}")
    public Order getOrderById(@PathVariable("id") int id) {
        return orderService.getOrderById(id)
                .orElse(null);
    }
}
Service:
@Service
public class OrderService {

    private final OrderDao orderDao;

    @Autowired
    public OrderService(@Qualifier("oracle") OrderDao orderDao) {
        this.orderDao = orderDao;
    }

    public Optional<Order> getOrderById(int orderNumber) {
        return orderDao.selectOrderById(orderNumber);
    }
}
Dao:
@Override
public Optional<Order> selectOrderById(int searchedOrderNumber) {
    final String sql = "SELECT \"order\", sender, receiver, patient, orderdate, duedate, paymentref, status, netprice from \"ORDER\" where \"order\" = ?";
    Order order = jdbcTemplate.queryForObject(sql, new Object[] {searchedOrderNumber}, (resultSet, i) -> {
        int orderNumber = resultSet.getInt("\"order\"");
        String sender = resultSet.getString("sender");
        String receiver = resultSet.getString("receiver");
        String patient = resultSet.getString("patient");
        String orderDate = resultSet.getString("orderdate");
        String dueDate = resultSet.getString("duedate");
        String paymentRef = resultSet.getString("paymentref");
        String status = resultSet.getString("status");
        int netPrice = resultSet.getInt("netprice");
        return new Order(orderNumber, sender, receiver, patient, orderDate, dueDate, paymentRef, status, netPrice);
    });
    return Optional.ofNullable(order);
}
For the JDBC exception, use a general query instead of queryForObject, or use try/catch to convert the JDBC-related exception; otherwise Spring itself will handle these internally using ExceptionTranslator, ExceptionHandler etc.
To handle the Optional case in controllers, just throw an exception there, for example PostController.java#L63,
and handle it in the PostExceptionHandler.
Editing based on comment about stack trace:
For your error please check: Jdbctemplate query for string: EmptyResultDataAccessException: Incorrect result size: expected 1, actual 0
To solve the problem of orderService.getOrderById(id) returning null you can return a ResponseEntity. ResponseEntity gives you more flexibility in terms of status code and headers. If you change your code to return ResponseEntity, then you can do something like:
@GetMapping(path = "{id}")
public ResponseEntity<?> getOrderById(@PathVariable("id") int id) {
    return orderService
            .getOrderById(id)
            .map(order -> new ResponseEntity<>(order.getId(), HttpStatus.OK))
            .orElse(new ResponseEntity<>(HttpStatus.NOT_FOUND));
}
You can even write a generic exception handler using @ControllerAdvice and throw an OrderNotFoundException via .orElseThrow(OrderNotFoundException::new). Check more information here.
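The Optional handling both answers point at can be demonstrated without Spring; this is a plain-Java sketch with hypothetical names, where the DAO encodes absence as Optional.empty() and the caller either maps the value or throws explicitly instead of returning null:

```java
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Optional;

public class OrderLookup {

    // Stand-in for the database: order id -> order payload.
    private final Map<Integer, String> orders = Map.of(1, "order-1");

    // DAO layer: absence is encoded in the Optional, never as null.
    Optional<String> selectOrderById(int id) {
        return Optional.ofNullable(orders.get(id));
    }

    // Web layer: return the value, or signal "not found" explicitly
    // (in Spring this would map to a 404 via an exception handler).
    String getOrderById(int id) {
        return selectOrderById(id)
                .orElseThrow(() -> new NoSuchElementException("order " + id));
    }
}
```

The key point is that no layer ever passes null along: the empty case is visible in the types and handled exactly once.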

GraphQL and DataLoader using the graphql-java-kickstart library

I am attempting to use the DataLoader feature within the graphql-java-kickstart library:
https://github.com/graphql-java-kickstart
My application is a Spring Boot application using 2.3.0.RELEASE, and I am using version 7.0.1 of the graphql-spring-boot-starter library.
The library is pretty easy to use and it works when I don't use the data loader. However, I am plagued by the N+1 SQL problem and as a result need to use the data loader to help alleviate this issue. When I execute a request, I end up getting this:
Can't resolve value (/findAccountById[0]/customers) : type mismatch error, expected type LIST got class com.daluga.api.account.domain.Customer
I am sure I am missing something in the configuration but really don't know what that is.
Here is my graphql schema:
type Account {
  id: ID!
  accountNumber: String!
  customers: [Customer]
}

type Customer {
  id: ID!
  fullName: String
}
I have created a CustomGraphQLContextBuilder:
@Component
public class CustomGraphQLContextBuilder implements GraphQLServletContextBuilder {

    private final CustomerRepository customerRepository;

    public CustomGraphQLContextBuilder(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @Override
    public GraphQLContext build(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse) {
        return DefaultGraphQLServletContext.createServletContext(buildDataLoaderRegistry(), null).with(httpServletRequest).with(httpServletResponse).build();
    }

    @Override
    public GraphQLContext build(Session session, HandshakeRequest handshakeRequest) {
        return DefaultGraphQLWebSocketContext.createWebSocketContext(buildDataLoaderRegistry(), null).with(session).with(handshakeRequest).build();
    }

    @Override
    public GraphQLContext build() {
        return new DefaultGraphQLContext(buildDataLoaderRegistry(), null);
    }

    private DataLoaderRegistry buildDataLoaderRegistry() {
        DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
        dataLoaderRegistry.register("customerDataLoader",
                new DataLoader<Long, Customer>(accountIds ->
                        CompletableFuture.supplyAsync(() ->
                                customerRepository.findCustomersByAccountIds(accountIds), new SyncTaskExecutor())));
        return dataLoaderRegistry;
    }
}
I have also created an AccountResolver:
public CompletableFuture<List<Customer>> customers(Account account, DataFetchingEnvironment dfe) {
    final DataLoader<Long, List<Customer>> dataloader = ((GraphQLContext) dfe.getContext())
            .getDataLoaderRegistry().get()
            .getDataLoader("customerDataLoader");
    return dataloader.load(account.getId());
}
And here is the Customer Repository:
public List<Customer> findCustomersByAccountIds(List<Long> accountIds) {
    Instant begin = Instant.now();
    MapSqlParameterSource namedParameters = new MapSqlParameterSource();
    String inClause = getInClauseParamFromList(accountIds, namedParameters);
    String sql = StringUtils.replace(SQL_FIND_CUSTOMERS_BY_ACCOUNT_IDS, "__ACCOUNT_IDS__", inClause);
    List<Customer> customers = jdbcTemplate.query(sql, namedParameters, new CustomerRowMapper());
    Instant end = Instant.now();
    LOGGER.info("Total Time in Millis to Execute findCustomersByAccountIds: " + Duration.between(begin, end).toMillis());
    return customers;
}
I can put a breakpoint in the Customer Repository, see the SQL execute, and confirm it returns a List of Customer objects. You can also see that the schema wants an array of customers. If I remove the code above and instead have the resolver fetch the customers one by one, it works... but it is really slow.
What am I missing in the configuration that would cause this?
Thanks for your help!
Dan
Thanks, @Bms bharadwaj! The issue was on my side, in understanding how the data is returned by the data loader. I ended up using a MappedBatchLoader to bring the data in as a map, with the account id as the key.
private DataLoader<Long, List<Customer>> getCustomerDataLoader() {
    MappedBatchLoader<Long, List<Customer>> customerMappedBatchLoader = accountIds -> CompletableFuture.supplyAsync(() -> {
        List<Customer> customers = customerRepository.findCustomersByAccountIds(accountIds);
        Map<Long, List<Customer>> groupByAccountId = customers.stream().collect(Collectors.groupingBy(cust -> cust.getAccountId()));
        return groupByAccountId;
    });
    return DataLoader.newMappedDataLoader(customerMappedBatchLoader);
}
This seems to have done the trick because before I was issuing hundreds of SQL statement and now down to 2 (one for the driver SQL...accounts and one for the customers).
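The grouping step that makes the mapped loader work is plain stream code; a minimal sketch with a stubbed Customer type (the real entity lives in the application):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BatchGrouping {

    // Stub of the Customer entity: only the fields needed for grouping.
    record Customer(long accountId, String fullName) {}

    // Turn the flat list returned by the batched query into the
    // accountId -> customers map that a MappedBatchLoader must return.
    static Map<Long, List<Customer>> groupByAccountId(List<Customer> customers) {
        return customers.stream()
                .collect(Collectors.groupingBy(Customer::accountId));
    }
}
```

Any account id absent from the map is treated by the data loader as "no customers", which is why returning a map rather than a flat list fixes the type mismatch.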
In the CustomGraphQLContextBuilder, I think you should have registered the DataLoader as:
...
dataLoaderRegistry.register("customerDataLoader",
        new DataLoader<Long, List<Customer>>(accountIds ->
...
because you are expecting a list of Customers for one account id.
That should work, I guess.

My @Cacheable seems to be ignored (Spring)

I have to cache the result of the following public method:
@Cacheable(value = "tasks", key = "#user.username")
public Set<MyPojo> retrieveCurrentUserTailingTasks(UserInformation user) {
    Set<MyPojo> resultSet;
    try {
        resultSet = taskService.getTaskList(user);
    } catch (Exception e) {
        throw new ApiException("Error while retrieving tailing tasks", e);
    }
    return resultSet;
}
I also configured Caching here :
@Configuration
@EnableCaching(mode = AdviceMode.PROXY)
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        final SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Arrays.asList(new ConcurrentMapCache("tasks"), new ConcurrentMapCache("templates")));
        return cacheManager;
    }

    @Bean
    public CacheResolver cacheResolver() {
        final SimpleCacheResolver cacheResolver = new SimpleCacheResolver(cacheManager());
        return cacheResolver;
    }
}
I assert the following:
Cache is initialized and does exist within the Spring context
I used jvisualvm to track ConcurrentMapCache (2 instances); they are there in the heap but empty
Method returns the same values per user.username
I tried the same configuration in a spring-boot based project and it worked
The method is public and is inside a Spring controller
The annotation @CacheConfig(cacheNames = "tasks") is added on top of my controller
Spring version 4.1.3.RELEASE
JDK 1.6
Update 001:
@RequestMapping(value = "/{kinematicId}/status/{status}", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
public DocumentNodeWrapper getDocumentsByKinematicByStatus(@PathVariable String kinematicId, @PathVariable String status, HttpServletRequest request) {
    UserInformation user = getUserInformation(request);
    Set<ParapheurNodeInformation> nodeInformationList = retrieveCurrentUserTailingTasks(user);
    final List<DocumentNodeVO> documentsList = getDocumentsByKinematic(kinematicId, user, nodeInformationList);
    List<DocumentNodeVO> onlyWithGivenStatus = filterByStatus(documentsList);
    return new DocumentNodeWrapper("filesModel", onlyWithGivenStatus, user, currentkinematic);
}
Thanks
Is the calling method getDocumentsByKinematicByStatus() in the same bean as the cacheable method? If so, this is normal behavior: you're not calling the cacheable method via the proxy but directly, so the caching interceptor never runs.
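The self-invocation pitfall the answer describes can be illustrated without Spring, using a JDK dynamic proxy as a stand-in for Spring's caching proxy; all names here are invented for the demonstration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Why proxy-based @Cacheable cannot see self-invocation: caching only
// happens when a call enters through the proxy, not on internal calls.
interface TaskService {
    String retrieveTasks(String user); // imagine this is the @Cacheable method
    String viaSelfCall(String user);   // calls retrieveTasks() internally
}

class TaskServiceImpl implements TaskService {
    int dbHits = 0; // counts simulated "expensive" DB accesses

    public String retrieveTasks(String user) {
        dbHits++;
        return "tasks-for-" + user;
    }

    public String viaSelfCall(String user) {
        // this.retrieveTasks(...) bypasses the proxy entirely
        return retrieveTasks(user);
    }
}

public class ProxyDemo {
    static TaskService cachingProxy(TaskService target) {
        Map<String, String> cache = new ConcurrentHashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("retrieveTasks")) {
                return cache.computeIfAbsent((String) args[0], target::retrieveTasks);
            }
            return method.invoke(target, args);
        };
        return (TaskService) Proxy.newProxyInstance(
                TaskService.class.getClassLoader(),
                new Class<?>[] {TaskService.class}, handler);
    }

    public static void main(String[] args) {
        TaskServiceImpl impl = new TaskServiceImpl();
        TaskService proxy = cachingProxy(impl);
        proxy.retrieveTasks("alice");
        proxy.retrieveTasks("alice"); // second call served from the cache
        System.out.println("hits after proxied calls: " + impl.dbHits); // 1
        proxy.viaSelfCall("alice");   // internal call skips the cache
        System.out.println("hits after self-call: " + impl.dbHits);     // 2
    }
}
```

The usual fixes are the same as for Spring: move the @Cacheable method into a separate bean so calls always cross the proxy, or inject the bean into itself and call through that reference.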
