For a web application, I want to implement a paginated table. The DynamoDB layout is that there are multiple items per user, so I've chosen user as the partition key and created (a timestamp) as the sort key. The UI shall present the items in pages of 50 items, from a total of a few hundred items.
The items are passed to the UI via REST API calls. I only want to query or scan one page of items at a time, not the whole table. Pagination shall be possible both forward and backward.
So far I've come up with the following, using the DynamoDBMapper:
/**
* Returns the next page of items for the given user. Note: this method internally uses
* a DynamoDB QUERY operation, which requires "user" as a parameter. The "created" parameter
* is optional; if provided, both parameters form the exclusive start key for the pagination.
*
* @param user mandatory: the user for which to get the next page
* @param created optional: a starting point for the page
* @param limit the returned page will contain (up to) this number of items
* @return the next page of items for the user
*/
public List<SampleItem> getNextPageForUser(final String user, final Long created, final int limit) {
// To iterate dependent on the user we use QUERY. The DynamoDB QUERY operation
// always requires the partition key (= user).
final SampleItem hashKeyObject = new SampleItem();
hashKeyObject.setUser(user);
// "created" is optional; if provided, it marks the exclusive starting point
if (created == null) {
final DynamoDBQueryExpression<SampleItem> pageExpression = new DynamoDBQueryExpression<SampleItem>()//
.withHashKeyValues(hashKeyObject)//
.withScanIndexForward(true) //
.withLimit(limit);
return mapper.queryPage(SampleItem.class, pageExpression).getResults();
} else {
final Map<String, AttributeValue> startKey = new HashMap<String, AttributeValue>();
startKey.put(SampleItem.USER, new AttributeValue().withS(user));
startKey.put(SampleItem.CREATED, new AttributeValue().withN(created.toString()));
final DynamoDBQueryExpression<SampleItem> pageExpression = new DynamoDBQueryExpression<SampleItem>()//
.withHashKeyValues(hashKeyObject)//
.withExclusiveStartKey(startKey)//
.withScanIndexForward(true) //
.withLimit(limit);
return mapper.queryPage(SampleItem.class, pageExpression).getResults();
}
}
The code for the previous page is similar, except that it uses withScanIndexForward(false).
In my REST-Api controller I offer a single method:
@RequestMapping(value = "/page/{user}/{created}", method = RequestMethod.GET)
public List<SampleDTO> listQueriesForUserWithPagination(//
@PathVariable final String user,//
@PathVariable final Long created,//
@RequestParam(required = false) final Integer n,//
@RequestParam(required = false) final Boolean isBackward//
) {
final int nrOfItems = n == null ? 100 : n;
if (isBackward != null && isBackward.booleanValue()) {
return item2dto(myRepo.getPrevQueriesForUser(user, created, nrOfItems));
} else {
return item2dto(myRepo.getNextQueriesForUser(user, created, nrOfItems));
}
}
I wonder if I am re-inventing the wheel with this approach.
Would it be possible to pass DynamoDB's PaginatedQueryList or PaginatedScanList to the UI via REST, so that when the JavaScript pagination accesses the items, they are loaded lazily?
From working with other databases, I have never transferred DB entity objects to the client, which is why my code snippet re-packs the data (item2dto).
In addition, pagination with DynamoDB appears a bit strange: so far I've seen no way to provide the UI with a total count of items. So the UI only has buttons for "next page" and "previous page", without actually knowing how many pages will follow. Jumping directly to page 5 is therefore not possible.
The AWS Console does not load all your data at once, to conserve read capacity. When you get a Scan/Query page, you only get information about how to fetch the next page, which is why the console cannot tell you a priori how many pages of data it can show. Depending on your schema, you may be able to support random page access in your application. This is done by deciding a priori how large pages will be and encoding something like a page number in the partition key. Please see this AWS Forum post for details.
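To make the forum post's idea concrete, here is a hedged sketch of deriving such a composite partition key. PageKeys, pageKey, and the "user#page" key format are illustrative assumptions, not part of the question's schema; the approach only works if the page size is fixed when items are written.

```java
// Hypothetical sketch: with a fixed page size, each item can be written under
// a composite partition key like "user#pageNumber", so that e.g. page 5 can
// be queried directly without walking through pages 1-4 first.
public class PageKeys {
    public static String pageKey(String user, int itemIndex, int pageSize) {
        int pageNumber = itemIndex / pageSize; // 0-based page this item lands on
        return user + "#" + pageNumber;
    }
}
```

A query for page 5 of user "alice" with pages of 50 would then use the partition key pageKey("alice", 250, 50), i.e. "alice#5".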
I have a paging problem: when I try to sort a table by a column header on a particular page, PageRequest.of(page - 1, 10, sort) sorts the entire table, not just that page. Thus the records returned on that page differ from the records shown before sorting.
Code:
@Override
public Page<User> getPageAndSort(String field, String direction, int page) {
Sort sort = direction.equalsIgnoreCase(Sort.Direction.ASC.name())
? Sort.by(field).ascending()
: Sort.by(field).descending();
Pageable pageable = PageRequest.of(page-1, 10, sort);
return userRepo.findAll(pageable);
}
For example, I want to sort only page 1 by id, returning sorted records from page 1. The rest of the pages and records shouldn't be affected.
Thank you.
Edit:
I have a workaround for this problem. After getting a page from:
Page<User> page = userService.findPage(currentPage);
I get the page.getContent() List and then pass it to the sortList method:
userService.sortList(new ArrayList<>(page.getContent()), field, sortDir)
The sort implementation:
public ArrayList<User> sortList(ArrayList<User> users, String field, String direction) {
users.sort((User user1, User user2) -> {
try {
Field field1 = user1.getClass().getDeclaredField(field);
field1.setAccessible(true);
Object object1 = field1.get(user1);
Field field2 = user2.getClass().getDeclaredField(field);
field2.setAccessible(true);
Object object2 = field2.get(user2);
int result;
// isInt is the asker's helper that checks whether the string is numeric.
// Integer.compare avoids the overflow risk of subtracting the two values.
if (isInt(object1.toString())) {
result = Integer.compare(Integer.parseInt(object1.toString()), Integer.parseInt(object2.toString()));
} else {
result = object1.toString().compareToIgnoreCase(object2.toString());
}
if (result > 0) {
return direction.equalsIgnoreCase("asc") ? 1 : -1;
}
if (result < 0) {
return direction.equalsIgnoreCase("asc") ? -1 : 1;
}
return 0;
} catch (Exception e) {
Log.error(e.toString());
return 0;
}
});
return users;
}
With this workaround, I successfully sorted a particular page by its column header without affecting the rest of the pages. It's not standard, though, as it doesn't use PageRequest.of() from Spring Data JPA, so I recommend testing and reviewing the code thoroughly.
I think an if condition could solve the problem. Create the Pageable instance according to the condition:
@Override
public Page<User> getPageAndSort(String field, String direction, int page) {
Sort sort = direction.equalsIgnoreCase(Sort.Direction.ASC.name())
? Sort.by(field).ascending()
: Sort.by(field).descending();
Pageable pageable = (page == 1) ? PageRequest.of(page - 1, 10, sort)
: PageRequest.of(page - 1, 10);
return userRepo.findAll(pageable);
}
References: https://www.baeldung.com/spring-data-jpa-pagination-sorting
I think this helps.
I don't think there is an easy way to do this kind of sorting in the database, and since you are dealing with a single page, which is in memory anyway because you render it to the UI, I would just sort it in memory.
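A minimal sketch of that in-memory sort, assuming a generic key extractor instead of the reflection approach above (PageSorter and sortPage are illustrative names; for the asker's entity the key could be e.g. User::getId):

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;

// Sorts one page's content in place; the other pages are never touched
// because only the current page's List is modified.
public class PageSorter {
    public static <T, U extends Comparable<? super U>> void sortPage(
            List<T> pageContent, Function<T, U> key, boolean ascending) {
        Comparator<T> cmp = Comparator.comparing(key);
        pageContent.sort(ascending ? cmp : cmp.reversed());
    }
}
```

Note that Page.getContent() returns an unmodifiable list, so you would copy it into an ArrayList first, as the asker's workaround already does.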
Alternatively you can go with a custom SQL statement structured like this:
SELECT * FROM (
    SELECT * FROM WHATEVER
    ORDER BY ...          -- sort clause defining the pagination
    OFFSET ... LIMIT ...  -- note that this clause is database dependent
) page ORDER BY ...       -- your sort criteria within the page goes here
You'll have to construct this SQL statement programmatically, so you can't use Spring Data's special features like annotated queries or query derivation.
I'm not sure I got your question, but if you want a certain sorted page, the DB definitely has to create the query plan, sort all the data, and return a certain offset (page) of the sorted data.
It's impossible to get a sorted page without sorting the whole data set.
I believe you want to sort data only on a given page. This is difficult to manage with a database query, which will probably sort the whole data set and give you the nth page.
I would suggest doing a reverse on the given page after retrieving it in a fixed order:
Retrieve the nth page from the database, always in ascending order.
Depending on the direction, do a reverse if needed.
This should be faster than relying on the database for the sort operation.
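The two steps above can be sketched as follows; PageDirection and reversedIfDesc are hypothetical names, and the "asc"/"desc" strings mirror the direction parameter used elsewhere in this thread:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Takes a page that was fetched in ascending order and reverses it only
// when a descending view was requested; the original list stays untouched.
public class PageDirection {
    public static <T> List<T> reversedIfDesc(List<T> ascendingPage, String direction) {
        List<T> result = new ArrayList<>(ascendingPage); // copy, since Page.getContent() is unmodifiable
        if ("desc".equalsIgnoreCase(direction)) {
            Collections.reverse(result);
        }
        return result;
    }
}
```

This only works as a substitute for a sorted query when the page is already ordered by the field in question; it flips the order but does not re-sort by a different column.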
I have this particular problem now: I have a grid whose data I am trying to filter through multiple filters. For that, I am using text boxes that serve as input fields for my filter criteria.
My grid has three columns (First Name, Last Name, Address) and I would like to be able to chain the filtering operations one after the other. All of the values are taken from a MySQL database.
Essentially the filter process should go like this:
FirstName ^ LastName ^ Address
For example, take a grid with three columns (the screenshots are omitted here). If I enter "Aa" into the First Name column's filter, the table is reduced accordingly. However, if I then enter "D" into the Last Name filter, it ignores the modification made by the first filter, instead of the expected result in which both filters apply.
The way I am filtering through the grid is like this:
firstNameFilter.addValueChangeListener( e->
{
Notification.show(e.getValue());
ldp.setFilter(desc ->
{
return StringUtils.containsIgnoreCase(desc.getFName(), firstNameFilter.getValue());
});
});
firstNameFilter.setValueChangeMode(ValueChangeMode.EAGER);
What would be the best way to filter through multiple columns whilst taking into consideration previous filter actions?
listDataProvider.setFilter(...) will overwrite any existing filter.
I have written an answer about this very topic, with complete example code ready for copy-paste and screenshots showing that the multiple filters work as expected.
The most important takeaway from it is this:
Every time that any filter value changes, I reset the current filter using setFilter. But within that new Filter, I will check the values of ALL filter fields, and not only the value of the field whose value just changed. In other words, I always have only one single filter active, but that filter accounts for all defined filter-values.
Here is how it could look with your code:
firstNameFilter.addValueChangeListener( e-> this.onFilterChange());
lastNameFilter.addValueChangeListener( e-> this.onFilterChange());
addressFilter.addValueChangeListener( e-> this.onFilterChange());
// sidenote: all filter fields need ValueChangeMode.EAGER to work this way
private void onFilterChange(){
ldp.setFilter(desc -> {
boolean fNameMatch = true;
boolean lNameMatch = true;
boolean addressMatch = true;
if(!firstNameFilter.isEmpty()){
fNameMatch = StringUtils.containsIgnoreCase(desc.getFName(), firstNameFilter.getValue());
}
if(!lastNameFilter.isEmpty()){
lNameMatch = StringUtils.containsIgnoreCase(desc.getLName(), lastNameFilter.getValue());
}
if(!addressFilter.isEmpty()){
addressMatch = StringUtils.containsIgnoreCase(desc.getAddress(), addressFilter.getValue());
}
return fNameMatch && lNameMatch && addressMatch;
});
}
I am stuck on an issue in my app. I want to implement pagination-like functionality, but due to the existing behaviour I cannot use the standard ways of achieving pagination in my application.
Problem: I have a bean object with all the data in it. I want to devise logic for breaking the object down into groups of 50. So, if I have 500 configs in my object, I will first break out the first 50, which will be displayed on the UI. I will then continue the process by breaking the remaining 450 configs into batches of 50. Can anyone suggest how to proceed with this logic?
My approach: In my existing code, I check the size of the object's data. If it's more than 50, I set a flag to true. This flag is used in the JSP/JS to retrigger a DOJO call to fetch the data again. Please find a snippet of the code below.
public ActionForward sdconfigLoadServiceGroups(ActionMapping actionMapping,
ActionForm actionForm, HttpServletRequest servletRequest,
HttpServletResponse servletResponse) {
String groupUniqueId = servletRequest.getParameter("groupUniqueId");
Boolean retriggerRequestFlag = false;
// Get the ui group
HashMap sdConfigDetailsHashMap = (HashMap) ((DynaActionForm) actionForm).get(SD_CONFIG_DETAILS);
TreeMap sdConfigTreeMap = (TreeMap) sdConfigDetailsHashMap.get("SDConfigTree");
Boolean viewOnly=(Boolean) sdConfigDetailsHashMap.get("ViewOnly");
Order order = orderManager.getOrder((Long) sdConfigDetailsHashMap.get("OrderId"));
SDConfigUITab sdConfigUITab = sdConfig2Manager.getTabByGroupUniqueId(groupUniqueId, sdConfigTreeMap);
SDConfigUIGroup sdConfigUIGroup = sdConfig2Manager.getGroupByGroupUniqueId(servletRequest.getParameter("groupUniqueId"), sdConfigUITab);
//TODO: Adding logger to check the total number of sections
logger.info("All Sections==="+sdConfigUIGroup.getSections());
logger.info("Total Sections?? "+sdConfigUIGroup.getSections().size());
long size = Long.valueOf(sdConfigUIGroup.getSections().size());
if (size > 50) {
sdConfigUIGroup = loadDynamicConfigs(sdConfigUIGroup);
retriggerRequestFlag = true;
}
servletRequest.setAttribute("retriggerRequest", retriggerRequestFlag);
servletRequest.setAttribute("groupUniqueId", servletRequest.getParameter("groupUniqueId"));
servletRequest.setAttribute("sdConfigUIGroup", sdConfigUIGroup);
servletRequest.setAttribute("sdConfigUITab", sdConfigUITab);
servletRequest.setAttribute("sdConfigUITabId", sdConfigUITab.getTabId());
servletRequest.setAttribute("currentOrderId", order.getOrderId());
servletRequest.setAttribute("viewOnly", viewOnly);
return actionMapping.findForward("sdconfigLoadServiceGroups");
}
public SDConfigUIGroup loadDynamicConfigs(SDConfigUIGroup sdConfigUIGroup) {
//logic for breaking into batches of 50 goes here
}
}
Any suggestions are welcome :) Thanks !!!
Keep track:
Set startIndex and fetchCount in your session (depending on the life cycle).
In loadDynamicConfigs, iterate through the sections and pull 50 of them each time.
The next time the user clicks "Next" (if available), use the latest startIndex and fetchCount to pull the next batch.
Note that your "Next" link/button on the page should call another mapping method to do the pagination.
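The startIndex/fetchCount bookkeeping can be sketched like this; Batching and nextBatch are illustrative names, and in the asker's code the element type would be the section objects held by SDConfigUIGroup:

```java
import java.util.Collections;
import java.util.List;

// Returns the batch of up to fetchCount elements starting at startIndex.
// After rendering a batch, the caller stores startIndex + fetchCount in the
// session as the starting point for the next request.
public class Batching {
    public static <T> List<T> nextBatch(List<T> all, int startIndex, int fetchCount) {
        int end = Math.min(startIndex + fetchCount, all.size());
        if (startIndex < 0 || startIndex >= end) {
            return Collections.emptyList(); // no more batches
        }
        return all.subList(startIndex, end);
    }
}
```

subList returns a view backed by the original list, which is fine for read-only rendering; copy it if the batch needs to outlive the source list.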
I'm developing an application which connects to an outside service to fetch new SMS messages. These messages are stored in a local database by Hibernate. My client can search these messages based on numerous parameters, such as time and number.
After calling the search method with a list of parameters called 'List1', I get the desired result without any problems, though while I'm waiting for this result a new message arrives.
Soon after, I call the search method with the same parameter list again, expecting to get the new message as well, but I get the previous result.
I have checked my database and the new message is present, so all I can think of is Hibernate caching. Since both queries are exactly the same, I guess Hibernate returns the same result set as before.
If my assumption is correct, how can I overcome this problem? If not, what exactly is going on?
Edit
Here is the relevant part of my source code. The following two methods are invoked when the client initiates a search request:
smsService.refresh();
JSONArray result = smsService.retrieveMessages(...);
@Transactional
public JSONArray retrieveMessages(Long periodBeginning, Long periodEnd, String order, Integer limit, String profile, Boolean unread, String correspondent) {
final Date beginDate = new Date(periodBeginning);
final Date endDate = new Date(periodEnd);
List<ShortMessage> messageList = shortMessageDAO.find(beginDate, endDate, order, limit, profile, unread, correspondent);
JSONArray result = new JSONArray();
for (ShortMessage message : messageList)
result.put(message.toJSON());
shortMessageDAO.markRead(messageList);
return result;
}
@Transactional
public void refresh() {
webService.authentication(serviceUsername, servicePassword);
while(webService.hasUnread() > 0) {
SMS sms = webService.retrieveMessage();
ShortMessage message = new ShortMessage(sms.getHash(), sms.getFrom(), sms.getTo(), "DEFAULT", sms.getMessage(), new Date(sms.getTime()), true);
shortMessageDAO.insert(message);
}
}
}
public List<ShortMessage> find(Date beginDate, Date endDate, String order, Integer limit, String profile, Boolean unread, String correspondent) {
Criteria criteria = sessionFactory.getCurrentSession().createCriteria(ShortMessage.class);
criteria.add(Restrictions.ge("time", beginDate));
criteria.add(Restrictions.le("time", endDate));
criteria.add(Restrictions.eq("profile", profile));
if (unread)
criteria.add(Restrictions.eq("unread", true));
if (correspondent != null)
criteria.add(Restrictions.eq("origin", correspondent));
criteria.addOrder(order.equals("ASC") ? Order.asc("time") : Order.desc("time"));
criteria.setMaxResults(limit);
criteria.setCacheMode(CacheMode.IGNORE);
return (ArrayList<ShortMessage>) criteria.list();
}
Yes, it looks like Hibernate is caching your query and returning cached results.
Please give us an overview of your code so we can suggest something better.
Below are two ways of controlling the caching behaviour of queries:
1) At the named query level:
@NamedQuery(
name = "myNamedQuery",
query = "SELECT u FROM User u WHERE u.items IS EMPTY",
hints = {@QueryHint(name = "org.hibernate.cacheMode", value = "IGNORE")}
)
2) At the individual query level:
Query q = session.createQuery("from User")
.setCacheMode(CacheMode.IGNORE);
After running numerous tests, I found out that this problem is not cache related at all. Upon receiving each message, I stored the time of arrival based on data provided by the SMS panel, not my own machine time.
There was a slight time difference (20 seconds, to be exact) between the two, which was the reason the query did not return the newly received message.
There is a JButton in my JPanel. When I click it, it loads up my JTable; sometimes a query returns many records (500 rows), so I want to restrict the display to 5 records at a time.
When the query returns, I want to count the rows; if the count is higher than 5, the JTable shows only the first 5 records. When the user clicks the Forward button it shows the next 5 records, and when the user clicks the Back button it shows the previous 5.
How can I do this? Is there an example of this with a TableModel?
I suggest implementing a "paged" TableModel which provides a window onto the entire dataset, plus methods for moving forwards and backwards through the data. This way you do not need two Lists to store the data, but rather a single List holding all the data along with a marker for your current position; e.g.
public class ImmutablePagedTableModel extends AbstractTableModel {
private final List<MyBusinessObject> allData;
private final int pageSize;
private int pos;
public ImmutablePagedTableModel(List<MyBusinessObject> allData, int pageSize) {
// Copy-construct the internal list. Use ArrayList for random-access look-up efficiency.
this.allData = new ArrayList<MyBusinessObject>(allData);
this.pageSize = pageSize;
}
/**
* Returns true if the model has another page of data or false otherwise.
*/
public boolean hasNextPage() {
return pos + pageSize < allData.size();
}
/**
* Flips to the next page of data available.
*/
public void nextPage() {
if (hasNextPage()) {
pos += pageSize;
// All data in the table has effectively "changed", so fire an event
// causing the JTable to repaint.
fireTableDataChanged();
} else {
throw new IndexOutOfBoundsException();
}
}
public int getRowCount() {
return Math.min(pageSize, allData.size() - pos);
}
// TODO: Implement hasPreviousPage(), previousPage(), and the remaining
// AbstractTableModel methods (getColumnCount(), getValueAt()).
}
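One way to complete the TODO above is to mirror hasNextPage()/nextPage(). The paging arithmetic is extracted here into a self-contained sketch (Pager is an illustrative stand-in for the TableModel) so the logic can be seen in isolation:

```java
// Minimal stand-in for the paged TableModel: the same pos/pageSize arithmetic,
// without the Swing dependencies. In the real model, previousPage() would also
// call fireTableDataChanged() so the JTable repaints.
public class Pager {
    private final int totalRows;
    private final int pageSize;
    private int pos = 0; // index of the first row on the current page

    public Pager(int totalRows, int pageSize) {
        this.totalRows = totalRows;
        this.pageSize = pageSize;
    }

    public boolean hasNextPage() { return pos + pageSize < totalRows; }

    public boolean hasPreviousPage() { return pos - pageSize >= 0; }

    public void nextPage() {
        if (!hasNextPage()) throw new IndexOutOfBoundsException();
        pos += pageSize;
    }

    public void previousPage() {
        if (!hasPreviousPage()) throw new IndexOutOfBoundsException();
        pos -= pageSize;
    }

    // The last page may be shorter than pageSize.
    public int getRowCount() { return Math.min(pageSize, totalRows - pos); }
}
```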
As 00rush mentions, a more ambitious approach would be to use a SwingWorker to stream in the data in the background. You could still use the paged TableModel approach for this; you'd just need to ensure that appropriate TableModelEvents are fired as you append to the end of the allData list.
If you wish to load a large table, you may want to use a SwingWorker (details here) thread to load the table in the background. Loading a table with 500 rows should not be a problem. You can then put the data into a suitable object format and pass it to your TableModel.
If you decide to use a List for example, in your table model you could have two lists:
List allData
List viewData
int startIndex
The viewData list is what is referenced by the getValueAt(..) method in your implementation of the TableModel interface. viewData is always a subset of allData, bounded by startIndex and of length 5. When the user clicks "Next", your action listener calls a method on the table model that increments startIndex by 5 (or whatever). You then regenerate viewData so that it is the appropriate 5-row subset of allData and call fireTableChanged(). This is easy if you have extended AbstractTableModel in the first place.
This should be pretty straightforward to implement. I think it's better than making a database call every time you want to get the next set of data. IMHO, it's better to take a little more time upfront to preload the data.