Making Batches of 50 from an Object in Java

I am stuck with an issue in my app. I want to implement pagination-like functionality, but due to the existing behaviour I cannot use the standard ways of achieving pagination in my application.
Problem: I have a bean object with all the data in it. I want to devise logic for breaking the object down into groups of 50. So, if I have, say, 500 configs in my object, I will first break out the first 50 and display them on the UI. Then I have to continue the process by breaking the remaining 450 configs into batches of 50. Can anyone suggest how to proceed with this logic?
My approach: In my existing code, I check the size of the data. If it is more than 50, I set a flag to true. This flag is used in JSP/JS to retrigger a DOJO call to fetch data again. Please find the snippet of the code below.
public ActionForward sdconfigLoadServiceGroups(ActionMapping actionMapping,
        ActionForm actionForm, HttpServletRequest servletRequest,
        HttpServletResponse servletResponse) {
    String groupUniqueId = servletRequest.getParameter("groupUniqueId");
    Boolean retriggerRequestFlag = false;
    // Get the ui group
    HashMap sdConfigDetailsHashMap = (HashMap) ((DynaActionForm) actionForm).get(SD_CONFIG_DETAILS);
    TreeMap sdConfigTreeMap = (TreeMap) sdConfigDetailsHashMap.get("SDConfigTree");
    Boolean viewOnly = (Boolean) sdConfigDetailsHashMap.get("ViewOnly");
    Order order = orderManager.getOrder((Long) sdConfigDetailsHashMap.get("OrderId"));
    SDConfigUITab sdConfigUITab = sdConfig2Manager.getTabByGroupUniqueId(groupUniqueId, sdConfigTreeMap);
    SDConfigUIGroup sdConfigUIGroup = sdConfig2Manager.getGroupByGroupUniqueId(servletRequest.getParameter("groupUniqueId"), sdConfigUITab);
    // TODO: Adding logger to check the total number of sections
    logger.info("All Sections===" + sdConfigUIGroup.getSections());
    logger.info("Total Sections?? " + sdConfigUIGroup.getSections().size());
    long size = Long.valueOf(sdConfigUIGroup.getSections().size());
    if (size != 0 && size > 50) {
        sdConfigUIGroup = loadDynamicConfigs(sdConfigUIGroup);
        retriggerRequestFlag = true;
    }
    servletRequest.setAttribute("retriggerRequest", retriggerRequestFlag);
    servletRequest.setAttribute("groupUniqueId", servletRequest.getParameter("groupUniqueId"));
    servletRequest.setAttribute("sdConfigUIGroup", sdConfigUIGroup);
    servletRequest.setAttribute("sdConfigUITab", sdConfigUITab);
    servletRequest.setAttribute("sdConfigUITabId", sdConfigUITab.getTabId());
    servletRequest.setAttribute("currentOrderId", order.getOrderId());
    servletRequest.setAttribute("viewOnly", viewOnly);
    return actionMapping.findForward("sdconfigLoadServiceGroups");
}

public SDConfigUIGroup loadDynamicConfigs(SDConfigUIGroup sdConfigUIGroup) {
    // logic for breaking into batches of 50 goes here
}
}
Any suggestions are welcome :) Thanks !!!

Keep track of the paging state:
set a startIndex and fetchCount in your session (depending on the life cycle).
In your loadDynamicConfigs, iterate through the group's sections and pull 50 of them each time; a rough sketch of that batching step is shown below.
The next time the user clicks "Next" (if available), use the latest startIndex and fetchCount to pull the next batch.
Note that your "Next" link/button on the page should call another mapping method to do the pagination.
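As a rough illustration only (not the poster's actual classes: SDConfigUISection and the setSections setter are assumptions, and startIndex is assumed to come from the session), loadDynamicConfigs could slice the section list into a 50-item window with subList:

// Hypothetical sketch: slice the section list into a batch of 50.
public SDConfigUIGroup loadDynamicConfigs(SDConfigUIGroup group, int startIndex) {
    final int BATCH_SIZE = 50;
    List<SDConfigUISection> sections = group.getSections();
    int from = Math.min(startIndex, sections.size());
    int to = Math.min(from + BATCH_SIZE, sections.size());
    // Copy the window so the view layer never holds a live subList of the original
    List<SDConfigUISection> batch = new ArrayList<SDConfigUISection>(sections.subList(from, to));
    group.setSections(batch); // assumed setter
    return group;
}

The caller would then store startIndex + 50 back into the session so the next DOJO call fetches the following window.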

Related

Interactive Broker Java API

Every time before I place a new order with IB, I need to request the next valid orderId and call Thread.sleep(500) to wait 0.5 seconds for the IB API's callback nextValidId to return the latest orderId. If I want to place multiple orders, I have to naively call Thread.sleep multiple times. This is not a very good way to handle it: the orderId could have been updated earlier, in which case the new order could have been placed earlier; and if the orderId takes longer to update than the sleep time, it results in an error.
Is there a more efficient and elegant way to do this?
Ideally, I want the program to hold off on placeNewOrder until the latest available orderId has been updated, and then be notified so it can run placeNewOrder.
I do not know much about Java data synchronization, but I reckon there might be a better solution using synchronized, wait/notify, locking, or blocking.
my code:
// place first order
ib_client.reqIds(-1);
Thread.sleep(500);
int currentOrderId = ib_wrapper.getCurrentOrderId();
placeNewOrder(currentOrderId, orderDetails); // my order placement method

// place 2nd order
ib_client.reqIds(-1);
Thread.sleep(500);
currentOrderId = ib_wrapper.getCurrentOrderId();
placeNewOrder(currentOrderId, orderDetails); // my order placement method
IB EWrapper:
public class EWrapperImpl implements EWrapper {
    ...
    protected int currentOrderId = -1;
    ...
    public int getCurrentOrderId() {
        return currentOrderId;
    }

    public void nextValidId(int orderId) {
        System.out.println("Next Valid Id: [" + orderId + "]");
        currentOrderId = orderId;
    }
    ...
}
You never need to ask for ids. Just increment by one for every order.
When you first connect, nextValidId is one of the first messages you receive; just keep track of the id and keep incrementing.
The only rules for orderId are that it must be an integer and must always increase by some amount. This is per clientId, so if you connect with a new clientId then the last orderId is something else.
I always use max(1000, nextValidId) to make sure my ids start at 1000 or more, since I use ids below 1000 for data requests. It just helps with errors that carry ids.
You can also reset the sequence somehow.
https://interactivebrokers.github.io/tws-api/order_submission.html
This means that if there is a single client application submitting
orders to an account, it does not have to obtain a new valid
identifier every time it needs to submit a new order. It is enough to
increase the last value received from the nextValidId method by one.
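A minimal sketch of that idea, assuming you keep the wrapper shown in the question: seed an AtomicInteger once from the nextValidId callback, then hand out ids locally with no further requests or sleeps. The class and field names here are illustrative, and the CountDownLatch is only there to block until the first callback arrives after connecting.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class OrderIdTracker {
    private final AtomicInteger nextOrderId = new AtomicInteger(-1);
    private final CountDownLatch firstIdReceived = new CountDownLatch(1);

    // Call this from EWrapperImpl.nextValidId(...) when the API delivers the seed id
    public void seed(int orderId) {
        nextOrderId.set(Math.max(1000, orderId)); // start at >= 1000, as suggested above
        firstIdReceived.countDown();
    }

    // Blocks until the first nextValidId callback, then returns a fresh id per call
    public int nextId() throws InterruptedException {
        firstIdReceived.await();
        return nextOrderId.getAndIncrement();
    }
}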
You should not mess around with the order ID; it is automatically tracked and set by the API. Otherwise you will get the annoying "Duplicate order id" error 103. From the ApiController class:
public void placeOrModifyOrder(Contract contract, final Order order, final IOrderHandler handler) {
    if (!checkConnection())
        return;

    // when placing new order, assign new order id
    if (order.orderId() == 0) {
        order.orderId( m_orderId++);
        if (handler != null) {
            m_orderHandlers.put( order.orderId(), handler);
        }
    }

    m_client.placeOrder( contract, order);
    sendEOM();
}

DynamoDB's PaginatedList via REST

For a web application, I want to implement a paginated table. The DynamoDB "layout" is that there are multiple items per user, so I've chosen partition key = user and sort key = created (timestamp). The UI shall present the items in pages of 50 items from a total of a few hundred items.
The items are passed to the UI via REST API calls. I only want to query or scan one page of items, not the whole table. Pagination shall be possible forwards and backwards.
So far I've come up with the following, using the DynamoDBMapper:
/**
 * Returns the next page of items DEPENDENT OF THE USER. Note: This method internally uses
 * DynamoDB QUERY. Thus it requires "user" as a parameter. The "created" parameter is optional.
 * If provided, both parameters form the startKey for the pagination.
 *
 * @param user - mandatory: The user for which to get the next page
 * @param created - optional: for providing a starting point
 * @param limit - the returned page will contain (up to) this number of items
 * @return
 */
public List<SampleItem> getNextPageForUser(final String user, final Long created, final int limit) {
    // To iterate DEPENDENT on the user we use QUERY. The DynamoDB QUERY operation
    // always requires the partition key (=user).
    final SampleItem hashKeyObject = new SampleItem();
    hashKeyObject.setUser(user);
    // The created is optional. If provided, it references the starting point
    if (created == null) {
        final DynamoDBQueryExpression<SampleItem> pageExpression = new DynamoDBQueryExpression<SampleItem>()//
                .withHashKeyValues(hashKeyObject)//
                .withScanIndexForward(true) //
                .withLimit(limit);
        return mapper.queryPage(SampleItem.class, pageExpression).getResults();
    } else {
        final Map<String, AttributeValue> startKey = new HashMap<String, AttributeValue>();
        startKey.put(SampleItem.USER, new AttributeValue().withS(user));
        startKey.put(SampleItem.CREATED, new AttributeValue().withN(created.toString()));
        final DynamoDBQueryExpression<SampleItem> pageExpression = new DynamoDBQueryExpression<SampleItem>()//
                .withHashKeyValues(hashKeyObject)//
                .withExclusiveStartKey(startKey)//
                .withScanIndexForward(true) //
                .withLimit(limit);
        return mapper.queryPage(SampleItem.class, pageExpression).getResults();
    }
}
The code for previous is similar, only that it uses withScanIndexForward(false).
In my REST-Api controller I offer a single method:
@RequestMapping(value = "/page/{user}/{created}", method = RequestMethod.GET)
public List<SampleDTO> listQueriesForUserWithPagination(//
        @RequestParam(required = true) final String user,//
        @RequestParam(required = true) final Long created,//
        @RequestParam(required = false) final Integer n,//
        @RequestParam(required = false) final Boolean isBackward//
) {
    final int nrOfItems = n == null ? 100 : n;
    if (isBackward != null && isBackward.booleanValue()) {
        return item2dto(myRepo.getPrevQueriesForUser(user, created, nrOfItems));
    } else {
        return item2dto(myRepo.getNextQueriesForUser(user, created, nrOfItems));
    }
}
I wonder if I am re-inventing the wheel with this approach.
Would it be possible to pass DynamoDB's PaginatedQueryList or PaginatedScanList to the UI via REST, so that when the JavaScript pagination accesses the items they are loaded lazily?
From working with other DBs I have never transferred DB entity objects directly, which is why my code snippet re-packs the data (item2dto).
In addition, pagination with DynamoDB appears a bit strange: so far I've seen no way to provide the UI with a total item count. So the UI only has buttons for "next page" and "previous page", without actually knowing how many pages will follow. Jumping directly to page 5 is therefore not possible.
The AWS Console does not load all your data at once, to conserve read capacity. When you get a Scan/Query page, you only get information about how to fetch the next page, which is why the console cannot tell you a priori how many pages of data it can show. Depending on your schema, you may be able to support random page access in your application: decide a priori how large pages will be, and encode something like a page number in the partition key. Please see this AWS Forum post for details.
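As a rough sketch of that "page number in the partition key" idea (not part of the original answer; the helper class, key format, and fixed page size are assumptions): if the writer knows the page size up front, it can bake the page number into the key, so page 5 can be queried directly.

// Hypothetical sketch: encode a fixed-size page number into the partition key,
// e.g. "alice#page-5", so a page can be addressed without scanning forward.
public final class PageKeys {
    private static final int PAGE_SIZE = 50;

    // Called when writing the n-th item (0-based) for a user
    public static String partitionKeyFor(String user, long itemIndex) {
        long pageNumber = itemIndex / PAGE_SIZE;
        return user + "#page-" + pageNumber;
    }

    // Called by the read side to jump straight to a page
    public static String partitionKeyForPage(String user, long pageNumber) {
        return user + "#page-" + pageNumber;
    }
}

The trade-off is that the page size is frozen into the data model, and inserting items in the middle of a page means rewriting keys.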

Efficient object initialization

I am creating a mock Twitter project which loads user data from a somewhat large text file containing ~3.6 million lines formatted like this:
0 12
0 32
1 9
1 54
2 33
etc...
The first string token is the userId and the second is the followId.
The first half of this helper method takes the current user's ID, checks whether that user exists, and creates a new user if necessary. After that, the followId is added to this new or existing user's following list of type ArrayList<Integer>.
With ~3.6 million lines to read, this doesn't take long (9868 ms).
Now the second half creates or finds the followed user (followId) and adds the userId to their followers list, but this additional code increases the time to read the file enormously (172744 ms).
I tried using the same TwitterUser object throughout the method. All of the adding methods (follow, addFollower) are simple ArrayList.add() calls. Is there anything I can do to make this method more efficient?
Please note: While this is school-related, I'm not asking for an answer to my solution. My professor permitted this slow object initialization, but I'd like to understand how I can make it faster.
private Map<Integer, TwitterUser> twitterUsers = new HashMap<Integer, TwitterUser>();

private void AddUser(int userId, int followId){
    TwitterUser user = getUser(userId);
    if (user == null){
        user = new TwitterUser(userId);
        user.follow(followId);
        twitterUsers.putIfAbsent(userId, user);
    } else{
        user.follow(followId);
    }

    //adding the code below, slows the whole process enormously
    user = getUser(followId);
    if (user == null){
        user = new TwitterUser(followId);
        user.addFollower(userId);
        twitterUsers.putIfAbsent(followId, user);
    } else{
        user.addFollower(userId);
    }
}

private TwitterUser getUser(int id){
    if (twitterUsers.isEmpty()) return null;
    return twitterUsers.get(id);
}
If putIfAbsent(int, User) does what you would expect it to do, that is, checking whether the key is there before inserting, why do you use it within an if block whose condition has already checked that the user is not there?
In other words, if fetching a user returned null you can safely assume that the user was not there.
Now, I'm not too sure about the internal workings of the putIfAbsent method (probably it loops through the set of keys in the map), but intuitively I would expect a plain put(int, User) to perform better, even more so with a map that gets as large as yours as the input file is scanned through.
Therefore I would suggest trying something like:
user = getUser(followId);
if (user == null){
    user = new TwitterUser(followId);
    user.addFollower(userId);
    twitterUsers.put(followId, user);
} else{
    user.addFollower(userId);
}
which would apply to the first half as well.
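If Java 8 is available, another way to express the same lookup-or-create step is computeIfAbsent, which does a single map probe per user. This is only a sketch reusing the question's class and field names, not the answer's suggested code:

// Sketch using computeIfAbsent (Java 8+): one map lookup per user instead of
// a get followed by a separate put. TwitterUser(int) is the constructor from the question.
private void addUser(int userId, int followId) {
    twitterUsers.computeIfAbsent(userId, TwitterUser::new).follow(followId);
    twitterUsers.computeIfAbsent(followId, TwitterUser::new).addFollower(userId);
}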

Removing Actors does not delete all Actors

I am currently trying to save special Actors so I can put them back on a map when the old map gets loaded. Therefore I want to put them into a HashMap<String, ArrayList<Monster>> monsterAtMap and remove them from their Stages. So I am trying this:
private void saveMonsters() {
    if (this.screen.figureStage.getActors().size == 0)
        return;

    ArrayList<Monster> monsters = new ArrayList<Monster>();
    for (Actor a : this.screen.figureStage.getActors()) {
        a.remove();
    }
    Gdx.app.log("Figurstage size", "" + this.screen.figureStage.getActors().size);
    this.monsterAtMap.put(this.currentMap.name, monsters);
}
As a start. But I noticed that it does not delete all of them; it only deletes 10. I log the size before and after the deleting: it is currently 21 (20 Monsters and 1 Character), and after the delete the size is 11. I also added this.screen.figureStage.getRoot().removeActor(a); but this does not change anything.
Any idea about that?
[EDIT] I wrote a workaround so my idea works, but the general approach that should work isn't possible, because .remove() does not always delete the Actor?! The workaround looks like this:
private void saveMonsters() {
    this.chara = this.screen.character;
    if (this.screen.figureStage.getActors().size == 0)
        return;

    ArrayList<Monster> monsters = new ArrayList<Monster>();
    for (Actor a : this.screen.figureStage.getActors()) {
        if (a.getClass() == Monster.class)
            monsters.add((Monster) a);
    }
    this.screen.figureStage.clear();
    this.screen.figureStage.addActor(chara);
    this.monsterAtMap.put(this.currentMap.name, monsters);
}
The .clear() does work correctly.
Deleting objects from a container while iterating over that container is always fraught with issues and complications, and I think you're running into some of them with the Stage's list of actors. The Stage code tries to use SnapshotArray to hide some of these issues, but it's not clear to me that it will work with the code you've written.
One way to avoid this would be to loop through getActors() once and copy the actors into the monsters array, then loop through the monsters array and remove the actors from the Stage (or invoke figureStage.getRoot().clearChildren()). This prevents you from iterating over a list that you're modifying; a sketch of this approach follows below.
Alternatively, look at how Group.clearChildren() is implemented (it uses an explicit integer index into the array of children rather than an iterator over the Array, and so avoids some of these issues).
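A minimal sketch of the copy-then-remove idea, reusing the field names from the question (Monster, figureStage, and monsterAtMap are the asker's types, so treat this as illustrative only):

// Sketch: copy first, then remove, so we never mutate the list we are iterating over.
private void saveMonsters() {
    if (this.screen.figureStage.getActors().size == 0)
        return;

    ArrayList<Monster> monsters = new ArrayList<Monster>();
    // Pass 1: collect the monsters without touching the stage
    for (Actor a : this.screen.figureStage.getActors()) {
        if (a instanceof Monster)
            monsters.add((Monster) a);
    }
    // Pass 2: remove them from the stage, iterating over our own copy
    for (Monster m : monsters) {
        m.remove();
    }
    this.monsterAtMap.put(this.currentMap.name, monsters);
}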

Simple Java String cache with expiration possibility

I am looking for a concurrent Set with expiration functionality for a Java 1.5 application. It would be used as a simple way to store / cache names (i.e. String values) that expire after a certain time.
The problem I'm trying to solve is that two threads should not be able to use the same name value within a certain time (so this is sort of a blacklist ensuring the same "name", which is something like a message reference, can't be reused by another thread until a certain time period has passed). I do not control name generation myself, so there's nothing I can do about the actual names / strings to enforce uniqueness, it should rather be seen as a throttling / limiting mechanism to prevent the same name to be used more than once per second.
Example:
Thread #1 does cache.add("unique_string", 1) which stores the name "unique_string" for 1 second.
If any thread is looking for "unique_string" by doing e.g. cache.get("unique_string") within 1 second it will get a positive response (item exists), but after that the item should be expired and removed from the set.
The container would at times handle 50-100 inserts / reads per second.
I have really been looking around at different solutions but am not finding anything that I feel really suits my needs. It feels like an easy problem, but all the solutions I find are too complex or overkill.
A simple idea would be a ConcurrentHashMap with the key set to the name and the value to the expiration time, plus a thread running every second that removes all entries whose expiration time has passed, but I'm not sure how efficient that would be. Is there a simpler solution I'm missing?
Google's Guava library contains exactly such a cache: CacheBuilder.
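For illustration, a Guava-based sketch might look like the following. The 1-second expiry and the class name are assumptions taken from the question's example, and a dummy Boolean is stored because Guava's Cache needs a value type:

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class NameThrottle {
    // Entries disappear automatically about 1 second after being written
    private final Cache<String, Boolean> recentNames = CacheBuilder.newBuilder()
            .expireAfterWrite(1, TimeUnit.SECONDS)
            .build();

    // Returns true if the name was accepted, false if it was used within the last second.
    // putIfAbsent on the ConcurrentMap view makes the check-and-store atomic.
    public boolean tryUse(String name) {
        return recentNames.asMap().putIfAbsent(name, Boolean.TRUE) == null;
    }
}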
How about creating a Map where each entry expires, using a scheduled executor?
//Declare your Map and executor service
final Map<String, ScheduledFuture<String>> cacheNames = new HashMap<String, ScheduledFuture<String>>();
ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();
You can then have a method that adds the cache name to your collection and removes it after it has expired; in this example that is one second. I know it seems like quite a bit of code, but it can be quite an elegant solution in just a couple of methods.
ScheduledFuture<String> task = executorService.schedule(new Callable<String>() {
    @Override
    public String call() {
        cacheNames.remove("unique_string");
        return "unique_string";
    }
}, 1, TimeUnit.SECONDS);

cacheNames.put("unique_string", task);
A simple unique string pattern which doesn't repeat
private static final AtomicLong COUNTER = new AtomicLong(System.currentTimeMillis() * 1000);

public static String generateId() {
    return Long.toString(COUNTER.getAndIncrement(), 36);
}
This won't repeat even if you restart your application.
Note: it will only repeat if
you restart and you have been generating over one million ids per second, or
after 293 years. If this is not long enough, you can reduce the 1000 to 100 and get 2930 years.
It depends whether you need strict timing or soft timing (like 1 sec +/- 20 ms), and whether you need discrete cache invalidation or invalidation per call.
For strict conditions I would suggest adding a separate thread that invalidates the cache every 20 milliseconds.
You can also store a timestamp alongside each key and check on access whether it has expired.
Why not store the time for which the key is blacklisted in the map (as Konoplianko hinted)?
Something like this:
private final Map<String, Long> _blacklist = new LinkedHashMap<String, Long>() {
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
        return size() > 1000;
    }
};

public boolean isBlacklisted(String key, long timeoutMs) {
    synchronized (_blacklist) {
        long now = System.currentTimeMillis();
        Long blacklistUntil = _blacklist.get(key);
        if (blacklistUntil != null && blacklistUntil >= now) {
            // still blacklisted
            return true;
        } else {
            // not blacklisted, or blacklisting has expired
            _blacklist.put(key, now + timeoutMs);
            return false;
        }
    }
}
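As a usage example of that helper, matching the 1-second window from the question (the string and the surrounding comments are only illustrative):

// First caller within a 1-second window gets through; a second caller is rejected.
if (!isBlacklisted("unique_string", 1000L)) {
    // safe to use the name now; it is blacklisted for the next second
} else {
    // the name was used less than a second ago; reject or retry later
}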
