Access to one data collection from multiple logged-in users - Java

I am solving the following problem and would be grateful for some advice so I can move my project ahead; I hope this question doesn't break any rules.
In my app I have two REST controllers:
one for storing data
one for fetching data
It should work something like this: a user sends data to the database with the first REST controller. Each data object also has a recipient property, so I need to put this recipient into a collection. That collection will be available to every user. When the data is stored, the user gets a 200 response.
When another user uses the fetching controller, he checks this collection for his id. If it contains his id, he loads the data from the database and returns it in the response. Otherwise he waits, checking the collection in a loop until it contains his id or the waiting time expires. If during this checking he finds his id, he removes the id, fetches the data, and returns it as the response; otherwise he returns an empty object.
Can you tell me whether Spring has some feature for this? And if not, how could it be done in pure Java? Thanks in advance.

As per my understanding, you need to preserve some value after each POST/PUT so that you can use it before the GET.
I would suggest some in-memory cache (something like Guava, as it provides many methods for operating on a cache).
Or check whether you can achieve this with Spring Cache.
If you are using MongoDB, you can use a cacheable collection.
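For the wait-until-my-id-appears part specifically, Spring MVC's DeferredResult is the usual tool for long-polling responses. In pure Java, the core idea from the question can be sketched with a shared concurrent map; all class and method names below are illustrative, not from the question:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Shared registry of recipients that have pending data. The storing
// controller marks a recipient id; the fetching controller polls until
// its id appears or the timeout expires, mirroring the loop described above.
class PendingRecipients {
    private final Map<String, Boolean> pending = new ConcurrentHashMap<>();

    // Called by the storing controller after the data is persisted.
    void markDataAvailable(String recipientId) {
        pending.put(recipientId, Boolean.TRUE);
    }

    // Called by the fetching controller: waits up to timeoutMillis for data.
    // Returns true if the id appeared (and removes it), false on timeout.
    boolean awaitData(String recipientId, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (pending.remove(recipientId) != null) {
                return true;
            }
            try {
                Thread.sleep(50); // simple polling interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return pending.remove(recipientId) != null;
    }
}
```

On a true success path the controller would then load the actual data from the database and return it; on timeout it would return the empty object. A condition/notify scheme or DeferredResult would avoid the busy-wait sleep.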


Where do you store static data frontend or backend?

Let's assume there is a form on the frontend which has several dropdowns with data (objects, not just strings) that will likely never change, but the data is of reasonable size, so it looks a little weird to hard-code it into the frontend.
Do you create tables for this data on the backend and fetch it from there, even though the backend will likely never use or change it?
Could you give me some resources where I can read about these conventions?
If you are the owner of this data, it is more efficient to keep it on the frontend in a constants file; it doesn't matter whether the entries are objects or strings. For example, create a class DropdownOption and store an array of these objects.
If you decide to keep it in the database and provide the data via a REST API, count on the performance cost: every request first reaches your endpoint, opens a transaction, gets the data from the DB, closes the transaction, and maps the entities to DTOs before anything is returned to your frontend. More data, more time.
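The constants-file idea could look like the sketch below (shown in Java for illustration; on a JS/TS frontend the shape would be analogous, and all names and sample values here are made up):

```java
import java.util.List;

// Hypothetical constants file holding the dropdown options described above.
final class DropdownOptions {
    // Each option is an object (value + display label), not just a string.
    record DropdownOption(String value, String label) {}

    // Immutable, shared list; no database round trip needed to render the form.
    static final List<DropdownOption> COUNTRIES = List.of(
        new DropdownOption("us", "United States"),
        new DropdownOption("de", "Germany")
    );

    private DropdownOptions() {} // no instances
}
```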
Further to Ilia Ilin's answer, an additional thing to consider is, if this data set is referenced anywhere, how you'd like the data to behave once a value is updated or removed.
If you load the data on the frontend, any modification will not apply to previously stored data.
If you store the data in a relational DB and fetch it on the frontend, any modification will cascade to all previous data references.

drools validation unique and dependent

I have a collection of POJOs in memory, and these POJOs come from another system. I have the following two problems with them:
I want to know which POJOs are duplicates in terms of their property values.
I also validate against another collection. For example, I have 200 shops in a city, and shop ids start at 1 and end at 200. I get data from a shop and it submits 500 as the shop id. I want to verify whether the data is correct according to my collection.
I am currently stuck and don't know how to perform these operations.
I am collecting data for market trends; shops from all over the city are registered with us, and we assign an id to each store. The shopkeeper sends us his selling details in a plain file format. My task is to collect only correct data into the DB. If a shop or goods id doesn't match my collection, that record is incorrect and I notify the shopkeeper that it is invalid. If the file contains the same row two or more times, I also notify him that it is a duplicate.
Thanks,
I think you need to have equals() (and the matching hashCode()) implemented for these POJOs to say that one object is a duplicate of another. Then you can keep inserting the POJOs you receive into a java.util.Set, and every time you receive a new one, check whether it has already been received using set.contains().
For #2, you can maintain a Map or Set of the ids in that other collection to check whether a newly arrived POJO is a valid one present in it.
AFAIK, Drools requires you to provide a canonical representation of your object to run the rules on its state. The above two validations require you to maintain those data structures regardless of whether you use Drools or any other rule engine.
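A minimal sketch of both checks; the field names (shopId, goodsId, quantity) are invented for illustration, not taken from the question:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// A record from the shop file. equals()/hashCode() over all properties
// make "duplicate in terms of property values" well-defined for a Set.
class SaleRecord {
    final int shopId;
    final String goodsId;
    final int quantity;

    SaleRecord(int shopId, String goodsId, int quantity) {
        this.shopId = shopId;
        this.goodsId = goodsId;
        this.quantity = quantity;
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SaleRecord)) return false;
        SaleRecord r = (SaleRecord) o;
        return shopId == r.shopId && quantity == r.quantity
                && Objects.equals(goodsId, r.goodsId);
    }

    @Override public int hashCode() {
        return Objects.hash(shopId, goodsId, quantity);
    }
}

class RecordValidator {
    private final Set<SaleRecord> seen = new HashSet<>();
    private final Set<Integer> validShopIds;

    RecordValidator(Set<Integer> validShopIds) {
        this.validShopIds = validShopIds;
    }

    // Check #1: Set.add() returns false if an equal record was already seen.
    boolean isDuplicate(SaleRecord r) {
        return !seen.add(r);
    }

    // Check #2: the id must exist in the known collection (e.g. ids 1..200).
    boolean hasValidShopId(SaleRecord r) {
        return validShopIds.contains(r.shopId);
    }
}
```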

GAE : objectify delete by id

I'm trying to delete a record from the GAE datastore via an AJAX query which sends the object's "primary key" (a Long id with auto-increment).
Currently, I'm doing this (with the key hard-coded to 6):
Objectify ofy = ObjectifyService.begin();
ofy.delete(Test1.class, 6);
This works: it deletes the entity which has the key 6.
But for security reasons, I need another parameter (fyi : "parent_user") so only the owner can delete this object.
It seems Objectify.delete() doesn't allow passing more parameters than the key...
How could I solve this? Doing an Objectify.get() with my extra parameters plus the key to load the full object, and then passing the whole object to delete(), seems clumsy and unoptimized...
As documented at http://objectify-appengine.googlecode.com/svn/trunk/javadoc/index.html, Objectify.delete() does not take any additional parameters besides object keys, ids, or strings.
So you need to first fetch the objects matching your filters and then delete them. However, to optimize this, you can fetch only the keys of the objects rather than the full entities, and then delete by key.
Hope this helps!
If your data model allows you to make the user the Datastore ancestor of your objects, you can get rid of the query entirely, since the ancestor is part of the key.
What I often do is authenticate the user at the beginning of every request, which uses Objectify's @Cached annotation to cache all users (and their privileges, which are embedded in the user).
Then most of the user-related data has the user as its ancestor. This way, whenever a user tries to access or delete a resource, I can never accidentally let her touch objects that aren't hers. All in all, only gets, which are quick and cacheable.

Is it advisable to store some information (meta-data) about a content in the id (or key) of that content?

Is it advisable to store some information (meta-data) about a content in the id (or key) of that content?
In other words, I am using time-based UUIDs as the ids (or keys) for some content stored in the database. My application first reads the list of all such ids of the content from the database and then accesses the corresponding content. My idea is to store some extra information about the content in the ids themselves, so that my software can access this meta-content without fetching the entire content from the database again.
My application context is a website using Java technology and Cassandra database.
So my questions are:
Should I do this at all? I am concerned that a lot of processing may be required (at the time of presenting data to the user) to extract the metadata from the ids, so it may actually be better to retrieve it from the database than to derive it from the id.
If it is advisable, how should I implement it efficiently? I was thinking of the following:
Id of a content = 'Timebased UUID' + 'UserId'
where 'Timebased UUID' is the id generated from the timestamp when the content was added by a user, and 'UserId' is the id of the user who added the content.
So an example id would look something like this: e4c0b9c0-a633-15a0-ac78-001b38952a49 (TimeUUID) -- ff7405dacd2b (UserId)
How should I extract this UserId from the above content id in the most efficient manner?
Is there a better approach to store meta information in the Ids ?
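For what it's worth, if one did go this route, extracting the UserId is plain string handling. A minimal sketch, assuming the "--" separator from the example above (the delimiter and layout are assumptions; adjust to your actual format):

```java
// Composite id = timeUUID + "--" + userId, as in the question's example.
class CompositeId {
    private static final String SEPARATOR = "--";

    static String make(String timeUuid, String userId) {
        return timeUuid + SEPARATOR + userId;
    }

    // lastIndexOf is O(n) over a short string, so this is cheap per id.
    static String extractUserId(String compositeId) {
        int idx = compositeId.lastIndexOf(SEPARATOR);
        if (idx < 0) {
            throw new IllegalArgumentException("No user id in: " + compositeId);
        }
        return compositeId.substring(idx + SEPARATOR.length());
    }
}
```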
I hate to say it since you seem to have put a lot of thought into this, but I would say this is not advisable. Storing data like this sounds like a good idea at first but ends up causing problems, because you will hit many unexpected issues reading and saving the data. It's best to keep separate data in separate variables and columns.
If you are really interested in accessing the meta-content without the main content, I would make two column families: one holds the meta-content, the other the larger main content, and both share the same row key. I don't know much about Cassandra, but this seems to be the recommended way to do this sort of thing.
I should note that I don't think all this will be necessary. Unless users are storing very large amounts of information, the size should be trivial and your retrievals should remain quick.
I agree with AmaDaden. Mixing IDs and data is the first step on a path that leads to a world of suffering. In particular, you will eventually find a situation where the business logic requires the data part to change and the database logic requires the ID not to change. Off the cuff, in your example, there might suddenly be a requirement for a user to be able to merge two accounts to a single user id. If user id is just data, this should be a trivial update. If it's part of the ID, you need to find and update all references to that id.

How do I implement Hibernate Pagination using a cursor (so the results stay consistent, despite new data being added to the table being paged)?

Is there any way to maintain a database cursor using Hibernate between web requests?
Basically, I'm trying to implement pagination, but the data being paged is constantly changing (i.e. new records are added to the database). We are trying to set it up so that when you do your initial search (returning at most 5000 results) and then page through the results, the same records always appear on the same page (i.e. we're not re-running the query each time the next and previous page buttons are clicked). The way we're currently implementing this is by selecting at most 5000 primary keys from the table we're paging, storing those keys in memory, and then using 20 primary keys at a time to fetch their details from the database. However, we want to get away from having to store these keys in memory and would much prefer a database cursor that we keep going back to, moving backwards and forwards over it to generate pages.
I tried doing this with Hibernate's ScrollableResults but found that calling methods like next() and previous() causes an exception if you do so within a different web request / Hibernate session (no surprise there).
Is there any way to reattach a ScrollableResults object to a Session, much the same way you would reattach a detached database object to make it persistent?
Never use offset-based paging, because the database still reads all the rows before the offset, which is very inefficient.
Instead, order by an indexed unique property, return the last item's value for that property in your API response, and use a WHERE clause to start from where you left off. That last value is your cursor position. For example, a simple paginated query that uses the primary key id as the cursor would look like this:
List<MyEntity> entities = entityManager
.createQuery("""
FROM
MyEntity e
WHERE
e.id > :cursorPosition
ORDER BY
e.id ASC
""", MyEntity.class)
.setParameter("cursorPosition", cursorPosition)
.setMaxResults(pageSize)
.getResultList();
On the first call to the API, the cursorPosition value can be 0. On the second call, you use the cursor that the client received from the first response. See how the Google Maps paginated places query works with its nextPageToken attribute.
Your cursor has to be a string that identifies all the parameters of your query, so if you have additional parameters, they must be recoverable from the cursor.
I believe you can do this in multiple ways. One way is to concatenate all the parameters and the cursorPosition into a string, encode it in a URL-friendly form like Base64, and, when you receive it back, decode it and split it into the original parameters:
String nextPageToken = Base64.getUrlEncoder()
.encodeToString("indexProperty=id&cursorPos=123&ageBiggerThan=65".getBytes());
Your API call will return JSON like this:
{
"items": [ ... ],
"nextPageToken": "aW5kZXhQcm9wZXJ0eT1pZCZjdXJzb3JQb3M9MTIzJmFnZUJpZ2dlclRoYW49NjU="
}
And the client's next call:
GET https://www.example.com/api/myservice/v1/myentity?pageToken=aW5kZXhQcm9wZXJ0eT1pZCZjdXJzb3JQb3M9MTIzJmFnZUJpZ2dlclRoYW49NjU=
The part about concatenating and splitting the cursor string may be tiresome. I don't know of a library that handles creating and parsing these tokens (I'm actually on this question because I was looking for one), but my guess is that Gson or Jackson can save you some lines of code here.
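Without a library, the round trip can be sketched with java.util.Base64 and string splitting; the key=value&... layout matches the encoding example above, and the class name is illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Encodes query parameters (including the cursor position) into a
// URL-safe page token and decodes such a token back into parameters.
class PageToken {
    static String encode(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return Base64.getUrlEncoder()
                .encodeToString(sb.toString().getBytes(StandardCharsets.UTF_8));
    }

    static Map<String, String> decode(String token) {
        String raw = new String(Base64.getUrlDecoder().decode(token),
                StandardCharsets.UTF_8);
        Map<String, String> params = new HashMap<>();
        for (String pair : raw.split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }
}
```

Note this token is merely encoded, not encrypted or signed; if clients must not tamper with the parameters, add an HMAC or encrypt the payload.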
Essentially you're on your own for this one. What you want to do is take a look at the OpenSessionInView filter and build your own, so that instead of making a new Hibernate Session per request, you pull one out of a cache associated with the user's web session.
If you don't have a framework like Spring Web Flow that gives you some conversation structure, you're going to need to build that too, since you probably want some way to manage the lifecycle of that Hibernate session beyond "when the web session expires." You also most likely do not want two user threads from the same web session but different browser tabs sharing a Hibernate session. (Hilarity is likely to ensue.)
