I have a Java XPages application with a REST service that functions as an API for a rooms & resources database (getting appointments for a specific room, creating them, etc.).
The basic workflow is that an HTTP request is made to a specific REST action with the room's mail address in the query string. Then, in the Java code, I iterate over all documents in the rooms & resources database until I find one whose InternetAddress field matches the searched mail address.
This isn't as fast as I would like, and multiple queries like this are being made all the time.
I'd like to do some sort of caching in my application, so that once a room has been found, its document UNID is stored in a server-wide cache; the next time a request is made for that mail address, I can go directly to the document using getDocumentByUNID(), which should be much faster than searching the entire database.
Is it possible to have such a persistent lookup table in Java XPages without any additional applications, while keeping it as fast as possible? A hash table would be perfect for this.
To clarify: I don't need caching within a single request, because I don't do more than one database lookup per request; I want the cache to be server-wide, so it is kept between multiple requests.
Yes, it is possible to store persistent data. What you are looking for is called an application-scoped managed bean.
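A minimal sketch of such a bean, assuming the class and method names are your own choice (register it in faces-config.xml with `<managed-bean-scope>application</managed-bean-scope>` so one instance is shared by all requests):

```java
import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;

// Application-scoped managed bean: one shared instance per application,
// so the mail-address -> UNID mapping survives across requests.
// ConcurrentHashMap makes it safe for concurrent REST calls.
public class RoomUnidCache implements Serializable {
    private static final long serialVersionUID = 1L;

    private final ConcurrentHashMap<String, String> mailToUnid =
            new ConcurrentHashMap<>();

    // Returns the cached UNID, or null on a cache miss.
    public String getUnid(String mailAddress) {
        return mailToUnid.get(mailAddress.toLowerCase());
    }

    // Called after a successful full scan found the document.
    public void putUnid(String mailAddress, String unid) {
        mailToUnid.put(mailAddress.toLowerCase(), unid);
    }
}
```

A lookup would then consult getUnid(mail) first; only on a miss would you fall back to the full document scan and store the found UNID with putUnid().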
I need to keep a client in sync with a PostgreSQL database (only the data loaded from the database, not the entire database; 50+ tables and a lot of collections inside entities). Since I recently added a Spring REST API server to my application, I could perhaps manage these changes differently or more efficiently, in a way that requires less work. Until now my approach has been to add a PostgreSQL notification trigger that emits JSON:
CREATE TRIGGER extChangesOccured
AFTER INSERT OR UPDATE OR DELETE ON xxx_table
FOR EACH ROW EXECUTE PROCEDURE notifyUsers();
The client then receives the JSON, built as:
json_build_object(
'table',TG_TABLE_NAME,
'action', TG_OP,
'id', data,
'session', session_app_name);
It compares whether the change was made by this client or by another one, and fetches the new data from the database.
On the client side the new object is then manually "rewritten": something like a copyFromObject(new_entity) method, where the fields are overridden (including collections, skipping transient fields, etc.).
This approach requires maintaining a copyFromObject method for each entity (though it could still be optimized with reflection).
The problems with my approach are:
it requires some work whenever fields are modified (can be optimized using reflection)
the entire entity is reloaded when it is changed by some client
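The hand-written copyFromObject methods could indeed be replaced by one generic, reflection-based copier. A sketch, with illustrative class names (the nested Demo class exists only to show the behavior):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// Generic field copier: replaces per-entity copyFromObject methods.
// Skips static and transient fields, as the hand-written versions did.
public class EntityMerger {

    public static <T> void copyFields(T target, T source)
            throws IllegalAccessException {
        // Walk up the hierarchy so inherited fields are copied too.
        for (Class<?> c = source.getClass(); c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                int mod = f.getModifiers();
                if (Modifier.isStatic(mod) || Modifier.isTransient(mod)) {
                    continue;
                }
                f.setAccessible(true);
                f.set(target, f.get(source));
            }
        }
    }

    // Tiny demo entity; the transient field must survive a merge untouched.
    public static class Demo {
        public int id;
        public String name;
        public transient String cache;
    }
}
```

For JPA entities you would additionally want to skip the primary-key field and handle lazy collections carefully, but the mechanism stays the same.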
I'm curious about your solutions for keeping clients in sync with the DB. I have a desktop client here that loads a lot of data from the database, all of which must stay in sync; loading can take up to a minute at app start, depending on the chosen data period to fetch.
The perfect solution would be an engine that fetches/overrides only those entity fields that really changed, and does it automatically.
A simple solution is to implement optimistic locking. It will prevent a user from persisting data if the entity was changed after the user fetched it.
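Stripped of JPA, the idea behind optimistic locking (what a @Version column gives you on an entity) can be shown in plain Java; this is a conceptual sketch, not the JPA machinery itself:

```java
// Minimal illustration of optimistic locking: an update is accepted only
// when the caller still holds the current version; a stale write is
// rejected instead of silently overwriting a concurrent change.
public class VersionedRecord {
    private String data;
    private long version = 0;

    public synchronized long getVersion() { return version; }
    public synchronized String getData() { return data; }

    // Returns false when expectedVersion is stale (someone else wrote first).
    public synchronized boolean update(String newData, long expectedVersion) {
        if (expectedVersion != version) {
            return false; // in JPA this would be an OptimisticLockException
        }
        data = newData;
        version++;
        return true;
    }
}
```

In JPA you get the same behavior by adding a `@Version`-annotated field to the entity; the provider then appends the version check to the UPDATE statement for you.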
Or
You can use third-party apps for DB synchronization. I played with Pusher some time ago, and you can find an extensive tutorial about client synchronization here: React client synchronization
Of course Pusher is not the only solution, and I'm not related to that app's dev team at all.
For my purposes I implemented an AVL-tree-based engine for loaded entities and database synchronization. It creates repositories from the entities loaded via Hibernate and asynchronously searches through all entity fields, rewriting/merging equal ones (so if some field (PK) identifies the same entity as one in the repository, it replaces it).
This way, synchronizing with the database comes down to finding the externally changed entity in the repository (i.e. in the AVL tree, which is O(log n)) and rewriting its fields.
I need to build a /search API that allows someone to send a POST and retrieve an ID that can be queried later via a separate /results API.
I've looked at Spring methods:
DeferredResult
@Async
but neither seems to demonstrate returning an ID from a search. I need a system that can remember the ID and reference it when someone calls the /results API to retrieve the specific results for a search.
Are there any examples of a Spring application doing this?
You must remember that RESTful services are stateless, so it isn't good practice to keep your search-result state in server memory.
One solution could be storing your search states on a Database (SQL/NoSQL) and using the Spring Cache support to improve response times.
When a user requests a new search via /search, the server must generate the ID, prepare the results, and persist them in the database; then you send the new ID to the client. Later the client requests its results using /results/{searchId}.
Please let me know if you'll use this solution and I'll share an example on GitHub.
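The submit-then-poll pattern itself can be sketched in plain Java, independent of Spring. The class and method names below are illustrative, and a real service would persist results to a database rather than holding them in an in-memory map:

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// POST /search hands back an ID immediately; the work runs in the background.
// GET /results/{id} returns the result once the computation has finished.
public class SearchService {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> results = new ConcurrentHashMap<>();

    // Start the search and return its ID right away.
    public String submit(String query) {
        String id = UUID.randomUUID().toString();
        results.put(id, pool.submit(() -> "results for: " + query));
        return id;
    }

    // Empty while the search is still running or the ID is unknown.
    public Optional<String> fetch(String id) {
        Future<String> f = results.get(id);
        if (f == null || !f.isDone()) {
            return Optional.empty();
        }
        try {
            return Optional.of(f.get());
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public void shutdown() { pool.shutdown(); }
}
```

In a Spring controller, submit() would back the /search POST handler and fetch() the /results/{searchId} GET handler, with the map replaced by the database plus cache mentioned above.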
Read/Write operations by multiple users.
A user may be able to make the editor read-only, i.e. only the creator of the session writes.
You should be able to share the link of the current session to add more users to work on simultaneously.
It should be concurrent (synchronized) and avoid editing conflicts. Suggest an approach for this.
Please focus on a correct and scalable functionality.
Should have auto save
Editor should maintain changes/edits on each save.
Support rollback to any change.
Must have share/like functionality for social media.
I was able to come up with the following; I need help identifying classes to build a class diagram for it:
It will be a client server implementation.
For a website, the client can be written in HTML5 and JavaScript. We can use additional JavaScript frameworks for specific requirements (e.g. AngularJS).
For sending requests, two methods are available:
1. Request/Response
-- Sending a request every second
2. Long polling
-- Make a long-lived HTTP request to the server and communicate through it. This method will be much faster than the earlier one because a new HTTP request is not made for every update.
It's the client's job to send its changes to the server at a fixed interval (1 second).
It's the client's job to understand the changes made by other users and display them to the current user.
The server will expose an API which will be used to:
-- Get the current document
-- Send an update request whose response contains the modifications made by other users to the same document. We will try to capture the delta and represent the changes on the client side.
The server stack has to be very fast (Node.js or Go would be suitable for such a requirement) because of the very short response times needed.
Data should be stored in memory; we can use Redis for this. At intervals, or on explicit save requests, we can also persist the data to the file system or to a non-in-memory database.
Every request will contain the set of changes made by the client.
These changes will be saved in Redis along with a timestamp.
We won't store the whole file in the database, just the historic changes. As Redis is memory-based, it takes very little time to compute the final document from the set of stored changes.
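The replay-from-deltas idea can be sketched in Java. The Delta shape used here (position, deleted length, inserted text) is one possible encoding, not a fixed format:

```java
import java.util.ArrayList;
import java.util.List;

// Stores edits as deltas in timestamp order instead of full snapshots;
// the document at any point in time is reconstructed by replaying deltas.
public class ChangeLog {

    // One edit: delete delLen characters at pos, then insert ins there.
    public static final class Delta {
        final int pos;
        final int delLen;
        final String ins;
        public Delta(int pos, int delLen, String ins) {
            this.pos = pos;
            this.delLen = delLen;
            this.ins = ins;
        }
    }

    private final List<Delta> deltas = new ArrayList<>();

    public void append(Delta d) { deltas.add(d); }

    public int size() { return deltas.size(); }

    // Replay the first n deltas: n == size() gives the latest document,
    // a smaller n gives any historic revision (the rollback requirement).
    public String replay(int n) {
        StringBuilder doc = new StringBuilder();
        for (int i = 0; i < n && i < deltas.size(); i++) {
            Delta d = deltas.get(i);
            doc.replace(d.pos, d.pos + d.delLen, d.ins);
        }
        return doc.toString();
    }
}
```

In the design above, the delta list would live in Redis keyed by the document's unique ID, with replay(n) serving both the "get current document" call and rollback to earlier revisions.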
For every document there will be a unique ID associated with it. The unique ID should be long enough to be hard to guess.
You can create a URL for notepads like example.com/notepad/{unique-id}
This will load the client and then load the document related to that unique ID.
On every request this unique ID will be sent to identify which document is being edited by the user.
Save
As every change is sent to the database, the document is auto-saved.
Revert
You can keep the historic data in AngularJS. If you want persistence between sessions, store the data to the file system.
You can also retrieve historic information from the server using the API, which can then be undone.
Facebook Share
We can use the FB Graph API to post a link to the user's timeline, or use Facebook's sharer.php URL to share a post or link on the user's timeline.
Scalability
We can use cloud-based scalable solutions like Amazon AWS EC2 instances to implement this. We can keep the web servers behind a load balancer.
We should keep Redis on a separate (large) EC2 instance. There can be multiple web servers behind the load balancer.
All of them will communicate with the Redis instance.
We can keep static assets like CSS and JS in a CDN (AWS CloudFront in front of S3).
I have a login page which connects to a database; the database has only one client. When a user logs in, he/she may make certain changes to his profile and then save. A large number of frames require the current user ID in order to manipulate his data.
Among the possible ways of storing the user currently logged in is
1) save the data to a temporary text file and persist it before the user logs out
2) use variables shared across all the frames; however, I'm not too confident about this
3) have a Boolean column in the database and persist the data of the row with true in it
Perhaps there are better ways of storing the current user ID. Could somebody describe other possible methods and highlight the pros and cons of each, with reference to an "optimal" way of doing this?
Edit: This is a desktop application
I would suggest not sharing this information in any static context, because it will make your project very hard to test once it gets big enough. See this link for more info: When to use singletons, or What is so bad about singletons?
What I would do is store session objects in some map, identifying the appropriate session by an ID that is handed to the client (on the web this ID travels in a cookie). This is how the web has been doing it for years, and it is still doing it this way. Simply pass the session object to any class that requires access to that data when it needs it.
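A minimal version of such a session map might look like this; the class and method names are illustrative, and each frame would hold only the session ID rather than the user object itself:

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.concurrent.ConcurrentHashMap;

// Keeps one Session object per logged-in user, looked up by a random ID,
// instead of a global static "current user" variable.
public class SessionManager {

    public static final class Session {
        public final String userId;
        public Session(String userId) { this.userId = userId; }
    }

    private final ConcurrentHashMap<String, Session> sessions =
            new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Called after a successful login; the returned ID is what the frames keep.
    public String open(String userId) {
        byte[] buf = new byte[16];
        random.nextBytes(buf);
        String id = Base64.getUrlEncoder().withoutPadding().encodeToString(buf);
        sessions.put(id, new Session(userId));
        return id;
    }

    public Session get(String id) { return sessions.get(id); }

    // Called on logout so the session cannot be reused.
    public void close(String id) { sessions.remove(id); }
}
```

This keeps the user state testable: a test can construct its own SessionManager and Session instead of fighting a static singleton.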
If you are using a J2EE implementation, then you may already have support for sessions within that implementation, you should check out "How to Use Sessions"
This is more of a software design question, and covering the patterns needed to fully support what I just suggested is unfortunately beyond the scope of the question.
The logged-in user is an instance of a class such as Person or LoggedUser.
You have to instantiate it and share its reference between views via a model.
I'm building a multi-tenant SaaS web application in Java, Spring, Struts2 and Hibernate. After a bit of research, I chose to implement multi-tenancy in a shared-db, shared-schema, shared-table approach, tagging each db row with a tenantId.
I have rewritten my application so that Managers and DAOs take the tenantId as a parameter and only serve the matching db resources.
This works perfectly for all views when getting information, and also for creating new records (using the logged-in user's tenantId to store the info).
However, for updating and deleting stuff I am not sure how to secure my application.
For example: When a user want to edit an object, the url will be: /edit?objectId=x
And this is mapped to an action that retrieves the object by ID, meaning any logged-in user can view any object just by modifying the URL.
This I can solve by adding the tenantId to the DAO, so if the user tries to view an object outside his tenancy he gets nothing.
OK, that's fine then, but what about when the edit form is submitted?
What if the user modifies the request, messing with the hidden objectId field, so the action receives a request to alter an object not belonging to the user's tenancy?
Or if the user URL-modifies a delete action: /delete?objectId=x
Basically I need some way to ensure that the logged-in user has access to whatever he is trying to do. For all GETs it's easy: just put the tenantId in the WHERE clause.
But for updates and deletes I'm not sure which direction to take.
I could query the db on every update and delete to check whether the user has access to the object, but I'm trying to keep db interaction to a minimum, so an extra db call for every such action seems impractical.
Does anyone have any hints or tips to my issues?
The same rule that applies to reading applies to writing/updating: a user can only see/access/change what they own. Your question is more about the database than about anything else. The same constraints you apply to viewing data must also apply to writing data.
In this case, you don't want to pay for a query first and then an update. That's fine, since you can update the database conditionally: the tenant check becomes part of the statement itself, so it is done in one go. You need to know what your database is capable of here; for example, Oracle has the MERGE statement.
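In plain JDBC, the conditional update amounts to carrying the tenantId in the WHERE clause and checking the affected-row count, so no extra SELECT is needed. Table and column names below are illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// The tenant check rides along in the WHERE clause, so an update on a row
// belonging to another tenant simply matches zero rows.
public class TenantSafeDao {

    // Builds the guarded statement; kept separate so it is easy to test.
    public static String guardedUpdate(String table, String setClause) {
        return "UPDATE " + table + " SET " + setClause
             + " WHERE id = ? AND tenant_id = ?";
    }

    // Returns true only if the row exists AND belongs to the caller's tenant.
    public boolean updateName(Connection conn, long objectId, long tenantId,
                              String newName) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement(guardedUpdate("objects", "name = ?"))) {
            ps.setString(1, newName);
            ps.setLong(2, objectId);
            ps.setLong(3, tenantId);
            // 0 rows affected => missing id or wrong tenant; treat both
            // the same so the response leaks nothing about other tenants.
            return ps.executeUpdate() == 1;
        }
    }
}
```

Deletes follow the same shape: `DELETE FROM objects WHERE id = ? AND tenant_id = ?`, again checking the row count instead of querying first.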
I am quite late to this thread and maybe you have already built the solution you were asking about. Anyway, I have implemented a database-per-tenant multi-tenant web application using Spring Boot 2 and secured the web access using Spring Security 5. The data access is via Spring JPA (with Hibernate 5 as the JPA provider). Do take a look here.