I would like to save history information in my database, for example: user "dog" edited field "grass" in table "garden".
I have a trigger which saves everything correctly, but I have a problem with the username "dog". The username is the logged-in user's name, and I don't know how to "catch" it, because I don't know how to tell my database (PostgreSQL) that this specific user made the change.
How can I tell my trigger that it should use the value "dog"?
I would like to write an application in Java using the Spring and Hibernate frameworks. I don't have any application code yet, because right now I'm creating the database and thinking about my future application.
Any ideas?
Certain database platforms offer context parameters. To use them, you would:
1. Set the database context parameters. You can simply use the native SQL interface exposed by the Session or EntityManager to accomplish this step.
2. Register an AfterTransactionCompletionProcess with the Session. It should use the provided Session to clear the database context parameters you set in step 1, regardless of whether the transaction was successful or not. This step makes sure those context parameters are cleared before the JDBC connection is given back to your connection pool.
3. Execute your normal ORM changes.
A rough sketch of steps 1 and 2 is shown below.
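This is only a hedged sketch, not a drop-in implementation: it assumes PostgreSQL's custom settings (set_config / current_setting) as the context-parameter mechanism, a Hibernate 5.2+ AfterTransactionCompletionProcess signature, and invented names such as app.current_user and AuditContext.

import org.hibernate.Session;
import org.hibernate.action.spi.AfterTransactionCompletionProcess;
import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.event.spi.EventSource;

public class AuditContext {

    public static void bindUser(Session session, String userName) {
        // Step 1: set the context parameter via native SQL; a PostgreSQL trigger
        // can later read it with current_setting('app.current_user', true).
        session.createNativeQuery("SELECT set_config('app.current_user', :user, false)")
               .setParameter("user", userName)
               .getSingleResult();

        // Step 2: clear the parameter once the transaction completes, whether it
        // committed or rolled back, so the pooled connection goes back clean.
        ((EventSource) session).getActionQueue().registerProcess(
                new AfterTransactionCompletionProcess() {
                    @Override
                    public void doAfterTransactionCompletion(boolean success,
                                                             SharedSessionContractImplementor s) {
                        s.createNativeQuery("SELECT set_config('app.current_user', '', false)")
                         .getSingleResult();
                    }
                });
    }
}

A PostgreSQL trigger on table "garden" could then read current_setting('app.current_user') and record "dog" while your normal ORM changes (step 3) run.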
But there is probably a much simpler approach altogether: Hibernate Envers.
Hibernate Envers is designed to mirror your mapped @Entity classes and keep a running history of the changes made to your entities. You can easily configure the fields you'd like audited if you're only interested in the history of a subset of fields. Additionally, the Envers API exposes an easy way for you to query the history tables and get historical snapshots.
In order to store your username "dog" with Hibernate Envers, you merely need to implement a custom RevisionEntity that contains your userName field and set it. You can find more information on how to configure the necessary components for this here.
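For a concrete starting point, here is a minimal sketch; only @RevisionEntity, RevisionListener and DefaultRevisionEntity come from Hibernate Envers, while the class and field names (and the use of Spring Security to look up the current user) are my own assumptions.

import javax.persistence.Entity;
import org.hibernate.envers.DefaultRevisionEntity;
import org.hibernate.envers.RevisionEntity;
import org.hibernate.envers.RevisionListener;
import org.springframework.security.core.context.SecurityContextHolder;

@Entity
@RevisionEntity(UserRevisionListener.class)
public class UserRevisionEntity extends DefaultRevisionEntity {

    private String userName;   // "dog" is stored here for every revision

    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
}

class UserRevisionListener implements RevisionListener {
    @Override
    public void newRevision(Object revisionEntity) {
        // Pull the current user from wherever your application keeps it,
        // e.g. Spring Security's SecurityContextHolder.
        String currentUser = SecurityContextHolder.getContext()
                                                  .getAuthentication().getName();
        ((UserRevisionEntity) revisionEntity).setUserName(currentUser);
    }
}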
I've started exploring the Axon Framework.
I'm following this tutorial.
It basically covers creating an account, retrieving information about the account, and a few other activities.
Whenever an event takes place, an entry is stored in the Domain Event Entry table. The payload is also present in hashed form. My questions are:
Is it possible to access the entries present in the Domain Event Entry table? If yes, then how?
Also, if I add a query and log it to the console, does the payload appear in hashed form or in the original format?
Any help will be appreciated.
Checking the project you shared, I would start by upgrading the version you are using. I can see it's on 4.4.3 while the latest is 4.5.3, which brings lots of improvements and new features.
Before going to your questions, I believe it is important to note some things here.
This project is NOT using Axon Server, so your Events are stored in a database of your choice. In this case, since you have JPA on your classpath, Axon Framework will automatically configure a JpaEventStorageEngine for you, and similarly a TokenStore implementation.
It is important to know about the Serializers as well, and for that I will refer to the official docs. By configuring them, you can have your Events stored as XML or JSON.
So, to your questions now:
Yes, you can access them as you would in a normal database. Since I can see the project has /h2-console enabled, going there would be my first choice. Alternatively, you can configure any tool/app capable of inspecting the contents of your database.
For logging the query, you would have to configure Hibernate (for example) to log it. In any case, it will be logged as a deserialized object and not in 'payload' form. Of course, you can write queries against the DomainEventEntry table and log the payload, but that's not how you are supposed to use that table.
For more info, I would recommend going to the official docs or to our Axon Academy!
Edit 1: adding info about QueryHandler
Axon Framework offers, as you noted, the @QueryHandler annotation, which should be part of your Query side (as opposed to the Command side, following the CQRS concept). That is the side responsible for providing information based on Events (from your Event Store, a.k.a. DomainEventEntry in your case). Basically, Events will be propagated to the Query side, where you will have one or more EventHandler components/methods responsible for handling the Event and writing the derived result to some form of storage (usually a relational database). You also have one or more QueryHandler components/methods that will receive Query messages, perform the query (using a Repository, for example) and return the response to the caller. In that sense, you can really tailor your query side the way you want! The ref guide has some info about Query Handlers that I recommend!
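To make that more concrete, here is a hedged sketch of a simple query-side projection; the event, query and repository classes (AccountCreatedEvent, FindAccountQuery, AccountView, AccountViewRepository) are placeholders I made up, not classes from the tutorial project.

import org.axonframework.eventhandling.EventHandler;
import org.axonframework.queryhandling.QueryHandler;
import org.springframework.stereotype.Component;

@Component
public class AccountProjection {

    private final AccountViewRepository repository;   // e.g. a Spring Data repository

    public AccountProjection(AccountViewRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(AccountCreatedEvent event) {
        // Events arriving here are already deserialized, so you work with plain
        // objects rather than the serialized payload column of DomainEventEntry.
        repository.save(new AccountView(event.getAccountId(), event.getInitialBalance()));
    }

    @QueryHandler
    public AccountView handle(FindAccountQuery query) {
        // Answers queries from the read model that the event handler maintains.
        return repository.findById(query.getAccountId()).orElse(null);
    }
}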
You can access the events in the domain events table. With H2 you can log in to the H2 console and find it there.
If you use a query, the payload will be deserialized (not XML or JSON) and can be used to log information or whatever else you need to do.
We are developing a new web application. One of the most basic requirements is to audit all entity changes into a separate table.
We would like to use DB triggers for that purpose.
We use MySQL as our RDBMS.
The problem we now foresee is that whenever a trigger fires and inserts a new entry into the DB, it can't possibly know the (application) user that made the change. (All users have different ids, but Spring uses a single user account for the DB manipulations.)
Any ideas how to resolve this issue?
We resolved the issue by adding a userId field to all tables that are being audited, and on each CRUD operation we made it mandatory to provide it (for system business logic we use id=0). This way our audit tables are populated with the id that needs to be monitored.
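A rough sketch of the idea in JPA terms, with invented names (AuditedEntity, modified_by_user_id): every audited entity carries the id of the application user who made the change, so the MySQL trigger can simply copy it into the audit table.

import javax.persistence.Column;
import javax.persistence.MappedSuperclass;

@MappedSuperclass
public abstract class AuditedEntity {

    public static final long SYSTEM_USER_ID = 0L;

    // NOT NULL, so every insert/update must say who performed it;
    // the audit trigger reads this column when it writes the history row.
    @Column(name = "modified_by_user_id", nullable = false)
    private long modifiedByUserId = SYSTEM_USER_ID;

    public long getModifiedByUserId() { return modifiedByUserId; }

    public void setModifiedByUserId(long userId) { this.modifiedByUserId = userId; }
}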
I'm building a multi-tenant SaaS web application in Java, Spring, Struts2 and Hibernate. After a bit of research, I chose to implement multi-tenancy with a shared db, shared schema, shared table approach, tagging each db row with a tenantId.
I have rewritten my application so that Managers and DAOs take the tenantId as a parameter and only serve the correct db resources.
This works perfectly for all views when getting information, and also for creating new stuff (using the logged-in user's tenantId to store the info).
However, for updating and deleting stuff I am not sure how to secure my application.
For example: When a user want to edit an object, the url will be: /edit?objectId=x
And this is mapped to an action that will retrieve this object by id, meaning any logged-in user can view any object by modifying the URL.
This I can solve by adding the tenantId to the DAO, so if the user tries to view an object outside his tenancy he will get nothing.
OK, that's fine then, but what about when the edit form is submitted?
What if the user modifies the request, messing with the hidden objectId field, so the action receives a request to alter an object not belonging to the user's tenancy?
Or what if the user modifies the URL of a delete action: /delete?objectId=x?
Basically I need some way of ensuring that the logged-in user has access to whatever he is trying to do. For all GETs it's easy: just put the tenantId in the WHERE clause.
But for updates and deletes I'm not sure what direction to go in.
I could query the db for every update and delete to see if the user has access to the object, but I'm trying to keep db interaction to a minimum, so I find it impractical to make an extra db call for every such action.
Does anyone have any hints or tips for my issues?
The same rules that apply to reading apply to writing/updating: a user can only see/access/change what they own. Your question is more about the database than about anything else. The same constraints you apply to viewing data must also apply to writing data.
In this case, you don't want to incur the cost of a query first and then an update. That's fine, since you can update the database with conditions. Since this seems likely to be database-level in your case, you need to know what your database is capable of (to do it in one go). For example, Oracle has the MERGE statement.
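A minimal sketch of the "update with conditions" idea in Hibernate terms, with illustrative entity and property names (Document, tenantId): put the tenantId in the WHERE clause of the update itself and treat a zero row count as "not found or not yours".

import org.hibernate.Session;

public class DocumentDao {

    /** Returns the number of rows changed; 0 means no such object in this tenancy. */
    public int rename(Session session, long tenantId, long objectId, String newName) {
        // No separate SELECT is needed to check ownership: if the object belongs
        // to another tenant, the update simply matches zero rows.
        return session.createQuery(
                "update Document d set d.name = :name " +
                "where d.id = :id and d.tenantId = :tenantId")
            .setParameter("name", newName)
            .setParameter("id", objectId)
            .setParameter("tenantId", tenantId)
            .executeUpdate();
    }
}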
I am quite late to this thread and maybe you have already built the solution you were asking about here. Anyway, I have implemented a database-per-tenant multi-tenant web application using Spring Boot 2 and secured the web access using Spring Security 5. The data access is via Spring JPA (with Hibernate 5 as the JPA provider). Do take a look here.
Please describe a typical lifecycle of a Hibernate object (one that maps to a db table) in a web app.
Suppose you create a new instance of an object and persist it in the db. But during the app's lifetime you'll be working on a detached object, and finally you need to update it in the database, for example on exit.
What does this look like with Hibernate and Spring?
P.S. Can transactions and sessions live between servlet transitions, so that we open one session and use it in all servlets without needing to reopen it?
I'll try to give a descriptive example.
Suppose that when the app starts, a log record is created. This can be done at once:
Log log = new Log(...) and then something like save(log) -- log corresponds to a table LOG.
Then, as the application processes user input and keeps going, new data accumulates, and after the second step we could add something to the log object, a collection for example:
// now we have a record of what the user has chosen: Set thisUserChoice,
// so we can update the persistent object, since we have new data now!
// log.userChoices = thisUserChoice.
Here is the nature of my question: how are we supposed to deal with this if we want to update the database whenever new data is received from the user?
In a relational model we can work with a row id, so we could fetch the record and update other data in the row.
In Hibernate we are also able to load an object by its id.
But is that the way to go? Is there anything better?
You could do everything in a single session. But that's like doing everything in a single class. It could make sense from a beginner's point of view, but nobody does it like that in practice.
In a web app, you can normally expect to have several threads running at once, each dealing with a different user. Each thread would typically have a separate session, and the session would only have managed instances of the objects that were actually needed by that user. It's not that you can completely ignore concurrency in your own code, but it's useful to have hibernate's help. If you were to do everything with one session, you would have to do all the concurrency management yourself.
Hibernate can also manage the concurrency if you have multiple application servers talking to a single database. The separate JVMs can't possibly share the same session in this case...
The lifecycle is described in the hibernate documentation (which I'm sure you've seen).
Whenever a request comes from the web client to the server, the first thing you should do is load the relevant objects (see section 10.3), so that you have persistent, not detached, entities to deal with. Then you do whatever operations are required. When the session closes (i.e. when the server returns the response to the client), it will write any updates to the database. Or, if your operation involves creating new entities, you'll have to create transient ones (with new) and then call persist() or save() (see section 10.2). That results in a managed entity: you can make more changes to it, and Hibernate will record those changes when the session closes.
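A small sketch of that load-modify-persist flow, assuming a session-per-request setup with manual transaction demarcation for clarity (with Spring's @Transactional you wouldn't call beginTransaction yourself). Log and userChoices come from the question above; LogDetail, logId, setUserChoices and sessionFactory are placeholders.

import java.util.Set;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class LogService {

    private final SessionFactory sessionFactory;

    public LogService(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void recordUserChoices(long logId, Set<String> thisUserChoice) {
        Session session = sessionFactory.getCurrentSession();
        Transaction tx = session.beginTransaction();
        try {
            // Persistent (managed) instance: loaded by id inside the current session.
            Log log = session.get(Log.class, logId);
            log.setUserChoices(thisUserChoice);   // just mutate it; no explicit save call

            // Transient instance: created with new, then made persistent.
            LogDetail detail = new LogDetail(log, "user made a choice");
            session.persist(detail);

            tx.commit();   // the flush at commit writes both changes to the database
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        }
    }
}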
I try to avoid using detached objects. But if you have to use them (perhaps they're stored in the user's HTTP session), then whenever they need to be saved to the database you'll have to call update() (see section 10.6). This converts the detached object back into a managed one, so the session will save any changes to the database when it's closed.
Spring makes it very easy to generate a new session for each request. You would normally tell Spring to create a sessionFactory, and then every request will be given its own session. Search for "spring hibernate tutorial" and you'll find several examples.
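Here is a hedged configuration sketch of that setup (Spring Java config with Hibernate 5); the package name is a placeholder and the DataSource bean is assumed to exist elsewhere.

import javax.sql.DataSource;
import org.hibernate.SessionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.hibernate5.HibernateTransactionManager;
import org.springframework.orm.hibernate5.LocalSessionFactoryBean;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class HibernateConfig {

    @Bean
    public LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
        // Spring builds one SessionFactory for the whole application...
        LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
        factory.setDataSource(dataSource);
        factory.setPackagesToScan("com.example.model");   // placeholder package
        return factory;
    }

    @Bean
    public HibernateTransactionManager transactionManager(SessionFactory sessionFactory) {
        // ...and the transaction manager opens/closes a session per transaction,
        // which in a typical web app means one session per request.
        return new HibernateTransactionManager(sessionFactory);
    }
}

Service methods annotated with @Transactional can then obtain the request-scoped session via sessionFactory.getCurrentSession().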
http://scbcd.blogspot.com/2007/01/hibernate-persistence-lifecycle.html This explains transient and persistent objects.
Also have a look at the Lifecycle interface to see what Hibernate does (it provides hooks at all stages for the user to do something).
I am working on an application that uses Oracle's built in authentication mechanisms to manage user accounts and passwords. The application also uses row level security. Basically every user that registers through the application gets an Oracle username and password instead of the typical entry in a "USERS" table. The users also receive labels on certain tables. This type of functionality requires that the execution of DML and DDL statements be combined in many instances, but this poses a problem because the DDL statements perform implicit commits. If an error occurs after a DDL statement has executed, the transaction management will not roll everything back. For example, when a new user registers with the system the following might take place:
1. Start transaction.
2. Insert person details into a table (i.e. first name, last name, etc.) - DML.
3. Create an Oracle account (create user testuser identified by password;) - DDL, implicit commit. The transaction ends.
4. A new transaction begins.
5. Perform more DML statements (inserts, updates, etc.).
6. An error occurs; the transaction only rolls back to step 4.
I understand that the above logic is working as designed, but I'm finding it difficult to unit test this type of functionality and manage it in data access layer. I have had the database go down or errors occur during the unit tests that caused the test schema to be contaminated with test data that should have been rolled back. It's easy enough to wipe the test schema when this happens, but I'm worried about database failures in a production environment. I'm looking for strategies to manage this.
This is a Java/Spring application. Spring is providing the transaction management.
First off I have to say: it's a bad idea to do it this way, for two reasons:
1. Connections are based on user. That means you largely lose the benefits of connection pooling. It also doesn't scale terribly well: if you have 10,000 users on at once, you're going to be continually opening and closing hard connections (rather than reusing soft connections from a pool); and
2. As you've discovered, creating and removing users is DDL, not DML, and thus you lose "transactionality".
Not sure why you've chosen to do it this way, but I would strongly recommend you implement users at the application layer and not the database layer.
As for how to solve your problem, basically you can't. Same as if you were creating a table or an index in the middle of your sequence.
You should use Oracle proxy authentication in combination with row level security.
Read this: http://www.oracle.com/technology/pub/articles/dikmans-toplink-security.html
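For what proxy authentication looks like from the JDBC side, here is a hedged sketch: the pool connects as a single application account, then a lightweight proxy session is opened for the real end user (for example "dog"), so Oracle's row level security still sees that user. The class and method names are my own.

import java.sql.Connection;
import java.util.Properties;
import oracle.jdbc.OracleConnection;

public class ProxySessions {

    public static void runAsUser(Connection pooledConnection, String endUser) throws Exception {
        OracleConnection oraConn = pooledConnection.unwrap(OracleConnection.class);

        Properties props = new Properties();
        props.put(OracleConnection.PROXY_USER_NAME, endUser);
        oraConn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, props);
        try {
            // ... run DML here; the database session now acts as endUser ...
        } finally {
            // Close only the proxy session, keeping the pooled connection usable.
            oraConn.close(OracleConnection.PROXY_SESSION);
        }
    }
}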
I'll disagree with some of the previous comments and say that there are a lot of advantages to using the built-in Oracle account security. If you have to augment this with some sort of shadow table of users with additional information, how about wrapping the Oracle account creation in a separate package that is declared PRAGMA AUTONOMOUS_TRANSACTION and returns a success/failure status to the package that is doing the insert into the shadow table? I believe this would isolate the Oracle account creation from the transaction.
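As a sketch of the calling side only, with an invented package and procedure name (user_admin.create_oracle_account): the PL/SQL procedure itself would be declared PRAGMA AUTONOMOUS_TRANSACTION, so the CREATE USER commits independently of the surrounding Spring-managed transaction.

import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;

public class OracleAccountService {

    private final SimpleJdbcCall createAccountCall;

    public OracleAccountService(JdbcTemplate jdbcTemplate) {
        // Calls the autonomous-transaction wrapper package around CREATE USER.
        this.createAccountCall = new SimpleJdbcCall(jdbcTemplate)
                .withCatalogName("user_admin")              // the PL/SQL package
                .withProcedureName("create_oracle_account");
    }

    /** Returns whatever success/failure status the package reports via OUT parameters. */
    public Map<String, Object> createAccount(String username, String password) {
        return createAccountCall.execute(Map.of(
                "p_username", username,
                "p_password", password));
    }
}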