I am looking around for a multitenancy solution for my web application.
I would like to implement an application with the Separate Schema Model. I am thinking of having a datasource per session. To do that I put the datasource and EntityManager in session scope, but that's not working. I am now thinking of loading the data-access-context.xml file (which includes the datasource and other repository beans) when the user enters username, password, and tenant ID. I would like to know if this is a good solution?
Multitenancy is a bit of a tricky subject and it has to be handled on the JPA provider side, so that from the client-code perspective nothing, or almost nothing, changes. EclipseLink has support for multitenancy (see: EclipseLink/Development/Indigo/Multi-Tenancy); Hibernate added it only recently.
Another approach is to use AbstractRoutingDataSource, see: Multi tenancy in Hibernate.
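For illustration, a minimal sketch of that routing approach, assuming a hypothetical ThreadLocal-based TenantContext that gets populated at login:

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Hypothetical ThreadLocal holder for the current tenant id, set at login.
final class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
    static void set(String tenantId) { CURRENT.set(tenantId); }
    static String get() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}

// Routes each connection request to the DataSource registered under the
// current tenant id; the tenant-id -> DataSource map is supplied through
// setTargetDataSources(...) when the bean is configured.
public class TenantRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.get();
    }
}
```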
Using session scope is way too risky (you will also end up with thousands of database connections, a few for every session/user). Finally, EntityManager and the underlying database connections are not serializable, so you cannot migrate your session and scale your app properly.
I have worked with a number of multi-tenancy systems. The challenge here is how you (1) keep an open architecture and (2) provide a solution that evolves with your business.
Let's look at the second challenge first. Multi-tenancy systems have a tendency to evolve to the point where you need to support use cases in which the same data (record) can be accessed by multiple tenants with different capabilities (e.g. https://bugs.eclipse.org/bugs/show_bug.cgi?id=355458). So the system ultimately needs an Access Control List.
To keep the architecture open, you can code to a standard (like JPA). Coding directly to EclipseLink or Hibernate makes me uncomfortable.
Spring Security ACL provides a very flexible, community-supported solution to both of these challenges. Give that a try; I did, and I have been happy with its performance. However, I must caution you: it took me some digging to get my head around it.
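For a flavour of what that looks like in practice, here is a minimal sketch of an expression-based ACL check; it assumes the ACL infrastructure beans and a PermissionEvaluator are already configured, and the Report type is purely illustrative:

```java
import org.springframework.security.access.prepost.PreAuthorize;

// Illustrative domain type whose instances are secured by ACL entries.
class Report { }

public interface ReportService {

    // Only principals holding READ permission on the given Report
    // (as recorded in the ACL tables) may invoke this method.
    @PreAuthorize("hasPermission(#reportId, 'com.example.Report', 'READ')")
    Report findReport(Long reportId);
}
```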
Note: This is not a programming question (at least at the moment). Once I start progressing further, I would seek assistance from the community on programming questions. Feel free to delete this if the question is deemed inappropriate.
I am trying to start using dashDB as a database on Bluemix. The dashDB data would be consumed by a Java/Java EE app.
I am not planning to use this as a data warehouse.
dashDB, as I understand it, has two flavours: Regular (using this term loosely here to refer to the standard offering) and dashDB Transactional.
dashDB Transactional, I believe, is intended for transactional workloads.
I want to understand whether JPA plays well with dashDB; I have been unable to locate good information in this space.
Should we use a denormalized design for both dashDB Regular and Transactional?
The dashDB Transactional Bluemix plan provides a dashDB database that is optimized for online transaction processing (OLTP). This means it is designed for highly structured, repetitive processing, and it supports ACID transactions. That said, you should use all the best practices you would use with a classic RDBMS: normalization, constraints and so on. I can confirm that the dashDB-JPA integration is not well documented yet, but there should be no particular problem in using it with JPA. Since your application will run on the Liberty runtime, when you bind the dashDB service instance the server.xml is automatically configured with a dataSource under a JNDI name, and the database driver jars are added as well.
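The app can then grab that data source through a plain JNDI lookup; a minimal sketch, assuming the JNDI name is jdbc/dashdb (check the generated server.xml for the actual value):

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DashDbLookup {

    // "jdbc/dashdb" is an assumed JNDI name; the real one appears in the
    // server.xml that Bluemix generates when the service is bound.
    static Connection open() throws NamingException, SQLException {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/dashdb");
        return ds.getConnection();
    }
}
```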
JPA does not work seamlessly with dashDB today. dashDB uses ORGANIZE BY COLUMN by default, and JPA does not work well with that. There is currently no way to request ORGANIZE BY ROW through a JPA annotation. We tried to override the DB2Dictionary, but that did not work either.
If I drop the table using an SQL statement and recreate it using an SQL statement with ORGANIZE BY ROW appended, then JPA is able to read the table.
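For reference, a sketch of that drop-and-recreate workaround over plain JDBC; the PERSON table and its columns are just illustrative:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class RowOrganizedFix {

    // Recreates the table as row-organized so JPA can work with it.
    static void recreate(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.executeUpdate("DROP TABLE PERSON");
            st.executeUpdate("CREATE TABLE PERSON ("
                    + "ID INTEGER NOT NULL PRIMARY KEY, "
                    + "NAME VARCHAR(64)) ORGANIZE BY ROW");
        }
    }
}
```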
Not sure who should be fixing this issue - JPA or dashDB :)
I have already written one web app using Java, Spring, and Tomcat 8 as the server, and now I want to write another one that has to interact with the first. It has to share some data from the database and the session (I mean, if a user logs in to one app he doesn't need to log in to the other app). What is the best way to implement this?
There are a couple of ways to solve this. Tomcat supports clustering, see: https://tomcat.apache.org/tomcat-9.0-doc/cluster-howto.html
But as Dimitrisli already wrote, the easiest solution may be to have a look at spring-session (see: http://projects.spring.io/spring-session/).
I am using this in a project of mine and it works pretty well, but you have to be aware that right now the default serialization scheme is "ObjectStream", which is regular Java serialization. So you can't put different versions of a class into the session across your servers; that would lead to a deserialization exception. But I am pretty sure the same problem can occur if you use Tomcat/JBoss/GlassFish/etc. clustering.
If you want to be free in your service deployments, you can use one of the clustering solutions but only store the minimal information that is necessary, like the session ID, and then use something like Redis or whatever DB solution you like to store the session-related data in a more "class-evolution"-friendly format, such as JSON. This means more work for you, but also much more flexibility.
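For completeness, the Spring Session wiring itself is small; a minimal sketch assuming spring-session-data-redis on the classpath and Redis running on localhost (API as of the 1.x line):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Swaps the container's HttpSession for a Redis-backed one, so both apps see
// the same session data. The session filter itself is registered through
// Spring Session's AbstractHttpSessionApplicationInitializer.
@Configuration
@EnableRedisHttpSession
public class SessionConfig {

    @Bean
    public JedisConnectionFactory connectionFactory() {
        return new JedisConnectionFactory(); // defaults to localhost:6379
    }
}
```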
This is fairly broad, but generally speaking, you'd just use the same database configuration for both applications, and you can use session replication to share sessions between servers. Tomcat has some built-in functionality to do that, but you should also consider Spring Session, which hooks into the servlet filter chain to externalize sessions in a cross-platform style.
There are a few solutions for session clustering, but since you are in the Spring ecosystem, take a look at the newly launched Spring Session project, which makes this task much easier and is also web-app-provider agnostic.
Generally, sharing sessions is not recommended; for database sharing, use JNDI to get at the shared objects. If login has to be shared in your case, use Single Sign-On.
I wanted to ask whether you have any experience of Hibernate OGM working well enough with MongoDB that it could be used in an enterprise solution without any worries. In other words, does this combination work as well as, for example, Hibernate ORM with MySQL, and is it also that easy to set up? Is it worth it, i.e. the level of effort needed to set it up compared to the level of improvement in working with the database? Would you prefer another OGM framework, or none at all? I read about it some time ago, but that was in the early stages of the project and it didn't work too well yet. Thanks for any advice and experiences.
(Disclaimer: I'm one of the Hibernate OGM authors)
In other words, does this combination work as well as, for example, Hibernate ORM with MySQL?
The 4.1 release is the first final release we consider ready for use in production. The general user experience should not be much different from using classic Hibernate ORM (which is still what you use under the hood when using Hibernate OGM). Also, the MongoDB dialect is probably the one we have put the most effort into, so it is in good shape.
But as Hibernate OGM is a fairly young project, of course there may be bugs and glitches which need to be ironed out. Feature-wise, there are some things not supported yet (e.g. secondary tables, criteria API, more complex JPA queries), but you either shouldn't really need those in most kinds of applications or there are work-arounds (e.g. native queries).
and is it also that easy to set up?
Yes, absolutely. The set-up is no different from using Hibernate ORM / JPA with an RDBMS. You only use another JPA provider class (HibernateOgmPersistence) and need to set some OGM-specific options (which NoSQL store to use, host name, etc.). Check out this blog post, which walks you through the set-up. For store-specific settings (e.g. how to store associations in document stores) there is an easy-to-use option system based on annotations and/or a fluent API.
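To give an idea, a minimal bootstrap sketch; the persistence-unit name and the host/database values are illustrative, while the provider class and the hibernate.ogm.datastore.* properties are the documented ones:

```java
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class OgmBootstrap {

    // Boots Hibernate OGM against MongoDB through plain JPA. The persistence
    // unit "ogm-mongodb" is assumed to be declared in persistence.xml with
    // provider org.hibernate.ogm.jpa.HibernateOgmPersistence.
    public static EntityManagerFactory create() {
        Map<String, String> props = new HashMap<>();
        props.put("hibernate.ogm.datastore.provider", "mongodb");
        props.put("hibernate.ogm.datastore.host", "localhost");
        props.put("hibernate.ogm.datastore.database", "inventory");
        return Persistence.createEntityManagerFactory("ogm-mongodb", props);
    }
}
```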
[Is it worth the effort] to set it up compared to the level of improvement of the work with the database?
I don't think there is a general answer to that. In many cases object mappers like Hibernate ORM/OGM are great; in other cases working with plain SQL or NoSQL APIs might be the better option. It depends on your use case and its specific requirements. In general, object mappers work well if there is a defined domain model which you want to persist, navigate its associations, etc.
Would you prefer another OGM framework
I'm obviously biased, but let me say that using Hibernate OGM allows you to:
benefit from the eco-system existing around JPA/Hibernate, be it integration with other libraries such as Hibernate Validator or Hibernate Search (or your in-house Hibernate-based API), or tooling such as modelling tools which emit JPA entities;
work with different NoSQL backends using the same API. So if chances are you will need to integrate another NoSQL store (e.g. Neo4j to run graph queries) or an RDBMS, then Hibernate OGM will allow you to do so easily.
I read about it some time ago, but it was in the early stages of this project
Much work has been put into Hibernate OGM over the last year, so my recommendation definitely is to try it out and see in a prototype or spike how it works for your requirements.
If you have any feature requests or questions, please let us know and we'll see what we can do for you.
I have an application that uses SQL Server. I want to use a NoSQL store, and I decided it should be a graph database since my data is highly connected. Neo4j is an option.
Optimally, I want to be able to switch databases without touching the application layer, say, by just modifying some XML configuration files.
I've taken a look at some public examples on the web and seen that ORM and OGM don't configure applications the same way: the config file of each has its own name and, more importantly, its own structure. Looking at the code of each revealed that they also differ in the way they initialize the session, which doesn't sound good for what I have in mind.
My question is: is it possible, or feasible without great overhead, to switch between the two databases without touching the existing application code? (I may add things, but not touch what already exists.) It would be great to establish pure polyglot persistence between SQL and NoSQL databases, for example using Hibernate.
I want to hear from you guys before digging deeper. Do we have any of the Hibernate folks with us here on SO?
The goal of Hibernate OGM is to offer a unified abstraction over various NoSQL data stores. The project is still young, as we speak, so I am not sure you can adopt it right out of the box.
There is also the problem of transactions. If your application was designed to use SQL transactions, then things will change radically when you switch to a NoSQL solution.
Using an abstraction layer is good for portability, but it doesn't offer all the power of native querying. It's the same problem as with JPQL, which only covers SQL-92 and lacks support for window functions or CTEs.
Polyglot persistence is a great feature, but consider using separate repositories, as Spring Data offers; as sketched below, I find that much more flexible from an architectural point of view.
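A sketch of what that looks like with Spring Data, where each aggregate simply gets the repository flavour of its store; the entity types are illustrative, and the graph repository interface name varies across Spring Data Neo4j versions:

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.neo4j.repository.GraphRepository;

// Illustrative aggregates: Customer lives in SQL Server, Friendship in Neo4j.
class Customer { Long id; }
class Friendship { Long id; }

// Each repository targets its own store, yet both follow the same Spring
// Data programming model, keeping the application layer uniform.
interface CustomerRepository extends JpaRepository<Customer, Long> { }
interface FriendshipRepository extends GraphRepository<Friendship> { }
```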
I have to make a web application multi-tenant enabled using Shared database separate schema approach. The application is built using Java/J2EE and Oracle 10g.
I need to have a single app server using a shared database with multiple schemas, one schema per client.
What is the best implementation approach to achieve this?
What needs to be done at the middle tier (app-server) level?
Do I need multiple host headers, one per client?
How can I connect to the correct schema dynamically based on the client who is accessing the application?
At a high level, here are some things to consider:
You probably want to hide the tenancy considerations from day-to-day development. Thus, you will probably want to tuck it away in your infrastructure as much as possible and keep it separate from your business logic. You don't want to be constantly checking which tenant's context you are in... you just want to be in that context.
If you are using a unit of work pattern, you will want to make sure that any unit of work (except one that is operating in a purely infrastructure context, not in a business context) executes in the context of exactly one tenant. If you are not using the unit of work pattern... maybe you should be. Not sure how else you are going to follow the advice in the point above (though maybe you will be able to figure out a way).
You probably want to put a tenant ID into the header of every message or HTTP request. It is probably better to keep this out of the body, on the principle of keeping it away from business logic. You can scrape it off behind the scenes and make sure it gets put on any outgoing messages/requests.
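As an illustration, a servlet filter could scrape an assumed X-Tenant-ID header and stash it in a ThreadLocal holder; the TenantContext below is hypothetical plumbing:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Hypothetical ThreadLocal holder for the tenant id scraped off the request.
final class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
    static void set(String tenantId) { CURRENT.set(tenantId); }
    static String get() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}

public class TenantFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // "X-Tenant-ID" is an assumed header name agreed with the clients.
        TenantContext.set(((HttpServletRequest) req).getHeader("X-Tenant-ID"));
        try {
            chain.doFilter(req, res);
        } finally {
            TenantContext.clear(); // never leak a tenant across pooled threads
        }
    }

    @Override public void init(FilterConfig cfg) { }
    @Override public void destroy() { }
}
```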
I am not familiar with Oracle, but in SQL Server, and I believe in Postgres, you can use impersonation as a way of switching tenants. That is to say, rather than parameterizing the schema into every SQL command and query, you can have one SQL user (without an associated login) whose default schema is the associated tenant's schema, and then leave the schema out of your day-to-day SQL. You will have to intercept calls to the database and wrap them in an impersonation call. Like I say, I'm not exactly sure how this works out in Oracle, but that's the general idea for SQL Server.
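A sketch of that impersonation trick over plain JDBC for SQL Server; 'tenant_a_user' is a hypothetical login-less database user whose default schema is the tenant's schema, and in Oracle the closest analogue I know of is ALTER SESSION SET CURRENT_SCHEMA:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TenantImpersonation {

    // Wraps a unit of work in SQL Server impersonation, so the SQL executed
    // inside needs no schema prefixes.
    static void runAsTenant(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.execute("EXECUTE AS USER = 'tenant_a_user'");
            try {
                // ... execute the day-to-day, schema-less SQL on `con` here ...
            } finally {
                st.execute("REVERT"); // always drop back to the service user
            }
        }
    }
}
```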
Authentication and security are a big concern here. That is far beyond the scope of what I can discuss in this answer but make sure you get that right.