Recommended way for multitenant user management - Java

I want to manage users for N clients across 3 different products with different levels of access. I am not interested in building an application from scratch, as that would divert me from my core objective. It would also help if a basic user interface were provided, so that I could hand its hosted endpoint to clients for user management.
I ended up trying Apache Syncope but did not succeed. Here's what I wanted to do there:
Client 1 is set up as a Domain
Users uc1 and uc2 are created in the Root (/) realm
Products p1 and p2 are created as child realms /p1 and /p2
Groups ug1p1 and ug2p1 are created under the /p1 realm
Now I want to add both uc1 and uc2 to ug1p1, and only uc1 to ug2p1. This way the same user can be shared across different realms (i.e. products) and assigned a different level of access in each of them separately. This is the part I could not get to work.
If this worked, I was planning to grant user access on a per-group basis. Let me know if my approach is not the recommended way.
If it is, can Apache Syncope satisfy this requirement? If not, please suggest another tool; something that is easy to integrate with a Java web application is preferable.

You want multitenant RBAC. Apache Shiro is a good library for this, and it also works well with Spring. You will need to implement your own Realm if the existing Realms do not meet your requirements, but that is only one class. You will also need to figure out how to integrate it with your web application: the INI-based approach is pretty easy to use, though I preferred the Spring application context based approach, and that works too.
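For illustration, here is a minimal sketch of such a custom Realm, assuming your users, roles and per-product groups live behind some DAO of your own (the nested TenantUserDao interface and the role naming scheme are placeholders, not part of Shiro):

    import java.util.List;
    import org.apache.shiro.authc.AuthenticationException;
    import org.apache.shiro.authc.AuthenticationInfo;
    import org.apache.shiro.authc.AuthenticationToken;
    import org.apache.shiro.authc.SimpleAuthenticationInfo;
    import org.apache.shiro.authc.UsernamePasswordToken;
    import org.apache.shiro.authz.AuthorizationInfo;
    import org.apache.shiro.authz.SimpleAuthorizationInfo;
    import org.apache.shiro.realm.AuthorizingRealm;
    import org.apache.shiro.subject.PrincipalCollection;

    public class MultiTenantRealm extends AuthorizingRealm {

        // Placeholder for your own user store (DB, Syncope, LDAP, ...)
        public interface TenantUserDao {
            String findPasswordHash(String username);
            List<String> findRoles(String username);          // e.g. "p1:ug1p1", "p2:ug2p1"
        }

        private final TenantUserDao dao;

        public MultiTenantRealm(TenantUserDao dao) {
            this.dao = dao;
        }

        @Override
        protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token)
                throws AuthenticationException {
            UsernamePasswordToken upToken = (UsernamePasswordToken) token;
            // Return the stored credentials; Shiro's credentials matcher does the comparison.
            String stored = dao.findPasswordHash(upToken.getUsername());
            return new SimpleAuthenticationInfo(upToken.getUsername(), stored, getName());
        }

        @Override
        protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principals) {
            String username = (String) principals.getPrimaryPrincipal();
            SimpleAuthorizationInfo info = new SimpleAuthorizationInfo();
            // Roles are namespaced by product, so one user can carry different access per product.
            for (String role : dao.findRoles(username)) {
                info.addRole(role);
            }
            info.addStringPermission("p1:account:read");       // illustrative permission string
            return info;
        }
    }

Wired into either shiro.ini or a Spring context, the product-prefixed roles and permission strings let the same user carry different rights in each product.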


What is a good practice to deploy webservices? [closed]

Is it good practice to deploy web services separately, or should they be part of the web application? For instance, I am developing a Spring REST-based web service. The function of this service is, let's say, to get user data.
Each web application that queries this web service has its user data in a different schema. So now the web service needs to know who is calling it - is it Application A or Application B? If it's AppA, it should get data from Schema A; if it's AppB, from another schema. Note that AppA and AppB are just the same code packed into two different WARs, and the schema they are supposed to query is supplied from a properties file.
In a situation like this, does it make sense to pack the web service with the webapp code and deploy it under different contexts, so that it becomes a duplicate service running in a different context? Or should it be deployed separately, with AppA and AppB somehow identifying themselves to this web service?
I prefer the approach below, which is in use for 50K concurrent users.
Make sure that each web service independently encapsulates both the UI and the schema needed to execute the required business use case. Each web service has all three layers - Model, View and Controller - for that business service. That means your App-A is one web service and App-B is another web service.
All web services register and unregister with a Master web service. The Master web service is responsible for redirecting user requests to the appropriate web service, e.g. App-A or App-B.
You should have a cluster of the Master web service and clusters of the individual web services - App-A and App-B.
In this approach, each schema can reside on a different database instead of a single database.
Advantages of this approach:
Each web service can scale horizontally. Just add additional VM nodes if you want to increase the scale.
If you have different schemas on different databases in different locations, you avoid network performance bottlenecks in OLTP (online transaction processing) queries.
Disadvantages:
I see only one disadvantage: the Master web service acts like a facade and has to know about the internals of the individual web services. But given the advantages it offers, that is an acceptable trade-off.
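For what it's worth, a minimal sketch of the register/unregister/redirect idea (class and method names are my own illustration, not an existing framework):

    import java.net.URI;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // The "Master web service" keeps a registry of the individual services (App-A, App-B)
    // and resolves which one should serve an incoming user request.
    public class MasterServiceRegistry {

        private final Map<String, URI> services = new ConcurrentHashMap<>();

        public void register(String appId, URI baseUri) {
            services.put(appId, baseUri);
        }

        public void unregister(String appId) {
            services.remove(appId);
        }

        // Resolve the target URI; the caller would then redirect or proxy the request to it.
        public URI resolve(String appId, String path) {
            URI base = services.get(appId);
            if (base == null) {
                throw new IllegalArgumentException("No service registered for " + appId);
            }
            return base.resolve(path);
        }
    }

Usage would be something like registry.register("App-A", URI.create("http://app-a.internal:8080/")) at startup, and registry.resolve("App-A", "users/42") when routing a request.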
I don't know the business requirement behind maintaining different schemas for user data and going through a web service.
But instead of maintaining multiple WARs with the same code, I would suggest configuring multiple datasources within the application and switching between them as required.
This link may help you configure multiple datasources.
If you follow the aforementioned logic, you may end up with a single deployable context.
If you still want to stick with multiple WARs as web services, I would suggest having a look at Spring Boot: simple, deployable without a container, and scalable.
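To make the multiple-datasource suggestion concrete, here is a minimal Spring Java-config sketch with one bean per schema; the URLs and credentials are placeholders:

    import javax.sql.DataSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jdbc.datasource.DriverManagerDataSource;

    @Configuration
    public class DataSourceConfig {

        @Bean(name = "schemaA")
        public DataSource schemaA() {
            DriverManagerDataSource ds = new DriverManagerDataSource();
            ds.setDriverClassName("oracle.jdbc.OracleDriver");
            ds.setUrl("jdbc:oracle:thin:@//db-host:1521/SERVICE");   // placeholder
            ds.setUsername("schema_a");
            ds.setPassword("secret");
            return ds;
        }

        @Bean(name = "schemaB")
        public DataSource schemaB() {
            DriverManagerDataSource ds = new DriverManagerDataSource();
            ds.setDriverClassName("oracle.jdbc.OracleDriver");
            ds.setUrl("jdbc:oracle:thin:@//db-host:1521/SERVICE");   // placeholder
            ds.setUsername("schema_b");
            ds.setPassword("secret");
            return ds;
        }
    }

The calling code can then pick the right bean per request, for example by @Qualifier, or via the routing approach shown in another answer below. (DriverManagerDataSource is fine for a sketch; use a pooled DataSource in production.)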
It is a matter of opinion; both choices are okay. You should take into account the usage of the service, scaling concerns, etc.
You could look at Microservices as an idea, but it has to make sense from your standpoint.
About the two different apps: if the differences are only in configuration, try externalizing it (see the "Externalized Configuration" reference documentation). This way you can have a single artifact deployed twice.
Given that scenario, it is good practice to have only one web service; this way you improve the maintainability of the system because you don't have the same code twice. And if in the future you have another similar app, you don't have to implement a new service.
Approach 1 (preferred):
You should have a single web application which contains the entire code for the application UI and the repository/data interaction.
Based on the type of request, dynamically switch the data source as needed. You can have a look at Spring's dynamic datasource routing here.
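As a minimal sketch of that dynamic routing (the TENANT holder and the key names are my own illustration; a servlet filter or interceptor would populate it from the request):

    import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

    public class SchemaRoutingDataSource extends AbstractRoutingDataSource {

        // Set per request, e.g. from a header or the authenticated principal.
        public static final ThreadLocal<String> TENANT = new ThreadLocal<>();

        @Override
        protected Object determineCurrentLookupKey() {
            // Key into the targetDataSources map, e.g. "APP_A" or "APP_B".
            return TENANT.get();
        }
    }

You then call setTargetDataSources(...) on this bean with a map from those keys to the actual schema datasources, and Spring picks the right connection on every getConnection().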
Approach 2:
In case your UI has completely different types of interactions managed by different teams, it makes sense to have separate UI components, with the backend web services maintained in a single location.
Again, based on the type of request, you can dynamically route to the appropriate datasource.
Hope this helps :)
My inputs:
1) Any specific reason to build 2 different WARs from the same code? Is it only because each of them has a different data source?
Why can't you have a single application deployment with some parameterized mechanism in each request to identify which schema to get data from?
2) Why do you need a web service in the first place? Why not have the application hook directly into the database if needed?
3) Is the underlying database a transactional DB or historical data? How about merging both schemas into one as a one-time effort, or using some sort of virtualized views which pick data from the 2 schemas based on input parameters?
***** edited after Jay's inputs:
My suggestion would be to have the web service deployed separately from the 2 web apps, because it gives you a single place to manage the code in the long run. I have the following additional suggestions:
Define your own headers in the SOAP XML schema which carry both the appContext (the application making the call) and the userContext (the user). Give this aspect good thought, keeping a long-term view.
Keep the SOAP request-response stateless, which will give you scalability. Don't maintain any state for a SOAP request on the server side.
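For the header idea, a minimal SAAJ sketch of attaching appContext/userContext headers on the client side (the namespace and element names are illustrative, not a defined standard):

    import javax.xml.namespace.QName;
    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.SOAPHeader;
    import javax.xml.soap.SOAPHeaderElement;
    import javax.xml.soap.SOAPMessage;

    public class ContextHeaders {

        public static SOAPMessage withContext(String appId, String userId) throws Exception {
            SOAPMessage message = MessageFactory.newInstance().createMessage();
            SOAPHeader header = message.getSOAPHeader();

            SOAPHeaderElement app = header.addHeaderElement(
                    new QName("http://example.com/context", "appContext", "ctx")); // placeholder namespace
            app.addTextNode(appId);

            SOAPHeaderElement user = header.addHeaderElement(
                    new QName("http://example.com/context", "userContext", "ctx"));
            user.addTextNode(userId);

            return message; // the body is filled in by the actual service call
        }
    }

The server reads these headers on every call, which keeps the exchange stateless as suggested above.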
I have in the past used a data virtualization solution (Cisco Composite). The benefit it provides: if there are two (or more) data sources containing similar types of data (entities), it can join, cleanse and merge them virtually and expose the result as a REST/SOAP-based web service. Try evaluating this option as well.
It further helps if, in the future, other consumers need to access your information using plain SQL/JDBC calls - they will be able to do so. Data virtualization solutions also support many other consumer interfaces, like Hadoop, OData, etc. Again, it depends on the budget and other constraints of the project; I am not sure whether an effective open source data virtualization solution is available.
Personally, in my experience, it's a lot better to have them separated; it usually depends on how big and how critical your main project is.
But even if at the beginning your project isn't that big and there's only 1 person working on it, later on, as it continues to grow, having microservices for the things your main project does will make it a lot easier to maintain. Rather than many people working on the same code and handling many versions of a single project, handling many small projects is less confusing and errors are easier to find.
Plus, if something fails, you can have 1 microservice down while your main application still runs without interruption - only that one service is unavailable, instead of having everything down while you fix it.
High availability is very important in production, and having them separated helps with this.
Given your situation I'd advise going with ONE webapp (one "project"), with some caveats, and then consider one of these two solutions:
1) Given you are using Spring, I'll assume (hope) you are using Maven as well.
Create a different build goal (or profile) and make it so that, based on the goal invoked to produce the WAR, a different properties file is bundled.
This way you have ONE webapp, and based on the build (or rather, on the properties file tied to that specific build) you obtain a WAR tied to a specific environment and schema. You deploy an individual WAR for each web service with a clean separation, though the root code is the very same and it's only one application. [CLEANER SOLUTION]
2) Make it so that you receive not only the JSON request but also the HTTPS client certificate of the sender (thus identifying a specific "webapp" based on the certificate presented), and based on the certificate AND the source of the request, you establish whether the source is "qualified" to receive data from schema X rather than schema Y. You deploy ONE WAR only, which will, at its own discretion, apply logic to reroute your "user data fetch query" to one database or the other. [I DISCOURAGE THIS PRACTICE]
Of course there are other approaches as well, but I think these two are the most feasible.
It really depends on what you want to achieve.
If you want to encapsulate the database/schema/table, then it should really be one service per application. The main advantage of doing this is that you can swap the database later on if there is some problem with the current one; it also simplifies caching and invalidation, etc.
If the database/schema/table is not encapsulated anyway, then a single service is much easier and better. Each web application just has to identify itself, and each of them will get exactly what it needs. This could be achieved by putting the query/schema information in a properties file, or by creating DB views with the same name as the client, etc.
If we go for this approach, a question pops up: why bother having this layer at all? Couldn't each web application just query the DB directly? If the answer is yes, then just remove the whole layer completely.
You are trying to implement a Data Provider, or DAO as a service.
To make it -
Simple
Scalable
Maintenance-friendly
Adaptable
You can simply have a single web service, deployed outside the webapp(s) and driven by configuration. The configuration itself can be stored in a properties file or in a DB. The identifier for the client should be passed in the web service request.
This is actually a pretty standard approach, implemented to enable optimizations at the data tier outside of the DB, such as caching (again driven by configuration), expiry, pooling, etc.
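As a small sketch of that configuration-driven lookup (file name and keys are assumptions): a properties file maps the client identifier carried in each request to the schema to query.

    import java.io.InputStream;
    import java.util.Properties;

    public class ClientSchemaConfig {

        private final Properties mapping = new Properties();

        public ClientSchemaConfig() throws Exception {
            // e.g. clients.properties contains:  AppA=SCHEMA_A  and  AppB=SCHEMA_B
            try (InputStream in = getClass().getResourceAsStream("/clients.properties")) {
                if (in == null) {
                    throw new IllegalStateException("clients.properties not found");
                }
                mapping.load(in);
            }
        }

        public String schemaFor(String clientId) {
            String schema = mapping.getProperty(clientId);
            if (schema == null) {
                throw new IllegalArgumentException("Unknown client: " + clientId);
            }
            return schema;
        }
    }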
The other option, including this as a shared JAR within the webapp, does have the advantage of code reuse (which you get with an externally deployed service as well), but the following disadvantages outweigh it:
Coupling
Optimizations are harder to employ
Release management (this also depends upon how your code is organized)
Versioning.
Hope it helps.
I would deploy to one instance. No matter what. Of course, there are circumstances where it may be necessary to deploy separately. From a best "coding" practice standpoint, one instance should be used to allow for "write once, use many".
Then...
Define different XSDs for AppA, AppB, etc., and marshal accordingly.
Or use Groovy to marshal the appropriate objects as JSON or XML.
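For instance, a minimal JAXB sketch of "different XSDs per app, marshalled accordingly" (class and field names are placeholders):

    import java.io.StringWriter;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.bind.annotation.XmlRootElement;

    public class MarshalDemo {

        // Shape defined by AppA's XSD; AppB would have its own annotated class.
        @XmlRootElement(name = "userA")
        public static class AppAUser {
            public String name;
            public String schemaAOnlyField;
        }

        public static String toXml(AppAUser user) throws Exception {
            Marshaller m = JAXBContext.newInstance(AppAUser.class).createMarshaller();
            m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
            StringWriter out = new StringWriter();
            m.marshal(user, out);
            return out.toString();
        }
    }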

Liferay integration options

We have an existing large Java web application that is clustered across many servers. We currently store our Word documents in Oracle BLOBs and would like to move to a CMS solution like Liferay. Ideally we would like to present our users with a view of their directories/files within one of the pages of our existing application, and implement some workflow on top of Liferay within our application.
I've been reading the Liferay documentation to get a good feel for how best to integrate with an existing Liferay/CMS server, and from what I can tell the only way is via portlets and/or iframes. So the integration happens in the GUI of the application.
We were hoping to integrate with Liferay from our server by making SOAP/REST/JSON calls and then taking the results and displaying them within our application.
Could someone tell me whether this is feasible and, if it is, where I could find further information about it?
Yes, you can integrate just on the "view side", but a good choice is to use Liferay's Service Builder.
It is a well-documented Liferay framework available for any custom portlet you want to write, allowing you to:
- automatically create a ready-to-use persistence layer (DB DDL, ORM, cache configuration, transactions, etc.)
- expose functionality locally (in the same VM) or remotely (via SOAP/REST/JS API/Mobile API)
You can of course combine both, but you are free to use just one of them.
If it were my choice, I would create a Liferay service wrapping the call to your external datasource.
In this way it will be able to participate in a distributed transaction (simply by configuring a distributed transaction manager), to control access to the resource using the Liferay permissions framework, and to work with any kind of Liferay taglib (such as SearchContainer, which is very useful for showing a list of items) - all with hardly any extra configuration.
Several ways of achieving what I said are available; with a simple Google search I immediately found this guide.
Hope it helps.
Liferay allows you to write your own custom document store. You will need to implement a few interfaces and configure Liferay to use it. That should do it. You can look at com.liferay.portlet.documentlibrary.store.BaseStore and com.liferay.portlet.documentlibrary.store.DBStore to understand how it can be done.
Thanks

User/Role/Module based architecture

I need to build an application that will be based on user roles and preferences, similar to Facebook or Google widgets, where a user can add/remove apps. In addition, there will be preconfigured apps loaded automatically.
Is there a generic tool/framework that would facilitate this?
Haven't seen a generic tool for that, but I have seen this case implemented ("reinvented and reprogrammed") several times. I have also seen that some of the newer libraries and frameworks have their own access-rights support built in (for example, ASP.NET).
You didn't mention or tag whether you have already chosen a programming framework for your application. Maybe you already have a framework in mind, and maybe that framework has libraries to control how and which modules a user can access.
Usually, a set of libraries for this is separated into two sections.
One section is a data access layer that stores the users, roles and access rights for each role or user. Usually it is a set of tables in the application's database, but it can also be configuration files, such as XML.
The other section of code has to do with the logic or user interface layer, and that is very specific to the programming language and framework you are using; that's why I think there is no generic tool.
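If it helps, a minimal JPA sketch of that data-access section (entity and field names are just an illustration, not a prescribed schema):

    import java.util.HashSet;
    import java.util.Set;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.ManyToMany;

    @Entity
    public class AppUser {
        @Id
        private String username;

        @ManyToMany
        private Set<AppRole> roles = new HashSet<>();
    }

    @Entity
    class AppRole {
        @Id
        private String name;                                 // e.g. "ADMIN", "VIEWER"

        @ManyToMany
        private Set<AppModule> modules = new HashSet<>();    // widgets/apps this role may access
    }

    @Entity
    class AppModule {
        @Id
        private String moduleId;                             // e.g. "calendar-widget"
    }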

Simple Java framework/tool for web-based DB table access?

I'm looking for a web-based Java tool (preferably one that will run in both WebLogic and JBoss) that will allow controlled access to a particular database. I need to allow non-technical users to insert, update, and delete rows in a particular Oracle DB table. The rows will contain varying data types (some dates, some numbers). The ability to add dropdowns with specific values would be nice.
Also nice, but not necessary (since we can always use a reverse proxy) would be the ability to control read/write access using LDAP/AD groups.
Another developer on my team suggested Spring Roo, but that may be too heavyweight for what we're looking to do. There's got to be something simpler out there... Oracle APEX is another option, if we get desperate.
Grails is a great cheap way to build a CRUD app like you're describing, and it integrates cleanly with Java applications. You can probably build your first prototype app in an hour or two to get a feel for it. Here's a decent starter tutorial: https://www.ibm.com/developerworks/java/library/j-grails01158/
Spring Roo is absolutely not overkill for this task, in my opinion. It actually supports database reverse engineering, so you can explicitly specify which tables you want to have a CRUD view for.
You will need a really simple script, something like this:
project --topLevelPackage org.whatever --projectName crud --java 6
persistence setup --provider HIBERNATE --database ORACLE
--> you will need to acquire ojdbc*.jar because it's not available from Maven
--> also you will need to adjust database.properties to suit your needs
database reverse engineer --schema my --includeTables "Table1 .." --package ~.domain
controller all --package ~.web
logging setup --level DEBUG --> OPTIONAL
security setup --> OPTIONAL
exit
That's it, you can run your application.
Just write a simple web application with a few JSP files if that is all that you need to do. You can package them into a WAR file and deploy them easily to either JBoss or Weblogic.
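To illustrate that "plain servlet + JSP" route, here is a minimal sketch of a servlet that inserts a row posted from a JSP form (the table, columns, JDBC URL and credentials are placeholders; a real app would use a container-managed DataSource):

    import java.io.IOException;
    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class RowInsertServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String name = req.getParameter("name");
            String amount = req.getParameter("amount");
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//db-host:1521/SERVICE", "user", "pass"); // placeholder
                 PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO MY_TABLE (NAME, AMOUNT) VALUES (?, ?)")) {
                ps.setString(1, name);
                ps.setBigDecimal(2, new BigDecimal(amount));
                ps.executeUpdate();
            } catch (Exception e) {
                throw new ServletException(e);
            }
            resp.sendRedirect("list.jsp"); // a JSP that renders the current table contents
        }
    }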
What you want is a Java-based web framework that gives you automatic Create/Retrieve/Update/Delete (CRUD) screens. There are a huge number of frameworks available, each with different strengths and weaknesses. You don't give enough information to make a reasonable suggestion of which would be best, so I would recommend that you play around with different frameworks until you find the one best suited to your needs.
Spring Roo is one way to try out different frameworks, but I find that it has a lot of typing overhead to build the model you want. If you recorded a script you could perhaps replay it with different frameworks selected for generation, but that may be too complicated.
I would recommend you check out AppFuse, which is a meta-framework that allows you to play with different frameworks easily. See AppFuse QuickStart for information on getting started.
As for controlling access to the tables using LDAP, there are many possibilities available. Java provides direct control as shown here. Another option that many use is Spring Security.
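For the LDAP/AD part, a minimal JNDI sketch of that "direct control" idea: bind as the user to verify the password, then check group membership (host, DNs and the group name are assumptions about your directory):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class LdapAccessCheck {

        public static boolean isWriter(String userDn, String password) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ad-host:389");       // placeholder
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, userDn);               // bind as the user
            env.put(Context.SECURITY_CREDENTIALS, password);

            DirContext ctx = new InitialDirContext(env);               // throws if the bind fails
            try {
                Attributes attrs = ctx.getAttributes(userDn, new String[] {"memberOf"});
                // Rough check: grant write access only to members of the (placeholder) writers group.
                return attrs.get("memberOf") != null
                        && attrs.get("memberOf").toString().contains("CN=TableWriters");
            } finally {
                ctx.close();
            }
        }
    }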

Architecture - Multiple web apps operating on the same data

I'm asking for a suitable architecture for the following Java web application:
The goal is to build several web applications which all operate on the same data. Suppose a banking system in which account data can be accessed by different web applications; it can be accessed by customers (online banking), by service personnel (mostly read-only) and by the account administration department (admin tool). These applications run as separate web applications on different machines, but they use the same data and a set of common data manipulation and search queries.
A possible approach is to build a core application which covers the common needs of the clients, namely data storage, manipulation and search facilities. The clients then call this core application to fulfil their requests. The requirement is that the applications are built on top of a Wicket/Spring/Hibernate stack as WARs.
To get a picture, here are some of the possible approaches we thought of:
A The monolithic approach. Build one huge web application that fits all needs (this is not really an option)
B The API approach. Build a core database access API (JAR) for data access/manipulation. Each web application is built as a separate WAR which uses the API to access a database. There is no separate core application.
C RMI approach. The core application runs as a standalone application (possibly a WAR) and offers services via RMI (or HttpInvoker).
D WS approach. Just like C but replace RMI with Web Services
E OSGi approach. Build all the components as OSGi modules which run in an OSGi container. Possibly use SpringSource dm Server or ModuleFusion. This approach was not an option for us for various reasons ...
I hope I have made the problem clear. We are currently going with option B, but I'm not very confident about it. What are your opinions? Any other solutions? What are the drawbacks of each solution?
I think you have to go in the opposite direction - from the bottom up. Of course, you have to go back and forth to verify that everything fits together, but here is the general direction:
Think about your data - the DB schema, how important transactions are (for example, in banking systems everything is about transactions), etc.
Then define the common access method - anything from a set of stored procedures to a distributed transaction engine...
The next step is the business logic/presentation - what can be generalized and what is subject to customization.
And the final stage is the interfaces, visualisation and reports.
B, C, and D are all just different ways to accomplish the same thing.
My first thought would be to simply have all consumer code connecting to a common database. This is certainly doable, and would eliminate the code you don't want to place in the middle. The drawback, of course, is that if the schema changes, all consumers need to be updated.
Another solution you may want to consider is giving each consumer its own database, using some sort of replication to keep them in sync.
It looks like A and E are out of the picture as you have stated in your question for various reasons. Option A would be one huge application which would make maintenance difficult in the future.
B, C and D are essentially the same architecturally, since they involve remote access to common libraries from the various web applications; the only difference is the transport mechanism. I would recommend implementing this with EJB 3 or Spring if possible, instead of with your own RMI libraries, since either of these provides a good framework over RMI / web services.
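To illustrate the EJB 3 route, a minimal sketch of a remote business interface plus its stateless bean (the names and the method are placeholders for your actual account services):

    import java.util.Collections;
    import java.util.List;
    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    @Remote
    public interface AccountService {
        List<String> findAccountNumbers(String customerId);
    }

    @Stateless
    class AccountServiceBean implements AccountService {
        @Override
        public List<String> findAccountNumbers(String customerId) {
            // Delegate to the shared DAO layer here.
            return Collections.emptyList();
        }
    }

Each web application (online banking, service tool, admin tool) then injects or looks up AccountService instead of talking to the database directly.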
So I think this problem basically boils down to the following two options:
1) Include the business and DAO layer classes as a common jar included in the deployment of all web applications.
Advantages:
Deployment is easier.
Applications will perform better initially since there is no remote access to other servers.
Disadvantages:
You cannot add more hardware to the middle tier specifically (service and DAO layers) since it is included in each web application.
Other business teams in the organisation will not have access to your business services since there is no remote interface.
2) Deploy the business service and DAO layer classes in a separate application server and expose business methods remotely.
Advantages:
You can scale up the business service and DAO layer as needed depending on load from the various web applications calling it.
Other applications in the organisation can make use of your interfaces if needed.
More scalable
You get all the advantages of Java EE.
Disadvantages:
More complex deployment.
Another server to maintain and monitor.
Could be slower, since calls are made over the network, although this shouldn't be too much of a problem.
In both cases, if the interfaces change, the client code will need to change, so this isn't a factor in the decision. Transactions should be handled at the business service method level, so this shouldn't be a factor either.
I think it depends on the size of the applications as well and how scalable the solution needs to be to warrant the extra complexity of option 2 above.
I think you need to have a separate application that all the client applications will use as their data layer. The reason for this is that you want to ensure they're all accessing the database in the same way. There are also some race conditions you can get into that database transactions may not be able to prevent. The other reason is that using the database as a form of RPC is a known antipattern. If all your apps access the database directly, you will almost inevitably end up with some "event" table that the various applications poll periodically... don't do that.
Apart from the provided responses, if you are considering having multiple applications working with the database at the same time, consider a distributed cache as part of your solution, as well. The beauty of the distributed cache is that it can be accessed by multiple applications at the same time, apart from being distributed. I am not sure if this holds true for all of the Java variations, such as Ehcache, etc, as I do not come from a Java background.
What we are currently doing is abstracting the data a level further than before. We now have a DAL that can be accessed directly, but we have put a "Model Factory" in front of the DAL. The purpose of the Model Factory is to broker both the cache and the data layer, acting as a pass-through. So, the caller always calls the Model Factory and not the DAL or caching code directly. This abstraction layer basically retrieves data from the DAL on a cache miss without adding complexity to the API.
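A minimal cache-aside sketch of that Model Factory idea (the UserDal interface is a placeholder for the real data layer, and the in-memory map stands in for a distributed cache such as Ehcache):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ModelFactory {

        // Placeholder for the real data access layer.
        public interface UserDal {
            String loadUser(String userId);
        }

        private final Map<String, String> cache = new ConcurrentHashMap<>();
        private final UserDal dal;

        public ModelFactory(UserDal dal) {
            this.dal = dal;
        }

        // Callers always go through the factory: a cache hit returns immediately,
        // a miss loads from the DAL and populates the cache.
        public String getUser(String userId) {
            return cache.computeIfAbsent(userId, dal::loadUser);
        }
    }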
