We have a 13-year-old monolithic Java application using:
Struts 2 for handling UI calls
JDBC/Spring JDBC Template for DB calls
Spring DI
Tiles/JSP/jQuery for the UI
Two deployables are created out of this single source code:
a WAR for the online application
a JAR for running back-end jobs
The current UI is pretty old. Our goal is to redesign the application using microservices. We have identified modules which can run as separate microservices.
We have the following questions in mind:
Which UI framework should we go for (Angular/React or a home-grown one)? Angular seems to be very slow, and we need better performance as far as page loading is concerned.
Should the UI/JavaScript make calls to back-end web services directly, or should there be a Spring controller proxy in the deployed WAR that forwards UI calls to the APIs? This would also help when a single UI call requires getting/updating data from different microservices.
How should we cover the microservice security aspect?
Which load balancer should we go for if we want to have multiple instances of the same microservice?
Since it's a banking application, our organization does not allow using Elasticsearch/Lucene for searching, so we need suggestions for reporting using Oracle alone.
How should we run backend jobs?
There will also be a main payment microservice that creates payments. Since payment volume is huge, it will require multiple instances. How will we manage the user's logged-in session? Should we go for an in-memory distributed session store (maybe Memcached)?
This is a very broad question. You need to get a consultant architect to understand your application in depth, because it is unlikely you will get meaningful in-depth answers here.
However, as a rough guideline, here are some brief answers:
Which UI framework should we go for (Angular/React or a home-grown one)? Angular seems to be very slow, and we need better performance as far as page loading is concerned.
That depends on what the application actually needs to do. Angular is one of the leading frameworks, and is usually not slow at all. You might be doing something wrong (are you making too many granular calls? is your backend slow?). React is also a strong contender, though it seems to be losing popularity; that is just a subjective opinion and could be wrong. Angular is a more feature-complete framework, while React is more a combination of tools. You would be crazy to think you can do a home-grown one and bring it to the same maturity as these ready-made tools.
Should the UI/JavaScript make calls to back-end web services directly, or should there be a Spring controller proxy in the deployed WAR that forwards UI calls to the APIs? This would also help when a single UI call requires getting/updating data from different microservices.
Larger microservice architectures often involve an API gateway. Then again, it depends on your use case. You might also have an issue with CORS, so centralising calls through a proxy / API gateway, even if it is a simple reverse proxy (you don't need to develop it yourself), might be a good idea.
How should we cover the microservice security aspect?
Again, no idea what your setup looks like. JWT is a common approach. I presume the authentication process itself uses some centralised LDAP / Exchange or similar process. Once you authenticate, you can sign a token which you give to the client, and which is then passed to the respective microservices in the HTTP Authorization header.
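For illustration, here is a minimal sketch of that token flow using the jjwt library (my own choice here, not something from the question, assuming jjwt 0.11). The class and names are made up, and in a real setup the signing key would be shared securely across services rather than generated in place:

```java
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;

import java.security.Key;
import java.util.Date;

public class TokenService {

    // Simplified: in reality the key is shared by all services, not per-instance.
    private final Key key = Keys.secretKeyFor(SignatureAlgorithm.HS256);

    // Sign a token after the central LDAP authentication succeeds.
    public String issue(String username) {
        return Jwts.builder()
                .setSubject(username)
                .setExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000))
                .signWith(key)
                .compact();
    }

    // Each microservice validates the token from the Authorization header.
    public String validate(String token) {
        return Jwts.parserBuilder()
                .setSigningKey(key)
                .build()
                .parseClaimsJws(token)
                .getBody()
                .getSubject();
    }
}
```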
Which load balancer should we go for if we want to have multiple instances of the same microservice?
Depends on what you want. Are you deploying on a cloud-based solution like AWS (in which case load balancing is provided by the infrastructure)? Are you going to deploy on a Kubernetes setup, where load balancing and scaling are handled as part of its deployment fabric? Do you want client-side load balancing (it comes as part of Spring Cloud)?
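As a rough sketch of the client-side option, assuming Spring Cloud with a service registry such as Eureka in place ("payment-service" is a hypothetical registered service name):

```java
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class LoadBalancingConfig {

    @Bean
    @LoadBalanced // resolves logical service names, balancing across instances
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// Usage: the "host" is the registered service name, not a physical address.
// restTemplate.getForObject("http://payment-service/api/status", String.class);
```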
Since it's a banking application, our organization does not allow using Elasticsearch/Lucene for searching, so we need suggestions for reporting using Oracle alone.
Without knowing what the data in Oracle looks like and what the reporting requirements are, all solutions are possible.
How should we run backend jobs?
Depends on the infrastructure you choose. Everything is possible, from simple cron jobs to cloud scheduling services to integrated Java scheduling mechanisms like Quartz.
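For instance, a minimal sketch of the integrated-scheduling route using Spring's @Scheduled (the job class and cron expression are purely illustrative):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@Configuration
@EnableScheduling
public class NightlySettlementJob {

    // Illustrative cron: run every day at 02:00.
    @Scheduled(cron = "0 0 2 * * *")
    public void run() {
        // back-end job logic goes here
    }
}
```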
There will also be a main payment microservice that creates payments. Since payment volume is huge, it will require multiple instances. How will we manage the user's logged-in session? Should we go for an in-memory distributed session store (maybe Memcached)?
Not really; that would defeat the whole purpose of microservices. JWTs are held by the client's browser and expire automatically. You don't need to manage a server-side logged-in session in such architectures.
As you have mentioned, it's a banking site, so security will be the first priority. Here are a few suggestions for the FE and BE.
FE: You could go with Preact; it's a React-like library but much lighter and faster compared to React. For the UI you can go with styled-components instead of some heavy third-party lib. This will also enhance performance, and obviously use CDNs for images and big files.
BE: As per your needs, a hybrid solution might be the way to go; Node could be a good option, e.g. for sessions.
Set up an auth server and have your services validate users against it; it will also be useful in the future for any kind of service, e.g. if you expose some client APIs.
Use case for auth: you can use Redis for session info. Get the user validated by the auth server and add the info to Redis; later, check from Redis whether the user is logged in. This reduces the load on the auth server. (I used the same strategy for a crypto exchange and it went pretty well.)
Load balancer: I don't have good familiarity with Java, but for Node.js PM2 will do that for you. It's not a big deal: just one command and it will start multiple instances and balance across them on its own.
In case you have enormous traffic, you might go with a messaging service like RabbitMQ; by queueing work it can reduce server costs by sparing you from scaling out your servers.
BE jobs: I have done this with Node for heavy tasks and it went quite well. There you can use forking or spawning: this starts a new instance for a particular job, which is killed after completing it, and you can easily generate logs along with that.
For further clarification I'm here :)
Related
Within our corporate intranet, we have a few end-point service platforms like BPM, a document management system, etc. These end-point services expose REST APIs. We develop web applications using AngularJS as the front end.
There are two options for how we can make calls from AngularJS to these end-point services.
Option 1: Given that these end-point services expose REST, call these REST APIs directly from AngularJS.
Option 2: Introduce a middle layer (on an application server like WebLogic or Tomcat). Build a Java application layer that calls into the end-point REST APIs, and host it on this middle layer. The AngularJS app calls into the REST provided by this middle layer; the middle layer in turn calls into the end-point REST.
I personally prefer Option 1; however, I invite your opinion on this matter. I have listed the pros and cons of Option 1 as I see them.
Pros of Option 1:
Better performance (throughput) given one less hop for HTTP requests.
Less development/deployment effort due to one less component.
Fewer points of failure. If there is an issue, we know it's either in AngularJS or the end service.
Cons of Option 1:
Security issues? Not sure of this - would like expert comments on this.
CORS: the end services will need to enable Access-Control-Allow-Origin for the appropriate domains (see the sketch after this list).
Poor logging? If something goes wrong, the logs will be available only on user machines (IE/Chrome developer tools) or on the end service.
Too much processing in the AngularJS layer? This processing is mainly parsing the result from the end service. It also depends on the kind of end service being used.
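As a rough illustration of the CORS point above, a servlet filter on an end service might look like this (the allowed origin is a placeholder):

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class CorsFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Restrict to the domain(s) the AngularJS app is served from.
        response.setHeader("Access-Control-Allow-Origin", "https://intranet.example.com");
        response.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
        response.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}
```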
Option 2, in my opinion, is the better option in the long run. There are a few reasons for that.
Security is first and foremost. If you have middleware in between, you get inherent security: you can expose only those REST APIs which your Angular web app needs. You can also include a security mechanism like OAuth, since you control the middleware.
Logging is another one. Any application nowadays needs some sort of auditing, and both security and logging are layers in front of your actual REST calls.
You would be able to add aspects to any key REST API, e.g. triggering a mail whenever that API is called. It's always handy to have that flexibility, even if we don't need it at the moment.
You can include response transformation and error handling efficiently. Once you get the response from the service, your middleware can transform it, remove unnecessary or sensitive fields, derive some values, etc. This can all be done in Angular too, but then the real response or error is exposed to the client.
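As a rough sketch of that transformation idea, assuming Spring MVC and RestTemplate in the middle layer (the end-point URL and field names are hypothetical):

```java
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class AccountProxyController {

    private final RestTemplate restTemplate = new RestTemplate();

    @GetMapping("/api/accounts/{id}")
    public Map<String, Object> account(@PathVariable String id) {
        // Call the end-point service (URL is a placeholder).
        @SuppressWarnings("unchecked")
        Map<String, Object> raw = restTemplate.getForObject(
                "http://endpoint-service/accounts/{id}", Map.class, id);
        // Strip fields the browser should never see before returning.
        raw.remove("internalRiskScore");
        return raw;
    }
}
```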
On the downside, you rightly mention performance, but IMO keeping your REST middleware in sync with the services' REST is the bigger bane: any new API added by the services needs to be included in the middleware, recompiled and redeployed. But it also depends on the likelihood and frequency of those changes; for any such change you might need to change the Angular web app anyhow to make use of it.
You mention "Within our corporate intranet". Depending on how the end-points are secured, option 1 could be challenging.
Angular runs in a web browser, so if those services are only accessible via VPN / the intranet, the web app will only work if your computer is connected to that intranet (i.e. it won't work if you run it from home).
Another security challenge with option 1 is that if the end-points require special authentication "secrets" (API tokens, passwords, certificates, etc.), those secrets will be exposed and visible to anyone who uses the web-app since anyone can see the traffic between their browser and the server. With option 2, those secrets can stay hidden behind your middle layer.
Lastly, even if Angular talks to those end-points directly, you will still need to have the HTML / JS / CSS hosted on some web-server. You may not need a full blown application server but you'll need something to point your web browser at.
If those concerns don't apply to your case, then it's really up to you to pick whichever option you and your team are the most comfortable with.
Thanks for such a nice article.
If you are concerned with security and your project requirements are focused on security, you must go with Option 2.
If security is not a big concern, Option 2 is still better.
We have the following setup.
STM (Stingray Traffic Manager) does load balancing + session stickiness
Weblogic 'cluster'
Auth handled by a third party tool
Therefore I do not have to worry about sessions with regard to horizontal scaling / running multiple instances of the application. The STM / WebLogic cluster makes sure that subsequent requests go to the same managed server.
What we currently have is a monolithic application and we are trying to move to microservices. Also, we do not want to move off our current infrastructure (i.e. STM / WebLogic cluster / auth tool). What we have planned is:
A Gateway WAR which routes requests to other microservices
N x Microservices (WAR) for each functional sub-domain
Only the API Gateway receives user requests and other microservices are not accessible from outside
So my question is:
Should the API Gateway be stateful while the other microservices are stateless?
If so, how should the user session data be shared between API Gateway and microservices?
Please suggest any better alternatives and resources/links as well. Thanks.
Let me share my opinion.
First of all, if you can keep your application stateless, by all means do so :)
It will be the best solution in terms of both performance and scalability.
Now, if it's impossible, then you should maintain some distributed session management layer.
The gateway responsible for authentication could generate some unique session identifier which could later be used as a key.
This key could be propagated to all the microservices and be a part of the API or something.
In order to access the session, the microservice could 'get' value by key and work with it.
In terms of implementation, I would take a look at NoSQL solutions. Some that can suit your need are:
Redis. Take a look at "hset" there (see the sketch below)
Hazelcast. It's more of an in-memory grid, but if the solution is Java-only, you can also implement the required functionality there
Memcached. It will give you a good old map, just distributed :)
There are also other solutions I believe.
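As a rough sketch of the Redis option above, using the Jedis client and the hset/hgetAll commands (host, port and key names are illustrative, and in practice you would use a connection pool rather than a single Jedis instance):

```java
import java.util.Map;
import redis.clients.jedis.Jedis;

public class SessionStore {

    private final Jedis jedis = new Jedis("redis-host", 6379);

    // The gateway writes the session under the generated identifier.
    public void save(String sessionId, Map<String, String> attributes) {
        final String key = "session:" + sessionId;
        attributes.forEach((field, value) -> jedis.hset(key, field, value));
        jedis.expire(key, 30 * 60); // evict idle sessions after 30 minutes
    }

    // Any microservice reads it back using the propagated identifier.
    public Map<String, String> load(String sessionId) {
        return jedis.hgetAll("session:" + sessionId);
    }
}
```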
Now, performance is crucial here, otherwise the whole solution will just be too slow. So in my understanding, using an RDBMS would not be good here; moreover, it would potentially be harder to scale out.
Hope this helps
1) Should the API Gateway be stateful while other microservices are stateless?
Yes. As per the 12-Factor App guidelines, all services should be stateless.
2) If so, how should the user session data be shared between the API Gateway and microservices?
Your API should be stateless, therefore do not share session state with the microservices. The recommended approach is to set up a Redis cache to store session data.
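As a minimal sketch of that approach, assuming Spring Session (spring-session-data-redis) at the gateway, with placeholder host and port:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

@Configuration
@EnableRedisHttpSession
public class SessionConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Placeholder host/port for the shared Redis instance.
        return new LettuceConnectionFactory("redis-host", 6379);
    }
}
```

With this in place, the HttpSession is transparently stored in Redis, so any gateway instance can serve any request.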
I am working on a desktop Java application that is supposed to connect to an Oracle database via a proxy, which could be a Servlet or an EJB or something else that you can suggest.
My question is: what architecture should be used?
Simple Servlets as a proxy between the client and the database, which connect to the database and send results back to the client.
An enterprise application with EJBs and remote interfaces to access the database
Any other options that I haven't thought of.
Thanks
Depending on how scalable you want the solution to be, you can make a choice.
EJB (3) can be a good choice, but then you need a full-blown app server.
You can connect directly using JDBC, but that will expose the URL of the DB (expose as in every client desktop app will make a connection to the DB; you cannot pool connections, and you lose a lot of flexibility). I would not recommend going down this path unless your app is really a simple one.
You can create a servlet to act as a proxy, but it's tedious and not as scalable. You will have to write a lot of code at both ends.
What I would recommend is creating a REST-based service that performs the desired operations on the DB, and consuming it in your desktop app.
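As a rough sketch of that recommendation, assuming Spring MVC and JdbcTemplate (the table, columns and endpoint are made up):

```java
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerResource {

    private final JdbcTemplate jdbcTemplate;

    public CustomerResource(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // The desktop app calls this over HTTP instead of opening a DB connection.
    @GetMapping("/customers/{id}")
    public Map<String, Object> find(@PathVariable long id) {
        return jdbcTemplate.queryForMap(
                "SELECT id, name, email FROM customers WHERE id = ?", id);
    }
}
```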
Start off simple. I would begin with a simple servlet/JDBC-based solution and get the system working end-to-end. From that point, consider:
do you want to make use of connection pooling (most likely)? Consider C3P0 / Apache DBCP (see the sketch at the end of this answer)
do you want to embrace a framework like Spring? You can migrate to this gradually, starting with the servlet MVC capabilities, IoC, etc., and adopt more complex solutions as you require them
do you want to use an ORM? Do you have complex object graphs that you're persisting/querying, and would an ORM simplify your development?
If you do decide to take this approach, make sure your architecture is well-layered, so you can swap out (say) raw JDBC in favour of an ORM, and that your development is test-driven, such that you have sufficient test cases to confirm that your solution works whilst you're performing the above migrations.
Note that you may never finalise on a solution. As your requirements change, and your application scales, you'll likely want to swap in/out the technology most suitable for your current requirements. Consequently the architecture of your app is more important than the particular toolset that you choose.
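As a minimal sketch of the pooling option mentioned in the list above, using Apache Commons DBCP2 (the JDBC URL and credentials are placeholders):

```java
import org.apache.commons.dbcp2.BasicDataSource;

public class DataSourceFactory {

    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@db-host:1521:ORCL"); // placeholder URL
        ds.setUsername("app_user");
        ds.setPassword("secret");
        ds.setMaxTotal(20); // cap concurrent connections
        ds.setMaxIdle(5);   // keep a few warm connections around
        return ds;
    }
}
```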
Direct usage of JDBC, or through some ORM (Hibernate for example)?
If you're developing a stand-alone application, better keep it simple. In order to use an ORM or other frameworks, you don't need a J2EE app server (and all the complexity it brings with it).
If you need to exchange huge amounts of data between the DB and the application, forget about EJBs, Servlets and Web Services, and just go with Hibernate (or directly with plain old JDBC).
A REST-based web services solution may be good, as long as you don't have complex data and high volumes (try profiling how long it actually takes to unmarshal SOAP messages back into Java objects).
I have had a great deal of success using Spring remoting and a servlet-based approach. This is a great setup for development as well, since you can easily test your code without deploying to a web container.
You start by defining a service interface to retrieve/store your data (POJOs).
Create the implementation, which can use ORM, straight JDBC or some pooling library (container provided or 3rd party). This is irrelevant to the remote deployment.
Develop your application which uses this service directly (no deployment to a server).
When you are satisfied with everything, wrap your implementation in a WAR and deploy it with the Spring DispatcherServlet. If you use Maven, this can be done via the war plugin.
Configure the desktop app to use the service via Spring remoting.
I have found the ability to easily develop the code by running the service as part of the application to be a huge advantage over developing/debugging something running on a server. I have used this approach both with and without an EJB, although the EJB was still accessed via the servlet in our particular case. Only service-to-service calls used the EJB directly (also via Spring remoting).
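As a rough sketch of the Spring HTTP invoker wiring described above (DataService is a hypothetical service interface; the exporter and the proxy are shown together for brevity, but would live in the server and desktop contexts respectively):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;
import org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter;

// Hypothetical shared service interface, packaged in a jar used by both sides.
interface DataService {
    String fetchReport(String id);
}

@Configuration
public class RemotingConfig {

    // Server side (inside the WAR): export the implementation under the
    // DispatcherServlet; the bean name becomes the URL path.
    @Bean(name = "/dataService")
    public HttpInvokerServiceExporter dataServiceExporter(DataService impl) {
        HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
        exporter.setService(impl);
        exporter.setServiceInterface(DataService.class);
        return exporter;
    }

    // Desktop side: a proxy that implements DataService over HTTP.
    @Bean
    public HttpInvokerProxyFactoryBean dataService() {
        HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
        proxy.setServiceUrl("http://server/app/remoting/dataService");
        proxy.setServiceInterface(DataService.class);
        return proxy;
    }
}
```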
As we are in the beginning phases of rejuvenating our application into an SOA design, I have some questions that I cannot get a clear answer/picture on.
I have been doing a lot of reading, mostly books by Thomas Erl, and following that design pattern of understanding what Task Services, Entity Services and Utility Services are.
What I am stumbling on is the whole DAL concept and how that would look. So this is more a verification of understanding, or a clarification, to help make the best approach for our platform.
So, background: we currently have several web-based e-commerce applications that have pretty much been built in silos and are pretty much copies of each other. We have supporting applications such as daemons and miscellaneous web services. Many of these applications are older than 5 years and are built on old technology (Model 1). All of our applications are centered around conducting auction sales, so during a sale event we take bids from users, determine who is winning and display that information back. Each sale event is available to users for a set amount of time.
The company is moving towards a SOA solution as a lot of things we end up doing can be shared across not only our group but across other groups.
So what I understand of the DAL is that it is itself a service which sits on top of the data, in this case different databases (MSSQL, Oracle, MSSQL), where each of these databases has different schemas (Oracle) etc.
So the services (Task, Entity, Utility, and Presentation Tier if needed) will make calls to the DAL to retrieve data. It is the responsibility of the DAL to determine, from the contents of the message, what it needs to do in order to fulfill the request.
So for example, we have a Security Service candidate. This service needs to authenticate with LDAP and to authorize from the data that is stored for that given application.
The thought here is that a Utility Service will be created to wrap up all the operations required to communicate with LDAP, and that the Security Service will call upon the Utility Service and the DAL to fetch the authorization data. The DAL then has the responsibility of going to the correct database/schema to retrieve the information. The information will be in XML format (standard SOA communication).
So, am I on the right track here? Have others done similar things or not? What other things do I need to consider? (We are currently getting statistics on how many bids we take in an hour, on average.)
Should each service have its own DAL? For example, should the Security Service have the DAL as part of the service, or should the DAL be a shared service which all services can use?
In your case, the approach to use for a full SOA based deployment would be to use an ESB, Identity provider and a data services solution.
To break it down: the DAL should be implemented using data services. This way, the service will be globally accessible in a language-neutral way, and will support reuse and loose coupling. So all your data access logic can be implemented as web service operations in a data service.
As for authentication and authorization management, in the SOA world there's a standard called XACML, which is used for fine-grained authorization management. So what you will need is an XACML server, which would authorize the user according to specific criteria, and which should also have the ability to authenticate against LDAP.
Then your "Security Service" will be implemented in a service at the ESB, where that service will query the identity provider for authentication/authorization and according it's response, it will call the appropriate operations in the data service, with suitable parameters to fetch the data, and return it to the user.
The above scenarios can be implemented using WSO2 Data Services Server, WSO2 Identity Server and WSO2 ESB respectively, which are open-source products and are freely available.
I once worked on (developing) an SOA project that used a "data service". It was some time ago, and I was only involved marginally, but my recollection is that it ended up being too complicated and slow.
In particular, we had no real need for a data service: it would have made more sense to place the same abstractions in a library layer, which would have given better efficiency with no real loss of functionality (for our particular needs). This was exacerbated by the fact that the data tended to be requested in many small "chunks".
I guess it comes down to the trade-offs involved in the implementation. In our case, with a relatively closed system and a single underlying database technology, we could have easily exploited the support for distributed access that the database provided; instead we ended up duplicating this in a slower, more general message bus, which added nothing except complexity. But I can easily imagine different cases where access to data is more "distant".
How you should use SOA in your design depends on its requirements.
Generally, you can write coarse-grained services and expose them as web services. In your case you can write services which call the databases and produce the results; the authorization logic can also be written with the service logic.
The other approach is to use an ESB or BPEL engine to write the integration logic and expose the integrated service as a web service. In this case you can use data services to expose database data in XML format and integrate them. You can use services for the different schemas and call the correct service with the request data. The authorization logic can also be added to the service integration logic.
Security aspects such as authentication, confidentiality and integrity are considered non-functional requirements and hence can be applied to any service without writing an explicit security service.
The following articles describe sample integrations of services as mentioned in the second approach:
http://wso2.org/library/articles/2011/05/integrate-business-rules-bpel
http://wso2.org/library/articles/2011/06/securing-web-service-integration
I am about to build a system that will have its own engine, as well as a front end user interface. I would like to decouple these two as much as possible. The engine should be able to accept commands and data, be able to work on this data, and return some result. The jobs for the engine may be long, and the client should have the ability to query the engine at any time for its current status.
A decoupled front-end / back-end system is new territory for me and I'm unsure of the best architecture. I want the front end to be web-based. It will send commands to the engine through forms, and will display engine output and current status, all through Ajax calls. I will most likely use a Spring-based web app inside Tomcat.
My question involves the best structure for the engine component. These are the possibilities I'm considering:
Implement the engine as a set of threads and data structures within the web app. The advantages here would be a simpler implementation, and messaging between the web app front end and the engine would be simple (nothing more than some shared data structures). Disadvantages would be tight coupling between the front and back ends, and reliance on the server container to manage the engine (e.g. if the web server or web app crashed, so would the engine).
Implement the back end as a stand-alone Java application, and expose its functionality through some service on a TCP port. I like this approach because it's decoupled from the web server. However, I'm not stoked about the amount of low-level networking/communication code required; I would prefer some higher level of message passing that abstracts sockets etc.
Use an OSGi container like Spring DM Server to host both the web app and the engine. This approach is nice because the networking code is nonexistent: the engine exposes services to the OSGi container for the web app to consume. The downside here is the learning curve and overhead of a new technology: OSGi. Also, the front and back ends remain coupled, which I don't really want. In other words, I couldn't deploy the front end on any old servlet container; it would have to be in the same OSGi container as the engine.
I have a feeling RMI is the way to go here, but again that's a new area of technology for me, and it still doesn't explain how to design the architecture of the underlying systems. What about JMS?
Thank you for any advice.
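As a rough illustration of the JMS option raised in the question, assuming ActiveMQ as the broker (the URL and queue name are made up): the web app sends a command message, and the engine would run a listener on the same queue and publish status updates for the web app to poll.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class EngineClient {

    // The web app submits a command; the engine consumes it from the queue.
    public void submitJob(String command) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://engine-host:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("engine.commands"));
            producer.send(session.createTextMessage(command));
        } finally {
            connection.close();
        }
    }
}
```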
If you really want to decouple the web app and the engine, you can also deploy the engine on a different server and expose the API as web service calls (WS-* or REST).
If it's going to be a Web app, there's no need to decouple the processes like there would be if you had a desktop app front end and a server back end. So keep it simple.
The basis I would use (and am using for a project I'm working on currently as it turns out) is this kind of stack:
Spring 3
Web container
Application deployed as a Web application (WAR);
For persistence, either iBATIS (my preferred option) or JPA/Hibernate (if you prefer a more object-persistence approach);
Your preferred choice of Web framework. There's no easy answer here and there are dozens to choose from, from the straight templating to the more componentized (JSF, Seam, etc). Tapestry/Wicket look interesting but I'm no expert in either.
A Spring container is entirely capable of launching a series of threads, and it's quite common to do so. So what you'll need is a series of components that will simply be your engine. Unless you have a good reason to do otherwise, Spring beans within a Web application context are simple, flexible and powerful.
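As a minimal sketch of that idea (all names here are illustrative, not from the question): a Spring-managed bean that owns a thread pool and tracks job status for the web layer's Ajax calls to poll:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.stereotype.Component;

@Component
public class Engine {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, String> status = new ConcurrentHashMap<String, String>();

    // Kick off a long-running job and track it by id.
    public void submit(final String jobId, final Runnable work) {
        status.put(jobId, "RUNNING");
        pool.submit(new Runnable() {
            public void run() {
                try {
                    work.run();
                    status.put(jobId, "DONE");
                } catch (RuntimeException e) {
                    status.put(jobId, "FAILED");
                }
            }
        });
    }

    // Called by the controller behind the Ajax status endpoint.
    public String statusOf(String jobId) {
        String s = status.get(jobId);
        return s != null ? s : "UNKNOWN";
    }
}
```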
On the front end it depends on what you want. Straight HTML can be done with any Web framework, even if decorated by some Javascript; I use jQuery for that kind of thing.
It only gets a little different if you want the front end to look like a desktop app (a so-called "rich" UI). For this you either need to use the Google Web Toolkit ("GWT"), possibly a component Web framework like JSF (although I tend to think these get real messy real fast) or a Javascript framework like ExtJS, SmartClient, YUI or the fairly new Uki.
You'll decouple your UI if you write your back end as services, and establish an XML or JSON message format to pass between client and services.
All the rest of cletus's comments can hold true for the back end, but the client can be blissfully unaware of it. It can even be a .NET implementation for all it cares. The focus is on the use cases and the messages, not the back end implementation.
This can also be useful in those instances when you use a non-HTML based UI (e.g., Flex).