We're designing two distinct systems, which can be illustrated by the following typical example.
Web App #1 - Course catalog (allows updating / populating the course catalog)
Professors
Course (courseCode, professorId, list of Prerequisites, grade scale used)
Prerequisites (courseCode and minimum grade required)
GradeScale (i.e. A-F, 1-100, pass/fail)
Web App #2 - Student catalog (handles students registering for new courses, seeing their transcript, etc)
Student
Transcript (what courses did they take and what final grade)
Data that needs to pass between the two systems (there will be more calls and data handed back and forth, but this gives the idea that it's a two-way flow of questions and answers):
Does a student have the pre-reqs needed to take a particular course?
Pulling details from the course catalog to create a full transcript
From reading, it seems our options are:
Create EJBs for the underlying data model, then have the web applications use the EJB interface.
Use a REST or Web Service interface between the two applications.
RMI or other Java remoting?
Which way would you cut this up into JARs/WARs/EARs?
This was initially a comment but it's actually too long.
If you only have simple imperative services (set this, do that, is this valid?), then you can go for an Axis2/SOAP-based web services solution (you probably won't need the full weight of Spring WS). If the app logic is not too convoluted, I'd follow the KISS principle.
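As a rough illustration of such an imperative service, here is a minimal JAX-WS sketch; the class name, operation and endpoint URL are invented for this example and are not from the question:

```java
// Hedged sketch of a simple "is this valid?"-style SOAP service using JAX-WS.
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class PrereqCheckService {

    // Does the student meet the prerequisites for a given course?
    @WebMethod
    public boolean hasPrerequisites(String studentId, String courseCode) {
        // A real implementation would consult the course catalog and the student's transcript.
        return true;
    }

    public static void main(String[] args) {
        // Standalone endpoint for local testing; in an app server you would deploy it instead.
        Endpoint.publish("http://localhost:8080/ws/prereq", new PrereqCheckService());
    }
}
```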
I don't know your system scenario, but if you're using a full-fledged RDBMS, it's highly probable that the database will reside on its own machine, so having different pools connecting to it is not much of a burden (if you're using a local DB on each application server, you're probably going to face some scalability problems later on).
In modern Java EE app servers you can actually use one server's connection pools from another (via jnp:// URLs); it's just a matter of a JNDI lookup.
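For example, a lookup along these lines (a sketch only: the naming factory, host, port and JNDI name are illustrative and depend on the application server in use):

```java
// Hedged sketch: looking up a DataSource bound in another server's JNDI tree (JBoss-style jnp://).
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class RemoteDataSourceLookup {

    public DataSource lookupRemotePool() throws NamingException {
        Properties env = new Properties();
        // Factory class and provider URL are examples; substitute your server's values.
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://other-appserver:1099");

        Context ctx = new InitialContext(env);
        // JNDI name under which the other server exposes its pool (illustrative).
        return (DataSource) ctx.lookup("CourseCatalogDS");
    }
}
```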
If the DB engine supports it, Oracle-style database links are also a good way to share a database between apps.
You can save coding time by having the business/data layer in a plain Java project with all the ORM code, shared across the two dynamic web projects, so any changes in business logic will be reflected in both apps.
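A minimal sketch of that shared project, assuming JPA for the ORM part (the entity fields mirror the course catalog from the question; the DAO name is illustrative):

```java
// Hedged sketch of a shared business/data layer packaged as a jar used by both web apps.
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;

@Entity
public class Course {
    @Id
    private String courseCode;
    private Long professorId;
    // prerequisites, grade scale, getters/setters omitted for brevity
}

// DAO used by both the course-catalog and student-catalog web apps.
class CourseDao {
    private final EntityManager em;

    CourseDao(EntityManager em) {
        this.em = em;
    }

    public Course findByCode(String courseCode) {
        return em.find(Course.class, courseCode);
    }
}
```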
You can also try the mixed approach (simple imperative web services plus database sharing); it really depends on what messages are exchanged between the two applications. You can provide a layer of web service APIs (SOAP or JSON based), but take into account the execution time of the web services themselves (long-running web service calls are best avoided).
Web services and EJBs are good and can probably do what you need; the real question is: do you really need them? Lately I've seen lots of projects start with the full REST stack, and in many cases it was like killing flies with a bazooka.
If the requirements are simple, then keep it simple.
I have created a simple blogging application using Spring Boot and RESTful APIs. I have connected it to a MySQL database to run SQL queries for things like adding a blog post, deleting a blog post, etc.
My questions are as follows:
Does this mean that I have used a microservice architecture? When does an architecture become a microservice architecture? (I ask because many similar websites call their application microservice-based. Other than the main application, e.g., currency exchange instead of blogging, I see no other difference; for example, this one - it does have many more aspects, but they don't contribute to its microservice-ness, IMHO.)
Can I call an application horizontally scalable if I am using a microservice-based architecture?
Note: The tutorial I followed is here and the GitHub repo is here.
First of all: these aren't exactly yes/no questions. I'll give you my opinion, but others will disagree.
You have created what most people would agree qualifies as a Microservice. But a Microservice does not make a Microservice architecture, in the same way that a tree doesn't make a forest.
A Microservice architecture is defined by creating a greater application that consists of several distributed components. What you have done is created a monolith (which is absolutely fine in most cases).
Almost every talk about Microservices that I have attended has featured this advice: start with a monolith, evolve to microservices once you need it.
Regarding the last question: your application is horizontally scalable if it is stateless. If you keep any session state, it can still be horizontally scalable, but you'll need a smart LB managing sticky sessions or distributed sessions. That's when things get interesting, and when you can start thinking about a Microservice architecture.
Common problems are: how can I still show my customers my website if the order database, cart service, payment provider, etc. are down? Service discovery, autoscaling, retry strategies, and evolving REST APIs are all common concerns in a Microservice architecture. The more of them you use and need, the more you can claim to have a Microservice architecture.
Not at all. The use of microservices is an advanced architecture pattern that is hard to implement right, but that gives useful benefits in huge projects. This should not be of any concern to a small project, unless you want to test this particular architectural style.
Breaking an application into smaller chunks does increase its scalability, as resources can be scaled at a finer granularity. However, statelessness, among other properties, is also a key component of a scalable architecture.
First of all, what you showed us doesn't look like microservices at all.
You can say that you have an application that uses a microservices architecture when it is formed by microservices (obviously) with independent functionality, each of which can be scaled. Scaling one service means running multiple instances of it (possibly on multiple hosts) in a way that is transparent to the other services.
A good example to illustrate this is a microservice-based web store composed of four microservices:
Sale microservice
Product microservice
Messaging microservice
Authentication microservice
During a Black Friday event, for example, when a lot of purchases will theoretically occur, you can scale only the Sale microservice, saving resources on the other three (of course this means using a bunch of supporting technologies such as proxies, load balancers, ...). If you were using a monolithic architecture, you would need to scale the whole application.
If you are using a microservices architecture correctly, then yes, you can say that your application is horizontally scalable.
I'm trying to make a simple application and deploy it on the Google Cloud Platform Flexible App Engine; it will contain two main parts:
Front end application (simple Web UI based on Java 8 (Spring + Thymeleaf) with OAuth authorization from different external sites)
Back end application (monitoring several resources in separate threads, based on logged in users and reacting to their input in a certain way (behavioral changes))
Initially I was planning to make them one app, but I think that the potentially heavy background processing may cause failures in the front-end part, and the App Engine docs say that deployed services behave similarly to a microservice architecture.
My questions are:
Do I really need to separate front end from back end, if I need to react to user input as fast as possible? (but delays up to 2 seconds aren't that critical)
If I do need to separate them (and I strongly believe that I do) - how do I set up interaction between the applications?
Each resource must be tracked by exactly one thread on the back end - what are the best practices here? I thought about having a SQL table with a list of acquired resources, but the flaw I see is that if an instance fails, I will need to do some kind of cleanup on that table and re-determine which resources are actually acquired.
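On that last point, one common pattern (a sketch only; the table, column and class names are hypothetical) is to hold each resource under a lease with an expiry, so a crashed instance's resources become claimable again without manual cleanup:

```java
// Hedged sketch: lease-based ownership of resources in a SQL table.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;

public class ResourceLeases {

    // Try to claim a resource for this worker for the next 60 seconds.
    // Returns true only if no live lease exists; the worker must renew before expiry.
    public boolean tryAcquire(Connection con, String resourceId, String workerId) throws SQLException {
        String sql = "UPDATE resources SET owner = ?, lease_until = ? " +
                     "WHERE id = ? AND (owner IS NULL OR lease_until < ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            Instant now = Instant.now();
            ps.setString(1, workerId);
            ps.setTimestamp(2, Timestamp.from(now.plusSeconds(60)));
            ps.setString(3, resourceId);
            ps.setTimestamp(4, Timestamp.from(now));
            return ps.executeUpdate() == 1; // exactly one row updated => lease acquired
        }
    }
}
```

Each instance would periodically renew the leases it holds well before they expire; anything it fails to renew simply becomes available again.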
Separating the two into different services sounds like the right approach, for the following reasons:
You can deploy code for each service separately, roll back versions separately, and split traffic separately for experiments or phased rollouts.
You can adjust machine types and memory allocations for each service to better suit its needs. If you're doing memory-intensive work on the backend, you can adjust that service's settings to allocate more memory per instance.
Each type of service can scale independently based on demand, which should result in better utilization of the services and less waste. This should also lower your overall spending compared with a one-size-fits-all approach in a single monolithic service.
You can mix different runtime environments across services. For example, you can mix language runtimes within a project, or you could even mix standard and flexible environments. Say your front-end code is more cost efficient in standard: designate that service as a standard environment service and your backend as a flexible environment service. Or say you need a custom Dockerfile with Perl in it: you could do that as a flexible environment custom runtime and keep your front end in Java 8.
You can still share common services like Cloud SQL, Pub/Sub, Cloud Tasks (currently in alpha) or Redis for in-memory caching. Your workers don't need to reside in App Engine; they could reside in a different product if that better suits your needs.
Overall, splitting the application apart gives you much better control over it. The biggest benefit likely comes down to spending only on what you need.
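On the question of how the two services talk to each other, here is a minimal sketch of the front end calling the backend over plain HTTP. App Engine exposes each service at a SERVICE-dot-PROJECT.appspot.com host, but the project name, path and class below are illustrative, and real code would add authentication and error handling:

```java
// Hedged sketch: front-end service calling the backend service over HTTP (Java 8, no extra libraries).
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BackendClient {

    public String fetchStatus(String userId) throws IOException {
        URL url = new URL("https://backend-dot-my-project.appspot.com/api/status?user=" + userId);
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        con.setConnectTimeout(2000);   // the question tolerates delays of up to ~2 seconds
        con.setReadTimeout(2000);

        try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            return body.toString();
        }
    }
}
```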
I think you will likely be able to deploy everything as an App Engine app, unless you use some exotic Java libraries that are not whitelisted. It could still be worthwhile to deploy it on Compute Engine for increased configurability and versatility.
You can create one front-end instance and one back-end instance in Compute Engine and divide the resources between them that way. Google's documentation has an example of how to do that.
First of all, I have a conceptual question: does the word "distributed" only mean that the application runs on multiple machines, or are there other ways an application can be considered distributed (for example, if there are many independent modules interacting together but on the same machine, is this distributed)?
Second, I want to build a system which executes four types of tasks. There will be multiple customers, and each one will have many tasks of each type to be run periodically. For example: customer1 will have task_type1 today, task_type2 after two days, and so on; there might be a customer2 who has task_type1 to be executed at the same time as customer1's task_type1, i.e. there is a need for concurrency. The configuration for executing the tasks will be stored in the DB, and the outcomes of these tasks will be stored in the DB as well. The customers will use the system from a web browser (HTML pages) to interact with it (basically, to configure tasks and see the outcomes).
I thought about using a REST web service (using JAX-RS) that the HTML pages would communicate with, and using threads on the back end for concurrent execution.
Questions:
1. This sounds simple, but am I going in the right direction? Or should I be using other technologies or concepts, like Java Beans for example?
2. If my approach is fine, do I need to use a scripting language like JSP, or can I submit HTML forms directly to the REST URLs and get the result (using JSON, for example)?
3. If I want to make the application distributed, is it possible with my idea? If not, what would I need to use?
Sorry for having so many questions, but I am really confused about this.
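To make the proposed approach concrete, here is a rough sketch of a JAX-RS resource that accepts a form submission and schedules a task on a thread pool. All class, path and parameter names are made up for illustration, and persistence and error handling are omitted:

```java
// Hedged sketch: HTML form posts to a JAX-RS resource; tasks run concurrently on a scheduler.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.ws.rs.FormParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/tasks")
public class TaskResource {

    // Shared pool; each scheduled task runs on a thread from this pool.
    private static final ScheduledExecutorService SCHEDULER = Executors.newScheduledThreadPool(4);

    @POST
    @Produces(MediaType.APPLICATION_JSON)
    public String schedule(@FormParam("customerId") String customerId,
                           @FormParam("taskType") String taskType,
                           @FormParam("delayDays") long delayDays) {
        SCHEDULER.schedule(() -> {
            // Run the task of the given type for this customer and store the outcome in the DB.
        }, delayDays, TimeUnit.DAYS);
        return "{\"scheduled\":true}";
    }
}
```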
I just want to add one point to the already posted answers. Please take my remarks with a grain of salt, since all the web applications I have ever built have run on one server only (aside from applications deployed to Heroku, which may "distribute" your application for you).
If you feel that you may need to distribute your application for scalability, the first thing you should think about is not web services and multithreading and message queues and Enterprise JavaBeans and...
The first thing to think about is your application domain itself and what the application will be doing. Where will the CPU-intensive parts be? What dependencies are there between those parts? Do the parts of the system naturally break down into parallel processes? If not, can you redesign the system to make it so? IMPORTANT: what data needs to be shared between threads/processes (whether they are running on the same or different machines)?
The ideal situation is where each parallel thread/process/server can get its own chunk of data and work on it without any need for sharing. Even better is if certain parts of the system can be made stateless -- stateless code is infinitely parallelizable (easily and naturally). The more frequent and fine-grained data sharing between parallel processes is, the less scalable the application will be. In extreme cases, you may not even get any performance increase from distributing the application. (You can see this with multithreaded code -- if your threads constantly contend for the same lock(s), your program may even be slower with multiple threads+CPUs than with one thread+CPU.)
The conceptual breakdown of the work to be done is more important than what tools or techniques you actually use to distribute the application. If your conceptual breakdown is good, it will be much easier to distribute the application later if you start with just one server.
The term "distributed application" means that parts of the application system will execute on different computational nodes (which may be different CPU/cores on different machines or among multiple CPU/cores on the same machine).
There are many different technological solutions to the question of how the system could be constructed. Since you were asking about Java technologies, you could, for example, build the web application using Google's Web Toolkit, which will give you a rich browser based client user experience. For the server deployed parts of your system, you could start out using simple servlets running in a servlet container such as Tomcat. Your servlets will be called from the browser using HTTP based remote procedure calls.
Later, if you run into scalability problems, you can start to migrate parts of the business logic to EJB3 components, which can ultimately be deployed on many computational nodes within the context of an application server such as GlassFish. I don't think you need to tackle this problem until you actually run into it. It is hard to say whether you will without knowing more about the nature of the tasks the customers will be performing.
To answer your first question - you could have the form submit directly to the REST URLs. Obviously it depends on your exact requirements.
As @AlexD mentioned in the comments above, you don't always need to distribute an application. However, if you wish to do so, you should probably consider looking at JMS, a messaging API that allows you to run almost any number of worker machines, reading messages from the message queue and processing them.
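A bare-bones sketch of the sending side with the standard JMS API (the JNDI names and queue name are illustrative and depend entirely on your JMS provider):

```java
// Hedged sketch: the web tier drops task messages on a queue; worker machines consume them.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class TaskSender {

    public void send(String taskPayload) throws Exception {
        InitialContext jndi = new InitialContext();
        // JNDI names are examples; they depend on how your provider is configured.
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) jndi.lookup("jms/taskQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(taskPayload));
        } finally {
            connection.close();
        }
    }
}
```

Worker machines would then consume from the same queue with a MessageConsumer or a message-driven bean.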
If you wanted to produce a dynamically distributed application, to run on, say, multiple low-resource VMs (such as Amazon EC2 Micro instances) or physical hardware that can be added and removed at will to cope with demand, then you might wish to consider integrating it with Project Shoal, a Java framework that allows clustering of application nodes and lets them appear/disappear at any time. Project Shoal uses JXTA and JGroups as the underlying communication protocols.
Another route could be to distribute your application using EJBs running on an application server.
As a concluding assignment for a data management course, we have to write a web application using the technologies taught throughout the course; this mostly includes XHTML, CSS, JSP, servlets, JDBC, AJAX, and web services. The project will eventually be deployed on Tomcat. We are given the option of choosing the technologies that we see fit. Since this is my first time developing a web application, I have some uncertainties about where to start. For example, right now I am writing the object classes that will be used with the database and implementing the operations that will be performed on the database, but I am not sure how to make these operations available to a client through the website. I think I have to write a servlet through which I can extract the request parameters and set the response accordingly, but I would still like a more specific overview of what I am going to do. So if someone can link me to a tutorial with an example that makes use of these technologies while illustrating the stages of the design, I can see how all these things are linked together in a web project.
thanks
Java Enterprise applications typically use a layered architecture along the following lines:
In short:
The presentation layer provides the application's user interface. In a web application, this typically involves the use of an MVC (Model-View-Controller) framework.
The service layer exposes coarse-grained services implementing the business logic of the application. These services act as entry points and are typically responsible for transaction demarcation.
The data access layer abstracts physical storage systems (e.g. a database) and exposes CRUD (Create, Read, Update, Delete) methods and finders.
Domain objects represent the business concepts of your domain (Client, Order, Product, etc.) and are typically used across all layers, from the data access layer up to the presentation layer.
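As an editorial illustration of those layers (a sketch only; Product, ProductDao and ProductService are invented names, and a real data access layer would often use an ORM instead of raw JDBC):

```java
// Hedged sketch of the layering: domain object, data access layer, service layer.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Domain object: a business concept used across layers.
class Product {
    long id;
    String name;
}

// Data access layer: hides JDBC behind CRUD/finder methods.
class ProductDao {
    private final DataSource ds;

    ProductDao(DataSource ds) {
        this.ds = ds;
    }

    Product findById(long id) throws SQLException {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT id, name FROM product WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null;
                }
                Product p = new Product();
                p.id = rs.getLong("id");
                p.name = rs.getString("name");
                return p;
            }
        }
    }
}

// Service layer: the coarse-grained entry point the presentation layer calls.
class ProductService {
    private final ProductDao dao;

    ProductService(ProductDao dao) {
        this.dao = dao;
    }

    Product getProduct(long id) throws SQLException {
        return dao.findById(id);
    }
}
```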
I don't want to make things too confusing by throwing in too many technologies or frameworks (are you allowed to use frameworks?) that could fit into these layers. Just tell me if I should.
Regarding your question about the presentation layer, I already hinted at the answer: use the MVC pattern.
Basically, the View is the part that renders the user interface (e.g. a JSP). From the view, the user sends input to a controller (a Servlet acting as the entry point). The controller communicates with the model (standard Java classes), sets the appropriate data in the HTTP request or session, and forwards the request and response to a view, and the cycle restarts.
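A minimal sketch of that cycle (the servlet mapping, JSP path and data below are illustrative):

```java
// Hedged sketch: a controller servlet that calls the model, stores data in the request,
// and forwards to a JSP view.
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/products")
public class ProductController extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Model: plain Java classes doing the actual work (lookups, business rules).
        List<String> products = Arrays.asList("Keyboard", "Mouse");

        // Put the data where the view can read it, then forward to the JSP view.
        request.setAttribute("products", products);
        request.getRequestDispatcher("/WEB-INF/products.jsp").forward(request, response);
    }
}
```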
If you need more details, let me know.
Baby steps are needed. Get something running and then expand on it.
Start with this tutorial, get it running and then start asking questions http://www.eclipse.org/webtools/community/tutorials/BuildJ2EEWebApp/BuildJ2EEWebApp.html
This will give you a Servlet and a JSP running on Tomcat from Eclipse. From there you can expand.
Sun documentation is pretty good: The Java EE 6 Tutorial, Volume I.
There is also a working sample application released by the Java BluePrints program at Sun called the Pet Store Demo.
I have also put together a series of tutorials aimed at beginners who want to learn how to build Java web applications (within the Eclipse environment). I have tried to keep it as simple as possible.
I'm new to this and am looking at Apache Camel, Spring Integration, and even Terracotta.
I'm looking at sharing of common data like user/groups/account/permission and common business data like inventory/product details/etc.
Any example would be really appreciated.
How about database-level integration?
Have both applications access the same relational database. Those are built for that kind of task.
To do that, the two applications can use a shared library (for the sake of simplicity, each one can keep its own copy in WEB-INF/lib).
You should consider creating a full-blown EAR instead, if you want this to be web-container independent.
As different web applications have different classloaders, you cannot just create an object in one web app that is immediately usable by another. Hence you need a common classloader which knows about the shared classes, and - to be 100% compliant - these classes may not be in either web app's WEB-INF/lib. This is hard to get right, and the result is fragile.
Therefore, consider migrating to a container which can deploy EARs instead, as they may contain several web applications sharing objects. I believe a good choice to start with would be JBoss.
Common data like users, groups, and permissions belong in a central LDAP or database. These are part of your Spring Security solution, and all apps can share those regardless of whether they're on the same app server or not.
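As a sketch of what "part of your Spring Security solution" could look like, both apps could point their authentication at the same directory. This assumes the spring-security-ldap module and the older WebSecurityConfigurerAdapter style of configuration; the LDAP URL and DN patterns are illustrative:

```java
// Hedged sketch: both web apps authenticating against the same central LDAP server.
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.ldapAuthentication()
            .userDnPatterns("uid={0},ou=people")
            .groupSearchBase("ou=groups")
            .contextSource()
            .url("ldap://ldap.example.com:389/dc=example,dc=com");
    }
}
```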
It can be argued that common business data like inventory, product details, etc. should be "owned" by a single service. It's the only one that can modify the data. Others can get access by querying the service, but it's the one that manages CRUD operations on those tables.
If you do this, you keep objects and systems from being coupled at the database level. You're trading some extra network latency for looser coupling.
In theory, every application has its own memory space, but off the top of my head I can think of a number of methods for sharing information between applications.
If the amount of shared information is small, perhaps a direct approach is best: set up a communication channel (web services are a bit of overkill, but a good example) and have the applications request information from each other.
If there is massive sharing, perhaps the two applications should be reading from the same database or local file. Mind you, this brings up synchronization issues and gets you into the realm of locking and blocking. Tread lightly in this realm...
If you're new to this, one idea may be to build the classes that handle the common data and then build a separate servlet for each application.
This will at least get you started and more familiar with the technologies.