Our company is currently implementing a couple of tools for employee use. As I'm the only programmer within the company, it's fallen to me to develop these tools.
However, I have little to no experience with web services or Java, so I'm a little stumped on some of the logic here and hoping someone can give me some guidance.
We have a MySQL database hosted in the UK, which will provide the data for tools used both within the UK and by our other offices abroad. I'm looking to provide access to the database via web services.
However, having looked into this, I get the feeling I have missed something key. Right now I'm planning to create methods for every database table, so each table would need a select, update and delete method. Since there are twenty-odd tables, that means the web service would have 60 methods exposed! Is this normal?
It seems to me that there should be an easier way to do this, but having little experience with Java I'm at a loss, and my Google-fu has failed me thus far.
Could anyone give me some pointers on the "usual" way of doing this, and on whether there is something I've simply overlooked?
Web services should be written for each entity, not for each table. An entity should be a logical one, not simply something abstract. There can be multiple tables in your database storing the data for one entity. For example: you have an entity called 'Person', but the person's details are stored across multiple tables such as 'PersonDetail', 'PersonContactDetails', 'PersonDependentDetails', etc. You can manipulate the data in all of these tables through the web services created for 'Person'.
Web service operations can be mapped to database CRUD (create, read, update, delete) operations. If you are writing RESTful web services, the CRUD operations map to the HTTP methods POST, GET, PUT and DELETE.
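As a minimal sketch of that mapping (the class, store and return strings are invented for illustration; in JAX-RS or Spring each branch would be a separate @POST/@GET/@PUT/@DELETE handler method delegating to a DAO):

```java
import java.util.HashMap;
import java.util.Map;

// Maps HTTP methods to CRUD operations on a 'Person' entity,
// backed here by an in-memory map instead of real tables.
class PersonResource {
    private final Map<Integer, String> store = new HashMap<>();

    String handle(String httpMethod, int id, String body) {
        switch (httpMethod) {
            case "POST":   // CREATE
                store.put(id, body);
                return "created " + id;
            case "GET":    // READ
                return store.getOrDefault(id, "not found");
            case "PUT":    // UPDATE
                store.put(id, body);
                return "updated " + id;
            case "DELETE": // DELETE
                store.remove(id);
                return "deleted " + id;
            default:
                return "unsupported method";
        }
    }
}
```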
Here's one typical approach, although it's a pretty big learning curve:
Create Data Access Objects (DAOs) to query the DB and convert from your relational data model to a Java object model. If extreme performance isn't a consideration (and for most applications it isn't), consider an ORM framework such as Hibernate or another JPA implementation. You probably don't need one method per table: often multiple tables make up one domain object. For instance, in a banking app you might have a table called customer and a related table called customer_balance. If you just want to present a balance to a customer, you could have one domain object called "Customer" with a field called "balance". Your Customer DAO would join customer and customer_balance to create a single Customer object.
Create services to wrap the DAOs and apply your business rules to them. Keep business rules in the service as much as possible, because it improves testability. An example of a simple banking service method would be "withdrawMoney(amount)". The service would pull the Customer from the DB via a DAO, first check that the customer has at least "amount" in their current balance, then subtract "amount" from the balance and save it in the database via the DAO.
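A sketch of that layering, with invented names: the DAO hides how a Customer is assembled (a real implementation would join the customer and customer_balance tables behind this interface), while the business rule lives in the service.

```java
import java.util.HashMap;
import java.util.Map;

// Domain object assembled by the DAO (conceptually from the
// customer and customer_balance tables).
class Customer {
    final int id;
    long balance; // in cents
    Customer(int id, long balance) { this.id = id; this.balance = balance; }
}

interface CustomerDao {
    Customer findById(int id);
    void save(Customer c);
}

// In-memory stand-in for a JDBC/Hibernate DAO, for the sketch only.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<Integer, Customer> rows = new HashMap<>();
    public Customer findById(int id) { return rows.get(id); }
    public void save(Customer c) { rows.put(c.id, c); }
}

class BankingService {
    private final CustomerDao dao;
    BankingService(CustomerDao dao) { this.dao = dao; }

    // The business rule lives here, not in the DAO or the web layer.
    void withdrawMoney(int customerId, long amount) {
        Customer c = dao.findById(customerId);
        if (c == null || c.balance < amount)
            throw new IllegalStateException("insufficient funds");
        c.balance -= amount;
        dao.save(c);
    }
}
```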
Your web layer will call the services layer and present the data to the user and allow them to operate on it. At some point, you may want your web layer to communicate with the services layer via a web service API, although that is probably overkill for early implementations.
As others have cited, the Java Petstore application is a good example of this approach. Oracle no longer maintains the Petstore app, but volunteers have copied it to GitHub and keep it up to date with the latest Java EE versions. Here's a link to the GitHub site: https://github.com/agoncal/agoncal-application-petstore-ee6
Yes, if every one of your 20 tables requires selection (HTTP GET), update (HTTP PUT) and deletion (HTTP DELETE), you will probably need 20 * 3 = 60 methods.
You'll probably want to start off by having a read of this part of the Java EE 7 tutorial, which will give you an overview of web service development. What you are suggesting, though, seems strange and perhaps not really what you want. If you want to expose every table to updates, deletes, etc., you'd perhaps be better off just opening the database server's port, but this is generally considered a bad idea.
I think you probably want to work at a higher level and pass around objects rather than database updates. Let's say, for example, you have a Person object in your application. You can pass that between your web application and client application and let the web application worry about putting it in the database, deleting it and so on. Although there is nothing technically wrong with performing updates the way you are suggesting, I've not seen it done for many years.
I would like to create a simple project using Spring to track the status of some customers across different environments. One customer may have two environments (dev and prod), while others may have one, two or three.
The basic idea is I would like to create a Web Service using spring with the following interface:
localhost:8080/customer1/environment1/status to extract status data from customer1 and environment1.
I have two options:
Using MongoDB, with a database per customer, a collection per environment, and the status documents inside. I found the following problems:
Most of the solutions I found on the web were for previous versions of Spring (I am using Spring 5)
Also, I am not sure how to implement dynamic collections (I mean, if I make a request to localhost:8080/customer2/environment2/status, I would like to switch not only the database but also the collection dynamically)
Using Postgres, with a schema per customer and a table per environment (all the tables will have the same structure)
The problem is that the table names can differ (production, development, test and so on), so I would have to implement dynamic table names in Spring (which I am not sure is possible)
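One way to keep such dynamic schema/table names safe in either database is to resolve them against a configured allow-list rather than concatenating request input into the query. A dependency-free sketch (the customer and environment names are invented; in Spring the two path segments would arrive as @PathVariable values):

```java
import java.util.Map;
import java.util.Set;

// Resolves path segments like /customer1/production/status into a
// schema-qualified table name, but only for known customers and
// environments, so request input never reaches the SQL text unchecked.
class StatusTableResolver {
    // In a real app this map would be loaded from configuration.
    private final Map<String, Set<String>> environmentsByCustomer;

    StatusTableResolver(Map<String, Set<String>> environmentsByCustomer) {
        this.environmentsByCustomer = environmentsByCustomer;
    }

    String resolve(String customer, String environment) {
        Set<String> envs = environmentsByCustomer.get(customer);
        if (envs == null || !envs.contains(environment))
            throw new IllegalArgumentException("unknown customer/environment");
        return customer + "." + environment; // e.g. customer1.production
    }
}
```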
I have been searching for a couple of days for an easy solution to this (which I initially thought would be easy, but apparently it is not)
What do you think would be the better and simpler solution: MongoDB or Postgres?
Can you provide the basic steps to reproduce it, or a GitHub repository with code I could use as a reference?
PS: There is no need to be extra safe because it will be an internal service, so the location of the customers' data doesn't matter: it can be in the same database or in different databases
First of all, I think your database choice should depend more on which advantages and disadvantages one database gives you over the other. Second, I don't believe a database per customer is a good idea: imagine what happens when you get 5,000 customers. It would be a pain to administer that many databases, or to keep switching databases in your code. I suggest you first try to fit a compact database model of your requirements into a single database, and then, working from that, select which database is better for you.
I hope it helps!
I'm planning to split my system into a front end and a back end. Currently my application communicates directly with the database, but I want to create a Spring web service to do it instead. My problem lies with using Hibernate to map my objects to database tables.
I need my front-end program to have persistent, up-to-date interaction with the database. That means I have to write a lot of web service endpoints to handle all the queries and updates, which in turn makes the Hibernate mapping feel pointless, since I'm not gaining anything from it.
My question is: is there a proven and reasonable way to pass Hibernate-mapped objects (via SOAP if possible) over to the front end and later commit changes made to those objects?
In short: no.
Detaching and re-attaching Hibernate-managed objects in different applications, as you are considering, will lead to all kinds of problems that you want to avoid, such as concurrency and locking issues, and that's after you've dealt with all the LazyLoadingExceptions. It will be a pain in the b***.
The road you're heading down ultimately leads to an architecture that adds an extra layer of indirection, with data transfer objects being passed between business services and the clients of those services. Only your business service will be able to talk to the database directly. Obviously this is time-consuming, and it should be avoided if possible. That's why I asked you to explain the problem you're trying to solve.
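A sketch of that indirection, with invented names: the service copies the Hibernate-managed entity into a plain transfer object before anything crosses the wire, so no proxies or lazy collections ever leave the service.

```java
// Entity roughly as Hibernate would manage it (annotations omitted).
class PersonEntity {
    Long id;
    String name;
    java.util.List<String> orders; // would typically be lazy-loaded
}

// Plain serializable transfer object: only the fields the client
// needs, with no persistence-framework types attached.
class PersonDto implements java.io.Serializable {
    final Long id;
    final String name;
    PersonDto(Long id, String name) { this.id = id; this.name = name; }
}

class PersonMapper {
    // Copy data out of the managed entity while the session is still
    // open; the DTO can then be serialized (e.g. over SOAP) safely.
    static PersonDto toDto(PersonEntity e) {
        return new PersonDto(e.id, e.name);
    }
}
```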
You can pass Hibernate entities via SOAP or other serialization mechanisms, but you have to be very careful with lazy loading, collection loading and detaching entities from the session. Otherwise you may end up sending your entire database when you need just one object, or sending Hibernate proxies that are not usable on the other side.
I'm a .NET Developer trying my hand at Java. My current project has a UI layer, Business logic layer, and a Data Access layer. I'm currently working on the DAL.
I'm not connecting to an external database yet; I had hoped to have my DAL classes use in-memory DataTables until the DB is in place.
In .NET it's very easy to create in-memory DataTables and to select from, add to and remove from them. In Java, though, I've been unable to find anything that does the same thing.
I was considering replacing the DataTables with collections of strongly typed objects, but that would require adding references to the business layer inside the DAL (and I thought that was a no-no).
Can someone help a confused developer out? If this whole approach is flawed, what would you do instead? If I missed the equivalent of a DataTable in Java, what is it?
Here's an article on running an in-memory Derby database.
If I knew what database and what persistence library you're using, I might be able to give a more precise answer.
You could use a memory database like described in this answer.
A comparison of different memory databases is shown in this SO question.
"I was considering replacing the 'dataTables' with a collection of strongly typed objects; but that would require adding references to Business layer inside of the DAL (and I thought that was a no-no)."
Who makes up these rules?
If your data access layer is responsible for CRUD operations for model objects, it seems to me that it has to have references to them. There's no way around that.
The persistence tier need not know about the service or view layers.
The only completely uncoupled class is one that talks to no one and offers nothing. It's useless.
Don't be so hung up on "rules". You're trying to layer your application. You're putting all things about persistence into a layer of classes.
I don't think an in-memory database has any effect on the way you design the persistence tier. You should be able to swap in a relational database, a flat file or any other mechanism without the interface changing. That's an implementation detail.
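A sketch of that idea, with invented names: the rest of the application codes against the interface, and the in-memory implementation can later be replaced by a JDBC or Hibernate one without the interface changing.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The persistence interface the business layer depends on.
interface EmployeeDao {
    void save(int id, String name);
    String find(int id);
    List<String> findAll();
}

// Stand-in implementation until the real database is in place.
// Swapping in a JdbcEmployeeDao later is an implementation detail.
class InMemoryEmployeeDao implements EmployeeDao {
    private final Map<Integer, String> table = new HashMap<>();
    public void save(int id, String name) { table.put(id, name); }
    public String find(int id) { return table.get(id); }
    public List<String> findAll() { return new ArrayList<>(table.values()); }
}
```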
O/R mappers were available much earlier in Java than in .NET. DataSets are flawed in that they force you to program procedurally. Try to interact with objects instead and map them to the DB later.
I have to make a web application multi-tenant using the shared-database, separate-schema approach. The application is built using Java/J2EE and Oracle 10g.
I need to have one single appserver using a shared database with multiple schema, one schema per client.
What is the best implementation approach to achieve this?
What needs to be done at the middle tier (app-server) level?
Do I need multiple host headers, one per client?
How can I connect to the correct schema dynamically based on the client who is accessing the application?
At a high level, here are some things to consider:
You probably want to hide the tenancy considerations from day-to-day development. Thus, you will probably want to tuck them away in your infrastructure as much as possible and keep them separate from your business logic. You don't want to be constantly checking which tenant's context you are in... you just want to be in that context.
If you are using a unit of work pattern, you will want to make sure that any unit of work (except one that is operating in a purely infrastructure context, not in a business context) executes in the context of exactly one tenant. If you are not using the unit of work pattern... maybe you should be. Not sure how else you are going to follow the advice in the point above (though maybe you will be able to figure out a way).
You probably want to put a tenant ID into the header of every messaging or HTTP request. Probably better to keep this out of the body on principle of keeping it away from business logic. You can scrape this off behind the scenes and make sure that behind the scenes it gets put on any outgoing messages/requests.
I am not familiar with Oracle, but in SQL Server (and I believe in Postgres) you can use impersonation as a way of switching tenants. That is to say, rather than parameterizing the schema in every SQL command and query, you can have one SQL user (without an associated login) whose default schema is the associated tenant's schema, and then leave the schema out of your day-to-day SQL. You will have to intercept calls to the database and wrap them in an impersonation call. Like I say, I'm not exactly sure how this works out in Oracle, but that's the general idea for SQL Server.
Authentication and security are a big concern here. That is far beyond the scope of what I can discuss in this answer but make sure you get that right.
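A sketch of hiding the tenancy in infrastructure (all names here are invented): a servlet filter would scrape the tenant ID off the incoming request header into a thread-local context, and the persistence layer would read it back to pick the schema, for example before issuing an ALTER SESSION SET CURRENT_SCHEMA statement on each Oracle connection checkout.

```java
// Thread-local tenant context set by infrastructure (e.g. a filter
// reading an X-Tenant-ID header), read by the persistence layer.
class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
    static void set(String tenantId) { CURRENT.set(tenantId); }
    static String get() {
        String t = CURRENT.get();
        if (t == null) throw new IllegalStateException("no tenant bound");
        return t;
    }
    static void clear() { CURRENT.remove(); }
}

class SchemaResolver {
    // Map a tenant ID to its schema; business code never sees this.
    static String schemaFor(String tenantId) {
        return "TENANT_" + tenantId.toUpperCase();
    }

    // Statement a connection wrapper would run per checkout (Oracle).
    static String switchStatement(String tenantId) {
        return "ALTER SESSION SET CURRENT_SCHEMA = " + schemaFor(tenantId);
    }
}
```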
My requirement is: I have a server J2EE web application and a client J2EE web application, and sometimes the client can go offline. When the client comes back online, it should be able to synchronize changes in both directions. I should also be able to control which rows/tables are synchronized based on filters/rules. Are there any existing Java frameworks for doing this? If I need to implement it on my own, what strategies can you suggest?
One solution I have in mind is maintaining SQL logs and executing the same statements on the other side during synchronization. Do you see any problems with this strategy?
There are a number of Java libraries for data synchronization/replication. Two that I'm aware of are Daffodil and SymmetricDS. In a previous life I foolishly implemented (in Java) my own data replication process. It seems like the sort of thing that should be fairly straightforward, but if the data can be updated in multiple places simultaneously, it's hellishly complicated. I strongly recommend you use one of the aforementioned projects to bypass dealing with this complexity yourself.
The biggest issue with synchronization is when the user edits something offline and it is edited online at the same time. You need to merge the two changed pieces of data, or design the UI to let the user say which version is correct. If you eliminate the possibility of both being edited at the same time, you don't have to solve this sticky problem.
The usual method is to add a 'modified' field to all tables, and compare the client's modified value for a given record against the server's modified value. If they don't match, you replace the server's data.
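A sketch of that per-record decision (field names invented): given the client copy and the server copy of the same record, compare their modified stamps against the time of the last successful sync to decide whether to push, pull, or flag a conflict for the user.

```java
// Decide what to do with one record during sync, based on 'modified'
// timestamps (epoch millis) and the time of the last successful sync.
class SyncDecision {
    enum Action { IN_SYNC, PUSH_TO_SERVER, PULL_FROM_SERVER, CONFLICT }

    static Action decide(long clientModified, long serverModified, long lastSync) {
        boolean clientChanged = clientModified > lastSync;
        boolean serverChanged = serverModified > lastSync;
        if (clientChanged && serverChanged) return Action.CONFLICT; // needs merge/UI
        if (clientChanged) return Action.PUSH_TO_SERVER;
        if (serverChanged) return Action.PULL_FROM_SERVER;
        return Action.IN_SYNC;
    }
}
```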
Be careful with autogenerated keys: you need to make sure your data integrity is maintained when you copy from the client to the server. Naively re-running the SQL statements on the server could put you in a situation where an autogenerated key has changed, and suddenly your foreign keys point to different records than you intended.
Often when importing data from another source, you keep track of the primary key from the foreign source as well as your own primary key. This makes determining the changes and differences between the data sets easier in difficult synchronization situations.
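A sketch of that bookkeeping (names invented): when copying client rows whose autogenerated keys may change on the server, record the mapping from source key to new server key and use it to rewrite any foreign keys that referenced the old values.

```java
import java.util.HashMap;
import java.util.Map;

// Tracks how autogenerated primary keys change when rows are copied
// from the client to the server, so foreign keys can be rewritten.
class KeyMapper {
    private final Map<Long, Long> clientToServer = new HashMap<>();
    private long nextServerKey = 1; // simulates the server's sequence

    // Simulates inserting a client row: the server assigns a new key.
    long importRow(long clientKey) {
        long serverKey = nextServerKey++;
        clientToServer.put(clientKey, serverKey);
        return serverKey;
    }

    // Rewrite a foreign key that referenced a client-side key.
    long rewriteForeignKey(long clientKey) {
        Long serverKey = clientToServer.get(clientKey);
        if (serverKey == null)
            throw new IllegalStateException("referenced row not imported yet");
        return serverKey;
    }
}
```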
Your synchronizer needs to identify when data can just be updated and when a human being needs to mediate a potential conflict. I have written a paper that explains how to do this using logging and algebraic laws.
What is best suited as the client-side data store in your application? You can choose from an embedded database like SQLite, a message queue, some object store, or (if none of these can be used, since it is a web application) data saved on the client using HTML5 storage such as Web SQL Database, IndexedDB, or localStorage.
Check the paper Gold Rush: Mobile Transaction Middleware with Java-Object Replication. Microsoft's documentation on occasionally connected systems describes two approaches: service-oriented (or message-oriented) and data-oriented. Gold Rush takes the former approach; the latter uses database merge replication.