I'm a .NET developer trying my hand at Java. My current project has a UI layer, a business logic layer, and a data access layer (DAL). I'm currently working on the DAL.
I'm not connecting to an external database yet; I had hoped to have my DAL classes use in-memory DataTables until the DB is in place.
In .NET it's very easy to create in-memory DataTables, select from them, add to them, and remove from them. But in Java, I've been unable to find something that does the same thing.
I was considering replacing the DataTables with a collection of strongly typed objects, but that would require adding references to the Business layer inside the DAL (and I thought that was a no-no).
Can someone help a confused developer out? If this whole approach is flawed, what would you do? If I missed the equivalent of a dataTable in Java - what is it?
Here's an article on running an in-memory Derby database.
If I knew what database and what persistence library you're using, I might be able to give a more precise answer.
You could use an in-memory database, as described in this answer.
A comparison of different in-memory databases can be found in this SO question.
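For illustration, here's a minimal sketch of opening an in-memory Derby database over plain JDBC. The table and data are made up; all you need is derby.jar on the classpath, and the "memory:" subprotocol in the JDBC URL keeps the database entirely in RAM:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryDerbyExample {
    public static void main(String[] args) throws Exception {
        // "memory:" keeps the database in RAM; create=true creates it on first connect.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:derby:memory:demoDb;create=true");
             Statement stmt = conn.createStatement()) {

            // Hypothetical table, just to show select/insert/delete working.
            stmt.executeUpdate("CREATE TABLE customers (id INT PRIMARY KEY, name VARCHAR(50))");
            stmt.executeUpdate("INSERT INTO customers VALUES (1, 'Alice')");

            try (ResultSet rs = stmt.executeQuery("SELECT name FROM customers WHERE id = 1")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }

            stmt.executeUpdate("DELETE FROM customers WHERE id = 1");
        }
    }
}
```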
I was considering replacing the DataTables with a collection of strongly typed objects, but that would require adding references to the Business layer inside the DAL (and I thought that was a no-no).
Who makes up these rules?
If your data access layer is responsible for CRUD operations for model objects, it seems to me that it has to have references to them. There's no way around that.
The persistence tier need not know about the service or view layers.
The only completely uncoupled class is one that talks to no one and offers nothing. It's useless.
Don't be so hung up on "rules". You're trying to layer your application, putting everything about persistence into one layer of classes.
I don't think an in-memory database has any effect on the way you design the persistence tier. You should be able to swap in a relational database, a flat file, or any other mechanism, but the interface shouldn't change. That's an implementation detail.
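As a sketch of that idea (the PersonDao interface and its method names are made up, not from any particular library): the rest of the application depends only on an interface, and the in-memory implementation can later be swapped for a JDBC- or JPA-backed one without the callers changing.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal domain object for the sketch.
class Person {
    private final long id;
    private final String name;

    Person(long id, String name) {
        this.id = id;
        this.name = name;
    }

    long getId() { return id; }
    String getName() { return name; }
}

// The rest of the application programs against this interface only.
interface PersonDao {
    void save(Person person);
    Person findById(long id);
    List<Person> findAll();
    void delete(long id);
}

// Stand-in implementation until the real database is in place; later it can
// be replaced by a JDBC- or JPA-backed implementation of the same interface.
class InMemoryPersonDao implements PersonDao {
    private final Map<Long, Person> store = new ConcurrentHashMap<>();

    public void save(Person person) { store.put(person.getId(), person); }
    public Person findById(long id) { return store.get(id); }
    public List<Person> findAll()   { return new ArrayList<>(store.values()); }
    public void delete(long id)     { store.remove(id); }
}
```

The UI and business layers only ever see PersonDao, so swapping the backing store becomes a wiring change rather than a rewrite.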
ORMs were available in Java much earlier than in .NET. DataSets are flawed in that they force you to program procedurally. Try to interact with objects and map them to the DB later.
I am researching how to build a general application or microservice to enable building workflow-centric applications. I have done some research into frameworks (see below), and the most promising candidates share a hard reliance on RDBMSes to store workflow and process state, combined with JPA-annotated entities. In my opinion, this undermines the possibility of designing a general, data-driven workflow microservice. It seems that a truly general workflow system could be built on NoSQL solutions like MongoDB or Cassandra by storing data objects and rules in JSON or XML. These would allow executing code to enforce types or schemas while using one or two simple Java objects to retrieve and save entities. As I see it, this could enable a single application to be deployed as a Controller for different domains' Model-View pairs without modification (admittedly, given a very clever interface).
I have tried to find a workflow engine/BPM framework that supports NoSQL backends. The closest I have found is Activiti-Neo4J, which appears to be an abandoned project that aimed to provide a connector between Activiti and Neo4j.
Is there a Java Workflow Engine/BPM framework that supports NoSQL backends and generalizes data objects without requiring specific POJO entities?
If I were to give up on my ideal, magically general solution, I would probably choose a framework like jBPM or Activiti, since they have great feature sets and are mature. In trying to find other candidates, I found a veritable graveyard of abandoned projects, like this one on Java-Source.net.
Yes, Temporal Workflow has pluggable persistence and runs on Cassandra as well as on SQL databases. It has been tested with up to 100 Cassandra nodes and can support tens of thousands of events per second and hundreds of millions of open workflows.
It lets you model your workflow logic as plain old Java classes and ensures that the code is fully fault-tolerant and durable across all sorts of failures. This includes local variables and threads.
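For a flavor of the programming model, here is a minimal sketch of a workflow defined as a plain Java interface with the Temporal Java SDK annotations. The OrderWorkflow name and its method are hypothetical examples:

```java
import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;

// Workflow logic is expressed as an ordinary Java interface plus an
// implementing class; no JPA entities or database schema are required.
@WorkflowInterface
public interface OrderWorkflow {
    // The workflow method is the durable entry point; Temporal persists its
    // progress so it survives process and node failures.
    @WorkflowMethod
    void processOrder(String orderId);
}
```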
See this presentation, which goes into more detail about the programming model.
I think the reason why workflow engines are often based on an RDBMS is not the database schema but rather the need for a transaction-safe data store.
Transactional robustness is an important factor for workflow engines, especially for long-running or nested transactions, which are typical for complex workflows.
So maybe this is one reason why most engines (like Activiti) did not focus on a data-driven approach. (I am not talking about data replication here, which is covered by NoSQL databases in most cases.)
If you take a look at the Imixs-Workflow project, you will find a different approach based on Java Enterprise. This engine uses a generic data object which can consume any kind of serializable data values. The problem of data retrieval is solved with Lucene search technology: each object is translated into a virtual document with name/value pairs for each item. This makes it easy to search through the processed business data as well as to query structured workflow data like status information or process owners. So this is one possible solution.
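The generic-data-object idea can be pictured roughly like this (a simplified sketch for illustration, not Imixs' actual API):

```java
import java.util.HashMap;
import java.util.Map;

// A simplified sketch of a generic workflow data object: arbitrary name/value
// items instead of a fixed POJO. This is an illustration, not Imixs' real API.
public class WorkItem {
    private final Map<String, Object> items = new HashMap<>();

    public void setItem(String name, Object value) {
        items.put(name, value);
    }

    public Object getItem(String name) {
        return items.get(name);
    }

    // Every name/value pair can be translated into a field of a virtual
    // (Lucene) document, so business data and workflow metadata such as
    // status or owner become searchable in the same way.
    public Map<String, Object> asIndexableFields() {
        return new HashMap<>(items);
    }
}
```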
Apart from that, you always have the option to store your business data in a NoSQL database. This is independent of the workflow data of a running process instance, as long as you link both objects together.
Going back to the aspect of transactional robustness, it's a good idea to store the reference to your NoSQL data storage in the process instance, which is transaction-aware. Also take a look here.
So the only problem you can run into is the fact that it's very hard to synchronize a transaction context from EJB/JPA to an 'external' NoSQL database. For example: what will you do when your data was successfully saved into your NoSQL data store (e.g. Cassandra), but the transaction of the workflow engine fails and a rollback is triggered?
The designers of the Activiti project have also been aware of the problem you describe, but knew it would take quite a rewrite to implement such flexibility, which, arguably, should have been designed into the project from the beginning. As you'll see in the link provided below, the problem has been a lack of interfaces against which to code implementations other than that of a relational database. With version 6 they went ahead, ripped off the bandaid, and refactored the framework with a set of interfaces for which different implementations (think Neo4j, MongoDB, or whatever other persistence technology you fancy) can be written and plugged in.
In the linked article below, they provide some code examples for a simple in-memory implementation of the aforementioned interfaces. It looks pretty cool and sounds like it may be precisely what you're looking for.
https://www.javacodegeeks.com/2015/09/pluggable-persistence-in-activiti-6.html
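To illustrate the general pattern the article describes (the interface and class names below are hypothetical, not Activiti's actual API; see the article for the real interfaces):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A process-instance record, reduced to the minimum for the sketch.
class ProcessInstanceRecord {
    final String id;
    final String state;

    ProcessInstanceRecord(String id, String state) {
        this.id = id;
        this.state = state;
    }
}

// The engine codes against an interface like this...
interface ProcessInstanceStore {
    ProcessInstanceRecord findById(String id);
    void insert(ProcessInstanceRecord record);
}

// ...so implementations backed by a relational database, Neo4j, MongoDB, or,
// as here, plain memory can be swapped in without touching the engine.
class InMemoryProcessInstanceStore implements ProcessInstanceStore {
    private final Map<String, ProcessInstanceRecord> store = new ConcurrentHashMap<>();

    public ProcessInstanceRecord findById(String id) { return store.get(id); }
    public void insert(ProcessInstanceRecord record) { store.put(record.id, record); }
}
```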
We have started writing a Java framework for our company, but we don't have much experience with Java. We decided to use JPA for database CRUD operations.
What do you suggest about the following:
About defining persistence.xml: we searched for ways of creating a dynamic EntityManager and found some documents, but we don't know whether that is the best way.
Is it a good idea to create a layer over JPA for basic DB operations (for example, CRUD methods)?
How should we call the JPA CRUD methods from our framework's CRUD methods?
We will use this framework for desktop and web applications. Will deployment be a problem for us?
Do we have to use EJB?
Is there an alternative to JPA that you would suggest (for example, ADF or JDBC)?
Thanks
It highly depends on your requirements and what you want to do with your "framework". I don't know enough about your project to give you real advice, but here are some thoughts:
What do you mean by "framework"? Are you developing a library that other people should use? What should be the purpose of your framework? Is it a data access layer for some of your company's data? If so, JPA is a standard and might be a good fit, since it is widely used. If other people are to use your "framework", it is good to build on something standard that is used in many other applications and tools.
Do you really need a data access layer on the desktop? Do you have a rich client? It is no problem to just "deploy" the application to the desktop, but a data access layer must always be configured and (maybe) updated. And that's where the pain begins when you use a rich client: users must configure a database, the database must be installed or accessible remotely, and the version of the client must match the version of the database. Sooner or later this will hit you.
What else have you considered? What about an ORM? Hibernate might be a good and popular fit. Ebean, which is used in Play!, is also very cool. If you build a CRUD application, frameworks like Ebean do most of the work for you out of the box: you create a model (just POJOs + annotations) and the framework provides the complete data access layer (including the database setup).
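To give a feel for the POJO + annotations style, here is a minimal JPA entity (the Customer class and its fields are just an example):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// A plain POJO; the annotations tell the ORM how to map it to a table.
@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```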
Our company is currently implementing a couple of tools for employee use. As I'm the only programmer in the company, it has fallen to me to develop these tools.
However, I have little to no experience with web services or Java, so I'm a little stumped on some of the logic here and hoping someone can give me some guidance.
We have a MySQL database hosted in the UK; this will provide the data for the tools, which will be used both within the UK and by our other offices abroad. I'm looking to provide access to the database via web services.
However, having looked into this, I get the feeling I have missed something key. Right now I'm looking to create methods for every database table, so each table would need a select, an update, and a delete method. Since there are 20-odd tables, that means the web service would expose 60 methods! Is this normal?
It seems to me that there should be an easier way to do this, but having little experience with Java I'm at a loss, and my Google-fu has failed me thus far.
Could anyone give me some pointers on what the "usual" way of doing this is, or point out what I've simply overlooked?
Web services should be written for each entity, not for each table. An entity should be a logical one, not simply something very abstract. There can be multiple tables in your database storing the data for one entity. For example: you have an entity called 'Person', but the person's details are stored in multiple tables such as 'PersonDetail', 'PersonContactDetails', 'PersonDependentDetails', etc. You can manipulate the data in all of these tables through the web services created for 'Person'.
Web service operations can be mapped to database CRUD (create, read, update, delete) operations. If you are writing RESTful web services, the CRUD operations map to the HTTP methods POST, GET, PUT, and DELETE.
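For example, with JAX-RS a single Person resource can cover all the CRUD operations for the entity, whatever tables back it (the class names and paths here are hypothetical):

```java
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// One resource per entity, not one web service method per table.
@Path("/persons")
public class PersonResource {

    private final PersonService personService = new PersonService();

    @GET
    @Path("/{id}")
    public Person get(@PathParam("id") long id) {                  // READ
        return personService.find(id);
    }

    @POST
    public void create(Person person) {                            // CREATE
        personService.create(person);
    }

    @PUT
    @Path("/{id}")
    public void update(@PathParam("id") long id, Person person) {  // UPDATE
        personService.update(id, person);
    }

    @DELETE
    @Path("/{id}")
    public void delete(@PathParam("id") long id) {                 // DELETE
        personService.delete(id);
    }
}

// Minimal stubs so the sketch is self-contained; the real service would read
// and write the PersonDetail, PersonContactDetails, ... tables behind the scenes.
class Person {
    public long id;
    public String name;
}

class PersonService {
    Person find(long id) { return new Person(); }
    void create(Person p) { }
    void update(long id, Person p) { }
    void delete(long id) { }
}
```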
Here's one typical approach, although it's a pretty big learning curve:
Create Data Access Objects (DAOs) to query the DB and convert from your relational data model to a Java object model. If extreme performance isn't a consideration (it isn't for most applications), consider ORM frameworks like Hibernate or JPA. You probably don't need one method per table; many times multiple tables make up one domain object. For instance, in a banking app you might have a table called customer and a related table called customer_balance. If you just want to present a balance to a customer, you could have one domain object called "Customer" with a field called "balance". Your Customer DAO would join customer and customer_balance to create a single Customer object.
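A rough sketch of that Customer DAO with plain JDBC (table and column names follow the banking example above; assume a configured DataSource):

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Domain object combining data from two tables.
public class Customer {
    private final long id;
    private final String name;
    private BigDecimal balance;

    public Customer(long id, String name, BigDecimal balance) {
        this.id = id;
        this.name = name;
        this.balance = balance;
    }

    public long getId() { return id; }
    public String getName() { return name; }
    public BigDecimal getBalance() { return balance; }
    public void setBalance(BigDecimal balance) { this.balance = balance; }
}

class CustomerDao {
    private final DataSource dataSource;

    CustomerDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Joins customer and customer_balance into a single Customer object.
    Customer findById(long id) throws SQLException {
        String sql = "SELECT c.id, c.name, b.balance "
                   + "FROM customer c JOIN customer_balance b ON b.customer_id = c.id "
                   + "WHERE c.id = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next()
                        ? new Customer(rs.getLong("id"), rs.getString("name"),
                                       rs.getBigDecimal("balance"))
                        : null;
            }
        }
    }

    // Writes a changed balance back to the customer_balance table.
    void update(Customer customer) throws SQLException {
        String sql = "UPDATE customer_balance SET balance = ? WHERE customer_id = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setBigDecimal(1, customer.getBalance());
            ps.setLong(2, customer.getId());
            ps.executeUpdate();
        }
    }
}
```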
Create services that wrap the DAOs and apply your business rules to them. Keep business rules in the service as much as possible, because it improves testability. An example of a simple banking service method would be withdrawMoney(amount): the service would pull the Customer from the DB via a DAO, first check that the customer has at least "amount" in their current balance, then subtract "amount" from the balance and save it to the database via the DAO.
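Continuing the sketch, a service method wrapping that DAO might look like this (withdrawMoney and the rule it enforces follow the example above; Customer and CustomerDao are the classes from the previous sketch):

```java
import java.math.BigDecimal;
import java.sql.SQLException;

// Business rules live here, not in the DAO and not in the web layer,
// which keeps them easy to unit-test.
public class AccountService {
    private final CustomerDao customerDao;

    public AccountService(CustomerDao customerDao) {
        this.customerDao = customerDao;
    }

    public void withdrawMoney(long customerId, BigDecimal amount) throws SQLException {
        Customer customer = customerDao.findById(customerId);
        if (customer == null) {
            throw new IllegalArgumentException("Unknown customer: " + customerId);
        }
        // Business rule: the customer must have at least `amount` available.
        if (customer.getBalance().compareTo(amount) < 0) {
            throw new IllegalStateException("Insufficient funds");
        }
        customer.setBalance(customer.getBalance().subtract(amount));
        customerDao.update(customer);
    }
}
```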
Your web layer will call the services layer, present the data to the user, and allow them to operate on it. At some point you may want your web layer to communicate with the services layer via a web service API, although that is probably overkill for early implementations.
As others have noted, the Java Petstore application is a good example of this approach. Oracle doesn't maintain the Petstore app any longer, but volunteers have copied it to GitHub and are keeping it up to date with the latest Java EE versions. Here's a link to the GitHub site: https://github.com/agoncal/agoncal-application-petstore-ee6
Yes, if every one of your 20 tables requires select (HTTP GET), update (HTTP PUT), and delete (HTTP DELETE), you will probably need 20*3 = 60 methods.
You'll probably want to start off by having a read of this part of the Java EE 7 tutorial, which will give you an overview of web service development. What you are suggesting, though, seems strange and is perhaps not really what you want. If you want to expose every table to updates, deletes, etc., then you'd perhaps be better off just opening the database server's port to the outside, but this is generally considered a bad idea.
I think you probably want to work at a higher level and pass around objects rather than database updates. Let's say, for example, you have a Person object in your application. You can pass that to and from your web application and client application, and let the web application worry about putting it in the database, deleting it, and so on. Although there is nothing technically wrong with performing updates the way you are suggesting, I've not seen it done for many years.
I'm planning to split my system into a front end and a back end. Currently my application communicates directly with the database, but I want to create a Spring web service to do this instead. My problem lies with using Hibernate to map my objects to database tables.
I need my front-end program to have persistent, up-to-date interaction with the database. This means I have to write a lot of web service endpoints to handle all the queries and updates, which in turn makes the Hibernate mapping seem pointless, since I'm not gaining anything from it.
My question is: is there a proven and reasonable way to pass (via SOAP if possible) Hibernate-mapped objects to the front end and later commit changes made to these objects?
In short: no.
Detaching and re-attaching Hibernate-managed objects in different applications, as you are considering, will lead to all kinds of problems that you want to avoid, such as concurrency and locking issues, after you've dealt with all the LazyInitializationExceptions. It will be a pain in the b***.
The road you're heading down ultimately leads to an architecture that adds an extra layer of indirection, with data transfer objects being passed between business services and the clients of those services. Only your business service will be able to talk to the database directly. Obviously this is time-consuming, and it should be avoided if possible. That's why I asked you to explain the problem you're trying to solve.
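A minimal sketch of that extra layer of indirection: the service maps the Hibernate entity to a plain transfer object before it crosses the service boundary (all class and field names here are illustrative):

```java
import java.io.Serializable;

// Plain, serializable transfer object: no Hibernate proxies and no lazy
// collections, so it is safe to marshal over SOAP or any other wire format.
public class CustomerDto implements Serializable {
    private Long id;
    private String name;

    public CustomerDto() { }

    public CustomerDto(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}

// Only the business service touches Hibernate; clients only ever see DTOs.
class CustomerService {
    public CustomerDto loadCustomer(long id) {
        CustomerEntity entity = loadEntityWithHibernate(id);
        return new CustomerDto(entity.getId(), entity.getName());
    }

    private CustomerEntity loadEntityWithHibernate(long id) {
        // In a real implementation: session.get(CustomerEntity.class, id)
        return new CustomerEntity(id, "example");
    }
}

// Stand-in for the Hibernate-mapped entity, kept minimal for the sketch.
class CustomerEntity {
    private final Long id;
    private final String name;

    CustomerEntity(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    Long getId() { return id; }
    String getName() { return name; }
}
```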
You can pass Hibernate entities via SOAP or other serialization mechanisms, but you should be very careful with lazy loading, collection loading, and detaching entities from the session; otherwise you may end up sending your entire database where you need just one object, or Hibernate proxies that are not usable on the other side.
Let's say a company is building a brand new application that follows DDD principles.
The old codebase has a lot of products (or some other entity of the company's) that they want to convert to the new codebase.
How should this work be done? Normally it is faster and easier to import using, for example, SSIS, transferring from one database to another. But the main problem here is that a lot of the business rules (implemented in managed code in the domain layer) are skipped...
Is it good enough if the developer says: "I have it under control. The rules are duplicated as SQL scripts..."?
Should we import the managed code libraries into SQL Server (at least this is possible with .NET and MS SQL Server)?
Or should we create an import script in managed code so that all the layers in the domain are traversed when the entities are saved to the database? (This can take many hours...)
What are your thoughts?
I would suggest that you write a little import application in .NET where you can apply the business rules. Since this task will (I suppose) only run once (or twice ;)), speed is not that important; if you do need to speed it up, design it to be multi-threaded if possible.
And no, it is not good enough if anyone says "I have it under control". That is a buzz-sentence, and all my alarm bells go off: some detail will always be forgotten, and that usually means a small catastrophe ;)
The two options are not mutually exclusive: SQL Server can consume web services. You can create your import service as a web service and then call it from SQL Server. You can, of course, even do this with SSIS.
Like a lot of other questions in this league (which is the best approach for this or that, which language to choose, ORM or not), it is hard to answer without knowing the details of the old app, the new app, the data model (relational and OO), and so on. Or, put simply: analyze your task carefully and then choose your tool.
Will the new app be built on top of the new relational layer with its own, also new, domain layer, or will the mentioned domain layer stay?
What kind of, and how many, business rules are we talking about?
...?
Without knowing too much about all this, I'd say: the new app has to follow the old business rules, and the new relational layer has to be designed for the domain. The developers should therefore know the business and domain rules, and the (SQL) scripting should be a viable way.