We are a complete SOA shop (Java only) and we use SOAP for data transfer. Currently we are in the process of centralizing the database work for a specific component so that the other components can fetch data from one application using SOAP.
My argument is that centralizing is good, but putting SOAP between the database calls adds a lot of latency. I want an RMI/EJB type of implementation so that we get serialized objects and reduce the marshalling overhead. I like the way EJBs are implemented and would like to use them. But the data we return does not come from a single table at all, so I cannot return a database table entity; the data might come from 20 or more tables.
So, in our current system we have custom entities that are created to map to heavy SQL queries (not tied to a single table).
Can EJBs be used in this type of environment? If so, are there readily available libraries to map the result of a query to entities?
Unfortunately our in-house system is very old; we use Java 1.4.
This can be done, but it is going to be painful. EJB 3.0 entity beans were created for a reason: mapping these sorts of complex requirements via the old 2.x entity bean XML files is genuinely difficult.
If you are really building a new SOA layer to represent your database content, why would you do it with a technology that has been obsolete for almost 10 years?
Worse, building this with EJB 2.x and then using RMI/EJB will bind all of your other applications to the same outdated technology. Very few people would choose to start a new EJB 2.1 project today.
I honestly believe you are better off using SOAP for your service instead of EJB; at least it won't couple you to an obsolete platform. Current best practice prefers REST for entity transfer and saves SOAP for RPC-style interactions, but there are plenty of good libraries for mapping your database tables to SOAP, many of which work out of the box with RDBMSs.
Finally, if you are determined to do this, I'd suggest you run a test first. Build a test harness to see whether SOAP deserialization is actually a significant cost component, and compare it to the cost of the network transport. Unless these entities are in the megabyte range, deserialization will be a tiny fraction of your overall application time.
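To make that test concrete, here is a crude sketch of the kind of micro-benchmark I mean, comparing XML round trips (roughly what a SOAP stack does per call) against native Java serialization (what RMI/EJB would use). It uses JAXB and therefore post-1.4 Java, and OrderSummary is a hypothetical stand-in for one of your query-mapped entities, but the measurement idea carries over:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.bind.Unmarshaller;
    import javax.xml.bind.annotation.XmlRootElement;

    public class MarshallingBenchmark {

        // Hypothetical stand-in for one of your query-mapped custom entities.
        @XmlRootElement
        public static class OrderSummary implements Serializable {
            private static final long serialVersionUID = 1L;
            public String customerName = "ACME";
            public double total = 1234.56;
            public int itemCount = 42;
        }

        public static void main(String[] args) throws Exception {
            OrderSummary entity = new OrderSummary();
            int iterations = 100000;

            // XML round trips, roughly what a SOAP stack does per call.
            JAXBContext ctx = JAXBContext.newInstance(OrderSummary.class);
            Marshaller m = ctx.createMarshaller();
            Unmarshaller u = ctx.createUnmarshaller();
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                m.marshal(entity, out);
                u.unmarshal(new ByteArrayInputStream(out.toByteArray()));
            }
            System.out.println("XML:  " + (System.nanoTime() - start) / 1000000 + " ms");

            // Native Java serialization round trips, what RMI/EJB would use.
            start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(out);
                oos.writeObject(entity);
                oos.flush();
                new ObjectInputStream(new ByteArrayInputStream(out.toByteArray())).readObject();
            }
            System.out.println("Java: " + (System.nanoTime() - start) / 1000000 + " ms");
        }
    }

Run it with entities of realistic size and compare both numbers to a measured network round trip; that tells you whether the marshalling difference justifies re-architecting.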
I am new to Spring and was assigned to work on a project currently under development. Unfortunately, development of the project has been slow, so people have come and gone and I can't ask them why some things were done a certain way.
The project is a web service using Spring.
They are using a View - Controller - Service (interface & implementation) - DAO (interface & implementation) - POJO (a class used to transport the data structure across layers) architecture.
Every POJO I have checked implements Serializable. On closer examination and a search of the code, none of the POJOs are ever explicitly written or read, either in the POJO itself or in any other file, which has led me to ask why it's being done.
The POJOs are populated from Oracle statements in the DAO, bubble up to the view, and then bubble back down to the DAO, where the information from them is written to the database using Oracle statements. The POJO itself is not written into the database.
Do Spring MVC or Java web applications require serialization, with it being used in the background? Is it needed to transmit the data between server and client connections? Is there a good reason that all the POJOs use it that someone new would not recognize?
It depends on the technologies used in the layers as well as on implementation details.
If persistence is done using JPA/Hibernate, then the POJOs will most likely need to be Serializable.
If the POJO is passed to the view via the servlet session and session replication is on, then your POJOs need to be Serializable.
Using Java's default serialization is the normal approach for regular POJOs.
Java specifies a default way in which objects can be serialized, and Java classes can override this default behavior. Custom serialization can be particularly useful when trying to serialize an object that has some unserializable attributes.
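For illustration, here is a minimal sketch of that override mechanism. LegacyHandle is a hypothetical non-serializable type; the writeObject/readObject hooks persist just enough state to rebuild it:

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public class UserPreferences implements Serializable {
        private static final long serialVersionUID = 1L;

        private String username;
        private transient LegacyHandle handle; // not Serializable, skipped by default

        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject();       // write the ordinary fields
            out.writeUTF(handle.getKey());  // write a serializable stand-in
        }

        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
            in.defaultReadObject();                   // restore the ordinary fields
            handle = LegacyHandle.open(in.readUTF()); // rebuild the transient field
        }

        // Hypothetical non-serializable resource wrapper.
        static class LegacyHandle {
            private final String key;
            private LegacyHandle(String key) { this.key = key; }
            static LegacyHandle open(String key) { return new LegacyHandle(key); }
            String getKey() { return key; }
        }
    }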
This might not be the correct answer, but so far it matches and explains what I am seeing in my case. I have not seen this information mentioned elsewhere, but the answer is well upvoted, has been around for a while, and is from a high-reputation user, so I am inclined to trust it.
There is an answer to another question where they mention something important.
As to why you need to worry about serialization: most Java servlet containers, like Tomcat, require classes to implement Serializable whenever instances of those classes are stored as an attribute of the HttpSession. That is because the HttpSession may need to be saved to the local disk file system, or even transferred over the network, when the servlet container needs to shut down/restart or is placed in a cluster of servers wherein the session has to be synchronized.
The application I'm working on DOES use Tomcat, so if this is a restriction or behavior, then I can easily see why all the POJOs are created in this fashion: simply to avoid issues that might develop later. That is the result of experience from having worked with all of this before, and it's that experience that I am lacking.
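For completeness, here is a small sketch (hypothetical names, javax.servlet API) of the pattern that triggers the requirement, a POJO stored as a session attribute:

    import java.io.IOException;
    import java.io.Serializable;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ProfileServlet extends HttpServlet {

        // Without Serializable, Tomcat reports a NotSerializableException when
        // it passivates sessions to disk or replicates them across a cluster.
        public static class UserProfile implements Serializable {
            private static final long serialVersionUID = 1L;
            public String displayName;
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            UserProfile profile = new UserProfile();
            profile.displayName = "example";
            // Storing the object in the HttpSession is what makes Serializable
            // matter, even though no application code serializes it explicitly.
            req.getSession().setAttribute("profile", profile);
        }
    }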
I am researching how to build a general application or microservice to enable building workflow-centric applications. I have done some research on frameworks (see below), and the most promising candidates share a hard reliance upon RDBMSes to store workflow and process state, combined with JPA-annotated entities. In my opinion, this damages the possibility of designing a general, data-driven workflow microservice. It seems that a truly general workflow system could be built upon NoSQL solutions like MongoDB or Cassandra by storing data objects and rules in JSON or XML. These would allow the executing code to enforce types or schemas while using one or two simple Java objects to retrieve and save entities. As I see it, this could enable a single application to be deployed as a Controller for different domains' Model-View pairs without modification (admittedly given a very clever interface).
I have tried to find a workflow engine/BPM framework that supports NoSQL backends. The closest I have found is Activiti-Neo4J, which appears to be an abandoned project providing a connector between Activiti and Neo4j.
Is there a Java workflow engine/BPM framework that supports NoSQL backends and generalizes data objects without requiring specific POJO entities?
If I were to give up on my ideal, magically general solution, I would probably choose a framework like jBPM or Activiti, since they have great feature sets and are mature. In trying to find other candidates, I have found a veritable graveyard of abandoned projects, like this one on Java-Source.net.
Yes, Temporal Workflow has pluggable persistence and runs on Cassandra as well as on SQL databases. It has been tested with up to 100 Cassandra nodes and can support tens of thousands of events per second and hundreds of millions of open workflows.
It allows you to model your workflow logic as plain old Java classes and ensures that the code is fully fault tolerant and durable across all sorts of failures. This includes local variables and threads.
See this presentation, which goes into more detail about the programming model.
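For a flavor of that model, here is a minimal sketch using the Temporal Java SDK's workflow annotations (assuming the io.temporal:temporal-sdk dependency; OrderWorkflow is an illustrative name):

    import io.temporal.workflow.WorkflowInterface;
    import io.temporal.workflow.WorkflowMethod;

    // The workflow contract: plain Java, no JPA entities or database schema.
    @WorkflowInterface
    public interface OrderWorkflow {
        @WorkflowMethod
        String processOrder(String orderId);
    }

    class OrderWorkflowImpl implements OrderWorkflow {
        @Override
        public String processOrder(String orderId) {
            // Ordinary Java logic; Temporal records progress so this code
            // survives worker crashes and resumes where it left off,
            // whether persistence runs on Cassandra or SQL.
            return "processed-" + orderId;
        }
    }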
I think the reason why workflow engines are often based on an RDBMS is not the database schema but rather the combination with a transaction-safe data store.
Transactional robustness is an important factor for workflow engines, especially for long-running or nested transactions, which are typical for complex workflows.
So maybe this is one reason why most engines (like Activiti) did not focus on a data-driven approach. (I am not talking about data replication here, which is covered by NoSQL databases in most cases.)
If you take a look at the Imixs-Workflow project, you will find a different approach based on Java Enterprise. This engine uses a generic data object which can consume any kind of serializable data values. The problem of data retrieval is solved with Lucene search technology: each object is translated into a virtual document with name/value pairs for each item. This makes it easy to search through the processed business data as well as to query structured workflow data like the status information or the process owners. So this is one possible solution.
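A rough sketch of what that generic data object looks like in practice, loosely based on Imixs-Workflow's ItemCollection (method names may differ between versions):

    import org.imixs.workflow.ItemCollection;

    public class GenericDataExample {
        public static void main(String[] args) {
            // Arbitrary name/value pairs; no entity class per business domain.
            ItemCollection workitem = new ItemCollection();
            workitem.replaceItemValue("customer", "ACME Corp");
            workitem.replaceItemValue("amount", Double.valueOf(1234.56));
            workitem.replaceItemValue("$processid", Integer.valueOf(1000)); // workflow status

            // Each item also becomes a searchable field in the Lucene index.
            System.out.println(workitem.getItemValueString("customer"));
        }
    }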
Apart from that, you always have the option to store your business data in a NoSQL database. This is independent of the workflow data of a running process instance, as long as you link both objects together.
Going back to the aspect of transactional robustness, it's a good idea to store the reference to your NoSQL data storage in the process instance, which is transaction-aware. Also take a look here.
So the only problem you can run into is the fact that it's very hard to synchronize a transaction context from EJB/JPA to an 'external' NoSQL database. For example: what do you do when your data was successfully saved into your NoSQL data store (e.g. Cassandra), but the transaction of the workflow engine fails and a rollback is triggered?
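One defensive pattern for that rollback scenario, sketched with the standard JTA Synchronization callback (NoSqlStore is a hypothetical client interface, and real dual-write consistency needs more care than this):

    import javax.transaction.Status;
    import javax.transaction.Synchronization;
    import javax.transaction.TransactionSynchronizationRegistry;

    public class NoSqlCompensation {

        // Hypothetical minimal client for the external NoSQL store.
        public interface NoSqlStore {
            void put(String key, byte[] data);
            void delete(String key);
        }

        public void saveWithCompensation(TransactionSynchronizationRegistry registry,
                                         final NoSqlStore store,
                                         final String key, byte[] data) {
            // The NoSQL write happens outside the JTA transaction...
            store.put(key, data);
            // ...so register a callback that compensates when the workflow
            // engine's transaction ends in a rollback.
            registry.registerInterposedSynchronization(new Synchronization() {
                public void beforeCompletion() { }
                public void afterCompletion(int status) {
                    if (status == Status.STATUS_ROLLEDBACK) {
                        store.delete(key); // remove the orphaned record
                    }
                }
            });
        }
    }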
The designers of the Activiti project have also been aware of the problem you have stated, but knew it would take quite a rewrite to implement such flexibility, which, arguably, should have been designed into the project from the beginning. As you'll see in the link provided below, the problem has been a lack of interfaces against which to code implementations other than that of a relational database. With version 6 they went ahead, ripped off the bandaid, and refactored the framework with a set of interfaces for which different implementations (think Neo4j, MongoDB, or whatever other persistence technology you fancy) can be written and plugged in.
In the linked article below, they provide some code examples for a simple in-memory implementation of the aforementioned interfaces. It looks pretty cool and sounds like it may be precisely what you're looking for.
https://www.javacodegeeks.com/2015/09/pluggable-persistence-in-activiti-6.html
I have a 2-tier application: a heavy client written in C++ which connects to an object-oriented database. The database itself is InterSystems Caché, and it is actually both a database and an application server (Caché is also a MUMPS interpreter).
For performance reasons, I want to design a client-side cache (or, more generally, a persistence manager).
InterSystems Caché does have "fast" interfaces like ODBC/JDBC, but I'm dealing with a lot of legacy client code which has been using Object Binding for ages. So I can't change the architecture of the client; I have to make the protocol faster. The protocol itself is very verbose: all class/method/property names are sent verbatim, so, for instance, creating a single object server-side "costs" me 50k of traffic.
Classes on the server side support inheritance and can have properties and methods. So using Caché Object Binding means I can:
create and delete objects,
read and update properties, and
call methods.
What is important here is that a server-side method call generally executes code whose nature is unknown to the client. Since this code may potentially alter the state of objects in the database, the client-side cache may need to be invalidated after a method call. This is different from regular CRUD operations, where the client can keep track of the changes it made to the objects and update the cache accordingly.
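To make that policy concrete, here is a rough sketch (in Java, since the questions below ask about Java persistence managers; the server calls are placeholders): CRUD operations keep the cache precise, while an opaque method call flushes it wholesale.

    import java.util.HashMap;
    import java.util.Map;

    public class ClientSideCache {
        private final Map<String, Object> valuesByKey = new HashMap<String, Object>();

        public Object readProperty(String oid, String property) {
            String key = oid + "." + property;
            Object value = valuesByKey.get(key);
            if (value == null) {
                value = fetchFromServer(oid, property); // round trip to the server
                valuesByKey.put(key, value);
            }
            return value;
        }

        public void updateProperty(String oid, String property, Object value) {
            writeToServer(oid, property, value);
            valuesByKey.put(oid + "." + property, value); // precise update
        }

        public Object callMethod(String oid, String method, Object[] args) {
            // The method may mutate arbitrary server state, so the safe
            // default is to invalidate everything we have cached.
            valuesByKey.clear();
            return invokeOnServer(oid, method, args);
        }

        // Placeholders for the actual Object Binding calls.
        private Object fetchFromServer(String oid, String property) { return null; }
        private void writeToServer(String oid, String property, Object value) { }
        private Object invokeOnServer(String oid, String method, Object[] args) { return null; }
    }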
Questions:
Which Java persistence managers are worth looking at, so that I can take the ideas and reinvent the wheel? I'm thinking of J2EE entity beans, JPA, and in-memory grids like Coherence.
Which C++ persistence managers can be adapted to use the InterSystems API? In particular, are Protocol Buffers a fit for my task?
Which methods can be used to "compress" a protocol that is this verbose on the wire? My first thought is ZIP-compressing the traffic and hashing (encoding) class/method/property names, so that a TLV structure containing integers instead of names is sent over the network (a sketch of this idea follows below). Any other ideas?
What reading on enterprise patterns (particularly in C++) applicable to my case can you suggest?
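Regarding the name-encoding idea in the third question, here is a minimal sketch of the dictionary each peer would maintain; the id assignment has to be mirrored or negotiated on both sides (shown in Java for consistency with the rest of the thread):

    import java.util.HashMap;
    import java.util.Map;

    public class NameDictionary {
        private final Map<String, Integer> idsByName = new HashMap<String, Integer>();
        private final Map<Integer, String> namesById = new HashMap<Integer, String>();

        // The first use of a name assigns it an id; later uses reuse it, so
        // repeated class/method/property names shrink to small integers.
        public int encode(String name) {
            Integer id = idsByName.get(name);
            if (id == null) {
                id = Integer.valueOf(idsByName.size());
                idsByName.put(name, id);
                namesById.put(id, name);
            }
            return id.intValue();
        }

        public String decode(int id) {
            return namesById.get(id);
        }
    }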
The ORM framework Wt::Dbo seems to do this job:
http://www.webtoolkit.eu/wt/doc/tutorial/dbo/tutorial.html
Wt looks like a very interesting tool for implementing a modern web application in C++.
I am planning to split my system into a front end and a back end. Currently my application communicates directly with the database, but I want to create a Spring web service to do it instead. My problem lies with using Hibernate to map my objects to database tables.
I need my front-end program to have persistent, up-to-date interaction with the database. This means I have to write a lot of web service endpoints to handle all the queries and updates, which in turn makes the Hibernate mapping pointless, since I'm not gaining anything.
My question is: is there a proven and reasonable way to pass Hibernate-mapped objects (via SOAP, if possible) over to the front end and later commit the changes made to these objects?
In short: no.
Detaching and re-attaching Hibernate-managed objects in different applications, as you are considering, will lead to all kinds of problems that you want to avoid, such as concurrency and locking issues, after you've dealt with all the LazyInitializationExceptions. It will be a pain in the b***.
The road you're heading down ultimately leads to an architecture with an extra layer of indirection: Data Transfer Objects passed between business services and the clients of those business services, where only the business service talks to the database directly. Obviously this is time-consuming and should be avoided if possible; that's why I asked you to explain the problem you're trying to solve.
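A minimal sketch of that indirection, with illustrative names: copy managed entities into plain DTOs while the Hibernate session is still open, and expose only the DTOs through the web service.

    import java.io.Serializable;

    public class CustomerService {

        // Stand-in for a Hibernate-mapped entity class.
        static class Customer {
            private Long id;
            private String name;
            Long getId() { return id; }
            String getName() { return name; }
        }

        // Plain DTO: no Hibernate proxies, safe to marshal over SOAP.
        public static class CustomerDto implements Serializable {
            private static final long serialVersionUID = 1L;
            public Long id;
            public String name;
        }

        // Stand-in for a DAO/session lookup.
        private Customer findById(Long id) { return new Customer(); }

        public CustomerDto getCustomer(Long id) {
            Customer entity = findById(id);      // managed inside the session
            CustomerDto dto = new CustomerDto(); // copied while the session is
            dto.id = entity.getId();             // open, so no lazy-loading
            dto.name = entity.getName();         // surprises on the client
            return dto;
        }
    }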
You can pass Hibernate entities via SOAP or other serialization mechanisms, but you have to be very careful with lazy loading, collection loading, and detaching entities from the session; otherwise you may end up sending your whole database when you need just one object, or sending Hibernate proxies, which are not usable on the other side.
I'm looking for a comprehensive setup that you've already used successfully. I already have loads of hints as to which building blocks I might use, but I'm not sure how to put them all together. Tools that need to be bought are OK, too.
Details:
I'm developing a Flex front-end client for a Java server application, and I have a set of model classes that represent objects in my business logic and should have the same properties and exhibit the same behaviour throughout all layers. These objects:
have form validation logic for user input
are displayed in various forms (lists, detail views ...) throughout the UI
are retrieved from and sent to the server using XML or AMF
are validated again on the server
are stored in an RDBMS with tables and fields corresponding to the classes and fields
This is a very common application structure, I guess. I'm already using:
ORM for the Java backend (the Eclipse persistence package)
automatic mapping from XML to ActionScript, using XML Schema and the classes in mx.rpc.xml, as described here.
Now, what I'd really like to do is define the objects once (I already have them in XSD) and have tools generate class stubs for the whole chain. What can I use?
I've already heard of (but not evaluated):
XMLBeans to generate Java classes from XML Schema
Granite DS to generate AS classes from Java classes
I don't think your Flex UI should know or care about Java objects.
Take a "contract first", XML schema-drive approach and come up with the messages that you need to exchange between the Flex client and your service tier. Once you have that in place, the two are completely decoupled. That's a good start.
I'd also recommend not buying into a generation scheme: you'll only have to pay the price of writing those classes once, during development.
I'm a Spring user, so I'd recommend Spring's "contract first" web services, using the Spring OXM interfaces. That will keep your UI and service tiers nicely decoupled. Use the org.springframework.oxm interfaces to do your mappings.
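For a taste of the OXM abstraction, here is a small sketch using Spring's Jaxb2Marshaller, with OrderMessage standing in for a class generated from your XSD (e.g. by xjc):

    import java.io.StringWriter;
    import javax.xml.bind.annotation.XmlRootElement;
    import javax.xml.transform.stream.StreamResult;
    import org.springframework.oxm.jaxb.Jaxb2Marshaller;

    public class OxmExample {

        // Stand-in for a schema-derived message class.
        @XmlRootElement
        public static class OrderMessage {
            public String orderId = "42";
        }

        public static void main(String[] args) throws Exception {
            Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
            marshaller.setClassesToBeBound(OrderMessage.class);
            marshaller.afterPropertiesSet(); // normally done by the Spring container

            // The service tier codes against the org.springframework.oxm
            // abstraction rather than JAXB directly, keeping the wire format
            // swappable without touching business code.
            StringWriter xml = new StringWriter();
            marshaller.marshal(new OrderMessage(), new StreamResult(xml));
            System.out.println(xml);
        }
    }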
You can use Spring/BlazeDS to integrate your Flex UI with the Spring back end.
You have the full power of Spring IoC and AOP to create the back end.
I think you'll find it's a good approach for this problem.