Seeking advice: overview/detail representation of models - Java

I have a problem that has been bothering me for quite some time, and I have come up with a solution. The question here is rather how best to implement it, so I'm seeking advice from anyone who has dealt with this situation before (it's hard to find anything useful on this topic on the web).
Situation
3-tier architecture (rich client such as Swing or Eclipse RCP or Android, web application with implementations of the service layer, relational database).
My models are POJOs (plain old Java objects, pure data containers with getters and setters), which are persisted (technical ID on all my models).
I'm often dealing with large models that are used aggregatively, but need to be read and transported efficiently. Let's say I've got the following models:
User, with login name, password hash, salt, first/last name, e-mail address, authorization credentials, profile picture (Image)
Image, with name, content type and (usually large) data
Article, with text, and author (User)
Problem
Now when I'm listing or loading an article, I don't want to load the entire author (User), as it exposes too much detail (password hash and salt) and carries too much data (credentials, image) for what I actually need in the context of an article (first/last name and e-mail).
Generally speaking: sometimes I need the full details of my models (when creating/editing them, or in very specific situations), but when I use them aggregatively in other models, I'd rather have a simplified form (if I need details, I could load them with a separate request).
Solution
For each model, I could create two variants: a full detail variant with full CRUD (create, read, update, delete) and a simplified, read-only variant, which can be used as a surrogate in aggregative relationships. The simplified version also contains the technical ID of the detail version, so I can fetch the details on demand.
For the User: the simplified model just has the first/last name and e-mail.
For the Image: the simplified model comes without the image data.
The article's author is the simplified version of the user, and the User's profile picture is the simplified version of the image.
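To make this concrete, here's a minimal sketch of what I have in mind (one class per file; names like UserSummary and ImageSummary are placeholders, not a naming proposal):

    // Full detail variant: supports full CRUD.
    public class User {
        private long id;                      // technical ID
        private String loginName;
        private String passwordHash;
        private String salt;
        private String firstName;
        private String lastName;
        private String email;
        private ImageSummary profilePicture;  // aggregated as a simplified image
        // getters and setters omitted
    }

    // Simplified, read-only variant: surrogate in aggregative relationships.
    public final class UserSummary {
        private final long userId;            // ID of the detail version, for fetching on demand
        private final String firstName;
        private final String lastName;
        private final String email;

        public UserSummary(long userId, String firstName, String lastName, String email) {
            this.userId = userId;
            this.firstName = firstName;
            this.lastName = lastName;
            this.email = email;
        }
        // getters only, no setters
    }

    // Simplified Image: everything but the (large) data field.
    public final class ImageSummary {
        private final long imageId;
        private final String name;
        private final String contentType;

        public ImageSummary(long imageId, String name, String contentType) {
            this.imageId = imageId;
            this.name = name;
            this.contentType = contentType;
        }
        // getters only, no setters
    }

    public class Article {
        private long id;
        private String text;
        private UserSummary author;           // simplified form of the author
        // getters and setters omitted
    }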
Question
Is this an existing pattern? It somewhat relates to DTO (Data Transfer Object), but it's not the same. Has anybody seen this before?
Have you used something like this before? Any advice or tips on naming, or on the OO relationship between the two representations?

I cannot correlate your solution to a pattern that I am aware of. But your requirement(s) can be fulfilled by introducing a very thin API (Web API) on top of your service layer.
There are two parts to it:
Reading: as you've mentioned, you may need to return a specific selection of data items to the consumer based on usage/permissions. This can be accommodated by specifying a set of facets in your incoming GET query.
Check here for a sample: https://developer.linkedin.com/documents/people-search-api
Writing: for fragments, use PATCH operations. This will give you the flexibility to update field by field (see the sketch after these links).
Check here for a sample: https://www.mnot.net/blog/2012/09/05/patch
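To illustrate both parts, here is a rough sketch using JAX-RS 2.1 (which provides a @PATCH annotation); the /users resource, the field names, and the in-memory store are all hypothetical, and a JSON provider such as Jackson is assumed for (de)serializing the maps:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.ws.rs.Consumes;
    import javax.ws.rs.GET;
    import javax.ws.rs.PATCH;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.QueryParam;
    import javax.ws.rs.core.MediaType;

    @Path("/users")
    public class UserResource {

        // Stand-in for the real service layer.
        private static final Map<Long, Map<String, Object>> STORE = new ConcurrentHashMap<>();

        // Reading: GET /users/42?fields=firstName,lastName,email
        @GET
        @Path("{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Map<String, Object> read(@PathParam("id") long id,
                                        @QueryParam("fields") String fields) {
            Map<String, Object> user = STORE.getOrDefault(id, new LinkedHashMap<>());
            if (fields == null || fields.isEmpty()) {
                return user;                         // no facet list: return everything
            }
            Map<String, Object> view = new LinkedHashMap<>();
            for (String field : fields.split(",")) { // copy only the requested facets
                if (user.containsKey(field)) {
                    view.put(field, user.get(field));
                }
            }
            return view;
        }

        // Writing: PATCH /users/42 with body {"email": "new@example.com"}
        @PATCH
        @Path("{id}")
        @Consumes(MediaType.APPLICATION_JSON)
        public void patch(@PathParam("id") long id, Map<String, Object> fragment) {
            // Only the supplied fields are touched; everything else stays as-is.
            STORE.computeIfAbsent(id, k -> new LinkedHashMap<>()).putAll(fragment);
        }
    }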
All in all, what this gives you is a very flexible API with a very clean domain model under your API. And this is a widely accepted approach these days.
Hope that helps.

Related

What is the correct way to structure this kind of data in Firestore?

I have seen videos and read the documentation of Cloud Firestore, from the Google Firebase service, but I can't figure this out coming from the Realtime Database.
I have this web app in mind in which I want to store my providers from different categories of products. I want to perform a search query across all my products to find which providers I have for a given product, and eventually access that provider's info.
I am planning to use this structure for this purpose:
Providers ( Collection )
    Provider 1 ( Document )
        Name
        City
        Categories
    Provider 2
        Name
        City
Products ( Collection )
    Product 1 ( Document )
        Name
        Description
        Category
        Provider ID
    Product 2
        Name
        Description
        Category
        Provider ID
So my question is, is this approach the right way to access the provider info once I get the product I want?
I know this is possible in the Realtime Database: using the provider ID, I could search for that provider in the providers section. But with Firestore I am not sure if it's possible, or if this is the right approach.
What is the correct way to structure this kind of data in Firestore?
You need to know that there is no "perfect", "best" or "correct" solution for structuring a Cloud Firestore database. The best and correct solution is the one that fits your needs and makes your job easier. Bear in mind that there is also no single "correct data structure" in the world of NoSQL databases. All data is modeled to allow the use-cases that your app requires. What works for one app may be insufficient for another, so there is no correct solution for everyone. An effective structure for a NoSQL database is entirely dependent on how you intend to query it.
The way you are structuring your data looks good to me. In general, there are two ways in which you can achieve the same thing: keep a reference to the provider in the product object (as you already do), or copy the entire provider object into the product document. The latter technique is called denormalization and is a quite common practice when it comes to Firebase. We often duplicate data in NoSQL databases to suit queries that may not be possible otherwise. For a better understanding, I recommend you see this video, Denormalization is normal with the Firebase Database. It's for the Firebase Realtime Database, but the same principles apply to Cloud Firestore.
Also, when you are duplicating data, there is one thing to keep in mind: in the same way you add data, you need to maintain it. In other words, if you want to update/delete a provider object, you need to do it in every place where it exists.
You might wonder now which technique is best. In a very general sense, whether you store references or duplicate data in a NoSQL database is completely dependent on your project's requirements.
So you should ask yourself some questions about whether to duplicate data or simply keep references:
Is the data static, or will it change over time?
If it does, do you need to update every duplicated instance of the data so they all stay in sync? This is what I have also mentioned earlier.
When it comes to Firestore, are you optimizing for performance or cost?
If your duplicated data needs to change and stay in sync at the same time, then you might have a hard time in the future keeping all those duplicates up to date. This might also mean you spend a lot of money keeping all those documents fresh, as each change will require a read and a write for each document. In this case, holding only references will be the winning variant.
In this kind of approach, you write very little duplicated data (pretty much just the Provider ID). That means the code for writing this data is going to be quite simple and quite fast. But when reading the data, you will need to load it from both collections, which means an extra database call. This typically isn't a big performance issue for a reasonable number of documents, but it definitely requires more code and more API calls.
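To make the reference approach concrete, here is a rough sketch with the Firestore Android SDK in Java; the collection and field names ("Products", "Providers", "providerId", "name") are assumptions loosely based on the structure above:

    import android.util.Log;
    import com.google.firebase.firestore.FirebaseFirestore;

    public class ProviderLookup {

        // Load a product, then resolve its provider with a second read.
        public void loadProductWithProvider(String productId) {
            FirebaseFirestore db = FirebaseFirestore.getInstance();
            db.collection("Products").document(productId).get()
                .addOnSuccessListener(productSnap -> {
                    String providerId = productSnap.getString("providerId");
                    db.collection("Providers").document(providerId).get() // the extra call
                        .addOnSuccessListener(providerSnap ->
                            Log.d("ProviderLookup", "Provider: " + providerSnap.getString("name")));
                });
        }
    }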
If you need your queries to be very fast, you may prefer to duplicate more data so that the client only has to read one document per item queried, rather than multiple documents. But you may also be able to depend on local client caches to make this cheaper, depending on the data the client has to read.
In this approach, you duplicate all the data for a provider in each product document. This means that the code to write this data is more complex, and you're definitely storing more data: one more provider object for each product document. And you'll need to figure out if and how to keep the copies in each document up to date. But on the other hand, reading a product document now gives you all the information about the provider in one read.
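And a sketch of the denormalized alternative, again with assumed field names: the provider is embedded in the product document at write time:

    import com.google.firebase.firestore.FirebaseFirestore;
    import java.util.HashMap;
    import java.util.Map;

    public class DenormalizedWrite {

        // Embed a copy of the provider so one read returns everything.
        public void addProduct(FirebaseFirestore db) {
            Map<String, Object> provider = new HashMap<>();
            provider.put("name", "Provider 1");
            provider.put("city", "Some City");

            Map<String, Object> product = new HashMap<>();
            product.put("name", "Product 1");
            product.put("category", "Some Category");
            product.put("provider", provider); // duplicated provider data

            db.collection("Products").add(product);
            // If the provider ever changes, every product document that
            // embeds it must be updated too.
        }
    }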
This is a common consideration in NoSQL databases: you'll often have to consider write performance and disk storage vs. reading performance and scalability.
Whether or not to duplicate some data is highly dependent on your data and its characteristics. You will have to think it through on a case-by-case basis.
So in the end, remember that both are valid approaches, and neither is inherently better than the other. It all depends on what your use-cases are and how comfortable you are with this technique of duplicating data. Data duplication is the key to faster reads, not just in Cloud Firestore or the Firebase Realtime Database but in general. Any time you add the same data to a different location, you're duplicating data in favor of faster read performance. Unfortunately, in return you get more complex updates and higher storage/memory usage. Note also that extra calls are not expensive in the Firebase Realtime Database, but in Firestore they are. How much duplicated data versus how many extra database calls is optimal for you depends on your needs and your willingness to let go of the "Single Point of Definition" mindset, which can be called very subjective.
After finishing a few Firebase projects, I find that my reading code gets drastically simpler if I duplicate data. But of course, the writing code gets more complex at the same time. It's a trade-off between these two and your needs that determines the optimal solution for your app. Furthermore, to be even more precise you can also measure what is happening in your app using the existing tools and decide accordingly. I know that is not a concrete recommendation but that's software development. Everything is about measuring things.
Remember also that some database structures are easier to protect with security rules. So try to find a schema that can be easily secured using Cloud Firestore Security Rules.
Please also take a look at my answer from this post where I have explained more about collections, maps and arrays in Firestore.

Always 'Entity first' approach, when designing Java apps from scratch?

I'm just reading the book here: http://www.amazon.com/Java-Architects-Handbook-Second-Edition/dp/0972954880/ trying to find a strategy for how to efficiently design a (generic) medium-to-large application (200 tables or more) - for instance a classic, multi-layered, corporate intranet. I'm trying to adapt my past experience (as a database designer, but also OOAD) in order to architect such a Java application. From what I've read, if you define your entities first, there is no recommended way to infer your database directly (automatically).
The book says that you would build the entity/object model first (OOAD), and THEN it is the DB admin/developer's job(?) to build/infer the database (schema, normalization, etc.) based on the entity model already built. If this is the case, I'm afraid the architect/developer could lose control over important aspects - normalization, entity-attribute-value modeling, etc.
Perhaps like many older developers (back-end developers, architects, etc.) I feel more comfortable defining the database schema first, and spending a good amount of time on aspects like normalization. While this is certainly still possible nowadays, I'm asking myself whether it will become (pretty soon, if not already) the 'old-fashioned way' rather than the norm as a classic/recommended approach when designing applications from scratch.
I know Entity Framework (.NET) already has these approaches explicitly defined - 'entity first', 'database first', 'code first' - and these can be mixed if necessary. I surely know that they recommend 'entity first' for newly designed apps, and 'database first' if you have an already-defined database schema (which is the case for many older applications, when migrating, etc.). I'm just asking if there is something similar for the Java world.
So, the questions are: (although I know there is no silver bullet etc.)
1. 'Entities first' for newly built apps - is this the norm nowadays?
2. What tools do you use (if any) to assist the process of inferring the DB schema? Your experience, pros & cons with concrete UML tools, etc.
3. What if you have parts of an older/sub-domain database schema (which you'd mainly want to preserve)? In such a case, would you infer the entities model from the database and then refactor the model using your preferred UML tool?
4. From a labor-force perspective (let's say for a DB of 200-500 tables): what is the best approach? For instance, to have two different people involved in designing the OOAD/entities and the database respectively, working together with an architect?
As you expect - my answer is it depends.
The problem is that there are so many possible flavours and dimensions to a good design you really need to take the widest view possible first.
Ask yourself some of the big questions:
Where is the core of the system? Is the database really the core or is it actually just a persistence layer for the code. It could also perhaps be that the database is the core and the code is really just a snazzy UI on the data. There can also be a mix - where some of the tables are core along with some of the entities.
What do you see in the future? Remember that there are developments going on as we speak that are moving database technology rapidly forward. There are some databases that are entirely in-RAM. Some are designed for a distributed architecture. Some are primarily cloud. If you build your schema first, you risk locking yourself into a certain technology.
What scale do you want to achieve? By insisting on a specific database you may be closing doors to, for instance, a hand-held presence.
I generally find entity first as the best initial approach because you can always derive a schema from the entities and some meta-data. It is certainly possible to go schema first and grow the entities out of the schema but that way you generally find the database influences the design too much.
1) I've done database-first in the past, but now I usually do entity-first, mainly because of the tools I'm using to create the applications. Entity-first has a few good advantages over trying to match your entities to your defined schema later. You're also not locking yourself too tightly to your schema. What your application is for matters a lot as well: is it just a basic CRUD application (write once, read many), or does it actually 'do' something? That will inform your choice of how to architect the application.
2) I use Hibernate a lot, which encourages creating your model first: you design all your entities, and then generate the schema from that. Hibernate can generate your whole schema from the models you've created (though you may need to tweak them to make sure they're not crazy). If you have 200 entities in your model, then you probably want to do a significant amount of UML modelling ahead of time to ensure your model is consistent.
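For illustration, a minimal, hypothetical mapping; with hibernate.hbm2ddl.auto set to create/update (or the schema-export tooling), Hibernate derives the tables, columns, constraints and foreign keys from annotations like these:

    import javax.persistence.*;

    @Entity
    @Table(name = "users")
    public class User {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;                      // becomes the primary key column

        @Column(nullable = false, unique = true, length = 100)
        private String email;                 // NOT NULL plus a unique constraint

        @ManyToOne(fetch = FetchType.LAZY)
        @JoinColumn(name = "department_id")   // becomes a foreign key column
        private Department department;

        // getters and setters omitted
    }

    @Entity
    class Department {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        @Column(nullable = false)
        private String name;

        // getters and setters omitted
    }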
3) If you're working with a partially legacy database, then it can sometimes be good to fall into line with the schema design so that your entities and schema are consistent. It can be a bit of a pain, but then so is trying to explain why part of your app is just different from the other parts. So yes, I would probably infer my entities from the schema in that case. But again, if the schema was totally crazy, it might be better to write some very specific DAO code to hide that part of the schema from the app and pretend it's not there.
4) I can't really give you a good answer on this, as I'm not sure what you're driving at. Once you have the design standards for your schema, it's just a matter of turning the handle to crank it out.
So after all that my answer is 'It depends'
While the answers already posted cover a lot of points - and ultimately, all answers probably have to all sum up to "it depends" - I'd like to expand on a point that's been touched on already.
My focus is on data - I'm a business intelligence and data warehousing developer, and I deal with issues like data quality, data governance, having a set of master data, etc. To this end, I have to pull data from other systems - data which is in varying conditions.
When considering whether the core of your system is really the database or the front end (as suggested by OldCurmudgeon), I strongly suggest thinking outside of your own area. I have seen and heard about many systems where it's clear that the database has been treated as an afterthought (sometimes created via an entity-first model, but also sometimes hand-built), despite the fact that most of the business value is in the data. More and more companies are of course realising that their data is valuable and are adopting tools to make use of it - but it's difficult to do if poor transactional databases mean that data has been lost, was never saved in the first place, has been overwritten when a history is needed, or is inconsistent.
While I don't want to do myself and others with similar roles out of a job: if the data that a system you're working on holds is or might be valuable, or if there's any reason it might be accessed by anything other than the front end you're creating, then it is worth the time and effort to create a sound data model to hold it. If the system is for an organisation or is going to be sold to organisations, there's a decent chance they'll want to report out of it, will want to feed its output into a data warehouse or other data stores, and will want to carry out analysis on the data it creates and holds.
I don't know enough about tools like Hibernate to know if it's possible to both use them to work in an entity-first manner and still create a good quality database, but I know that I have come across some problematic databases created in this manner. At the very least, as has been suggested, if you are going to work that way, make sure it is producing something sane and perhaps adjust it where necessary to maintain data integrity. If data integrity is a key requirement and you cannot get such a tool to create a suitable database that will ensure data integrity, then perhaps consider going back to doing things the "old fashioned" way.
I would also suggest that there's real value in developers working alongside any data specialists, analysts, architects, etc. they may have as colleagues to do some up-front modelling, even if the system they then produce uses entity-first and even if it veers away from the more conceptual models produced early on for technical reasons. I have seen many baked-in problems in systems which have been caused by a lack of understanding of the wider business entities and relationships, and which could have been avoided if time had been spent understanding the overall structure in this way. I've been personally responsible for building those problems when I was an application developer myself, so this shouldn't be read as criticism of front-end developers - just a vote in favour of cross-functional and collaborative analysis and modelling before development approaches and designs are decided.

Should my DAOs (Database Entities) Directly match my UI Objects?

I am trying to figure out best practice for N-tier application design. When designing the objects my UI needs and those that will be persisted in the DB, some of my colleagues are suggesting that the objects be one and the same. This doesn't feel right to me, and I am ultimately looking for some best-practice documentation to help me in this decision.
EDIT:
Let me clarify this by saying that the tables (Entity Classes) that are in the DB are identical to the objects used in the UI
I honestly do not understand why I would want to design this way, given that other applications may want to interact with my Data Access Layer... or maybe it is just ignorance or a lack of understanding on my part.
Any documentation or information you could provide would be greatly appreciated. I just want to better understand these concepts, and I am having a hard time finding good information on best practice for implementing these patterns. (Or it is right in front of me in what I found, and I didn't understand what was being outlined.)
Thanks,
S
First of all, DAOs and database entities are two very different things.
Now to the question. You're right. The database entities are mapped to a database schema, and this database schema should follow database design best practices and be normalized. The UI sometimes displays exactly the information from a given entity, but often shows data that comes from multiple entities in an aggregate format. Or, on the contrary, it only shows a small part of a given entity.
For example, it would make sense for a UI to show a product's name, description and price, along with the name of its category, the number of remaining items in stock, and the manufacturer of the product. It would make no sense to have a persistent entity containing all those fields.
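A sketch of what such a read-only view object might look like; it aggregates fields from several entities (the names here are illustrative) and is never persisted itself:

    import java.math.BigDecimal;

    public final class ProductView {
        private final String name;
        private final String description;
        private final BigDecimal price;
        private final String categoryName;      // from the Category entity
        private final int itemsInStock;         // from the inventory
        private final String manufacturerName;  // from the Manufacturer entity

        public ProductView(String name, String description, BigDecimal price,
                           String categoryName, int itemsInStock, String manufacturerName) {
            this.name = name;
            this.description = description;
            this.price = price;
            this.categoryName = categoryName;
            this.itemsInStock = itemsInStock;
            this.manufacturerName = manufacturerName;
        }
        // getters only: the view is read-only
    }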
In general and according to most "best practices" comments, yes, those two layers should be decoupled and there should be separate objects.
BUT: if your mapping would only be a one-to-one mapping, without any further functionality in the non-database object, why introduce an additional object? So, it depends (as usual ;-) ).
Don't use additional objects if the introduced overhead is bigger than the gain. And don't couple the two layers if reusability is a first-class goal - which may not be the case with some legacy applications, for example.

Spring MVC: should service layer be returning operation specific DTO's?

In my Spring MVC application I am using DTOs in the presentation layer in order to encapsulate the domain model in the service layer. The DTOs are being used as the Spring form-backing objects.
Hence my services look something like this:
userService.storeUser(NewUserRequestDTO req);
The service layer will translate DTO -> Domain object and do the rest of the work.
Now my problem is that when I want to retrieve a DTO from the service to perform, say, an Update or Display, I can't seem to find a better way to do it than to have multiple lookup methods that return different DTOs, like...
EditUserRequestDTO userService.loadUserForEdit(int id);
DisplayUserDTO userService.loadUserForDisplay(int id);
but something does not feel right about this approach. Perhaps the service should not return things like EditUserRequestDTO and the controller should be responsible of assembling a requestDTO from a dedicated form object and vice versa.
The reason to have separate DTOs is that DisplayUserDTO is strongly typed to be read-only, and also many properties of User are entities from a lookup table in the DB (like city and state), so the DisplayUserDTO has the string descriptions of the properties while the EditUserRequestDTO has the IDs that back the select drop-down lists in the forms.
What do you think?
thanks
I like the stripped down display objects. It's more efficient than building the whole domain object just to display a few fields of it. I have used a similar pattern with one difference. Instead of using an edit version of a DTO, I just used the domain object in the view. It significantly reduced the work of copying data back and forth between objects. I haven't decided if I want to do that now, since I'm using the annotations for JPA and the Bean Validation Framework and mixing the annotations looks messy. But I'm not fond of using DTOs for the sole purpose of keeping domain objects out of the MVC layer. It seems like a lot of work for not much benefit. Also, it might be useful to read Fowler's take on anemic objects. It may not apply exactly, but it's worth thinking about.
1st Edit: reply to below comment.
Yes, I like to use the actual domain objects for all the pages that operate on a single object at a time: edit, view, create, etc.
You said you are taking an existing object and copying the fields you need into a DTO and then passing the DTO as part of the model to your templating engine for a view page (or vice-versa for a create). What does that buy you? The ref to the DTO doesn't weigh any less than the ref to the full domain object, and you have all the extra attribute copying to do. There's no rule that says your templating engine has to use every method on your object.
I would use a small partial domain object if it improves efficiency (no relationship graphs to build), especially for the results of a search. But if the object already exists don't worry about how big or complex it is when you are sticking it in the model to render a page. It doesn't move the object around in memory. It doesn't cause the templating engine stress. It just accesses the methods it needs and ignores the rest.
2nd edit:
Good point. There are situations where you would want a limited set of properties available to the view (i.e. different front-end and back-end developers). I should read more carefully before replying. If I were going to do what you want, I would probably put separate methods on User (or whatever class) of the form forEdit() and forDisplay(). That way you could just get User from the service layer and tell User to give you use-limited copies of itself. I think maybe that's what I was reaching for with the anemic-objects comment.
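A rough sketch of that idea, with made-up fields; the DTO classes mirror the ones named in the question, and the domain object hands out use-limited copies of itself instead of the service layer assembling them:

    public class User {
        private Long id;
        private String firstName;
        private String lastName;
        private String email;
        private Long cityId;       // lookup-table ID, needed by the edit form
        private String cityName;   // resolved description, needed for display

        public DisplayUserDTO forDisplay() {
            return new DisplayUserDTO(firstName + " " + lastName, email, cityName);
        }

        public EditUserRequestDTO forEdit() {
            return new EditUserRequestDTO(id, firstName, lastName, email, cityId);
        }
    }

    final class DisplayUserDTO {   // read-only: final fields, getters only
        private final String fullName;
        private final String email;
        private final String city;

        DisplayUserDTO(String fullName, String email, String city) {
            this.fullName = fullName;
            this.email = email;
            this.city = city;
        }
        public String getFullName() { return fullName; }
        public String getEmail()    { return email; }
        public String getCity()     { return city; }
    }

    class EditUserRequestDTO {     // mutable: backs the form, carries lookup IDs
        private Long id;
        private String firstName;
        private String lastName;
        private String email;
        private Long cityId;

        EditUserRequestDTO(Long id, String firstName, String lastName,
                           String email, Long cityId) {
            this.id = id;
            this.firstName = firstName;
            this.lastName = lastName;
            this.email = email;
            this.cityId = cityId;
        }
        // getters and setters omitted
    }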
You should use a DTO and never an ORM in the MVC layer! There are a number of really good questions already asked on this, such as the following: Why should I isolate my domain entities from my presentation layer?
But to add to that question: you should separate them to help prevent the ORM entity being bound on a POST, as the potential is there for someone to add an extra field and cause all kinds of mayhem, requiring unnecessary extra validation.

Dynamic Typed Table/Model in Java EE?

Usually with Java EE, when we create a Model, we define its fields and the types of those fields through XML or annotations before compilation. Is there a way to change those at runtime? Or better, is it possible to create a new Model based on the user's input at runtime? Such that the number of columns and the types of the fields are dynamic (determined at runtime)?
Help is much appreciated. Thank you.
I felt the need to clarify myself.
Yes, I meant database modeling, when talking about Model.
As for the use cases, I want to provide a means for users to define and create their own tables. Infinite flexibility is not required. However some degree of freedom has to be there: e.g. the users can define what fields are needed to describe their product.
You sound like you want to be able to change both objects and schema according to user input at runtime. This sounds like a chaotic recipe for disaster to me. I've never seen it done.
I have seen general schemas that incorporate foreign key relationships to generic tables of name/value pairs, but these tend to become infinitely flexible abstractions that can neither be easily understood nor get out of their own way when it comes to performance.
I'm betting that your users really don't want infinite flexibility. I'd caution you against taking this direction. Better to get your real use cases straight.
Anything is possible, of course. My direct experience tells me that it's a bad idea that your users will hate if you can pull it off. Best of luck.
I worked on a system where we had such facilities. To stay efficient, we would generate/alter the table dynamically for the customer schema. We also needed to embed a meta-model (the model of the model) to process information in the entities dynamically.
Option 1: With custom tables, you have full flexibility, but it also increases the complexity significantly, notably the update/migration of existing data. Here is a list of things you will need to consider:
What if the type of a column changes?
What if a column is added? Is there a default value?
What if a column is removed? Can I discard the existing information?
How to manage renaming of a column?
How to make things portable across databases?
How to make it efficient at the database level (e.g. indexes)?
How to manage a human error (e.g. user removes a column then changes its mind)?
How to manage migration (scripts, deployment, etc.) when a new version of the system is installed at the customer site?
How to have this while using an ORM?
Option 2: A lightweight alternative is to add a few "spare" columns of different types in the business tables (e.g. "USER_DATE_1", "USER_DATE_2", etc.). I've seen that a few times. It will make your DBA scream and is not really considered good practice, but at least it facilitates a few things (e.g. migration scripts, ORM integration).
Option 3: Another option is to store everything in a table with a property/value structure. But then it's really a disaster for database performance. Anything that is not completely trivial will require many joins. And the DBA will scream even more.
Option 4: A mix of options 2 and 3. Core tables are fixed, but a property/value table can be used to extend them.
In summary: think twice before you go this way. It can be done, but it has a significant impact on the design and maintenance of the application.
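For what it's worth, a minimal JPA sketch of option 4 (names and types are illustrative only): fixed, strongly-typed core columns, plus a generic property/value table holding the user-defined fields:

    import javax.persistence.*;
    import java.util.ArrayList;
    import java.util.List;

    @Entity
    public class Product {
        @Id
        @GeneratedValue
        private Long id;

        private String name;   // fixed, strongly-typed core column

        @OneToMany(mappedBy = "product", cascade = CascadeType.ALL)
        private List<ProductProperty> extraProperties = new ArrayList<>();
        // getters and setters omitted
    }

    @Entity
    class ProductProperty {
        @Id
        @GeneratedValue
        private Long id;

        @ManyToOne(fetch = FetchType.LAZY)
        private Product product;

        private String name;            // user-defined field name

        @Column(name = "prop_value")    // stored as text; a meta-model would record the real type
        private String value;
        // getters and setters omitted
    }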
This is somehow possible using meta-modeling techniques:
tables for table / column / types at the database level
key/value structures at the Java level
But this has obvious limitations (lack of strongly typed objects) and can IMHO quickly get very complicated (I'm not even sure how to deal with relations). I wouldn't use this approach to define domain objects entirely, but only to extend existing ones (products, articles, etc.).
If I remember well, this is what some e-commerce solutions (e.g. BroadVision) were doing.
I think I have found a good answer myself. Those new NoSQL databases (HBase, Cassandra) seem to be exactly what I was looking for. Thanks everyone for your answers.
