I'm working on a project where I need to create objects from different data sources/formats, and I would like to know the best way to organize the source code to make this easy.
Let's say I have a class User and I want the ability to create objects from database data and from JSON. The purpose of this is to let users of my app browse data both online and offline. I'm using GSON and ORMLite. In addition, the fields in JSON and in the database may differ, but the "main" fields are the same. Is it a good idea to create a class which contains all properties/fields from both JSON and the database? Something similar to the class below:
@DatabaseTable(tableName = "user", daoClass = UserDaoImpl.class)
public class User {
    public static final String ID_FIELD_NAME = "id";
    public static final String USER_LOGIN_FIELD_NAME = "login";
    public static final String USER_EMAIL_FIELD_NAME = "email";
    public static final String SERIALIZED_COUNTRY_FIELD_NAME = "user_county";

    // DB & JSON
    @DatabaseField(generatedId = true, columnName = ID_FIELD_NAME)
    int id;

    // DB & JSON
    @DatabaseField(columnName = USER_LOGIN_FIELD_NAME)
    String login;

    // DB & JSON
    @DatabaseField(columnName = USER_EMAIL_FIELD_NAME)
    String email;

    // Only JSON
    @SerializedName(SERIALIZED_COUNTRY_FIELD_NAME)
    String country;

    public User() {
    }
}
Is it a good idea to create class which contains all properties/fields from JSON and database?
I think the short answer is yes. You can certainly use the same objects to represent the data in the database and via JSON.
Where you will run into problems is when you need to change the data representation in the database but don't want to change your JSON API, or vice versa. Then you will need two separate classes and a mapping function between them.
But if you can get away with one class then that's the best way.
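If you do end up with two separate classes, the mapping function can be as simple as the sketch below. All class and method names here are hypothetical, chosen only to illustrate the split:

```java
// Hypothetical split: one class per representation, plus a mapper.
class UserEntity {          // database-side representation
    int id;
    String login;
    String email;
}

class UserJson {            // JSON-side representation
    int id;
    String login;
    String email;
    String country;         // JSON-only field
}

class UserMapper {
    /** Maps the database representation onto the JSON one. */
    static UserJson toJson(UserEntity e, String country) {
        UserJson j = new UserJson();
        j.id = e.id;
        j.login = e.login;
        j.email = e.email;
        j.country = country;  // not stored in the DB, supplied separately
        return j;
    }
}
```

The mapper is boring code, but it is the one place you touch when the two representations drift apart.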
Consider the factory pattern: you can use it to abstract the creation of concrete user classes as a function of the data source.
Make User into an interface, and have a data-source-specific implementation of User for each type of data source.
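A minimal sketch of that suggestion in plain Java; the JsonUser, DatabaseUser, and UserFactory names are hypothetical and not part of GSON or ORMLite:

```java
// User becomes an interface; each data source gets its own implementation.
interface User {
    String login();
    String email();
}

class JsonUser implements User {
    private final String login;
    private final String email;
    JsonUser(String login, String email) { this.login = login; this.email = email; }
    public String login() { return login; }
    public String email() { return email; }
}

class DatabaseUser implements User {
    private final String login;
    private final String email;
    DatabaseUser(String login, String email) { this.login = login; this.email = email; }
    public String login() { return login; }
    public String email() { return email; }
}

// The factory hides which concrete class is instantiated.
class UserFactory {
    enum Source { JSON, DATABASE }

    static User create(Source source, String login, String email) {
        switch (source) {
            case JSON:     return new JsonUser(login, email);
            case DATABASE: return new DatabaseUser(login, email);
            default:       throw new IllegalArgumentException("unknown source");
        }
    }
}
```

The caller only ever sees the User interface, so adding a third data source means adding one implementation and one case in the factory.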
I have an annotated entity object with custom table and field names which I use with Spring Data JDBC (not JPA). Something like:
@Data
@Table("custom_record_table")
public class Record {
    @Id
    @Column("id_field")
    Long id;

    String name;

    @Column("name_short")
    String shortName;
}
I'd like to get a map of properties to fields. Something like:
{"id":"id_field","name":"name","shortName":"name_short"}
What's the proper way to get it?
For context: I plan to use this map to construct queries that load many-to-one refs along with the main table. Right now I build this map with the plain reflection API, scanning for fields and their annotations. It works, but I feel like I'm reinventing the wheel...
What you are looking for is the JdbcMappingContext. It should be available as a bean, so you can simply autowire it into your application.
JdbcMappingContext mappingContext; // ... autowired

Map<String, String> propToCols = new HashMap<>();
mappingContext.getRequiredPersistentEntity(Record.class).forEach(
        rpp -> propToCols.put(rpp.getName(), rpp.getColumnName().getReference())
);
I wrote this without IDE so it will contain mistakes.
There are also special things to consider when you have references and stuff, but this should get you started.
You want to convert your objects to JSON. There are different libraries for this; Jackson is among the most popular.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectWriter;
ObjectWriter ow = new ObjectMapper().writer().withDefaultPrettyPrinter();
String json = ow.writeValueAsString(object);
What if you create a static function inside Record?
@Data
@Table("custom_record_table")
public class Record {
    @Id
    @Column("id_field")
    Long id;

    String name;

    @Column("name_short")
    String shortName;

    static Map<String, String> getMap() {
        return Map.ofEntries(
                Map.entry("id", "id_field"),
                Map.entry("name", "name"),
                Map.entry("shortName", "name_short")
        );
    }
    /* the returned map would look like:
       {"id":"id_field","name":"name","shortName":"name_short"} */
}
Note that this gives you a Map in which you can look up the column names by the field names you know.
map.get("id") will return "id_field", and so on. This is useful when you need to know which column name backs each field.
But if you just want a String representation, even a JSON one, it would be better to build the JSON from those fields.
Anyway, if I understood correctly, you want a Map object, so the first solution should do it.
I'm trying to identify the best way to do mongodb object versioning.
Based on the MongoDB document versioning pattern, storing revisions in a history collection and keeping the current version in the main collection is recommended. Following that pattern, each revision contains the complete object instead of storing diffs.
Then I went through "ways to implement data versioning in MongoDB", which recommends a method that stores all the revisions inside a single document kept in a separate history collection.
Therefore, I'm trying to implement my own object versioning for the following document model due to its complexity.
Invoice.java
public class Invoice {
    private long invoiceId;
    private String userName;
    private String userId;
    private LocalDateTime createdDate;
    private LocalDateTime lastModifiedDate;
    private List<String> operationalUnits;
    private List<BodyModel> body;
    private List<ReviewModel> reviews;
    private BigDecimal changedPrice;
    private String submitterId;
    private LocalDateTime submittedTime;
    private String approverId;
    private LocalDateTime approvedTime;
}
BodyModel.java
public class BodyModel {
    private String packageId;
    private List<ItemModel> items;
    private List<String> reviews;
}
ReviewModel.java
public class ReviewModel {
    private String publishedTime;
    private String authorName;
    private String authorId;
    private String text;
}
ItemModel.java
public class ItemModel {
    private String itemNumber;
    private String description;
    private String brandId;
    private String packId;
    private List<String> reviews;
}
ER Diagram (Simplified)
At the moment, I'm using the Javers library. But Javers keeps the Invoice model as the main entity and treats models such as BodyModel, ReviewModel, and ItemModel as separate valueObjects. As a result, instead of creating a single revision document, it creates separate documents for the valueObjects. Additionally, it always reconstructs the current object from the base version plus all changes, which leads to long read times. I also identified a valueObjects issue that comes with Javers; refer to this question for more info: MongoDB document version update issue with JaVers
Following are the issues I have if I'm going to create my own implementation using Spring Boot.
If I'm going to put a revisionId in each of the revisions (as shown in the object below), how do I find the current revisionId to include at MongoDB save() time?
{
    _id: <ObjectId>,
    invoiceId: <InvoiceId>,
    invoice: <Invoice>,
    revisionId: <revisionId>
}
For this I can keep a field for the revisionId in the Invoice model, which can be updated when saving to the main collection. At the same time, it can be used to save the revision into the history collection. Is this the best possible way to do it in this case?
If I'm only going to store diffs, then how do I obtain the current version of the object?
For this, it seems essential to fetch the current object from the main collection and compare it with the new version (I already have the new version, because that's what I'm going to store in the main collection) to create the diff. The diff can then be stored in the history collection and the new version in the main collection. Is this the best possible way to store diffs?
In both scenarios, AOP aspects can be used to intercept the save() method of the base repository to accomplish the task. But I'm not mainly concerned with coding implementation details; I would like to know which of these methods would be efficient for storing revisions of a data model such as the one above (I'd like to discuss the methods I've mentioned as well).
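The first flow described above (archive the current document as a revision, bump the revision id, then overwrite the main copy) can be sketched in memory, with plain maps standing in for the two MongoDB collections; all names are hypothetical and no MongoDB API is used:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain maps stand in for the MongoDB main and history collections.
class VersionedStore {
    private final Map<Long, String> main = new HashMap<>();        // invoiceId -> current document
    private final List<String> history = new ArrayList<>();        // archived revisions
    private final Map<Long, Integer> revisions = new HashMap<>();  // invoiceId -> current revisionId

    /** Saves a new version: archives the previous one and bumps the revisionId. */
    void save(long invoiceId, String document) {
        String previous = main.get(invoiceId);
        int revisionId = revisions.merge(invoiceId, 1, Integer::sum);
        if (previous != null) {
            // In MongoDB this insert would go to the history collection.
            history.add("rev " + (revisionId - 1) + ": " + previous);
        }
        main.put(invoiceId, document);
    }

    String current(long invoiceId) { return main.get(invoiceId); }
    int currentRevision(long invoiceId) { return revisions.get(invoiceId); }
    List<String> historyEntries() { return history; }
}
```

The point of the sketch is the ordering: the revision counter lives alongside the current copy, so a single save operation can both archive the old state and decide the next revisionId without a separate lookup.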
I'm using Elasticsearch for storing the data (Spring Data Elasticsearch) and I need to store a geolocation in my document. The class structure is in the following format.
@Document(indexName = "outlet")
public class OutletIndex implements IESMapper {
    @Id
    private String path;
    private String name;
    @GeoPointField
    private GeoPoint geoPoint;

    // setters and getters
}
Since there are no setters in the GeoPoint class, it does not work with the @ModelAttribute annotation in a Spring MVC controller. Because I need to receive it from the front end, I updated the class to:
@Document(indexName = "outlet")
public class OutletIndex implements IESMapper {
    @Id
    private String path;
    private String name;
    @GeoPointField
    private GeoPoint geoPoint;
    private String geoLocation;

    public void setGeoLocation(String geoLocation) {
        this.geoLocation = geoLocation;
        if (geoLocation != null && !geoLocation.trim().isEmpty()) {
            String[] loc = geoLocation.split(",");
            this.geoPoint = new GeoPoint(Double.parseDouble(loc[0]), Double.parseDouble(loc[1]));
        }
    }

    // setters and getters
}
An additional field holds its string representation, and its setter also updates the GeoPoint.
Is there any better approach to do this?
EDIT: One more doubt: is there any way to use the string as a geo point (comma-separated value)?
It seems like you are using the geo_point data type with data in Elasticsearch in the format location:"latVal,lonVal". This is one of the valid formats Elasticsearch allows for geo_point.
Elasticsearch returns the data in whatever format you stored it. For the same geo_point type in your ES schema, you can store it in multiple formats in different documents, and when you retrieve them ES will return each one in the format it was stored in.
This causes issues, since different formats for the same type have to be handled specially in a type-safe language like Java. You can do one of 2 things: ensure a consistent format throughout (both while indexing and retrieving), or handle each corner case on the application side.
To avoid all this mess, a good rule of thumb I follow is to use the same format as the one expected by the Java client. In this case I would not use any custom serialization or deserialization logic; instead it would be better to save the location in the format location:{"lat": latVal, "lon": lonVal}. (The GeoPoint class expects a double lat and a double lon.)
If you ensure this, you will no longer need to think twice about the types you'll be receiving and their corner cases while handling them, and you avoid a lot of confusion.
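If you do keep a comma-separated "lat,lon" string on the application side, the parsing in the setter can be made defensive against malformed input. A stand-alone sketch in plain Java, with a hypothetical double[] pair standing in for GeoPoint:

```java
class GeoParse {
    // Parses a "lat,lon" string into a {lat, lon} pair, or returns null
    // for input that does not look like a valid coordinate pair.
    static double[] parseLatLon(String geoLocation) {
        if (geoLocation == null) return null;
        String[] parts = geoLocation.trim().split(",");
        if (parts.length != 2) return null;
        try {
            double lat = Double.parseDouble(parts[0].trim());
            double lon = Double.parseDouble(parts[1].trim());
            if (Math.abs(lat) > 90 || Math.abs(lon) > 180) return null; // out of range
            return new double[] { lat, lon };
        } catch (NumberFormatException e) {
            return null;
        }
    }
}
```

Returning null (or an Optional) instead of throwing keeps a bad request parameter from blowing up the whole binding step in the controller.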
I am using Gson to produce JSON for a collection of objects in Java (some objects contain other collections). This JSON is used to populate a web page for users with different clearance levels, so the detail each user can see differs. The web page only shows what it needs to, but if I use the same JSON for two different pages, the HTML source will contain more data than it should. Is there a way to tell Gson which variables of which class should be added to the JSON? As far as I have searched, I could not find an easy way; otherwise I will either produce the JSON myself or strip the extra data from the JSON that Gson produced.
I need to use same classes for different clearance levels and get different json.
You are trying to use Gson to generate multiple different JSON outputs of the same objects in the same JVM, which is going to be difficult, both in Gson and any good serialization library, because their express goal is essentially the opposite of what you're looking for.
The right thing to do would be to instead represent these different clearance levels with different classes, and simply serialize those different classes with Gson as normal. This way you separate the security model from the serialization, letting you safely pass this information around.
/**
* Core data class, contains all information the application needs.
* Should never be serialized for display to any end user, no matter their level.
*/
public class GlobalData {
    private final String username;
    private final String private_data;
    private final String secure_data;
}
/** Interface for all data display operations */
public interface DisplayData {
    /** Returns a JSON representation of the data to be displayed */
    public String toJson();
}
/**
* Class for safe display to an untrusted user, only holds onto public
* data anyone should see.
*/
public class UserDisplayData implements DisplayData {
    private final String username;

    public UserDisplayData(GlobalData gd) {
        username = gd.username;
    }

    public String toJson() {
        return gson.toJson(this);
    }
}
/**
* Class for safe display to a trusted user, holds private information but
* does not display secure content (passwords, credit cards, etc.) that even
* admins should not see.
*/
public class AdminDisplayData implements DisplayData {
    private final String username;
    private final String private_data;

    public AdminDisplayData(GlobalData gd) {
        username = gd.username;
        private_data = gd.private_data;
    }

    public String toJson() {
        // these could be different Gson instances, for instance
        // admin might want to see nulls, while users might not
        return gson.toJson(this);
    }
}
Now you can sanitize and serialize your data as two separate steps, and use type safety to ensure your GlobalData is never displayed.
public DisplayData getDisplayData(GlobalData gd, User user) {
    if (user.isAdmin()) {
        return new AdminDisplayData(gd);
    } else {
        return new UserDisplayData(gd);
    }
}

public void showData(DisplayData data) {
    String json = data.toJson();
    // display json however you want
}
If you erroneously tried to call showData(gd) you'd get a clear compilation error that you've done something wrong, and it's a quick fix to get the correct result by calling showData(getDisplayData(gd, user)) which safely and clearly does exactly what you want.
You can add an @Expose annotation like this on the fields you don't want:
@Expose(serialize = false, deserialize = false)
private String address;
some more information here:
https://sites.google.com/site/gson/gson-user-guide#TOC-Gson-s-Expose
I am using Play Framework 1.2.4 with Java and using JPA to persist my database objects. I have several Model classes to be rendered as JSON. But the problem is I would like to customize these JSON responses and simplify the objects just before rendering as JSON.
For instance, assume that I have an object named ComplexClass and having properties id, name, property1,...,propertyN. In JSON response I would like to render only id and name fields.
What is the most elegant way of doing this? Writing custom binder objects or is there simple JSON mapping such as using a template?
Play Framework 1.2.4 directly depends on the gson library so you could use that to render your JSON strings. All you have to do is use gson's #Expose annotation. So in your example, you would mark the fields you want in your JSON string like this:
public class ComplexClass {
    @Expose
    public Long id;
    @Expose
    public String name;
    ...
}
Then in your controller, you would just do this:
public static void someActionMethod() {
    // get an instance of your ComplexClass here
    ComplexClass complex = ...
    Gson gson = new GsonBuilder().excludeFieldsWithoutExposeAnnotation().create();
    String json = gson.toJson(complex);
    renderJson(json);
}
See documentation here.
If ComplexClass is actually a play.db.jpa.Model, and therefore the id field is abstracted away in a parent class where you can't put the @Expose annotation on it, then you could create your own ExclusionStrategy that skips fields that aren't annotated with @Expose and are not called id. Something like this (pseudo-code):
public final class ComplexClassExclusionStrategy implements ExclusionStrategy {
    public boolean shouldSkipField(FieldAttributes attributes) {
        if (name of field is "id") return false;
        if (field is annotated with @Expose) return false;
        return true;
    }
}
Then the controller would altered slightly to look like this:
GsonBuilder builder = new GsonBuilder();
ComplexClassExclusionStrategy strategy = new ComplexClassExclusionStrategy();
builder.setExclusionStrategies(strategy);
Gson gson = builder.create();
String json = gson.toJson(complex);
renderJson(json);
Use FlexJSON, it's really easy. It allows you to create JSONSerializers which can include/exclude the fields you want.
Check out this article for some examples of using it with Play! Framework.
Here's a simple example:
public class ComplexClass {
    public Long id;
    public String name;
    // And lots of other fields you don't want

    public String toJsonString() {
        // Include id & name, exclude all others.
        JSONSerializer ser = new JSONSerializer()
                .include("id", "name")
                .exclude("*");
        return ser.serialize(this);
    }
}
You can add it to your dependencies.yml like so:
require:
- play
- net.sf.flexjson -> flexjson 2.1
What I usually do is write an interface for models that declares a toJSONString() method, so that I can call renderJSON(someModel.toJSONString()) in the controller.
Link to official website
EDIT: Extra example for lists/collections
OK, when you start serializing lists you might get some unexpected results. This is because the order of evaluation is important: the first include() or exclude() takes precedence over the following ones.
Here's an example of serializing the childs collection of a parent entity (a OneToMany relation).
JSONSerializer ser = new JSONSerializer();
// Exclude these standard fields from childs
ser.exclude(
"*.persistent",
"*.class",
"*.entityId"
);
// Include childs and all its other fields
ser.include(
"childs",
"childs.*"
);
// Exclude everything else
ser.exclude("*");
String data = ser.serialize(parent);
The * is a wildcard by the way. This piece of documentation explains it perfectly:
An exclude of *.class will match to any path depth. So if flexjson is serializing the field with path of "foo.bar.class" the * in *.class will match foo.bar.