Embedded entity "with Text" reading from originaly stored as Blob - java

In addition to the first question (which has a working answer), I was not able to deserialize the following:
private @Serialize List<ProduktSprache> produktsprachen;
The new class looks like this; it is stored as a Blob and originally had two fields defined as Text:
import java.io.Serializable;
import com.googlecode.objectify.annotation.Serialize;

public class ProduktSprache implements Serializable {

    private static final long serialVersionUID = 1L;

    private String sprache;
    private String name;
    private String details;
    private String detailsText;     // TEXT
    private String detailsTextHTML; // TEXT
}

It's really hard to tell what's going on here, but it sounds like you have data that was serialized (with Java serialization) into that field. To figure this out, I'd try to simplify the problem as much as possible.
Serialized data is written as type Blob in the entity, so the first step is to change your produktsprachen field to type Blob. Now you can load the entity and you have a byte[] of data to work with.
Next, try to decode that byte[]. Use an ObjectInputStream to read the contents and inspect what is actually present. Whatever structure you find there is what you eventually need to use as your @Serialize field type in your Objectify entity class.
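A minimal sketch of that inspection step using only the JDK (the sample payload here is a stand-in for whatever your Blob actually contains):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.List;

public class BlobInspector {

    // Decode a byte[] produced by Java serialization and hand back
    // whatever object is inside it, so its runtime type can be inspected.
    public static Object decode(byte[] raw) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate what the datastore Blob might contain: a serialized List.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(List.of("sprache", "name"));
        }
        Object decoded = decode(buf.toByteArray());
        System.out.println(decoded instanceof List); // true
    }
}
```

Whatever class `decode` reports is the type the @Serialize field needs to declare.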

Related

MongoDB complex object versioning with Spring Boot

I'm trying to identify the best way to do MongoDB object versioning.
Based on the MongoDB document versioning pattern, storing revisions in a history collection and keeping the current version in the main collection is recommended. With that approach, each revision contains the complete object instead of storing diffs.
I then went through "Ways to implement data versioning in MongoDB", which recommends storing a single document containing all the revisions in a separate history collection.
I'm therefore trying to implement my own object versioning for the following document model, due to its complexity.
Invoice.java
public class Invoice {
    private long invoiceId;
    private String userName;
    private String userId;
    private LocalDateTime createdDate;
    private LocalDateTime lastModifiedDate;
    private List<String> operationalUnits;
    private List<BodyModel> body;
    private List<ReviewModel> reviews;
    private BigDecimal changedPrice;
    private String submitterId;
    private LocalDateTime submittedTime;
    private String approverId;
    private LocalDateTime approvedTime;
}
BodyModel.java
public class BodyModel {
    private String packageId;
    private List<ItemModel> items;
    private List<String> reviews;
}
ReviewModel.java
public class ReviewModel {
    private String publishedTime;
    private String authorName;
    private String authorId;
    private String text;
}
ItemModel.java
public class ItemModel {
    private String itemNumber;
    private String description;
    private String brandId;
    private String packId;
    private List<String> reviews;
}
ER Diagram (Simplified)
At the moment, I'm using the Javers library. But Javers keeps the Invoice model as the main entity and treats the other models (BodyModel, ReviewModel, ItemModel) as separate value objects. As a result, instead of creating a single revision document, it creates separate documents for the value objects. Additionally, it always reconstructs the current object from the base version plus all changes, which leads to very long read times. I also identified a value-object issue that comes with Javers; refer to this question for more info: MongoDB document version update issue with JaVers
The following are the issues I have if I'm going to create my own implementation using Spring Boot.
If I'm going to put a revisionId in each of the revisions (as shown in the object below) on each MongoDB save(), how do I find the current revisionId to include?
{
    "_id": <ObjectId>,
    "invoiceId": <invoiceId>,
    "invoice": <invoice>,
    "revisionId": <revisionId>
}
For this, I could keep a revisionId field in the Invoice model which is updated when saving to the main collection; at the same time, it can be used when saving the revision into the history collection. Is this the best way to do it in this case?
If I'm only going to store diffs, how do I obtain the current version of the object?
For this, it seems essential to fetch the current object from the main collection and compare it with the new version (which I already have, because that's what I'm going to store in the main collection) to create the diff. The diff can then be stored in the history collection and the new version in the main collection. Is this the best way to store diffs?
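To illustrate the diff step I have in mind, here is a sketch using plain Maps in place of the real Invoice documents (field names are illustrative; a real implementation would also handle removed fields and nested documents):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class RevisionDiff {

    // Compare the stored document with the incoming one and return only
    // the fields whose values changed (the new value wins).
    public static Map<String, Object> diff(Map<String, Object> current,
                                           Map<String, Object> incoming) {
        Map<String, Object> changes = new HashMap<>();
        for (Map.Entry<String, Object> e : incoming.entrySet()) {
            if (!Objects.equals(current.get(e.getKey()), e.getValue())) {
                changes.put(e.getKey(), e.getValue());
            }
        }
        return changes;
    }

    public static void main(String[] args) {
        Map<String, Object> current = Map.of("userName", "alice", "changedPrice", 100);
        Map<String, Object> incoming = Map.of("userName", "alice", "changedPrice", 120);
        System.out.println(diff(current, incoming)); // {changedPrice=120}
    }
}
```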
In both scenarios, AOP aspects can be used to intercept the save() method of the base repository. But I'm not mainly concerned with coding details; I would like to know which method or methods would be efficient for storing revisions of a data model like the one above (I'd also like to discuss the methods I've mentioned).

Spring data elasticsearch GeoPoint with spring mvc

I'm using Elasticsearch for storing the data (Spring Data Elasticsearch), and I need to store a geolocation in my document. The class structure is in the following format.
@Document(indexName = "outlet")
public class OutletIndex implements IESMapper {

    @Id
    private String path;

    private String name;

    @GeoPointField
    private GeoPoint geoPoint;

    // setters and getters
}
Since there are no setters on the GeoPoint class, it does not work with the @ModelAttribute annotation in a Spring MVC controller. Because I need to receive it from the front end, I updated the class to:
@Document(indexName = "outlet")
public class OutletIndex implements IESMapper {

    @Id
    private String path;

    private String name;

    @GeoPointField
    private GeoPoint geoPoint;

    private String geoLocation;

    public void setGeoLocation(String geoLocation) {
        this.geoLocation = geoLocation;
        // use isEmpty(): comparing strings with != checks reference identity, not content
        if (geoLocation != null && !geoLocation.trim().isEmpty()) {
            String[] loc = geoLocation.split(",");
            this.geoPoint = new GeoPoint(Double.parseDouble(loc[0]), Double.parseDouble(loc[1]));
        }
    }

    // setters and getters
}
It has an additional field which holds the string representation, and its setter also updates the GeoPoint.
Is there a better approach to do this?
EDIT: One more doubt: is there any way to use a string (a comma-separated value) as the geo point directly?
It seems like you are using the geo_point data type with data in Elasticsearch in the format location: "latVal,lonVal". This is one of the valid formats Elasticsearch allows for geo_point.
Elasticsearch simply returns the data in the format in which you stored it. For the same geo_point field in your ES schema, you can store values in different formats in different documents, and when you retrieve them ES will return each one in the format you stored.
This causes issues in a type-safe language like Java, because each format for the same type has to be handled specially. You can do one of two things: ensure a consistent format throughout (both while indexing and retrieving), or handle every corner case on the application side.
To avoid this mess, a good rule of thumb I follow is to use the same format as the one used by the Java client. In this case I would not use any custom serialization and deserialization logic; instead, it would be better to save the location in the format location: {"lat": latVal, "lon": lonVal} (the GeoPoint class expects a double lat and a double lon).
If you ensure this, you will no longer need to think about the multiple formats you might receive and their corner cases, and you will avoid a lot of confusion.
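To illustrate the two representations, here is a small JDK-only sketch with no ES client involved (method names are made up for the example):

```java
public class GeoFormats {

    // Render a lat/lon pair in the object form recommended above.
    public static String asObject(double lat, double lon) {
        return String.format("{\"lat\": %s, \"lon\": %s}", lat, lon);
    }

    // Parse the comma-separated "lat,lon" string form into a double[].
    public static double[] parseCommaSeparated(String value) {
        String[] parts = value.split(",");
        return new double[] { Double.parseDouble(parts[0].trim()),
                              Double.parseDouble(parts[1].trim()) };
    }

    public static void main(String[] args) {
        double[] p = parseCommaSeparated("48.85, 2.35");
        System.out.println(asObject(p[0], p[1])); // {"lat": 48.85, "lon": 2.35}
    }
}
```

Picking one of the two forms at indexing time and sticking to it is the point; the conversion itself is trivial either way.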

How to create objects from different data sources/formats

I'm working on a project where I need to create objects from different data sources/formats. I would like to know the best way to organize the source code to make this easy.
Let's say I have a class User and I want the ability to create objects from database data and from JSON. The purpose is to let users of my app browse data both online and offline. I'm using GSON and ORMLite. In addition, the fields in JSON and in the database may differ, but the "main" fields are the same. Is it a good idea to create a class which contains all properties/fields from both JSON and the database? Something similar to the class below:
@DatabaseTable(tableName = "user", daoClass = UserDaoImpl.class)
public class User {

    public static final String ID_FIELD_NAME = "id";
    public static final String USER_LOGIN_FIELD_NAME = "login";
    public static final String USER_EMAIL_FIELD_NAME = "email";
    public static final String SERIALIZED_COUNTRY_FIELD_NAME = "user_county";

    // DB & JSON
    @DatabaseField(generatedId = true, columnName = ID_FIELD_NAME)
    int id;

    // DB & JSON
    @DatabaseField(columnName = USER_LOGIN_FIELD_NAME)
    String login;

    // DB & JSON
    @DatabaseField(columnName = USER_EMAIL_FIELD_NAME)
    String email;

    // Only JSON
    @SerializedName(SERIALIZED_COUNTRY_FIELD_NAME)
    String country;

    public User() {
    }
}
Is it a good idea to create a class which contains all properties/fields from JSON and the database?
I think the short answer is yes. You can certainly use the same objects to represent the data in the database and via JSON.
Where you will run into problems is when you need to change the data representation in the database but don't want to change your JSON API, or vice versa. Then you will need two separate classes and a mapping function between them.
But if you can get away with one class, then that's the best way.
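If the representations do diverge, the mapping function mentioned above could be sketched like this (DbUser and ApiUser are hypothetical names, not from the question):

```java
public class UserMapping {

    // Hypothetical persistence-side representation.
    static class DbUser {
        int id;
        String login;
        String email;
    }

    // Hypothetical JSON-side representation with an extra field.
    static class ApiUser {
        int id;
        String login;
        String email;
        String country;
    }

    // Map the shared fields; JSON-only fields get a default.
    static ApiUser toApi(DbUser db) {
        ApiUser api = new ApiUser();
        api.id = db.id;
        api.login = db.login;
        api.email = db.email;
        api.country = null; // not stored in the database
        return api;
    }

    public static void main(String[] args) {
        DbUser db = new DbUser();
        db.id = 1;
        db.login = "jdoe";
        db.email = "jdoe@example.com";
        System.out.println(toApi(db).login); // jdoe
    }
}
```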
Consider the factory pattern: you can use it to abstract the creation of concrete User objects as a function of the data source.
Make User an interface, and have a data-source-specific implementation of User for each type of data source.
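A minimal sketch of that suggestion (class and method names are illustrative; the real implementations would wrap ORMLite and GSON respectively):

```java
public class UserFactoryDemo {

    // The common interface suggested above.
    interface User {
        String login();
    }

    // One implementation per data source.
    static class DatabaseUser implements User {
        private final String login;
        DatabaseUser(String login) { this.login = login; }
        public String login() { return login; }
    }

    static class JsonUser implements User {
        private final String login;
        JsonUser(String login) { this.login = login; }
        public String login() { return login; }
    }

    enum Source { DATABASE, JSON }

    // Factory: picks the concrete class based on the data source.
    static User fromSource(Source source, String login) {
        switch (source) {
            case DATABASE: return new DatabaseUser(login);
            case JSON:     return new JsonUser(login);
            default:       throw new IllegalArgumentException("unknown source");
        }
    }

    public static void main(String[] args) {
        User u = fromSource(Source.JSON, "jdoe");
        System.out.println(u.getClass().getSimpleName()); // JsonUser
    }
}
```

Callers only ever see the User interface, so adding a third data source means adding one implementation and one factory branch.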

Best way to store several ArrayLists in ORMLite for ApplicationSettings

Right now I am trying to persist the settings of an application to my DB with the help of ORMLite.
My settings consist of various String[], ArrayList<String>, or maybe Map fields; actually, just a bunch of Strings. So my question is: what data type should I use for such a scenario? Optionally, could you also provide a small code example?
I have tried different things, but I was never happy with the solution. I tried stuff like:
private HashMap<String, ArrayList<String>> configurationMap;
private String[] stringArr1;
private String[] stringArr2;
private ArrayList<String> arrList1;
But when it comes to setting up the DB data types, I always think "I don't want an extra table to store the data" or "having a map serialized into the DB sucks...".
Does anybody have an idea how to solve this in a nice and convenient way?
For the ArrayList you can simply do the following:
public class YourClass {

    @GeneratedId
    private int id;

    @ForeignCollectionField
    private Collection<MyString> bunchOfStrings = new ArrayList<MyString>();

    // ...
}
In the MyString class you have the following:
public class MyString {

    @DatabaseField(canBeNull = true, foreign = true)
    private YourClass yourClass;

    @DatabaseField
    private String text;

    // ...
}
And that's all.
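If the extra table is not wanted, one alternative is to flatten each list into a single string column yourself. A JDK-only sketch of that round trip (the delimiter choice is an assumption and must not occur in the stored values):

```java
import java.util.Arrays;
import java.util.List;

public class ListColumn {

    private static final String DELIMITER = ";";

    // Flatten a list into a single string suitable for one DB column.
    public static String toColumn(List<String> values) {
        return String.join(DELIMITER, values);
    }

    // Restore the list from the stored column value.
    public static List<String> fromColumn(String column) {
        if (column == null || column.isEmpty()) {
            return List.of();
        }
        return Arrays.asList(column.split(DELIMITER));
    }

    public static void main(String[] args) {
        String stored = toColumn(List.of("red", "green", "blue"));
        System.out.println(stored);             // red;green;blue
        System.out.println(fromColumn(stored)); // [red, green, blue]
    }
}
```

The conversion could live in the settings entity's getters/setters, so the DB field stays a plain String.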

Java: merge instance w/ Oracle CLOB data using Hibernate

JDK 1.6.x, Hibernate 3.2.7, Oracle 10g (ojdbc14.jar)
I have an entity class that contains a CLOB. Through a RESTful call I am passed a string that will become the content of the CLOB. I am having trouble stuffing the string into a CLOB for later persistence. Here's the class...
public class MyClass implements java.io.Serializable {

    private static final long serialVersionUID = 5507279748316866736L;

    private long id;
    private String name;
    private String description;
    private java.sql.Clob featuresJson;
    // ...etc...
Here's the deserialization code...
try {
    String jsonStr = msoNode.path("features_json").getTextValue();
    SerialClob clob = new SerialClob(jsonStr.toCharArray());
    mso.setFeaturesJson(clob);
} catch (Exception e) {
    log.error("MyClassDeserializer.deserialize(): Exception deserializing the features JSON. " + e.getMessage());
}
After deserialization, I move on to the DAO's merge call...
MyClass savedOverlay = myClassDao.merge(overlay);
where "overlay" is a deserialized MyClass instance. At this point I can peek inside the CLOB and see the data; however, the returned instance has the CLOB field nulled out, and the CLOB column in the database is null as well!
What's wrong with my deserialization code? I've tried a few other things, but I get failures each and every time (at least that is consistent!).
SOLVED!
The @Lob annotation needed to be specified on the column. Also, the same field, which had been reverse-engineered to type java.sql.Clob, needed to be changed to String.
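For reference, a String can be wrapped in and read back out of a CLOB without a live connection using javax.sql.rowset.serial.SerialClob, which ships with the JDK; this is the conversion the deserializer above performs:

```java
import javax.sql.rowset.serial.SerialClob;
import java.sql.Clob;

public class ClobRoundTrip {

    // Wrap a String in a Clob, as the deserializer above does.
    public static Clob toClob(String value) throws Exception {
        return new SerialClob(value.toCharArray());
    }

    // Read the full contents back out; getSubString is 1-indexed.
    public static String fromClob(Clob clob) throws Exception {
        return clob.getSubString(1, (int) clob.length());
    }

    public static void main(String[] args) throws Exception {
        Clob clob = toClob("{\"feature\": true}");
        System.out.println(fromClob(clob)); // {"feature": true}
    }
}
```

With the field mapped as a @Lob String, this manual wrapping becomes unnecessary; Hibernate handles the CLOB conversion itself.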
