Saving ENUM in spring data elasticsearch - java

I am trying to save my entity in Elasticsearch using Spring Data Elasticsearch. All the attributes are saved (including nested objects), except for the enum, which is always stored as null. This is my entity:
@Entity
@Document(indexName = "invoices", type = "invoices", shards = 1)
public class Invoice {

    @Transient
    @JsonIgnore
    @org.springframework.data.annotation.Id
    private String searchIndex;

    @Field(type = FieldType.String)
    private InvoiceStateEnum state;

    // ...
}
With and without the @Field annotation, state is saved as null, even though the object being saved has a value for this enum.
Any help is appreciated.

As Spring Data Elasticsearch uses Jackson, you can put the @JsonFormat annotation on your enum so it is serialized as a string:

@JsonFormat(shape = JsonFormat.Shape.STRING)
public enum InvoiceStateEnum {
    // your enum code
}

I was able to solve the issue by removing the data folder under my project and rerunning the application. It seems that for some reason Elasticsearch was not updating the records, so I was getting null because the attribute had been added recently.
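For anyone who cannot simply delete the data directory: the underlying problem seems to be that the existing index kept its old mapping, so the newly added field never made it in. A minimal sketch of dropping and recreating the index programmatically, assuming Spring Data Elasticsearch 4.x and its IndexOperations API (older versions expose similar methods on ElasticsearchTemplate):

import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.IndexOperations;

public class InvoiceIndexRebuilder {

    private final ElasticsearchOperations operations;

    public InvoiceIndexRebuilder(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    // Drops and recreates the invoices index so the mapping picks up
    // newly added fields. WARNING: this deletes every document in the
    // index, so the data has to be reindexed afterwards.
    public void rebuildIndex() {
        IndexOperations indexOps = operations.indexOps(Invoice.class);
        if (indexOps.exists()) {
            indexOps.delete();
        }
        indexOps.create();
        indexOps.putMapping(indexOps.createMapping(Invoice.class));
    }
}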

Related

How to enforce unique field with MongoDB in Spring

I have a POJO with two fields that need to be unique: the id and the email. So I added the @Indexed(unique = true) annotation to the necessary fields, like so:
public class User {

    @Id
    @Indexed(unique = true)
    private String id;

    @Indexed(unique = true)
    private String email;

    private int money;
}
I then tested it out, and it was not enforced. So I googled around and found a previous answer here - Spring Data: Unique field in MongoDB document - and subsequently deleted the collection, added spring.data.mongodb.auto-index-creation=true to my application.properties file, and tried again.
However, the unique field still isn't enforced! I see there is another answer using ensureIndex(), but it also has a great comment that was never answered: why do we need the annotation if all the work is done on mongoTemplate?
So, since the question is old enough that apparently the only working answer is deprecated (the new way is createIndex()), I thought it was time for a new version. Is it possible to require a field in a Mongo collection to be unique from Spring Boot?
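For what it's worth, one approach that sidesteps annotation scanning entirely is to create the unique index programmatically at startup. A minimal sketch, assuming Spring Data MongoDB's MongoTemplate (the configuration class and bean names are illustrative):

import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.index.Index;

@Configuration
public class MongoIndexConfig {

    // Runs once at startup; ensureIndex is idempotent when the index
    // already exists with the same options.
    @Bean
    public ApplicationRunner ensureUserIndexes(MongoTemplate mongoTemplate) {
        return args -> mongoTemplate.indexOps(User.class)
                .ensureIndex(new Index().on("email", Sort.Direction.ASC).unique());
    }
}

With the index in place, inserting a second document with the same email should fail with a DuplicateKeyException, whether or not auto-index creation is enabled.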

Reducing model #Entity bloat with Spring Hibernate and MSSQL using DTO + Stored Proc

Edit: I think it would be helpful to explain my goal here first. My goal is to reduce and avoid model/@Entity bloat when using stored procedures with Hibernate. You can get raw data back from the persistent EntityManager when using a stored procedure, but that data will not be mapped. If you send in a stripped-down model to Hibernate, Hibernate will only send back the columns annotated as @Column on the @Entity model (almost forcing you to create a new @Entity for every stored procedure!). You can attempt to map this data with a DTO that has more properties, but they won't map to anything, because all the fields which were not included on the model will return null.
I've been struggling to find an answer to this in my research. We use an MSSQL database and Spring with JPA (javax) / Hibernate persistence, but do not rely on Hibernate for its ORM: all CRUD operations are done using stored procedures. We have several models (Spring @Entity) which work well for retrieving and mapping data. For example, a basic user model:
import javax.persistence.*;

@Entity
public class User {

    @Id
    @Column
    private int userID;

    @Column
    private String userName;

    @Column
    private Date userDOB;

    public User(UserDTO userDTO) {
        userName = userDTO.getUserName();
        userID = userDTO.getUserID();
        // Extra column userDOB, and no way to map accountDetails from the DTO
    }

    public int getUserID() {...}
    public String getUserName() {...}
    public Date getUserDOB() {...}
}
This works well when the stored procedure selects columns in a way that matches the model. However, in cases where we want to selectively query data using joins, the columns (names and number of columns) often don't match the models. In this case it makes sense to have DTOs to actually map the data received from the database, using a constructor in the related model. However, Spring doesn't like injecting the DTO directly (it complains that it isn't an entity), and injecting the model directly into the stored procedure query will fail, since the columns (number of columns!) don't match up.
public class UserDTO {

    private String accountDetails;
    private int userID;
    private String userName;

    public int getUserID() {...}
    public String getUserName() {...}
}
I've tried using ModelMapper, but its first argument is an Object (the data), which isn't obtainable, since the data can't be mapped. I can of course call a StoredProcedureQuery without a model hint and receive the raw data, but it will not be mapped.
import javax.persistence.*;

public class UserRepo {

    public List<UserDTO> getUserAccountInfo() {
        StoredProcedureQuery query =
                entityManager.createStoredProcedureQuery(storedProcedureSelectUserAccount, User.class);
        query.execute(); // Will fail here with SQL Server error: Unknown column userDOB.
        List<UserDTO> result = query.getResultList();
        return result;
    }
}
Others have suggested using raw selects or other string-based mapping strategies, but I would really appreciate some advice on retrieving and mapping the returned data using a DTO with the already-written stored procedures. Thank you!
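One JPA-level option worth sketching here (not from the thread, so treat it as an assumption): @SqlResultSetMapping with @ConstructorResult can map a stored procedure's result set straight into a non-entity DTO, which avoids creating a new @Entity per procedure. The procedure and mapping names below are illustrative, and UserDTO needs a constructor matching the listed columns in order:

import java.util.List;
import javax.persistence.*;

// The mapping has to be declared on some managed @Entity; the existing User works.
@Entity
@SqlResultSetMapping(
        name = "UserDTOMapping",
        classes = @ConstructorResult(
                targetClass = UserDTO.class,
                columns = {
                        @ColumnResult(name = "userID", type = Integer.class),
                        @ColumnResult(name = "userName", type = String.class),
                        @ColumnResult(name = "accountDetails", type = String.class)
                }))
@NamedStoredProcedureQuery(
        name = "User.accountInfo",
        procedureName = "user_account_proc",   // illustrative procedure name
        resultSetMappings = "UserDTOMapping")
public class User { /* ... as above ... */ }

// In the repository:
List<UserDTO> result = entityManager
        .createNamedStoredProcedureQuery("User.accountInfo")
        .getResultList();

Only the listed columns are read, so the result set no longer has to line up with an entity's @Column set, and the extra userDOB column never enters the picture.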

Spring Data Elasticsearch is not writing null values to inserted documents

I have an ES entity:
@Document(indexName = "company")
public class CompanyEntity {

    @MultiField(
            mainField = @Field(type = Text, name = "alias_name"),
            otherFields = {@InnerField(suffix = "keyword", type = Keyword, nullValue = "NULL")})
    @Nullable
    private String aliasName;
    ...
}
If I create a CompanyEntity object and do not supply an aliasName, my expectation is that Spring Data Elasticsearch would persist null values for entity properties that are @Nullable. But this does not seem to be the case, even if I supply a value for nullValue in the @InnerField annotation.
I'm sure I have misconfigured an annotation or something, but I would really like to use Elasticsearch's null_value parameter as detailed here. First, though, I need to understand how to get Spring Data Elasticsearch to persist null values.
Thank you for your time!
As null values cannot be indexed or searched, they are normally not stored by Spring Data Elasticsearch, thus reducing the size of the indexed document.
The possibility to store null values nevertheless was added with this issue and will be contained in version 4.1.RC1, which should be released tomorrow.
Edit: 4.1.0.RC1 is released now
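For reference, in 4.1 the opt-in is per field. A sketch of what the mapping looks like, based on my reading of the linked issue (double-check the attribute name against the released 4.1 docs):

@Document(indexName = "company")
public class CompanyEntity {

    // storeNullValue = true tells Spring Data Elasticsearch to write an
    // explicit null into the document instead of omitting the field.
    @Nullable
    @Field(type = FieldType.Keyword, storeNullValue = true)
    private String aliasName;
}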

Hibernate SchemaFilterProvider get Java entity name

I would like to prevent Hibernate from validating certain classes on startup.
My particular use-case:
spring.jpa.hibernate.ddl-auto=validate

@Table(name = "SAME_TABLE")
public class Entity1 {

    @Column
    private Long value;

    // rest of values
}

@Table(name = "SAME_TABLE")
public class SearchEntity2 {

    @Column
    private String value;

    // rest of values
}
As you can see, I have two classes mapped to the same table, called SAME_TABLE. This is because I want to do wildcard searches on the numeric field value.
JPA validation fails on Oracle (H2 succeeds, surprisingly) because it detects that the String is not NUMERIC(10).
This question by @b0gusb provides an excellent way of filtering out via table name:
How to disable schema validation in Hibernate for certain entities?
Unfortunately, my table name is identical for both entities. Is there any way of getting to the Java class name from the SchemaFilter, or perhaps another way of doing this?
Thanks
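For context, the table-name filter from the linked answer looks roughly like the sketch below, which also shows why it cannot distinguish the two entities here: SchemaFilter only ever receives the mapped Table, never the originating Java class (provider wiring via the hibernate.hbm2ddl.schema_filter_provider property; class names illustrative):

import org.hibernate.boot.model.relational.Namespace;
import org.hibernate.boot.model.relational.Sequence;
import org.hibernate.mapping.Table;
import org.hibernate.tool.schema.internal.DefaultSchemaFilter;
import org.hibernate.tool.schema.spi.SchemaFilter;
import org.hibernate.tool.schema.spi.SchemaFilterProvider;

public class CustomSchemaFilterProvider implements SchemaFilterProvider {

    @Override
    public SchemaFilter getCreateFilter() { return DefaultSchemaFilter.INSTANCE; }

    @Override
    public SchemaFilter getDropFilter() { return DefaultSchemaFilter.INSTANCE; }

    @Override
    public SchemaFilter getMigrateFilter() { return DefaultSchemaFilter.INSTANCE; }

    @Override
    public SchemaFilter getValidateFilter() {
        return new SchemaFilter() {
            @Override
            public boolean includeNamespace(Namespace namespace) { return true; }

            @Override
            public boolean includeTable(Table table) {
                // Only the Table is visible here -- there is no reference back
                // to Entity1 or SearchEntity2, which is the limitation asked about.
                return !"SAME_TABLE".equals(table.getName());
            }

            @Override
            public boolean includeSequence(Sequence sequence) { return true; }
        };
    }
}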

How to use 'id' when using DocumentDB via MongoDB API in Spring Data?

I have used both of the ways of mapping _id that are described in the Spring docs here:
using the @Id annotation
having a field named id without any annotation
in my previous project, where we used MongoDB as the database and Spring Data for DAO operations. It worked without any problem for String as well as for BigInteger.
Now we are using DocumentDB with the MongoDB API (as Spring Data does not support DocumentDB).
I am able to use all the Spring Data methods, but I am not able to use a custom id.
Below is my entity:
public class S {

    private String id;

    /* other fields here */

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    /* getters and setters for other fields */
}
This is the DAO:
public interface SDao extends MongoRepository<S, String> {
}
Now if anywhere in my code I do:
s = new S();
s.setId("some-id-here");
The record gets successfully persisted in the DB with the custom id some-id-here as a String (not an ObjectId), but after that it throws a ClassCastException saying Long cannot be converted to Integer.
The same happens when using BigInteger for the id.
If I do not set the custom id, i.e. I comment out the setting of the id as below:
s = new S();
// s.setId("some-id-here");
no exception is thrown, but the record is persisted with a random id provided by the database itself, as an ObjectId.
I want to save the record with a custom id so that I can easily update it when needed.
Currently, if I have to update a record, I need to retrieve it using a key which is not mapped to _id, update it, delete the old record from the DB, and then persist the updated one, which I feel is absolutely inefficient, as I am not able to make use of _id.
My question is: why am I getting a ClassCastException, and one mentioning a conversion of Long to Integer at that?
Is DocumentDB internally doing some conversion which throws this exception? If yes, how do I tackle it? Is this a bug?
One alternative could be to let DocumentDB/MongoDB create those ids for you by default. In your class you can have another field which serves as a natural id, and create a unique index on that field for fetch optimization.
Refer to https://docs.mongodb.com/manual/reference/method/db.collection.createIndex/ for indexes.
The id generation rules are explained in the Spring docs referenced in the question.
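A minimal sketch of that alternative, assuming Spring Data MongoDB annotations (businessKey is an illustrative name for the natural key):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class S {

    @Id
    private String id;              // left unset, so the database assigns an ObjectId

    @Indexed(unique = true)
    private String businessKey;     // illustrative natural key used for look-ups and updates

    /* other fields, getters and setters */
}

Updates then go through a repository finder such as findByBusinessKey(...) rather than _id, with the unique index keeping that look-up efficient.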
