I would like to change an entity property from String to Long. I have seen Nick answering a similar problem in "Change IntegerProperty to FloatProperty of existing AppEngine DataStore", but I am writing in Java and need a code example, since I don't know anything about mapreduce.
For example, we want to change userId from String to Long in this class.
I would also like advice on my idea of storing dates as long instead of String, so that the time information can be consumed readily from Android, GWT and more (over REST JSON or RPC). Right now, GWT does not have Joda-Time and has only limited support for java.util.Date and date parsing.
If you really want to convert from String to Long, I can't see any choice except to write a conversion snippet using the raw GAE API, e.g.:
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.PreparedQuery;
import com.google.appengine.api.datastore.Query;

DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Query q = new Query(Task.class.getName());
PreparedQuery pq = datastore.prepare(q);
for (Entity entity : pq.asIterable()) {
    // read the old String value, replace it with a Long,
    // and save the entity back (without put() nothing is persisted)
    String orig = entity.getProperty("userId").toString();
    entity.setProperty("userId", Long.parseLong(orig));
    datastore.put(entity);
}
What is your persistence interface? JDO (mine), JPA, Objectify, Twig, the raw GAE/J API? I don't think many people can give you a code example without knowing this.
Also, please post a code extract of your existing persistent entity (with an underlying date-time, I presume), including the data member you are talking about.
Your class is using JPA, not JDO. The latest version (v2.x) of the GAE JPA plugin allows persisting (java.util.)Date as Long or String. This wouldn't cater for your migration of existing data (see the reply by Jonathan for that), but it would allow you to persist future Date fields as Long. IIRC, specifying the "jdbcType" (a DataNucleus extension annotation) as INTEGER would trigger that.
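As a rough sketch of what that might look like (untested; the @Extension usage and the "jdbc-type" key here are my assumptions about the DataNucleus extension mechanism, so verify against the docs for your plugin version):

import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.datanucleus.api.jpa.annotations.Extension;

@Entity
public class Task {

    @Id
    private Long id;

    // assumption: the "jdbc-type" extension asks DataNucleus to store this
    // Date in an integer column, i.e. as a raw long value
    @Extension(vendorName = "datanucleus", key = "jdbc-type", value = "INTEGER")
    private Date created;
}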
I have a PostgreSQL production database, but I'm trying to run some of my automated tests against an in-memory H2 database. I'm trying to persist JSON-formatted data to a table, and while I'm able to write the data with no complaints, I get conversion exceptions when I read it back. I have no problem doing this against the production PostgreSQL database.
The object I'm persisting is structured similarly to the following:
@Entity
public class Record {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    @Column(columnDefinition = "jsonb")
    @Convert(converter = PersonalInfoConverter.class)
    private PersonalInfo personalInfo;

    public Record() {}

    public Record(PersonalInfo personalInfo) {
        this.personalInfo = personalInfo;
    }
}
The PersonalInfoConverter just uses a Jackson ObjectMapper to de/serialise the object from/to a String (pretty standard stuff with writeValueAsString and readValue). To get jsonb to work with H2, I used this trick, which basically sets up jsonb as an alias for H2's JSON type.
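For reference, such a converter might look like the following sketch (the question's converter is not shown; this assumes Jackson 2.10+, where readValue(String, Class) declares JsonProcessingException):

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

@Converter
public class PersonalInfoConverter implements AttributeConverter<PersonalInfo, String> {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public String convertToDatabaseColumn(PersonalInfo attribute) {
        try {
            // serialise the POJO to a JSON string for the jsonb column
            return MAPPER.writeValueAsString(attribute);
        } catch (JsonProcessingException e) {
            throw new IllegalArgumentException("Could not serialise PersonalInfo", e);
        }
    }

    @Override
    public PersonalInfo convertToEntityAttribute(String dbData) {
        try {
            // deserialise the JSON string from the column back into the POJO
            return MAPPER.readValue(dbData, PersonalInfo.class);
        } catch (JsonProcessingException e) {
            throw new IllegalArgumentException("Could not deserialise PersonalInfo", e);
        }
    }
}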
I kept running into conversion errors when reading records from the database, until I stumbled upon this question, which linked to a further discussion on GitHub about inserting JSON-formatted strings into H2 tables. It sounds like, to get this to work properly, I need to specifically annotate the string inserted into the H2 database. I assumed that, if this were the case, Hibernate would handle it properly itself, but it doesn't seem to work out of the box. How do I configure my code to get this to work?
In the meantime, I'm working around this issue by using jsonb as an alias for H2's text type instead:
CREATE TYPE "JSONB" as text;
I've created a project to demonstrate the issue.
Hibernate does not know about the "JSON" SQL data type and how it needs to be handled. Just use text as you do now; that's totally fine. AFAIU, the JSON data type in H2 is just a domain type with validation, i.e. you could replace it with TEXT CHECK is_json(..), so there is not much value in using that particular data type. You could tell Hibernate to use @ColumnTransformer to append FORMAT JSON, but then you'd have issues with PostgreSQL again. Overall, this cross-database testing with proprietary features that Hibernate does not abstract over is simply a mess. I would suggest you simply drop H2 and use PostgreSQL with fsync=off for testing, which is quite fast already.
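For illustration, the @ColumnTransformer route might look like this sketch (untested; the FORMAT JSON write expression targets H2 and, as noted, would break the same mapping on PostgreSQL):

import org.hibernate.annotations.ColumnTransformer;

// appends H2's FORMAT JSON qualifier to the bound parameter on writes
@Column(columnDefinition = "jsonb")
@Convert(converter = PersonalInfoConverter.class)
@ColumnTransformer(write = "? FORMAT JSON")
private PersonalInfo personalInfo;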
We are using the spring-data-elasticsearch project to interface with our Elasticsearch clusters, and have been using it for around a year. Recently, we moved to Elasticsearch 5.x (from 2.x), which introduces the "keyword" datatype.
I would like to index these keyword fields as lowercase values, which I know can be done with field normalizers. However, I can't find anywhere in the documentation or online how to add a normalizer to a field through the annotation-based mapping.
E.g.:
@Field(type = FieldType.Keyword, <some_other_param = some_normalizer>)
Is this something that can be done? I know that we can use JSON based mapping definitions as well, so I will fall back to that option if needed, but would like to be able to do it this way if possible.
Any help would be very appreciated!
Since the pull request by @xhaggi has been merged (spring-data-elasticsearch 3.1.3+, or Spring Boot 2.1.1+), there is a normalizer attribute in the @Field annotation.
To use it, we need to:
1. declare a @Field or an @InnerField with the params type = FieldType.Keyword, normalizer = "%NORMALIZER_NAME%";
2. add @Setting(settingPath = "%PATH_TO_NORMALIZER_JSON_FILE%") at the class level;
3. put the normalizer mapping into a JSON file at %PATH_TO_NORMALIZER_JSON_FILE%.
Example of usage:
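A minimal sketch of that wiring (the index name, normalizer name and settings path below are placeholders of my own, not from the original answer):

import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.data.elasticsearch.annotations.Setting;

@Document(indexName = "books")
@Setting(settingPath = "/elasticsearch/normalizer-settings.json")
public class Book {

    // indexed as a keyword and lowercased by the custom normalizer
    @Field(type = FieldType.Keyword, normalizer = "lowercase_normalizer")
    private String title;
}

/*
 * Assumed content of normalizer-settings.json:
 * {
 *   "analysis": {
 *     "normalizer": {
 *       "lowercase_normalizer": { "type": "custom", "filter": ["lowercase"] }
 *     }
 *   }
 * }
 */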
FYI, for anyone looking at this: the answer is that there is not a way to do this at this time.
You can do this, however, by creating your mappings file as JSON in the Elasticsearch format. See:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html
You can then create that JSON file and link it to your domain model with:
@Mapping(mappingPath = "some/path/mapping.json")
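Put together, a sketch of the domain class (the index name is a placeholder):

import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Mapping;

// the linked JSON file holds the full Elasticsearch mapping for this index
@Document(indexName = "books")
@Mapping(mappingPath = "some/path/mapping.json")
public class Book {
    private String title;
}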
Note that this is not, in my experience, compatible with the provided annotation-based mapping for fields.
There is a pending issue https://jira.spring.io/browse/DATAES-492 waiting for review.
I'm using simple Java classes as the schema for my MongoDB collection.
There are several frameworks for serialization/deserialization to/from JSON and for CRUD operations against Mongo (I've looked into the Jackson serializer and Morphia).
But none of them seems to provide a solution for handling changes:
Let's say I have this class as my schema:
class Person {
    String name;
    int age;
    String occupation;
}
In my code, I will probably use a setter in some place for age:
Person newDbEntry = new Person();
newDbEntry.setAge(45);
newDbEntry.setOccupation("Carpenter");
Now let's say that at some point in the development process, it is decided that the age field needs to be renamed to "theAge", and that the "occupation" field should be removed from this collection completely and moved to a new collection.
The problem that I'm faced with is that all my queries look like this:
JsonObject query = new JsonObject().put("age", new JsonObject().put("$gte", 22));
In other words, all field names are written into queries as Strings (and likewise in all the other Mongo APIs: update, findAndModify, etc.).
I'm looking for a way to "bind" all mentions of the field "age" in my code with the POJO class- so that when something in the POJO schema changes (like renaming this field), I'll have (ideally) compiler errors in all queries that mention this field.
As it currently stands, changes to schema cause no compiler errors and - more critically - usually no runtime errors. The old string query just quietly returns no results, or something similar. This makes changes to the schema very hard to implement.
How should this be done correctly?
Here's the solution that I ended up using:
Project Lombok now supports field-name constant generation:
https://projectlombok.org/features/experimental/FieldNameConstants
So instead of using the field name hardcoded as a string:
serviceRepository.setField(id, "service.serviceName", "newName");
I use:
serviceRepository.setField(id, ConnectivityServiceDetails.Fields.service + "." + ConnectivityService.Fields.serviceName, "newName");
This way, when we search in IntelliJ for usages of this field (or try to refactor it), it will automatically find these places as well.
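Tying this back to the Person example above, a minimal sketch (assuming Lombok on the classpath; JsonObject stands in for whatever JSON builder your queries use):

import io.vertx.core.json.JsonObject;
import lombok.experimental.FieldNameConstants;

// Lombok generates an inner Fields class with one String constant per field,
// so every query that mentions a field is findable and refactor-safe
@FieldNameConstants
class Person {
    String name;
    int age;
}

// Person.Fields.age is the String "age"; a rename surfaces every usage
JsonObject query = new JsonObject().put(Person.Fields.age, new JsonObject().put("$gte", 22));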
I was following an example of how to build an OData service with Olingo in Java (a Maven project). The provided example doesn't have any database interaction; it uses a Storage class that contains hard-coded data.
You can find the sample code on Git; please refer to example p0_all at the provided URL.
Does anyone know how we can connect the Git example to a database and then perform CRUD operations?
Please help me with some good examples or concepts.
Thanking you in advance.
I recently built an oData producer using Olingo and found myself similarly frustrated. I think that part of the issue is that there really are a lot of different ways to build an oData service with Olingo, and the data access piece is entirely up to the developer to sort out in their own project.
Firstly, you need an application that has a database connection set up. So completely disregarding Olingo, you should have an app that connects to and can query a database. If you are uncertain of how to build a java application that can query a MySQL datasource, then you should Google around for tutorials that are related to that problem and have nothing to do with Olingo.
Next you need to write the methods and queries to perform CRUD operations in your application. Again, these methods have nothing to do with Olingo.
Where Olingo starts to come into play is in your implementation of the processor classes: EntityCollectionProcessor, EntityProcessor, etc. (Note that there are other concerns, such as setting up your CsdlEntityTypes and the Schema/Service Document, but those are outside the scope of your question.)
Let's start by looking at EntityCollectionProcessor. When implementing the EntityCollectionProcessor interface you need to override the readEntityCollection() function. The purpose of this function is to parse the oData URI for the entity name, fetch an EntityCollection for that entity, and then serialize the EntityCollection into an oData-compliant response. Here's the implementation of readEntityCollection() from your example link:
public void readEntityCollection(ODataRequest request, ODataResponse response, UriInfo uriInfo,
                                 ContentType responseFormat) throws ODataApplicationException, SerializerException {
    // 1st: retrieve the requested EntitySet from the uriInfo object
    // (representation of the parsed service URI)
    List<UriResource> resourcePaths = uriInfo.getUriResourceParts();
    UriResourceEntitySet uriResourceEntitySet = (UriResourceEntitySet) resourcePaths.get(0);
    // in our example, the first segment is the EntitySet
    EdmEntitySet edmEntitySet = uriResourceEntitySet.getEntitySet();

    // 2nd: fetch the data from the backend for this requested EntitySetName;
    // it has to be delivered as an EntityCollection object
    EntityCollection entitySet = getData(edmEntitySet);

    // 3rd: create a serializer based on the requested format (json)
    ODataSerializer serializer = odata.createSerializer(responseFormat);

    // 4th: serialize the content: transform from the EntitySet object to an InputStream
    EdmEntityType edmEntityType = edmEntitySet.getEntityType();
    ContextURL contextUrl = ContextURL.with().entitySet(edmEntitySet).build();
    final String id = request.getRawBaseUri() + "/" + edmEntitySet.getName();
    EntityCollectionSerializerOptions opts =
        EntityCollectionSerializerOptions.with().id(id).contextURL(contextUrl).build();
    SerializerResult serializerResult =
        serializer.entityCollection(serviceMetadata, edmEntityType, entitySet, opts);
    InputStream serializedContent = serializerResult.getContent();

    // finally: configure the response object: set the body, headers and status code
    response.setContent(serializedContent);
    response.setStatusCode(HttpStatusCode.OK.getStatusCode());
    response.setHeader(HttpHeader.CONTENT_TYPE, responseFormat.toContentTypeString());
}
You can ignore (and reuse) everything in this example except for the "2nd" step:
EntityCollection entitySet = getData(edmEntitySet);
This line of code is where Olingo finally starts to interact with our underlying system, and the pattern that we see here informs how we should set up the rest of our CRUD operations.
The function getData(edmEntitySet) can be anything you want, in any class you want. The only restriction is that it must return an EntityCollection. So what you need to do is call a function that queries your MySQL database and returns all records for the given entity (using the string name of the entity). Then, once you have a List, or Set (or whatever) of your records, you need to convert it to an EntityCollection.
As an aside, I think this is probably where the disconnect between the Olingo examples and real-world applications comes from. The code behind that getData(edmEntitySet); call can be architected in any number of ways, depending on the design pattern used in the underlying system (MVC etc.), styling choices, scalability requirements and so on.
Here's an example of how I created an EntityCollection from a List returned by my query (keep in mind that I'm assuming you know how to query your MySQL datasource and have already coded a function that retrieves all records for a given entity):
private List<Foo> getAllFoos() {
    // ... code that queries the datasource and retrieves all Foo records
}

// loop over the List<Foo>, converting each instance of Foo into an Olingo Entity
private EntityCollection makeEntityCollection(List<Foo> fooList) {
    EntityCollection entitySet = new EntityCollection();
    for (Foo foo : fooList) {
        entitySet.getEntities().add(createEntity(foo));
    }
    return entitySet;
}

// convert an instance of the Foo object into an Olingo Entity
private Entity createEntity(Foo foo) {
    Entity tmpEntity = new Entity()
        .addProperty(createPrimitive(Foo.FIELD_ID, foo.getId()))
        .addProperty(createPrimitive(Foo.FIELD_FOO_NAME, foo.getFooName()));
    return tmpEntity;
}
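For completeness, the createPrimitive() helper is not shown in the original answer; a sketch in the style of the Olingo tutorial's DataProvider (Property and ValueType come from org.apache.olingo.commons.api.data) might be:

// wrap a raw value in an Olingo Property of primitive type
private Property createPrimitive(String name, Object value) {
    return new Property(null, name, ValueType.PRIMITIVE, value);
}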
Just for added clarity, getData(edmEntitySet) might look like this:
public EntityCollection getData(String edmEntitySet) {
    // ... code to determine which query to call based on the entity name
    List<Foo> foos = getAllFoos();
    EntityCollection entitySet = makeEntityCollection(foos);
    return entitySet;
}
If you can find an Olingo example that uses a DataProvider class, there are some basic examples of how you might set up the // ...code to determine which query to call based on entity name. I ended up modifying that pattern heavily using Java reflection, but that is totally unrelated to your question.
So getData(edmEntitySet) is a function that takes an entity name, queries the datasource for all records of that entity (returning a List<Foo>), and then converts that List<Foo> into an EntityCollection. The EntityCollection is made by calling the createEntity() function which takes the instance of my Foo object and turns it into an Olingo Entity. The EntityCollection is then returned to the readEntityCollection() function and can be properly serialized and returned as an oData response.
This example exposes a bit of the architecture problem that Olingo has with its own examples. In my example, Foo is an object that has constants identifying the field names, which Olingo uses to generate the oData Schema and Service Document. This object has a method to return its own CsdlEntityType, as well as a constructor, its own properties and getters/setters, etc. You don't have to set your system up this way, but for the scalability requirements of my project this is how I chose to do things.
This is the general pattern that Olingo uses: override the methods of an interface, then call functions in a separate part of your system that interact with your data in the desired manner. Then convert the data into Olingo-readable objects so it can do whatever "oData stuff" needs to be done in the response. If you want to implement CRUD for a single entity, you need to implement EntityProcessor and its various CRUD methods, and inside those methods you need to call the functions in your system (totally separate from any Olingo code) that create(), read() (a single entity), update(), or delete().
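As a rough skeleton of that pattern (method signatures per recent Olingo 4.x, so verify against your version; FooEntityProcessor and the comments describing your data layer are placeholders):

import org.apache.olingo.commons.api.format.ContentType;
import org.apache.olingo.server.api.OData;
import org.apache.olingo.server.api.ODataApplicationException;
import org.apache.olingo.server.api.ODataLibraryException;
import org.apache.olingo.server.api.ODataRequest;
import org.apache.olingo.server.api.ODataResponse;
import org.apache.olingo.server.api.ServiceMetadata;
import org.apache.olingo.server.api.processor.EntityProcessor;
import org.apache.olingo.server.api.uri.UriInfo;

public class FooEntityProcessor implements EntityProcessor {

    private OData odata;
    private ServiceMetadata serviceMetadata;

    @Override
    public void init(OData odata, ServiceMetadata serviceMetadata) {
        this.odata = odata;
        this.serviceMetadata = serviceMetadata;
    }

    @Override
    public void readEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo,
                           ContentType responseFormat) throws ODataApplicationException, ODataLibraryException {
        // parse the key from uriInfo, call your own read() query,
        // convert the result (e.g. with createEntity(foo)), then serialize it
    }

    @Override
    public void createEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo,
                             ContentType requestFormat, ContentType responseFormat)
            throws ODataApplicationException, ODataLibraryException {
        // deserialize the request body, then call your own create() insert
    }

    @Override
    public void updateEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo,
                             ContentType requestFormat, ContentType responseFormat)
            throws ODataApplicationException, ODataLibraryException {
        // call your own update() query with the parsed key and body
    }

    @Override
    public void deleteEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo)
            throws ODataApplicationException, ODataLibraryException {
        // call your own delete() query with the parsed key
    }
}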
I'm writing a demo app using Spring & MongoDB as a database.
My main domain class looks like:
@Document
public class Person {

    @Id
    private String id;

    // some other fields

    private DBObject additionalData;
}
The key point is that additionalData is a subdocument with no schema specified; it is a kind of user-defined JSON. But when I parse this JSON (using the (DBObject) JSON.parse(value) expression), it is stored as a string in MongoDB, and I need it to be a nested document structure.
I've searched for a couple of hours and found no solution. Any ideas?
I'm not really sure of the expected result of casting the result of JSON.parse(value) to DBObject, which is an interface, not a class.
Try casting the result to an implementation of DBObject, such as BasicDBObject (or BasicDBList), or to a Map<String, Object> as mentioned in the comments (it is also an interface, but it does work).
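A minimal sketch of that cast (JSON.parse comes from the legacy com.mongodb.util package; setAdditionalData is a hypothetical setter on the Person class above):

import com.mongodb.BasicDBObject;
import com.mongodb.util.JSON;

// JSON.parse returns an Object whose runtime type for a JSON document is
// BasicDBObject, so cast to the concrete implementation rather than to DBObject
BasicDBObject additionalData = (BasicDBObject) JSON.parse(value);
person.setAdditionalData(additionalData);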
If you're working with Spring Data REST, you will probably not need to deserialize "manually"; Spring will do it for you. Check this answer for a basic example of what to do.
Having data with no schema specified may not be the best idea around (MongoDB spares you from enforcing one at the database level, but you should still do it at the application level); that said, I use similar tricks in production, and you can make it work.