I am embedding a user-defined class (StudentProfile) inside another class (Account) using the @Embedded and @EmbeddedOnly JDO annotations. Google Datastore indexes the attributes of the embedded class by default, and I would like to unindex many of the attributes of the embedded class (StudentProfile).
I have tried using @Extension(vendorName="datanucleus", key="gae.unindexed", value="true") in the Account class, where I declare the StudentProfile attribute, and also on the various attributes of the StudentProfile class itself. Even after this, I am able to filter by studentProfile.shortName (a field I annotated to be unindexed). Since the Google Datastore documentation says that unindexed fields cannot be used as filters, I take this to mean that the unindexing effort was in vain.
I believe there is an @Unindex annotation in Objectify; is there an equivalent in JDO?
I also tried to experiment with unindexing a normal attribute in the Account class (using the JDO Extension) and it works (i.e. it returns null when I try to filter by that attribute). Here is the relevant code from each class. Any help is greatly appreciated! Thanks!
@PersistenceCapable
public class Account {
    // primary key and other attributes...

    // the embedded class, with renaming for the clashing fields
    @Persistent(dependent="true", defaultFetchGroup="false")
    @Extension(vendorName="datanucleus", key="gae.unindexed", value="true")
    @Embedded(members = {
        @Persistent(name="shortName", columns=@Column(name="shortName"), extensions=@Extension(vendorName="datanucleus", key="gae.unindexed", value="true")),
        @Persistent(name="email", columns=@Column(name="personalEmail")),
        @Persistent(name="institute", columns=@Column(name="originalInstitute"))
    })
    private StudentProfile studentProfile;
}
// the embeddedOnly class
@PersistenceCapable(embeddedOnly="true")
public class StudentProfile {
    // other attributes...

    // the filter attribute I used for querying
    @Persistent
    @Extension(vendorName="datanucleus", key="gae.unindexed", value="true")
    private String shortName;
}
public StudentProfileAttributes getStudentProfile(String shortName) {
    Query q = getPM().newQuery(Account.class);
    q.declareParameters("String shortNameParam");
    q.setFilter("studentProfile.shortName == shortNameParam");
    // the following execution works and a result is returned!
    @SuppressWarnings("unchecked")
    List<Account> accountsList = (List<Account>) q.execute(shortName);
    // ...
}
There is a blog post that tells you all about such things, and more recent versions (2.x) of the GAE JDO plugin have an @Unindexed annotation (which is clearly not part of JDO, being GAE-specific).
http://gae-java-persistence.blogspot.co.uk/2009/11/unindexed-properties.html
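With the 2.x plugin, the field can be annotated directly instead of via the extension. A minimal sketch, assuming the GAE-specific annotation lives in the 2.x plugin's com.google.appengine.datanucleus.annotations package (check your plugin version):

```java
// Sketch: @Unindexed is GAE-specific (datanucleus-appengine 2.x), not standard JDO.
import com.google.appengine.datanucleus.annotations.Unindexed;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;

@PersistenceCapable(embeddedOnly = "true")
public class StudentProfile {
    @Persistent
    @Unindexed // replaces the gae.unindexed @Extension shown in the question
    private String shortName;
}
```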
Is it possible to dynamically create a GraphQL schema?
We store the data in MongoDB and there is a possibility of new fields being added. We do not want any code change for a newly added field in the MongoDB document.
Is there any way we can generate the schema dynamically?
The schema is defined in code, but for Java (schema as POJO), when a new attribute is added you have to update and recompile the code, then archive and deploy the jar again. Is there any way to generate the schema from the data instead of pre-defining it?
Currently we are using Java-related projects (graphql-java, graphql-java-annotations) for GraphQL development.
You could use graphql-spqr; it allows you to auto-generate a schema based on your service classes. In your case, it would look like this:
public class Pojo {
    private Long id;
    private String name;
    // whatever Ext is, any (complex) object would work fine
    private List<Ext> exts;
}

public class Ext {
    public String something;
    public String somethingElse;
}
Presumably, you have a service class containing your business logic:
public class PojoService {
    // this could also return List<Pojo> or whatever is applicable
    @GraphQLQuery(name = "pojo")
    public Pojo getPojo() {...}
}
To expose this service, you'd just do the following:
GraphQLSchema schema = new GraphQLSchemaGenerator()
        .withOperationsFromSingleton(new PojoService())
        .generate();
You could then fire a query such as:
query test {
    pojo {
        id
        name
        exts {
            something
            somethingElse
        }
    }
}
No need for strange wrappers or custom code of any kind, nor sacrificing type safety. Works with generics, dependency injection, or any other jazz you may have in your project.
Full disclosure: I'm the author of graphql-spqr.
After some days of investigation, I found that it is hard (or very costly) to generate a schema dynamically in Java.
So, coming at it another way, I think we can use a Map as a compromise to accomplish that.
POJO/Entity
public class POJO {
    @GraphQLField
    private Long id;

    @GraphQLField
    private String name;

    // ...
    @GraphQLField
    private GMap exts;
}
GMap is a customized Map (because Map/HashMap is a JDK class, which cannot be used as a GraphQL schema type directly, only extended).
GMap
public class GMap extends HashMap<String, String> {
    @GraphQLField
    public String get(@GraphQLName("key") String key) {
        return super.get(key);
    }
}
Retrieving the data from the client:
// query script
query test {
    your_method {
        id
        name
        exts {
            get(key: "ext") // an extended attribute added someday
        }
    }
}
// result
{
    "errors": [],
    "data": {
        "list": [
            {"id": 1, "name": "name1", "exts": {"get": "ext1"}},
            {"id": 2, "name": "name2", "exts": {"get": "ext2"}}
        ]
    }
}
OK, so I have an interesting problem. I am using Java/Maven/Spring Boot/Cassandra... and I am trying to create a dynamic instantiation of the Mapper setup they use.
I.E.
// Users.java
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;

@Table(keyspace="mykeyspace", name="users")
public class Users {
    @PartitionKey
    public UUID id;
    //...
}
Now, in order to use this I would have to explicitly say ...
Users user = (DB).mapper(Users.class);
obviously replacing (DB) with my db class.
Which is a great model, but I am running into the problem of code repetition. My Cassandra database has 2 keyspaces, both keyspaces have the exact same tables with the exact same columns in the tables, (this is not my choice, this is an absolute must have according to my company). So when I need to access one or the other based on a form submission it becomes a mess of duplicated code, example:
// myWebController.java
import ...;

@RestController
public class MyRestController {
    @RequestMapping(value="/orders", method=RequestMethod.POST)
    public String getOrders(...) {
        if (Objects.equals(client, "first_client_name")) {
            // do all the things to get first-keyspace objects, like...
            FirstClientUsers users = (db).mapper(FirstClientUsers.class);
            //...
        } else if (Objects.equals(client, "second_client_name")) {
            SecondClientUsers users = (db).mapper(SecondClientUsers.class);
            //....
        }
        return "";
    }
}
I have been trying to use methods like...
Class cls = Class.forName(STRING_INPUT_VARIABLE_HERE);
and that works OK for base classes, but when I try to use the Accessor stuff it no longer works, because Accessors have to be interfaces; when all you have is a Class object, you no longer have the typed interface.
I am trying to find some other solution for making this work dynamically, without duplicating code for every possible client. Each client will have its own keyspace in Cassandra, with the exact same tables as all the other ones.
I cannot change the database model, this is a must according to the company.
With PHP this is extremely simple since it doesn't care about typecasting as much, I can easily do...
function getData($name) {
$className = $name . 'Accessor';
$class = new $className();
}
and poof, I have a dynamic class. But the problem I am running into is the type specification, where I have to explicitly say...
FirstClientUsers users = new FirstClientUsers();
//or even
FirstClientUsers users = Class.forName("FirstClientUsers");
I hope this is making sense, I can't imagine that I am the first person to have this problem, but I can't find any solutions online. So I am really hoping that someone knows how I can get this accomplished without duplicating the exact same logic for every single keyspace we have. It makes the code not maintainable and unnecessarily long.
Thank you in advance for any help you can offer.
Do not specify the keyspace in your model classes, and instead, use the so-called "session per keyspace" pattern.
Your model class would look like this (note that the keyspace is left undefined):
@Table(name = "users")
public class Users {
    @PartitionKey
    public UUID id;
    //...
}
Your initialization code would have something like this:
Map<String, Mapper<Users>> mappers = new ConcurrentHashMap<>();
Cluster cluster = ...;
Session firstClientSession = cluster.connect("keyspace_first_client");
Session secondClientSession = cluster.connect("keyspace_second_client");
MappingManager firstClientManager = new MappingManager(firstClientSession);
MappingManager secondClientManager = new MappingManager(secondClientSession);
mappers.put("first_client", firstClientManager.mapper(Users.class));
mappers.put("second_client", secondClientManager.mapper(Users.class));
// etc. for all clients
You would then store the mappers object and make it available through dependency injection to other components in your application.
Finally, your REST service would look like this:
import ...

@RestController
public class MyRestController {

    @javax.inject.Inject
    private Map<String, Mapper<Users>> mappers;

    @RequestMapping(value = "/orders", method = RequestMethod.POST)
    public String getOrders(...) {
        Mapper<Users> usersMapper = getUsersMapperForClient(client);
        // process the request with the right client's mapper
    }

    private Mapper<Users> getUsersMapperForClient(String client) {
        if (mappers.containsKey(client))
            return mappers.get(client);
        throw new RuntimeException("Unknown client: " + client);
    }
}
Note how the mappers object is injected.
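Stripped of the Cassandra and Spring specifics, the pattern is just a per-client registry of identically typed objects, built once at startup. A minimal plain-Java sketch (all names here are illustrative stand-ins, not driver API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for the driver's Mapper<Users>, keyed by keyspace.
class UsersMapper {
    final String keyspace;
    UsersMapper(String keyspace) { this.keyspace = keyspace; }
}

class MapperRegistry {
    private final Map<String, UsersMapper> mappers = new ConcurrentHashMap<>();

    void register(String client, UsersMapper mapper) {
        mappers.put(client, mapper);
    }

    // Same shape as getUsersMapperForClient: fail fast on unknown clients.
    UsersMapper forClient(String client) {
        UsersMapper m = mappers.get(client);
        if (m == null) {
            throw new IllegalArgumentException("Unknown client: " + client);
        }
        return m;
    }
}
```

Register each client once at startup; every request then does a single map lookup instead of an if/else chain per keyspace.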
Small nit: I would name your class User (singular) instead of Users (plural).
I do have a Repository
@Repository
public interface PointOfInterestRepository extends GraphRepository<Poi> {
    // currently empty
}
with no custom methods defined, so I only use the predefined ones like save(T... entities).
And I have my Poi class as follows:
@NodeEntity(label = "PointOfInterest")
public class Poi {

    @JsonIgnore
    @GraphId
    Long neo4jId;

    @JsonManagedReference("node-poi")
    @JsonProperty("node")
    @Relationship(type = "BELONGS_TO", direction = Relationship.UNDIRECTED)
    private Node node;

    @JsonProperty("id")
    @Property(name = "poiID")
    private final String id;

    @JsonProperty("uris")
    @Property(name = "uris")
    private final Set<URI> correspondingURIs = new HashSet<>();

    /* Some more stuff I skip here */
}
with getters for the fields.
Currently I am able to save such Pois to Neo4j and retrieve them, but when I try to work with those nodes in the database via Cypher, it appears that the fields aren't mapped to Neo4j properties.
I thought spring-data-neo4j would convert my class fields to Neo4j graph properties. Am I wrong about that?
Note: the save calls seem to work very well. After that I can see the nodes in the database, and calling findAll() afterwards returns all the saved nodes (Pois) properly, with all the correct values. But somehow, within the database, I cannot see any properties/fields.
The problem is the final fields. SDN is unable to write values back to the entity when it is loaded from the graph because these fields are final (SDN uses only the default no-args constructor), and as such, final fields are not supported.
Removing the final modifier should fix this.
I am unable to read the full inherited class instances as described at the following URL:
http://www.datanucleus.org/products/datanucleus/jdo/orm/inheritance.html
The following describes the mapping of the classes.
@PersistenceCapable(detachable = "true")
@Discriminator(strategy = DiscriminatorStrategy.CLASS_NAME)
@Inheritance(strategy = InheritanceStrategy.NEW_TABLE)
public class IdeaItem {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    @Column(jdbcType = "INTEGER", length = 11)
    private long id;

    @Column(name = "IDEAID")
    private Idea idea;

    @Column(jdbcType = "INTEGER", length = 11)
    private long showOrder;
}
@PersistenceCapable(detachable = "true")
@Inheritance(strategy = InheritanceStrategy.NEW_TABLE)
public class IdeaItemText extends IdeaItem {

    @Column(jdbcType = "VARCHAR", length = 500)
    private String text;
}
The data-saving part works fine. I inserted an "IdeaItemText" object and both the "IdeaItem" and "IdeaItemText" tables were updated successfully.
Now I need to read the subclasses through an Extent of "IdeaItem". I executed the following code:
Extent items = getPersistenceManager().getExtent(IdeaItem.class, true);
javax.jdo.Query q = getPersistenceManager().newQuery(items);
List data = (List) q.execute();
As in the JDO docs, this should return the whole object graph, but it is not returning any records. When I checked the log, I found that it searches for a record whose discriminator value equals "com.mydomain.IdeaItem", which does not exist. When I removed the Discriminator annotation I got all the records in the table. But then how do I access the subclass attributes? Furthermore, how do I query subclass attributes through the base-class Extent?
So you didn't let the persistence mechanism know about the subclass (whether by using the auto-start mechanism, persistence.xml, calling pm.getExtent on the subclass, or simply instantiating the subclass). It can only query classes that it is "aware of".
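A minimal sketch of what that means in practice, using the question's own classes (pm is assumed to be your PersistenceManager, and getText() a hypothetical getter on IdeaItemText):

```java
// Touch the subclass so the JDO implementation becomes "aware of" it
// before the base-class extent is queried.
pm.getExtent(IdeaItemText.class, true);

// The base-class extent should now also return IdeaItemText instances;
// subclass attributes are then reachable via an ordinary instanceof check.
Extent items = pm.getExtent(IdeaItem.class, true);
List data = (List) pm.newQuery(items).execute();
for (Object o : data) {
    if (o instanceof IdeaItemText) {
        String text = ((IdeaItemText) o).getText(); // hypothetical getter
    }
}
```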
Is it possible for a JPA entity class to contain two embedded (@Embedded) fields? An example would be:
@Entity
public class Person {
    @Embedded
    public Address home;

    @Embedded
    public Address work;
}

public class Address {
    public String street;
    ...
}
In this case a Person can contain two Address instances: home and work. I'm using JPA with Hibernate's implementation. When I generate the schema using Hibernate Tools, it only embeds one Address. What I'd like is two embedded Address instances, each with its column names distinguished or prepended with some prefix (such as home and work). I know of @AttributeOverrides, but this requires that each attribute be individually overridden. That can get cumbersome if the embedded object (Address) gets big, as each column needs to be individually overridden.
The generic JPA way to do it is with @AttributeOverride. This should work in both EclipseLink and Hibernate.
@Entity
public class Person {
    @AttributeOverrides({
        @AttributeOverride(name="street", column=@Column(name="homeStreet")),
        ...
    })
    @Embedded
    public Address home;

    @AttributeOverrides({
        @AttributeOverride(name="street", column=@Column(name="workStreet")),
        ...
    })
    @Embedded
    public Address work;
}

@Embeddable
public class Address {
    @Basic
    public String street;
    ...
}
If you want to have the same embeddable object type twice in the same entity, the column name defaulting will not work: at least one of the columns will have to be explicit. Hibernate goes beyond the EJB3 spec and allows you to enhance the defaulting mechanism through the NamingStrategy. DefaultComponentSafeNamingStrategy is a small improvement over the default EJB3NamingStrategy that allows embedded objects to be defaulted even if used twice in the same entity.
From Hibernate Annotations Doc: http://docs.jboss.org/hibernate/stable/annotations/reference/en/html_single/#d0e714
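Wiring that strategy in is a one-line configuration change. A sketch for a classic Hibernate 3.x/4.x persistence.xml (the property key hibernate.ejb.naming_strategy is version-specific; newer Hibernate versions replaced NamingStrategy with a different SPI, so check your version's docs):

```xml
<!-- Let embedded columns default safely even when the same @Embeddable
     is used twice in one entity (Hibernate 3.x/4.x). -->
<property name="hibernate.ejb.naming_strategy"
          value="org.hibernate.cfg.DefaultComponentSafeNamingStrategy"/>
```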
When using EclipseLink, an alternative to using AttributeOverrides is to use a SessionCustomizer. This solves the issue for all entities in one go:
public class EmbeddedFieldNamesSessionCustomizer implements SessionCustomizer {

    @SuppressWarnings("rawtypes")
    @Override
    public void customize(Session session) throws Exception {
        Map<Class, ClassDescriptor> descriptors = session.getDescriptors();
        for (ClassDescriptor classDescriptor : descriptors.values()) {
            for (DatabaseMapping databaseMapping : classDescriptor.getMappings()) {
                if (databaseMapping.isAggregateObjectMapping()) {
                    AggregateObjectMapping m = (AggregateObjectMapping) databaseMapping;
                    Map<String, DatabaseField> mapping = m.getAggregateToSourceFields();
                    ClassDescriptor refDesc = descriptors.get(m.getReferenceClass());
                    for (DatabaseMapping refMapping : refDesc.getMappings()) {
                        if (refMapping.isDirectToFieldMapping()) {
                            DirectToFieldMapping refDirectMapping = (DirectToFieldMapping) refMapping;
                            String refFieldName = refDirectMapping.getField().getName();
                            if (!mapping.containsKey(refFieldName)) {
                                DatabaseField mappedField = refDirectMapping.getField().clone();
                                mappedField.setName(m.getAttributeName() + "_" + mappedField.getName());
                                mapping.put(refFieldName, mappedField);
                            }
                        }
                    }
                }
            }
        }
    }
}
In case you are using Hibernate, you can also use a different naming scheme which adds unique prefixes to columns for identical embedded fields. See "Automatically Add a Prefix to Column Names for @Embeddable Classes".