I have a simple hierarchy in neo4j directly derived from the business model.
@Node
public class Team {
    @Id private String teamId;
    private String name;
}

@Node
public class Driver {
    @Id private String driverId;
    private String name;

    @Relationship(direction = Relationship.Direction.OUTGOING)
    private Team team;
}

@Node
public class Car {
    @Id private String carId;
    private String name;

    @Relationship(direction = Relationship.Direction.OUTGOING)
    private Driver driver;
}
which results in the corresponding graph (Team)<--(Driver)<--(Car). Usually all requests start at Car.
A new use case needs to create a tree structure starting at the Team nodes. The Cypher query aggregates the data in Neo4j and returns it to SDN.
public List<Projection> loadHierarchy() {
    return neo4jClient.query("""
            MATCH (t:Team)<--(d:Driver)<--(c:Car)
            WITH t, d, collect(distinct c{.carId, .name}) AS carsEachDriver
            WITH t, collect({driver: d{.driverId, .name}, cars: carsEachDriver}) AS driverEachTeam
            WITH collect({team: t{.teamId, .name}, drivers: driverEachTeam}) AS teams
            RETURN teams
            """)
        .fetchAs(Projection.class)
        .mappedBy((typeSystem, record) -> new Projection() {
            @Override
            public Team getTeam() {
                return record.get... // how to access the single object?
            }
            @Override
            public List<Drivers> getDrivers() {
                return record.get... // how to access the nested list objects?
            }
        })
        .all();
}
The result is a list of the following objects:
{
    "drivers": [
        {
            "driver": {
                "name": "Mike",
                "driverId": "15273c10"
            },
            "cars": [
                {
                    "carId": "f4ca4581",
                    "name": "green car"
                },
                {
                    "carId": "11f3bcae",
                    "name": "red car"
                }
            ]
        }
    ],
    "team": {
        "teamId": "4586b33f",
        "name": "Blue Racing Team"
    }
}
The problem now is how to map the response into a corresponding Java model. I don't use the entity classes.
I tried a multi-level projection with nested interfaces.
public interface Projection {

    Team getTeam();
    List<Drivers> getDrivers();

    interface Drivers {
        Driver getDriver();
        List<Car> getCars();
    }

    interface Driver {
        String getDriverId();
        String getName();
    }

    interface Car {
        String getCarId();
        String getName();
    }

    interface Team {
        String getTeamId();
        String getName();
    }
}
I struggle to access the nested lists and objects in order to put them into the model.
SDN is used via the Spring Boot starter in version 2.6.3.
An example of how to map a nested object in a list would be a good starting point.
Or maybe my approach is totally wrong? Any help is appreciated.
Projections are not meant to be something like a view or wrapper for arbitrary data.
Within the context you can get a Neo4jMappingContext instance.
You can use this to obtain the mapping function for an already existing entity.
With this, you do not have to take care of mapping the Car and (partially, because of the team relationship) the Driver entities.
BiFunction<TypeSystem, MapAccessor, Car> mappingFunction = neo4jMappingContext.getRequiredMappingFunctionFor(Car.class);
The mapping function accepts an object of type MapAccessor.
This is a Neo4j Java driver type that is implemented, among others, by Node and MapValue.
You can use those values from your result, e.g. drivers, in a loop (it should be possible to call asList on the record), and within this loop you would also assign the cars.
Of course, using the mapping function only makes sense if you have a lot more properties to map, because (as you already said between the lines) nothing in the return structure matches the entity structure regarding the relationships.
Here is an example using the mapping function and direct mapping.
You have to decide what matches your use case best.
public Collection<Projection> loadHierarchy() {
    var teamMappingFunction = mappingContext.getRequiredMappingFunctionFor(Team.class);
    var driverMappingFunction = mappingContext.getRequiredMappingFunctionFor(Driver.class);
    return neo4jClient.query("""
            MATCH (t:Team)<--(d:Driver)<--(c:Car)
            WITH t, d, collect(distinct c{.carId, .name}) AS carsEachDriver
            WITH t, collect({driver: d{.driverId, .name}, cars: carsEachDriver}) AS driverEachTeam
            WITH {team: t{.teamId, .name}, drivers: driverEachTeam} AS team
            RETURN team
            """)
        .fetchAs(Projection.class)
        .mappedBy((typeSystem, record) -> {
            Team team = teamMappingFunction.apply(typeSystem, record.get("team").get("team"));
            List<DriverWithCars> drivers = record.get("team").get("drivers").asList(value -> {
                var driver = driverMappingFunction.apply(typeSystem, value.get("driver"));
                var cars = value.get("cars").asList(carValue -> new Car(carValue.get("name").asString()));
                return new DriverWithCars(driver, cars); // create wrapper object incl. cars
            });
            return new Projection(team, drivers);
        })
        .all();
}
(Disclaimer: I did not execute this on a data set, so there might be typos or wrong access to the record)
Please note that I changed the Cypher statement a little bit to get one Team per record instead of the whole list.
Maybe this is already what you have asked for.
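For completeness, the wrapper types referenced above could look like the following minimal sketch. The names Projection and DriverWithCars and their constructors are assumptions (the answer only references them); with mappedBy they can be plain classes or records instead of the interface-based projection from the question:

import java.util.List;

// Hypothetical value classes backing the custom mapping above
// (records need Java 16+; plain classes with constructors work the same way).
public record Projection(Team team, List<DriverWithCars> drivers) {
}

public record DriverWithCars(Driver driver, List<Car> cars) {
}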
Building a microservice in Java using:
spring-boot version 2.2.6.RELEASE
graphql-spring-boot-starter version 5.0.2
Trying to persist a record in MongoDB using a GraphQL mutation, I successfully persisted it with a simple, flat object like the one below:
type ParentElement {
id: ID!
type: String
child: String
}
But when trying it with a nested object, I am seeing the following error:
Caused by: com.coxautodev.graphql.tools.SchemaError: Expected type 'ChildElement' to be a GraphQLInputType, but it wasn't! Was a type only permitted for object types incorrectly used as an input type, or vice-versa?
My schema is as follows:
schema {
query: Query
mutation: Mutation
}
type ChildElement {
make: String
model: String
}
type ParentElement {
id: ID!
type: String
child: ChildElement
}
type Query {
findAllElements: [ParentElement]
}
type Mutation {
createElement(id: String, type: String, child: ChildElement): ParentElement
}
POJO classes and resolvers are as follows:
@Document(collection = "custom_element")
public class ParentElement {
    private String id;
    private String type;
    private ChildElement child;
}

public class ChildElement {
    private String make;
    private String model;
}
@Component
public class ElementMutation implements GraphQLMutationResolver {

    private ElementRepository elementRepository;

    public ElementMutation(ElementRepository elementRepository) {
        this.elementRepository = elementRepository;
    }

    public ParentElement createElement(String id, String type, ChildElement child) {
        ParentElement element = new ParentElement();
        elementRepository.save(element);
        return element;
    }
}
@Component
public class ElementQuery implements GraphQLQueryResolver {

    private ElementRepository elementRepository;

    @Autowired
    public ElementQuery(ElementRepository elementRepository) {
        this.elementRepository = elementRepository;
    }

    public Iterable<ParentElement> findAllElements() {
        return elementRepository.findAll();
    }
}

@Repository
public interface ElementRepository extends MongoRepository<ParentElement, String> {
}
I want to save the following JSON representation in MongoDB:
{
    "id": "custom_id",
    "type": "custom_type",
    "child": {
        "make": "custom_make",
        "model": "Toyota V6"
    }
}
I tried several things, but when starting the server I always get the same exception. The above JSON is a simple representation; I want to save a much more complex one. The difference from other examples available online is that I don't want to create a separate Mongo document for the child element, as shown in the Book-Author example found online.
GraphQL type and input are two different things. You cannot use a type as a mutation input. That is exactly what the exception is about: Expected type 'ChildElement' to be a GraphQLInputType, but it wasn't; the library was expecting an input type but found something else (an object type).
To solve that problem, create a child input:
input ChildInput {
make: String
model: String
}
type Mutation {
createElement(id: String, type: String, child: ChildInput): ParentElement
}
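On the Java side the resolver can usually stay as it is: graphql-java-tools binds the ChildInput fields to the resolver's argument type by name, so the existing ChildElement POJO can keep serving as the argument. A minimal sketch, assuming standard setters exist on ParentElement:

public ParentElement createElement(String id, String type, ChildElement child) {
    ParentElement element = new ParentElement();
    element.setId(id);       // assumes standard setters on ParentElement
    element.setType(type);
    element.setChild(child);
    return elementRepository.save(element);
}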
You can also have a look at this question: Can you make a graphql type both an input and output type?
I am using the following query:
NativeSearchQuery nsq = new NativeSearchQueryBuilder()
.withQuery(qb)
.withPageable(PageRequest.of(page, PAGE_SIZE))
.withSort(sb)
.build();
With the following SortBuilder:
SortBuilders.geoDistanceSort("customer.address.geoLocation",
customer.getAddress().getGeoLocation().toGeoPoint())
.order(SortOrder.ASC)
.unit(DistanceUnit.KILOMETERS);
Which is producing the desired sort query:
"sort": [
{
"_geo_distance" : {
"customer.address.geoLocation" : [
{
"lat" : 40.4221663,
"lon" : -3.7148336
}
],
"unit" : "km",
"distance_type" : "arc",
"order" : "asc",
"validation_method" : "STRICT",
"ignore_unmapped" : false
}
}
]
Additionally, each resulting document returns the distance to the reference parameter in km:
"sort": [2.4670609224864997]
I need this value as part of my domain, but I simply cannot push it into my object. I tried a simple approach of defining it in my domain as what seems to be a float[], but I keep getting null. I'm using Jackson for (de)serializing.
private float[] sort;
public float[] getSort() {
return this.sort;
}
public void setSort(float[] score) {
this.sort = score;
}
My repository:
public interface ElasticsearchProductRepository
extends ElasticsearchRepository<Product, String> {
Page<Product> search(SearchQuery searchQuery);
}
Product:
@Entity
@Table(name = "product")
@Document(indexName = "product", createIndex = true, type = "_doc")
@Setting(settingPath = "elasticsearch/product-index.json")
@DynamicTemplates(mappingPath = "elasticsearch/product-dynamic-templates.json")
public class Product {
    ...

    @org.springframework.data.annotation.Id
    @javax.persistence.Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @JsonView(ResponseView.class)
    private long id;

    @Field(analyzer = "autocomplete", type = FieldType.Text, searchAnalyzer = "standard")
    @JsonView(CreationRequestView.class)
    private String productName;

    private float[] sort;
    ...

    public float[] getSort() {
        return this.sort;
    }

    public void setSort(float[] score) {
        this.sort = score;
    }
}
I added the following scripted field:
"script_fields": {
"distance": {
"script": {
"lang": "painless",
"source": "doc['sort']",
"params": {
}
}
}
}
But it is not finding the sort field.
"reason": "No field found for [sort] in mapping with types []"
It seems that doc references doc._source in reality... how can I reference this great piece of data from an Elastic script?
Currently in Spring Data Elasticsearch the returned entity is populated from the values of the _source field. The returned sort is not part of that, but part of the additional information returned for each search hit.
For the next release (4.0) we are currently in the process of rewriting how returned hits are handled. The next version will have a SearchHit<T> class where T is the entity domain class, and the SearchHit object contains this entity along with other information like score, highlight, scripted fields, sort etc.
It is planned that the repository methods then can for example return Lists or Pages of these SearchHit objects.
But currently, the sort from the query will lead to a sorted return, but you cannot retrieve the sort values themselves - which in your case is bad, because they are calculated dynamically.
Edit:
You can retrieve the value by defining a property for the geo_distance in your entity, annotating it with the @ScriptedField annotation, and then providing the script in your native query. An example can be found in the tests for Spring Data Elasticsearch. That example works with the ElasticsearchTemplate, but it should make no difference to pass such a query into a repository search method.
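A minimal sketch of that approach, reusing qb, sb and page from the question. Field and script names are illustrative, and the painless arcDistance call assumes a geo_point mapping for customer.address.geoLocation:

// In the entity: the value is filled from the script field with the same name.
@ScriptedField
private Double distance;

// In the query: compute the distance (in km) as a script field named "distance".
// Depending on the version you may also need to request _source explicitly when script fields are present.
NativeSearchQuery query = new NativeSearchQueryBuilder()
        .withQuery(qb)
        .withScriptField(new ScriptField("distance",
                new Script(ScriptType.INLINE, "painless",
                        "doc['customer.address.geoLocation'].arcDistance(params.lat, params.lon) / 1000",
                        Map.of("lat", 40.4221663, "lon", -3.7148336))))
        .withSort(sb)
        .withPageable(PageRequest.of(page, PAGE_SIZE))
        .build();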
Edit 25.12.2019:
The current master branch, which will become Spring Data Elasticsearch 4.0, has this implemented. The found documents are returned in a SearchHit object that also contains the sortValues.
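A rough sketch of what that looks like with 4.0, assuming an injected ElasticsearchOperations and the NativeSearchQuery from above (method names as in the then-current master, so treat them as indicative):

SearchHits<Product> hits = operations.search(query, Product.class);
for (SearchHit<Product> hit : hits) {
    Product product = hit.getContent();
    List<Object> sortValues = hit.getSortValues(); // contains the computed geo distance
    // e.g. ((Number) sortValues.get(0)).floatValue() is the distance in km
}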
Is it possible to dynamically create a GraphQL schema?
We store the data in MongoDB and there is a possibility of new fields getting added. We do not want any code change to happen for a newly added field in the MongoDB document.
Is there any way we can generate the schema dynamically?
The schema is defined in code, but for Java (schema as POJO), when a new attribute is added, you have to update and recompile the code, then archive and deploy the JAR again. Is there any way to generate the schema from the data instead of pre-defining it?
Currently we are using Java-related projects (graphql-java, graphql-java-annotations) for GraphQL development.
You could use graphql-spqr; it allows you to auto-generate a schema based on your service classes. In your case, it would look like this:
public class Pojo {
    private Long id;
    private String name;
    // whatever Ext is, any (complex) object would work fine
    private List<Ext> exts;
}

public class Ext {
    public String something;
    public String somethingElse;
}
Presumably, you have a service class containing your business logic:
public class PojoService {

    // this could also return List<Pojo> or whatever is applicable
    @GraphQLQuery(name = "pojo")
    public Pojo getPojo() {...}
}
To expose this service, you'd just do the following:
GraphQLSchema schema = new GraphQLSchemaGenerator()
        .withOperationsFromSingleton(new PojoService())
        .generate();
You could then fire a query such as:
query test {
    pojo {
        id
        name
        exts {
            something
            somethingElse
        }
    }
}
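To actually run such a query against the generated schema, plain graphql-java can be used (a minimal sketch):

import graphql.ExecutionResult;
import graphql.GraphQL;
import java.util.Map;

GraphQL graphQL = GraphQL.newGraphQL(schema).build();
ExecutionResult result = graphQL.execute("{ pojo { id name exts { something somethingElse } } }");
Map<String, Object> data = result.getData(); // nested maps mirroring the selection set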
No need for strange wrappers or custom code of any kind, nor sacrificing type safety. Works with generics, dependency injection, or any other jazz you may have in your project.
Full disclosure: I'm the author of graphql-spqr.
After some days' investigation, I found it is hard to generate a schema dynamically in Java (or the cost is very high).
From another angle, I think we can use a Map as a compromise to accomplish that.
POJO/Entity
public class POJO {

    @GraphQLField
    private Long id;

    @GraphQLField
    private String name;

    // ...

    @GraphQLField
    private GMap exts;
}
GMap is a customized Map (because Map/HashMap is a JDK class that cannot be used as a GraphQL schema type directly, only extended).
GMap
public class GMap extends HashMap<String, String> {

    @GraphQLField
    public String get(@GraphQLName("key") String key) {
        return super.get(key);
    }
}
Retrieving data from the client
// query script
query test {
    your_method {
        id
        name
        exts {
            get(key: "ext") // an extended attribute added someday
        }
    }
}
// result
{
    "errors": [],
    "data": {
        "list": [
            {"id": 1, "name": "name1", "exts": {"get": "ext1"}},
            {"id": 2, "name": "name2", "exts": {"get": "ext2"}}
        ]
    }
}
First, I am using Spring MVC.
I have a "Skill"-modelclass, where I placed the #JsonIgnoreProperties
@JsonIgnoreProperties({"personSkills", "berufsgruppes", "skills"})
@JsonPropertyOrder({"idSkill", "name", "levelBezeichnung", "skill"})
I am using them because there are many-to-many, many-to-one, and one-to-many relationships, and without this property serialization causes a StackOverflowError (infinite recursion). One skill can have many skills, so there is a kind of recursion.
I implemented a subclass of "Skill" named "SkillBean", which has one more attribute, "checked", that is only relevant for the application, not for the database.
public class Skill implements java.io.Serializable {

    private Set<Skill> skills = new HashSet<Skill>(0);
    ...

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "skill")
    public Set<Skill> getSkills() {
        return this.skills;
    }

    public void setSkills(Set<Skill> skills) {
        this.skills = skills;
    }
}

public class SkillBean extends Skill implements Serializable {

    public boolean checked;

    public SkillBean() {
    }

    public SkillBean(Skill skill, boolean checked) {
        this.checked = checked;
    }

    public boolean isChecked() {
        return checked;
    }

    public void setChecked(boolean checked) {
        this.checked = checked;
    }
}
I'm using BeanUtils.copyProperties() to copy a Skill object into a SkillBean object. This works fine. I need to reorder the skills because currently I get the lowest child skill first and not its parent. For this, I am trying to reorder the objects and build a tree in a list. Every skill has a Set of its children.
private ArrayList<SkillBean> formatSkillMap(HashMap<Integer, SkillBean> map) {
    Map<Integer, SkillBean> tempSkills = map.entrySet().stream()
            .filter(p -> p.getValue().getSkill() == null)
            .collect(Collectors.toMap(Entry::getKey, Entry::getValue));
    ArrayList<SkillBean> list = new ArrayList<SkillBean>(tempSkills.values());
    for (int i = 0; i < list.size(); i++) {
        SkillBean sb = list.get(i);
        tempSkills = map.entrySet().stream()
                .filter(p -> p.getValue().getSkill() != null)
                .filter(p -> p.getValue().getSkill().getIdSkill() == sb.getIdSkill())
                .collect(Collectors.toMap(Entry::getKey, Entry::getValue));
        Set<Skill> test = new HashSet<Skill>(tempSkills.values());
        list.get(i).setSkills(test);
    }
    return list;
}
But the list doesn't return the sub-skillset. Could anyone tell me why the subskills are not serialized? When I return the subset of this parent skill directly, their subskills are serialized.
0: {
    "idSkill": 34,
    "name": "Methodik",
    "levelBezeichnung": {
        "@id": 1,
        "idLevelBezeichnung": 1,
        "bezeichnung": "Standard",
        "handler": {},
        "hibernateLazyInitializer": {}
    },
    "checked": true
}
Without reordering it looks something like this, but the problem is that the skill with id=34 is the parent skill and 9 is the subskill. I want it exactly the other way around. There could be three levels.
9: {
    "idSkill": 9,
    "name": "Standards",
    "levelBezeichnung": {
        "@id": 1,
        "idLevelBezeichnung": 1,
        "bezeichnung": "Standard",
        "handler": {},
        "hibernateLazyInitializer": {}
    },
    "skill": {
        "idSkill": 34,
        "name": "Methodik",
        "levelBezeichnung": 1
    },
    "checked": true
}
Finally, I ended up with this:
Ignore the parent of a skill or ignore the children of a skill to avoid infinite recursion. In some cases you don't need to ignore either of them. If you don't have that much data, it can work. I'm talking about 150 nodes where each node knows its parent/children.
I am querying for the path from bottom to top of my lowest skill with a custom SQL query.
I am putting all my top-level skills in a map. That means I have access to all my skills, because (as I said) every node knows its children.
I am searching my map from top to bottom and deleting all references that I don't need, based on the path I already got.
The whole code is a bit complex, and I'm using recursion to make it less complex. In the end I am not that pleased with this solution because there are many loops in it, and so far I am having some performance issues.
I need to discover whether it is a database-query problem or a problem caused by the loops.
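For illustration, the recursive part could look roughly like this. This is a hypothetical helper, not the exact production code: it attaches to every skill the children whose parent id matches and then recurses into them.

private void attachChildren(SkillBean parent, Collection<SkillBean> allSkills) {
    Set<SkillBean> children = allSkills.stream()
            .filter(s -> s.getSkill() != null
                    && s.getSkill().getIdSkill() == parent.getIdSkill())
            .collect(Collectors.toSet());
    parent.setSkills(new HashSet<>(children)); // setSkills expects Set<Skill>
    children.forEach(child -> attachChildren(child, allSkills));
}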
My resource is
@GET
@Path("/items")
public MyCollection<Item> getItems() throws Exception {
    // Code to return MyCollection<Item>
}
My Item class is
@XmlRootElement
public class Item {
    private int id;
    private String name;
    // Has getters and setters.
}
And my collection class is generic, as below.
public class MyCollection<T> extends MyBaseCollection {
    private java.util.Collection<T> items;
    private int count;
}
When I try to generate docs using Enunciate, the sample JSON has only the items and count; the fields of the Item class are not reflected.
The generated sample JSON is:
{
  "items" : [ {
  }, {
  }, {
  }, {
  }, {
  }, {
  }, {
  } ],
  "count" : ...
}
How do I get id and name inside the Item in the generated sample JSON?
Thanks.
This is a limitation that I have run into as well; there is no way to specify @TypeHint on a nested object. To support documentation, consider creating a custom collection that defines "items" as a collection of a specific class instead of a generic one, as sketched below.
If you have an idea of how you would want this to work (using the generic type), I suggest submitting an enhancement request to the Enunciate team.
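A minimal sketch of that workaround (ItemCollection is a hypothetical name; it pins the element type so Enunciate can document the fields of Item). The resource method would then declare ItemCollection as its return type instead of MyCollection<Item>:

// Non-generic collection so Enunciate can see the concrete element type.
public class ItemCollection extends MyBaseCollection {

    private java.util.Collection<Item> items;
    private int count;

    public java.util.Collection<Item> getItems() { return items; }
    public int getCount() { return count; }
}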
I have a similar problem where I am returning a Map and I can't @TypeHint this.