I'm trying to find a way to convert a JTS Geometry to an Elasticsearch Geometry in order to make a geo query, but I haven't found a convenient way.
Using Elasticsearch 7.11.1 with the Java API, to make a geospatial query I should use a
GeoShapeQueryBuilder
returned by
QueryBuilders.geoShapeQuery(String, org.elasticsearch.geometry.Geometry)
method.
But, in my project, I'm using the JTS geometry (see https://en.wikipedia.org/wiki/JTS_Topology_Suite), where a geometry is an instance of class:
org.locationtech.jts.geom.Geometry
Obviously, I can't cast a JTS Geometry to an Elasticsearch Geometry, but I should convert the instance in some way.
Has anyone encountered a similar problem? Thank you very much.
You can get the coordinates from your jts.Geometry object and build whatever elasticsearch.Geometry you need, e.g. for a Polygon you would write something like this:
// Extract the coordinates from the JTS geometry and map them to the builder's Coordinate type
val coordinates = geometry.coordinates
val mappedCoordinates = coordinates.map { Coordinate(it.x, it.y) }
// Build an Elasticsearch polygon geometry from that coordinate ring
val polygonGeometry = PolygonBuilder(CoordinatesBuilder().coordinates(mappedCoordinates)).toPolygonGeometry()
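If you are working from the Java side, here is a minimal sketch of the same idea, assuming a simple polygon with no holes; the helper toEsPolygon and the field name "location" are only illustrations, not part of either library:
import org.elasticsearch.geometry.LinearRing;
import org.elasticsearch.geometry.Polygon;
import org.locationtech.jts.geom.Coordinate;

// Hypothetical helper: convert the exterior ring of a JTS polygon into an
// org.elasticsearch.geometry.Polygon (holes are ignored in this sketch).
static Polygon toEsPolygon(org.locationtech.jts.geom.Polygon jtsPolygon) {
    Coordinate[] coords = jtsPolygon.getExteriorRing().getCoordinates();
    double[] xs = new double[coords.length];
    double[] ys = new double[coords.length];
    for (int i = 0; i < coords.length; i++) {
        xs[i] = coords[i].x;  // assuming x holds longitude
        ys[i] = coords[i].y;  // assuming y holds latitude
    }
    return new Polygon(new LinearRing(xs, ys));
}

// The converted geometry can then go straight into the query builder, e.g.:
// QueryBuilders.geoShapeQuery("location", toEsPolygon(jtsPolygon));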
When I use a Gremlin Server connection via gremlin-driver in Java, I am not able to use the "sideEffect" step of GraphTraversal.
graph = EmptyGraph.instance();
cluster = Cluster.open("conf/remote-objects.yaml");
graphTraversalSource = graph.traversal().withRemote(DriverRemoteConnection.using(cluster));
My query that uses sideEffect looks like:
AtomicLong level1 = new AtomicLong(0);
graphTraversalSource.V().hasLabel("user")
.has("uuid", "1234")
.sideEffect(it -> it.get().property("level", level1.getAndIncrement())).emit().repeat(in())
.until(loops().is(5)).valueMap("uuid", "name", "level");
This query used to work when I was using janusgraph-dynamodb-storage-backend as a dependency and running Gremlin Server within the Java application, connecting to DynamoDB. When I switched to using a remote connection to a Gremlin Server running in EC2, I started getting the error message below:
java.util.concurrent.CompletionException: io.netty.handler.codec.EncoderException: WebSocketGremlinRequestEncoder must produce at least one message., took 3.895 sec
If I remove the sideEffect part from the above query, it works fine. I really need to add a custom property during traversal and include that in results without saving it in the database.
You have a few problems. The first is that you are trying to remote a lambda in the sideEffect(). Lambdas can't be serialized to Gremlin bytecode - at least not in the form you've provided. However, you can do this:
gremlin> cluster = Cluster.open("conf/remote-objects.yaml")
==>localhost/127.0.0.1:8182
gremlin> g = graph.traversal().withRemote(DriverRemoteConnection.using(cluster))
==>graphtraversalsource[emptygraph[empty], standard]
gremlin> g.addV('person').as('p').addE('link').to('p')
==>e[1][0-link->0]
gremlin> g.V().sideEffect(Lambda.function("it.get().property('level',1)")).valueMap()
==>[level:[1]]
Note that I had to import org.apache.tinkerpop.gremlin.util.function.* in the console to make that last line work there - that will be fixed for 3.2.7/3.3.0.
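From the Java driver the equivalent is a string-based Lambda rather than a Java lambda. A rough sketch against your traversal source (the hard-coded 1 is just a placeholder, since, as noted below, a client-side counter like level1 won't transfer to the server):
import org.apache.tinkerpop.gremlin.util.function.Lambda;

// The script inside the Lambda is shipped as a string and evaluated server-side.
graphTraversalSource.V().hasLabel("user")
    .has("uuid", "1234")
    .sideEffect(Lambda.consumer("it.get().property('level', 1)"))
    .valueMap("uuid", "name", "level");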
So, you could pass your lambda that way, but:
I don't think your traversal will work as before because you are referencing a variable local to the client with level1 - the server is not going to know anything about that.
TinkerPop generally recommends that you avoid lambdas.
I don't quite follow what your Gremlin is doing well enough to suggest how to resolve this. You do give this hint:
I really need to add a custom property during traversal and include that in results without saving it in the database.
...but the Gremlin above does write the value of level1 to the database, so I'm not sure what you are after.
I spent a week at the Gremlin shell trying to compose one query to get all incoming and outgoing vertices, including their edges and directions. I tried everything.
g.V("name","testname").bothE.as('both').select().back('both').bothV.as('bothV').select(){it.map()}
The output I need is (just an example structure):
[v{'name':"testname"}]___[ine{edge_name:"nameofincomingedge"}]____[v{name:'nameofconnectedvertex']
[v{'name':"testname"}]___[oute{edge_name:"nameofoutgoingedge"}]____[v{name:'nameofconnectedvertex']
So I just want to get: 1) all vertices with an exact name, 2) each edge of those vertices (including its direction, inE or outE), and 3) the connected vertex. Ideally, after that, I want to get their map() so I have the complete object properties. I don't care about the output style; I just need all of the information present so I can manipulate it afterwards. I need this to train my Gremlin, but Neo4j examples are welcome. Thanks!
There's a variety of ways to approach this. Here's a few ideas that will hopefully inspire you to an answer:
gremlin> g = TinkerGraphFactory.createTinkerGraph()
==>tinkergraph[vertices:6 edges:6]
gremlin> g.V('name','marko').transform{[v:it,inE:it.inE().as('e').outV().as('v').select().toList(),outE:it.outE().as('e').inV().as('v').select().toList()]}
==>{v=v[1], inE=[], outE=[[e:e[9][1-created->3], v:v[3]], [e:e[7][1-knows->2], v:v[2]], [e:e[8][1-knows->4], v:v[4]]]}
The transform converts the incoming vertex to a Map and does internal traversal over in/out edges. You could also use path as follows to get a similar output:
gremlin> g.V('name','marko').transform{[v:it,inE:it.inE().outV().path().toList(),outE:it.outE().inV().path().toList()]}
==>{v=v[1], inE=[], outE=[[v[1], e[9][1-created->3], v[3]], [v[1], e[7][1-knows->2], v[2]], [v[1], e[8][1-knows->4], v[4]]]}
I provided these answers using TinkerPop 2.x, as that looked like what you were using judging from the syntax. TinkerPop 3.x is now available, and if you are just getting started you should take a look at what the latest has to offer:
http://tinkerpop.incubator.apache.org/
Under 3.0 syntax you might do something like this:
gremlin> g.V().has('name','marko').as('a').bothE().bothV().where(neq('a')).path()
==>[v[1], e[9][1-created->3], v[3]]
==>[v[1], e[7][1-knows->2], v[2]]
==>[v[1], e[8][1-knows->4], v[4]]
I know that you wanted to know the direction of the edge in the output, but that's easy enough to detect by analyzing the path.
UPDATE: Here's the above query written with Daniel's suggestion of otherV usage:
gremlin> g.V().has('name','marko').bothE().otherV().path()
==>[v[1], e[9][1-created->3], v[3]]
==>[v[1], e[7][1-knows->2], v[2]]
==>[v[1], e[8][1-knows->4], v[4]]
To see the data from this you can use by() to pick apart each Path object - the extension to the above query applies valueMap to each piece of each Path:
gremlin> g.V().has('name','marko').bothE().otherV().path().by(__.valueMap(true))
==>[{label=person, name=[marko], id=1, age=[29]}, {label=created, weight=0.4, id=9}, {label=software, name=[lop], id=3, lang=[java]}]
==>[{label=person, name=[marko], id=1, age=[29]}, {label=knows, weight=0.5, id=7}, {label=person, name=[vadas], id=2, age=[27]}]
==>[{label=person, name=[marko], id=1, age=[29]}, {label=knows, weight=1.0, id=8}, {label=person, name=[josh], id=4, age=[32]}]
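If you would rather have the direction spelled out in the result instead of inferring it from the path, one option on a more recent TinkerPop 3 release (project() requires 3.1+) is to union() the two directed edge steps - a sketch in Gremlin-Java, not taken from the original answer:
import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.*;

// For each edge touching "marko", emit a map that records the direction
// explicitly along with the edge itself and the vertex on the other side.
g.V().has("name", "marko")
    .union(
        outE().project("direction", "edge", "other")
              .by(constant("OUT")).by().by(inV()),
        inE().project("direction", "edge", "other")
              .by(constant("IN")).by().by(outV()));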
I checked the code in the Neo4j manual and changed the version to 2.0.
The code at this link looks like this:
for ( Path position : Traversal.description()
        .depthFirst()
        .relationships( Rels.KNOWS )
        .relationships( Rels.LIKES, Direction.INCOMING )
        .evaluator( Evaluators.toDepth( 5 ) )
        .traverse( node ) )
{
    output += position + "\n";
}
When I write the same code in my program, it gives me a deprecation warning for org.neo4j.kernel.Traversal.
My question is: for Neo4j v2.0, what is the way to do traversals using the core Java API? I also tried the same thing using Cypher queries, but they are slow (take more than 1 sec) for my queries, and I have read in the comparison here that the Java traversal API is faster than Cypher.
I also want to try out the Dijkstra algorithm in Neo4j, but when I try the code given in the manual for Dijkstra, I again get the deprecation warning.
Where can I find examples/code illustrating the use of the core Java traversal API in Neo4j v2.0?
You should use the new TraversalDescription framework. A TraversalDescription object is accessible via your GraphDatabaseService by calling traversalDescription() on it. Defining the traversal is then similar to the old methods.
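A minimal sketch of the 2.0 style, assuming the same Rels enum and node as in the question and a GraphDatabaseService called graphDb, and remembering that 2.0 requires a transaction even for reads:
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.Path;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.traversal.Evaluators;
import org.neo4j.graphdb.traversal.TraversalDescription;

String output = "";
try ( Transaction tx = graphDb.beginTx() )
{
    // graphDb.traversalDescription() replaces the deprecated
    // org.neo4j.kernel.Traversal.description()
    TraversalDescription td = graphDb.traversalDescription()
            .depthFirst()
            .relationships( Rels.KNOWS )
            .relationships( Rels.LIKES, Direction.INCOMING )
            .evaluator( Evaluators.toDepth( 5 ) );

    for ( Path position : td.traverse( node ) )
    {
        output += position + "\n";
    }
    tx.success();
}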
I am new to BO. I need to find the universe name and the corresponding metadata information, like table names, column names, join conditions, etc. I am unable to find the proper way to start. I looked at the Data Access SDK and the Semantic SDK.
Can anyone please provide sample code or a procedure for getting started?
I googled a lot but was unable to find any samples.
I looked into this link, but that code will only work on an R2 server:
http://www.forumtopics.com/busobj/viewtopic.php?t=67088
Help is highly appreciated.
Assuming you're talking about IDT-based universes, you'll need to code some Java. The JavaDoc for the API is available here.
In a nutshell, you do something like this:
// Create a Semantic Layer SDK context and get the local resource service
SlContext context = SlContext.create();
LocalResourceService service = context.getService(LocalResourceService.class);
// Retrieve the universe locally, then load its business layer
String blxFile = service.retrieve("universe.unx", "output directory");
RelationalBusinessLayer businessLayer = (RelationalBusinessLayer) service.load(blxFile);
RootFolder rootFolder = businessLayer.getRootFolder();
Once you have a hook on the rootFolder, you can use the getChildren() method to drill into the folder structure and access the various subfolders/business objects available.
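As a very rough sketch of that drill-down (only getChildren() comes from the answer above; the Folder/BlItem types and getName() are assumptions to verify against the JavaDoc):
// Hypothetical recursive walk over the business layer outline. The item type
// (BlItem), getName() and the Folder check are assumptions, not verified API.
void dump(Folder folder, String indent) {
    for (BlItem item : folder.getChildren()) {
        System.out.println(indent + item.getName());
        if (item instanceof Folder) {
            dump((Folder) item, indent + "  ");
        }
    }
}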
You may also want to check the CmsResourceService class to access universes stored on the repository.
Getting the information you are after will require a two-part solution. Part 1: use the Rebean SDK, looking at WebI reports for the universe and the object names used within them.
Part 2 is to break out your favorite COM programming tool (since I try to avoid COM, I use the Excel macro editor) and access the BusinessObjects Designer library. The main code snippet that I currently have is:
Dim boUniv As Designer.Universe
Dim tbl As Designer.Table

' boUniv must first be set to a universe opened through the Designer
' application object before this loop will run.
For Each tbl In boUniv.Tables
    Debug.Print tbl.Name
Next tbl
This prints all of the tables in a universe.
You will need to combine the two parts on your own to build a dependency list between WebI reports and universes.