I get the following error in one of the polygons I am importing:
Write failed with error code 16755 and error message 'Can't extract geo keys: { _id: "b9c5ac0c-e469-4b97-b059-436cd02ffe49", _class: .... ] Duplicate vertices: 0 and 15'
Full stack trace: https://gist.github.com/boundaries-io/927aa14e8d1e42d7cf516dc25b6ebb66#file-stacktrace
Here is the GeoJSON MultiPolygon I am importing using Spring Data MongoDB:
public class MyPolygon {
    @Id
    String id;
    @GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonPoint position;
    @GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonPoint location;
    @GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonPolygon polygon;

    public static GeoJsonPolygon generateGeoJsonPolygon(List<LngLatAlt> coordinates) {
        List<Point> points = new ArrayList<Point>();
        for (LngLatAlt point : coordinates) {
            org.springframework.data.geo.Point dataPoint =
                    new org.springframework.data.geo.Point(point.getLongitude(), point.getLatitude());
            points.add(dataPoint);
        }
        return new GeoJsonPolygon(points);
    }
}
How can I avoid this error in Java?
I can load the GeoJSON fine in http://geojson.io.
Here is the GeoJSON: https://gist.github.com/boundaries-io/4719bfc386c3728b36be10af29860f4c#file-rol-ca-part1-geojson
Removal of duplicates using:
int count = 0;
for (com.vividsolutions.jts.geom.Coordinate coordinate : geometry.getCoordinates()) {
    Point lngLatAlt = new Point(coordinate.x, coordinate.y);
    boolean isADup = points.contains(lngLatAlt);
    if (!isADup) {
        points.add(lngLatAlt);
    } else {
        LOGGER.debug("Duplicate, [" + lngLatAlt.toString() + "] index[" + count + "]");
    }
    count++;
}
Logging:
2017-10-27 22:38:18 DEBUG TestBugs:58 - Duplicate, [Point [x=-97.009868, y=52.358242]] index[15]
2017-10-27 22:38:18 DEBUG TestBugs:58 - Duplicate, [Point [x=-97.009868, y=52.358242]] index[3348]
In this case you have a duplicate vertex at index 0 and index 1341 in the 2nd polygon:
[ -62.95859676499998, 46.20653318300003 ]
The insertion fails when MongoDB tries to build the 2dsphere index for the document. Remove the coordinate at index 1341 and you should be able to persist successfully.
You just have to cleanse the data when you find the error.
You can write a small program that reads the error from MongoDB and reports it back to the client; the client can act on those messages and retry the request.
More information on geo errors can be found here.
You can look at the GeoParser code to see how and which errors are generated; for the specific error you got, take a look at GeoParser. This error is generated by the S2 library that MongoDB uses for validation.
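As a minimal sketch of that cleansing step (my own approach, not from the question: it drops any vertex that already appeared earlier in the ring, but keeps the final vertex when it closes the ring, since a GeoJSON ring must end on the same point it starts on):

// Remove duplicate vertices from a ring, but keep the closing vertex
// (a GeoJSON ring must start and end on the same point).
public static List<Point> removeDuplicateVertices(List<Point> ring) {
    List<Point> cleansed = new ArrayList<Point>();
    for (int i = 0; i < ring.size(); i++) {
        Point p = ring.get(i);
        boolean isClosingVertex = i == ring.size() - 1 && p.equals(ring.get(0));
        if (isClosingVertex || !cleansed.contains(p)) {
            cleansed.add(p);
        }
    }
    return cleansed;
}

Running each ring through such a filter before calling new GeoJsonPolygon(points) should keep the 2dsphere index builder from rejecting the document.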
I am trying to develop an application to read and write to RF tags. Reading is flawless, but I'm having issues with writing, specifically the error "GetStatus Write RFID_API_UNKNOWN_ERROR data(x)- Field can Only Take Word values".
I have tried reverse-engineering the Zebra RFID API Mobile by obtaining the .apk and decoding it, but the code is obfuscated and I am not able to decipher why that application's write works and mine doesn't.
I see the error in https://www.ptsmobile.com/rfd8500/rfd8500-rfid-developer-guide.pdf at page 185, but I have no idea what's causing it.
I've tried forcefully changing the writeData to hex, before I realized that the API does that on its own, and I've tried changing the length of the writeData as well, but it just gets a null value. I'm so lost.
public boolean WriteTag(String sourceEPC, long Password, MEMORY_BANK memory_bank, String targetData, int offset) {
    Log.d(TAG, "WriteTag " + targetData);
    try {
        TagData tagData = null;
        String tagId = sourceEPC;
        TagAccess tagAccess = new TagAccess();
        tagAccess.getClass();
        TagAccess.WriteAccessParams writeAccessParams = tagAccess.new WriteAccessParams();
        String writeData = targetData; // write data as a string
        writeAccessParams.setAccessPassword(Password);
        writeAccessParams.setMemoryBank(MEMORY_BANK.MEMORY_BANK_USER);
        writeAccessParams.setOffset(offset); // start writing from word offset 0
        writeAccessParams.setWriteData(writeData);
        // set retries in case a partial write happens
        writeAccessParams.setWriteRetries(3);
        // data length in words
        System.out.println("length: " + writeData.length() / 4);
        System.out.println("length: " + writeData.length());
        writeAccessParams.setWriteDataLength(writeData.length() / 4);
        // 5th parameter bPrefilter flag is true, which means the API will apply a pre-filter internally
        // 6th parameter should be true when changing the EPC ID itself, i.e. source and target are both EPC
        boolean useTIDfilter = memory_bank == MEMORY_BANK.MEMORY_BANK_EPC;
        reader.Actions.TagAccess.writeWait(tagId, writeAccessParams, null, tagData, true, useTIDfilter);
    } catch (InvalidUsageException e) {
        System.out.println("INVALID USAGE EXCEPTION: " + e.getInfo());
        e.printStackTrace();
        return false;
    } catch (OperationFailureException e) {
        //System.out.println("OPERATION FAILURE EXCEPTION");
        System.out.println("OPERATION FAILURE EXCEPTION: " + e.getResults().toString());
        e.printStackTrace();
        return false;
    }
    return true;
}
With
Password being 00
sourceEPC being the Tag ID obtained after reading
Memory Bank being MEMORY_BANK.MEMORY_BANK_USER
target data being "8426017056458"
offset being 0
It just keeps giving me "GetStatus Write RFID_API_UNKNOWN_ERROR data(x)- Field can Only Take Word values" and I have no idea why this is the case, nor do I know what a "Word value" is, and I've searched for it. This all falls under the OperationFailureException as well. Any help would be appreciated, as there are almost no resources online for this kind of thing.
Even though this question is a bit older, I had the same problem, so as far as I know this should be the answer.
Your target data "8426017056458" has a length of 13, and at writeAccessParams.setWriteDataLength(writeData.length()/4) you divide it by four. If you then try to write the target data, it is longer than the declared WriteDataLength, and that throws the error.
One 'word' is 4 hex characters => 16 bits long. So your data first has to be padded up and converted to hex.
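As a hedged illustration of that padding step (the helper below is my own, not part of the Zebra API; it assumes the API expects the data as word-aligned hex):

// Hypothetical helper: convert ASCII data to hex and pad it up to a full
// 16-bit word (4 hex characters per word) before handing it to the API.
static String toWordAlignedHex(String data) {
    StringBuilder hex = new StringBuilder();
    for (char c : data.toCharArray()) {
        hex.append(String.format("%02X", (int) c)); // two hex chars per byte
    }
    while (hex.length() % 4 != 0) { // pad to a whole word
        hex.append('0');
    }
    return hex.toString();
}

With that in place, writeData.length() / 4 matches the number of words actually written:

String writeData = toWordAlignedHex("8426017056458");
writeAccessParams.setWriteData(writeData);
writeAccessParams.setWriteDataLength(writeData.length() / 4);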
In order to detect visitor demographics based on their behavior, I used the SVM algorithm from Spark MLlib:
JavaRDD<LabeledPoint> data = MLUtils.loadLibSVMFile(sc.sc(), "labels.txt").toJavaRDD();
JavaRDD<LabeledPoint> training = data.sample(false, 0.6, 11L);
training.cache();
JavaRDD<LabeledPoint> test = data.subtract(training);
// Run training algorithm to build the model.
int numIterations = 100;
final SVMModel model = SVMWithSGD.train(training.rdd(), numIterations);
// Clear the default threshold.
model.clearThreshold();
JavaRDD<Tuple2<Object, Object>> scoreAndLabels = test.map(new SVMTestMapper(model));
Unfortunately, final SVMModel model = SVMWithSGD.train(training.rdd(), numIterations); throws an ArrayIndexOutOfBoundsException:
Caused by: java.lang.ArrayIndexOutOfBoundsException: 4857
labels.txt is a text file whose lines are composed of:
Visitor criteria (is male) | List[siteId: access number]
1 27349:1 23478:1 35752:1 9704:2 27896:1 30050:2 30018:1
1 36214:1 26378:1 26606:1 26850:1 17968:2
1 21870:1 41294:1 37388:1 38626:1 10711:1 28392:1 20749:1
1 29328:1 34370:1 19727:1 29542:1 37621:1 20588:1 42426:1 30050:6 28666:1 23190:3 7882:1 35387:1 6637:1 32131:1 23453:1
I tried with a lot of data and algorithms, and as far as I can tell it gives an error for site IDs bigger than 5000.
Is there any solution to overcome it, or is there another library for this issue? Or, since the data matrix is so sparse, should I use SVD?
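One thing worth trying (an assumption on my part, not a verified fix for this exact stack trace): MLUtils.loadLibSVMFile infers the feature dimension from the file unless you pass it explicitly, and the LibSVM format requires one-based feature indices in ascending order on each line. A minimal sketch passing the dimension explicitly:

// Assumption: 50000 is a safe upper bound on the largest siteId in labels.txt.
// The three-argument overload fixes the feature dimension instead of inferring it.
JavaRDD<LabeledPoint> data =
        MLUtils.loadLibSVMFile(sc.sc(), "labels.txt", 50000).toJavaRDD();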
I've been trying to use the Neo4j Spatial plugin with data loaded via Java. I have added the plugin, and when I start an empty database this is confirmed by the response to a GET request to the server:
{
"extensions": {
"SpatialPlugin": {
"addSimplePointLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addSimplePointLayer",
"findClosestGeometries": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/findClosestGeometries",
"addNodesToLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addNodesToLayer",
"addGeometryWKTToLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addGeometryWKTToLayer",
"findGeometriesWithinDistance": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance",
"addEditableLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addEditableLayer",
"addCQLDynamicLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addCQLDynamicLayer",
"addNodeToLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addNodeToLayer",
"getLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/getLayer",
"findGeometriesInBBox": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/findGeometriesInBBox",
"updateGeometryFromWKT": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/updateGeometryFromWKT"
}
},
"node": "http://localhost:7474/db/data/node",
"node_index": "http://localhost:7474/db/data/index/node",
"relationship_index": "http://localhost:7474/db/data/index/relationship",
"extensions_info": "http://localhost:7474/db/data/ext",
"relationship_types": "http://localhost:7474/db/data/relationship/types",
"batch": "http://localhost:7474/db/data/batch",
"cypher": "http://localhost:7474/db/data/cypher",
"indexes": "http://localhost:7474/db/data/schema/index",
"constraints": "http://localhost:7474/db/data/schema/constraint",
"transaction": "http://localhost:7474/db/data/transaction",
"node_labels": "http://localhost:7474/db/data/labels",
"neo4j_version": "2.3.2"
}
However, I then stop the server, load my spatial data via Java with a SpatialIndexProvider.SIMPLE_WKT_CONFIG index, and add it with:
try (Transaction tx = db.beginTx()) {
    Index<Node> index = db.index().forNodes("location", SpatialIndexProvider.SIMPLE_WKT_CONFIG);
    for (String line : lines) {
        String[] columns = line.split(",");
        Node node = db.createNode();
        node.setProperty("wkt", String.format("POINT(%s %s)", columns[4], columns[3]));
        node.setProperty("name", columns[0]);
        index.add(node, "dummy", "value");
    }
    tx.success();
}
After a restart, I get the error:
2016-02-23 13:44:36.747+0000 ERROR [o.n.k.KernelHealth] setting TM not OK. Kernel has encountered some problem, please perform necessary action (tx recovery/restart) No index provider 'spatial' found. Maybe the intended provider (or one more of its dependencies) aren't on the classpath or it failed to load.
in Messages.log inside the graph.db. Is there anything obvious that I'm doing wrong?
I'm on Windows 8, Neo4j 2.3.2, Java 8 and neo4j-spatial-0.15-neo4j-2.3.0.jar.
Did you unzip the full spatial zip into the plugins directory?
Otherwise some classes that spatial needs can't be found.
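If you want to verify this from the Java side, a quick diagnostic (my own sketch, not part of the plugin's API) is to check whether the spatial index provider class is visible on the classpath the database starts with:

// If this throws ClassNotFoundException, the spatial jar (or one of its
// dependencies) is missing from the classpath, which matches the
// "No index provider 'spatial' found" error above.
try {
    Class.forName("org.neo4j.gis.spatial.indexprovider.SpatialIndexProvider");
    System.out.println("spatial index provider found on the classpath");
} catch (ClassNotFoundException e) {
    System.out.println("spatial classes missing: " + e);
}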
I am trying to use GraphHopperWeb in order to see the route response on the web.
I probably don't know how to use it, but I can't tell what is wrong.
I am using the following code (in Java) to find a path from one point to another:
GHRequest req = new GHRequest(32.070113, 34.790266, 32.067103, 34.777861)
        .setWeighting("fastest")
        .setVehicle("car");
GraphHopperWeb gh = new GraphHopperWeb();
gh.load("http://localhost:8989/route");
gh.setInstructions(true);
GHResponse response = gh.route(req);
System.out.println(response);
I get the following output:
nodes:24: (32.07012,34.79021), (32.07011,34.7902), (32.07006,34.79019), (32.07028,34.78867), (32.07029,34.78852), (32.07029,34.78847), (32.06994,34.78814), (32.06942,34.78761), (32.06931,34.7875), (32.06912,34.78731), (32.06862,34.78667), (32.06756,34.78546), (32.06627,34.78391), (32.06617,34.78375), (32.06604,34.7836), (32.06622,34.78317), (32.06768,34.78009), (32.06769,34.77998), (32.06767,34.77992), (32.06654,34.77915), (32.06624,34.77894), (32.0666,34.77764), (32.06709,34.7778), (32.06708,34.77785), [(0,Continue onto הנציב,6.186,742), (2,Turn right onto שדרות יהודית,164.23,19706), (-2,Turn left onto מנחם בגין, 2,142.253,9310), (0,Continue onto מנחם בגין,517.411,31039), (2,Turn right onto לינקולן,394.313,28388), (-1,Turn slight left onto יהודה הלוי,183.917,13240), (2,Turn right onto בלפור,128.87,9278), (2,Turn right onto שדרות רוטשילד,56.539,4070), (2,Turn right onto המגיד,4.589,550), (4,Finish!,0.0,0)]
but when I open the browser at "http://localhost:8989/route" I get the following error:
{"info":{"copyrights":["GraphHopper","OpenStreetMap contributors"],"errors":[{"details":"java.lang.IllegalStateException","message":"At least 2 points has to be specified, but was:0"}]}}
What I don't get is how to see the GHResponse (the route I found) on the map through the browser.
The error you see when opening http://localhost:8989/route directly in the browser just means the request contained no point parameters; it is unrelated to your Java client. The usage of the external GraphHopperWeb client is similar to the usage of the embedded routing described here; the points for visualization can be fetched via getPoints and the turn instructions via getInstructions:
GHRequest req = new GHRequest(32.070113, 34.790266, 32.067103, 34.777861)
        .setWeighting("fastest")
        .setVehicle("car");
GraphHopperWeb gh = new GraphHopperWeb();
gh.load("http://localhost:8989/route");
GHResponse res = gh.route(req);
if (res.hasErrors()) {
    // handle or throw the exceptions in res.getErrors()
    return;
}
// path geometry information (latitude, longitude and optionally elevation)
PointList pl = res.getPoints();
// distance of the full path, in meters
double distance = res.getDistance();
// time of the full path, in milliseconds
long millis = res.getTime();
// information per turn instruction
InstructionList il = res.getInstructions();
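From there you can read the coordinates out of the PointList to draw the route yourself (the printing below is just one way to dump them, not something the client prescribes):

// Dump the route geometry as lon,lat pairs, e.g. to feed a web map,
// and walk the turn instructions for a textual itinerary.
for (int i = 0; i < pl.getSize(); i++) {
    System.out.println(pl.getLongitude(i) + "," + pl.getLatitude(i));
}
for (Instruction instruction : il) {
    System.out.println(instruction.getName() + " (" + instruction.getDistance() + " m)");
}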
I run the following code on my local (Mac) machine and on a remote Unix server:
public void deleteValue(final String id, final String value) {
log.info("Removing value " + value);
final Collection<String> valuesBeforeRemoval = getValues(id);
final MutationBatch m = keyspace.prepareMutationBatch();
m.withRow(VALUES_CF, id).deleteColumn(value);
try {
m.execute();
} catch (final ConnectionException e) {
log.error("Unable to delete location " + value, e);
}
final Collection<String> valuesAfterRemoval = getValues(id);
if (valuesAfterRemoval.size()!=(valuesBeforeRemoval.size()-1)) {
log.error("value " + value + " was supposed to be removed from list " + valuesBeforeRemoval + " but it wasn't: " + valuesAfterRemoval);
}
...
}
protected Collection<String> getValues(final String id) {
try {
final OperationResult<ColumnList<String>> operationResult = keyspace
.prepareQuery(VALUES_CF).getKey(id).execute();
final ColumnList<String> result = operationResult.getResult();
if (result.isEmpty()) {
log.info("No value found for id: " + id);
return new ArrayList<String>();
}
return result.getColumnNames();
} catch (final ConnectionException e) {
log.error("Unable to retrieve session " + id, e);
}
return new ArrayList<String>();
}
Locally, that line is never executed, which makes sense:
log.error("value " + value + " was supposed to be removed from list " + valuesBeforeRemoval + " but it wasn't: " + valuesAfterRemoval);
but that line is executed on my dev server:
[ERROR] [main] [n.o.w.s.d.SessionDaoCassandraImpl] [2013-03-08 13:12:24,801]
[] - value 3 was supposed to be removed from list [3, 2, 1, 0, 7, 6, 5, 4, 9, 8] but it wasn't: [3, 2, 1, 0, 7, 6, 5, 4, 9, 8]
I am using com.netflix.astyanax.
Both my local machine and the remote dev server connect to the very same Cassandra instance.
Both my local machine and the remote dev server run the very same test, creating a new row family and adding 10 records before one is deleted.
When the error occurs on dev, log.error("Unable to delete location " + value, e); was not executed (i.e. running the deletion command didn't produce any exception).
I am 100% positive that no other code is affecting the content of the database while I am running the test on dev, so this isn't some strange concurrency issue.
What could possibly explain that the deleteColumn(value) request runs without producing any error but still does not remove the column from the database?
ADDITIONAL INFO
Here is how I created the keyspace:
create keyspace sessiondata
with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
and strategy_options = {replication_factor:1};
Here is how I created the column family values, referenced as VALUES_CF in the code above:
create column family values
with comparator = UTF8Type
;
Here is how the keyspace referenced in the java code above is defined:
final AstyanaxContext.Builder contextBuilder = getBuilder();
final AstyanaxContext<Keyspace> keyspaceContext = contextBuilder
.forKeyspace(keyspaceName).buildKeyspace(
ThriftFamilyFactory.getInstance());
keyspaceContext.start();
keyspace = keyspaceContext.getEntity();
where getBuilder is:
private Builder getBuilder() {
final AstyanaxConfigurationImpl conf = new AstyanaxConfigurationImpl()
.setDiscoveryType(NodeDiscoveryType.NONE)
.setRetryPolicy(new RunOnce());
final ConnectionPoolConfigurationImpl poolConf = new ConnectionPoolConfigurationImpl("MyPool")
.setPort(port)
.setMaxConnsPerHost(1)
.setSeeds(value);
return new AstyanaxContext.Builder()
.forCluster(cluster)
.withAstyanaxConfiguration(conf)
.withConnectionPoolConfiguration(poolConf)
.withConnectionPoolMonitor(new CountingConnectionPoolMonitor());
}
SECOND UPDATE
First, the issues are not solely related to deletes. I observe similar problems when updating records in the database, then reading them and not being able to read the updates I just wrote.
Second, I created a test that performs the following operations 100 times:
write a row into Cassandra
update that row in Cassandra
read that row back from Cassandra and check whether it was indeed updated, checking again regularly after delays if it wasn't
What I observe from that test is that:
again, when I run that code locally, all 100 iterations pass right away (no retry ever needed)
when I run that code on the remote server, some of the iterations pass and some fail. When they fail, no matter how large the delay (I wait up to 10 seconds), the test always fails.
At this point, I am really not sure how any Cassandra setup could explain this behavior, since I connect to the very same server for my tests and since the delays I insert are much larger than any additional latency I may incur when connecting from my local machine.
The only relevant difference seems to be which machine the code is running on.
THIRD UPDATE
If, in the test mentioned in the previous update, I insert a delay between the 2 writes, the code starts passing if the delay is >= 1,000 ms. A delay of, say, 100 ms doesn't help. I also modified the builder to set the default read and write consistency levels to the most demanding, ALL, and that had no impact on the results of the test (it still fails about half of the time unless the delay between writes is > 1 s):
final AstyanaxConfigurationImpl conf = new AstyanaxConfigurationImpl()
        .setDiscoveryType(NodeDiscoveryType.NONE)
        .setRetryPolicy(new RunOnce())
        .setDefaultReadConsistencyLevel(ConsistencyLevel.CL_ALL)
        .setDefaultWriteConsistencyLevel(ConsistencyLevel.CL_ALL);
To debug, try printing the full row instead of just the column names. By the full row I mean the column name, column value, and the timestamp. A long shot is that the clock is wrong on one of your test machines and this is throwing out your tests on the other.
Another thing to double-check is that the ip is indeed what you think it is, in both your application and Cassandra. When you retrieve it, print it between delimiters, like println("-" + ip + "-"). Before and after the try block for the execute in deleteSecureLocation, do a get for only that column, not the entire row. I'm not too sure how to do that in Astyanax; on the cli it would be get[id][ip].
Something to keep in mind is that a delete won't fail even if there's nothing to delete. To Cassandra it's a write; the only thing that makes it a delete is that, on read, it is the latest timestamped entry against that row/column name.
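A minimal sketch of that debug step with Astyanax (assuming string column values; Column exposes the write timestamp via getTimestamp()):

// Print each column with its value and write timestamp, so the rows seen
// from the two machines can be compared, including whether a later write
// carries an older timestamp (which would make Cassandra ignore it).
try {
    ColumnList<String> columns = keyspace.prepareQuery(VALUES_CF).getKey(id).execute().getResult();
    for (Column<String> column : columns) {
        System.out.println(column.getName()
                + " value=" + column.getStringValue()
                + " timestamp=" + column.getTimestamp());
    }
} catch (ConnectionException e) {
    e.printStackTrace();
}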