I am using the latest version of Alfresco, 5.1.
One of my requirements is to create properties (key/value) where the user enters the key as well as the value.
So I have done that like this:
Map<QName, Serializable> props = new HashMap<QName, Serializable>();
props.put(QName.createQName("customProp1"), "prop1");
props.put(QName.createQName("customProp2"), "prop2");
ChildAssociationRef associationRef = nodeService.createNode(
        nodeService.getRootNode(storeRef),
        ContentModel.ASSOC_CHILDREN,
        QName.createQName(GUID.generate()),
        ContentModel.TYPE_CMOBJECT,
        props);
Now what I want to do is search for the nodes with these newly created properties. I was able to search for the newly created property like this:
public List<NodeRef> findNodes() throws Exception {
    authenticate("admin", "admin");
    StoreRef storeRef = new StoreRef(StoreRef.PROTOCOL_WORKSPACE, "SpacesStore");
    List<NodeRef> nodeList = null;
    Map<QName, Serializable> props = new HashMap<QName, Serializable>();
    props.put(QName.createQName("customProp1"), "prop1");
    props.put(QName.createQName("customProp2"), "prop2");
    ChildAssociationRef associationRef = nodeService.createNode(
            nodeService.getRootNode(storeRef),
            ContentModel.ASSOC_CHILDREN,
            QName.createQName(GUID.generate()),
            ContentModel.TYPE_CMOBJECT,
            props);
    NodeRef nodeRef = associationRef.getChildRef();
    String query = "#cm\\:customProp1:\"prop1\"";
    SearchParameters sp = new SearchParameters();
    sp.addStore(storeRef);
    sp.setLanguage(SearchService.LANGUAGE_LUCENE);
    sp.setQuery(query);
    try {
        ResultSet results = serviceRegistry.getSearchService().query(sp);
        nodeList = new ArrayList<NodeRef>();
        for (ResultSetRow row : results) {
            nodeList.add(row.getNodeRef());
            System.out.println(row.getNodeRef());
        }
        System.out.println(nodeList.size());
    } catch (Exception e) {
        e.printStackTrace();
    }
    return nodeList;
}
The indexer configuration in alfresco-global.properties is:
index.subsystem.name=buildonly
index.recovery.mode=AUTO
dir.keystore=${dir.root}/keystore
Now my questions are:
Is it possible to achieve the same using the solr4 indexer?
Or is there any way to use the buildonly indexer for a particular query?
In your query
String query = "#cm\\:customProp1:\"prop1\"";
remove cm: since you are building the QName on the fly, it does not come under the cm (ContentModel) namespace. So your query will be
String query = "#\\:customProp1:\"prop1\"";
Hope this works for you.
First, double check if you're simply experiencing eventual consistency, as described below. If you are, and if this presents a problem for you, consider switching to CMIS queries while staying on SOLR.
http://docs.alfresco.com/5.1/concepts/solr-event-consistency.html
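For example, a CMIS version of the lookup through the same SearchService might look like the sketch below. Caveat: my:customType and my:customProp1 are hypothetical names that would have to be defined in a deployed content model; dynamically created QNames with no namespace, as in the question's code, are not queryable through CMIS.
SearchParameters sp = new SearchParameters();
sp.addStore(storeRef);
sp.setLanguage(SearchService.LANGUAGE_CMIS_ALFRESCO);
// my:customType / my:customProp1 are placeholders for model-defined names
sp.setQuery("SELECT * FROM my:customType WHERE my:customProp1 = 'prop1'");
ResultSet results = serviceRegistry.getSearchService().query(sp);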
Other than this, check if the node has been indexed at all. If it has, take a closer look at how you build your query.
How to find List of unindexed file in alfresco
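As for the first question: switching to the SOLR 4 subsystem is a configuration change in alfresco-global.properties. A minimal sketch, assuming a default installation (host, port and SSL settings vary per setup):
index.subsystem.name=solr4
solr.host=localhost
solr.port=8443
solr.secureComms=https
Keep in mind that SOLR is eventually consistent, which is exactly the behaviour described in the first link.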
I am new to shapefile processing. Kindly guide me on how to achieve the query below.
I am using the shapefile tl_2018_us_aiannh.shp from census.gov: TIGER-Line. I need to obtain the census block group entities, like Block, Tract, County subdivision and County details, from the shapefile, based on the latitude and longitude provided by the user.
My requirement is to achieve this with the shapefile alone and not through any APIs.
Can someone suggest which framework will let me achieve this?
What I've tried/using so far:
I have used GeoTools to read the shapefile. Can I continue using it? Will my requirement be achievable with this tool?
I have gone through a documentation from census.gov which states:
The Census Bureau assigns a code and these appear in fields such as
“TRACTCE”, where “CE” stands for Census. Finally, state-submitted
codes end in “ST”, such as “SLDLST”, and local education agency codes
end in “LEA”, as in “ELSDLEA”.
I tried this in my code as follows:
File file = new File("D:\\tl_2018_us_aiannh.shp");
try {
    Map<String, String> connect = new HashMap<String, String>();
    connect.put("url", file.toURI().toString());

    DataStore dataStore = DataStoreFinder.getDataStore(connect);
    String[] typeNames = dataStore.getTypeNames();
    String typeName = typeNames[0];
    System.out.println("Reading content " + typeName);

    SimpleFeatureSource featureSource = dataStore.getFeatureSource(typeName);
    SimpleFeatureCollection collection = featureSource.getFeatures();
    SimpleFeatureIterator iterator = collection.features();
    try {
        while (iterator.hasNext()) {
            SimpleFeature feature = iterator.next();
            GeometryAttribute sourceGeometry = feature.getDefaultGeometryProperty();
            String name = (String) feature.getAttribute("TRACTCE");
            Property property = feature.getProperty("TRACTCE");
            System.out.println(property);
        }
    } finally {
        iterator.close();
    }
} catch (Throwable e) {
    e.printStackTrace(); // was e.getMessage(), which silently discards the error
}
But I am receiving null as the value.
Any help would be much appreciated.
I have found the solution to this. Hope it will be helpful to someone in need.
SimpleFeature is the type that holds the attributes of the shapefile; you can inspect them when you debug or print a feature at runtime. You can use the SimpleFeature to get each property. The attributes can be retrieved like this:
try {
    while (iterator.hasNext()) {
        SimpleFeature feature = iterator.next();
        Property tractce = feature.getProperty("TRACTCE");
        System.out.println(tractce.getValue());
    }
} finally {
    iterator.close();
}
Make sure you choose Block Groups as the layer type when downloading from TIGER-Line (or whichever site you download the shapefile from).
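For the original goal of finding the entity containing a latitude/longitude, here is a minimal sketch of a point-in-polygon test with GeoTools and JTS. Hedged: featureSource is the one opened in the question, the coordinates are hypothetical user input, and you should verify that the shapefile's CRS really is lon/lat WGS84 before relying on the axis order.
// hypothetical user input
double latitude = 40.7128;
double longitude = -74.0060;

GeometryFactory gf = JTSFactoryFinder.getGeometryFactory();
Point point = gf.createPoint(new Coordinate(longitude, latitude)); // JTS is (x=lon, y=lat)

SimpleFeatureIterator it = featureSource.getFeatures().features();
try {
    while (it.hasNext()) {
        SimpleFeature feature = it.next();
        Geometry geom = (Geometry) feature.getDefaultGeometry();
        if (geom != null && geom.contains(point)) {
            System.out.println("TRACTCE: " + feature.getAttribute("TRACTCE"));
            break; // found the containing polygon
        }
    }
} finally {
    it.close();
}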
The user logs in on the website and creates different events. Each event is saved into the Neo4j database as a node, and I create the "EVENT_CREATOR" relationship between the user node and the event node.
I am trying to implement pagination for all of the user's events on my website (using the Play 2 framework). For example, if the user accesses the first page, I load the first ten events; the second page loads events 10-20, and so on.
This is my query:
MATCH (n)
...
RETURN n
SKIP k
LIMIT 10
At the moment I am getting all the events created by the user and adding them to an array list.
private static List<PublicEvent> getEvents(int page, int pageSize) {
    List<PublicEvent> events = new ArrayList<PublicEvent>();
    GraphDatabaseService db = Neo4JHelper.getDatabase();
    try (Transaction tx = db.beginTx()) {
        Index<Node> userIndex = db.index().forNodes(ModelIndex.Users);
        IndexHits<Node> userNodes = userIndex.get(ModelGraphProperty.UserProfile.UserName,
                SessionUtilities.getCurrentUser());
        Node me = userNodes.next(); // current logged-in user
        // page over all EVENT_CREATOR relationships of this user
        PagingIterator<Relationship> paginator = new PagingIterator<Relationship>(
                me.getRelationships(GraphRelation.RelTypes.EVENT_CREATOR).iterator(), pageSize);
        paginator.page(page);
        // collect every event on the requested page
        Iterator<Relationship> pageItems = paginator.nextPage();
        while (pageItems.hasNext()) {
            Node event = pageItems.next().getOtherNode(me);
            events.add(new PublicEvent(event));
        }
        tx.success();
    }
    db.shutdown();
    return events;
}
I want to update the code to run Cypher queries, so I added the following lines of code (using the example at https://www.tutorialspoint.com/neo4j/neo4j_cypher_api_example.htm):
GraphDatabaseService db = Neo4JHelper.getDatabase();
ExecutionEngine execEngine = new ExecutionEngine(db); //HERE I GET AN ERROR
ExecutionResult execResult = execEngine.execute("MATCH (n) RETURN n");
String results = execResult.dumpToString();
System.out.println(results);
It is expecting a second parameter, a logger. What is the error, or is there anything I am doing wrong?
RestGraphDatabase db = (RestGraphDatabase) Neo4JHelper.getDatabase();
RestCypherQueryEngine engine = new RestCypherQueryEngine(db.getRestAPI());

Map<String, Object> params = new HashMap<String, Object>();
params.put("id", eventId);

String query = "match (s) where id(s) = {id} return s;";
QueryResult<Map<String, Object>> result = engine.query(query, params);
if (result.iterator().hasNext()) {
    // HERE PUT WHATEVER YOU NEED
}
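Applied to your pagination case, the same engine can run the paged query directly. A sketch, assuming page is 0-based and that the relationship direction and the username property match your earlier code:
Map<String, Object> pageParams = new HashMap<String, Object>();
pageParams.put("name", SessionUtilities.getCurrentUser());
pageParams.put("skip", page * pageSize);
pageParams.put("limit", pageSize);
QueryResult<Map<String, Object>> events = engine.query(
        "match (u {username: {name}})-[:EVENT_CREATOR]->(e) "
        + "return e skip {skip} limit {limit}", pageParams);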
Take a look at the documentation:
https://neo4j.com/docs/java-reference/current/
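On the original error: in embedded Neo4j 2.2+, ExecutionEngine is deprecated and you can run Cypher directly on the GraphDatabaseService instead. A sketch, assuming Neo4JHelper returns an embedded database rather than a REST one:
GraphDatabaseService db = Neo4JHelper.getDatabase();
try (Transaction tx = db.beginTx();
     Result result = db.execute("MATCH (n) RETURN n")) {
    System.out.println(result.resultAsString());
    tx.success();
}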
I have to restrict some fields which are in the ProcessTemplate class. Below is the method I have developed. When I pass in an id, it gives me a runtime exception. Please help me out.
public ProcessTemplate get(String id) throws GRIDRecordsDataManagerException {
    ProcessTemplate entity = null;
    try {
        BasicDBObject qry = new BasicDBObject();
        Map<String, Object> whereMap = new HashMap<String, Object>();
        whereMap.put("id", id);
        qry.putAll(whereMap);

        // restrict the result to these fields
        BasicDBObject field = new BasicDBObject();
        field.put("name", 1);
        field.put("status", 1);
        field.put("description", 1);

        DBCursor results = dbCollection.find(qry, field);
        if (results != null && results.hasNext()) {
            DBObject dbObj = results.next();
            entity = new ProcessTemplate();
            entity.setId((String) dbObj.get("id"));
            entity.setProcessName((String) dbObj.get("name"));
            entity.setStatus((String) dbObj.get("status"));
            entity.setDescription((String) dbObj.get("description"));
            System.out.println(entity);
        }
    } catch (Exception e) {
        e.printStackTrace(); // the catch block was empty, hiding the actual exception
    }
    return entity;
}
Answer copied from jira.mongodb.org
When querying a GridFS collection on specific fields, it works when no
GridFS file was ever stored on that collection. Once a file was saved
to the collection, the query on specific fields fails with "can't load
partial GridFSFile file".
...
For now I'll reset the object associated with the collection, quite a
hack though: if (Objects.equal(GridFSDBFile.class,
coll.getObjectClass())) { coll.setObjectClass(null); }
...
Hi, a collection can have an associated ObjectClass, and this information is cached, allowing it to be set once and then reused elsewhere in your code. Once it is set you have to explicitly unset it. GridFS is a specification for storing and retrieving files that is built upon the driver. GridFS is opinionated about how it is to be used; as such, it sets the ObjectClass for the files collection when you create a GridFS instance. The reason it throws an error is that the GridFSFile is not expected to be used in the way you've shown, as it could represent a partial part of a file, which is why it throws the "can't load partial GridFSFile file" runtime error. As you've found out, the associated ObjectClass can only be unset by resetting the ObjectClass back to null.
In your case it is translated to:
BasicDBObject qry = new BasicDBObject("id", id); // you can save 3 lines of code here, btw
BasicDBObject field = new BasicDBObject();
...
if (Objects.equal(GridFSDBFile.class, dbCollection.getObjectClass()))
    dbCollection.setObjectClass(null);
DBCursor results = dbCollection.find(qry, field);
...
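Putting the pieces together, the relevant part of your get method might look like this (a sketch; Objects.equal is Guava, or compare with equals directly as shown here):
BasicDBObject qry = new BasicDBObject("id", id);

BasicDBObject field = new BasicDBObject();
field.put("name", 1);
field.put("status", 1);
field.put("description", 1);

// GridFS set GridFSDBFile as the collection's object class; unset it before
// a partial-field query, or the driver throws "can't load partial GridFSFile file"
if (GridFSDBFile.class.equals(dbCollection.getObjectClass())) {
    dbCollection.setObjectClass(null);
}
DBCursor results = dbCollection.find(qry, field);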
When executing queries on a standalone Neo4J server using the RestCypherEngine, what is the best practice to retrieve a collection of nodes?
I have this code snippet running....
public DbService() {
    gd = new RestGraphDatabase("http://neo4jbox:7474/db/data/");
    engine = new RestCypherQueryEngine(gd.getRestAPI());
}

public String testData() {
    try (Transaction tx = gd.beginTx()) {
        QueryResult<Map<String, Object>> result = engine.query(
                "match (n:Person{username:'jomski2009'}) return n ", null);

        Iterator<Map<String, Object>> itr = result.iterator();
        while (itr.hasNext()) {
            Map<String, Object> item = itr.next();
            log.info(item.get("n"));
        }

        tx.success();
        return result.toString();
    }
}
When I run the code, I get the following result...
services.DbService : http://neo4jbox:7474/db/data/node/177
which is a link to the node rather than the node itself. Now, I know that it works well if I return just a subset of the node's properties in the same query. What I'd like to know is: how do I retrieve the complete node object without necessarily specifying the properties in the query?
Thanks for your help guys.
It is just the to-string representation of a RestNode; it still has the properties. The relationships are not fetched, though; those will be fetched on demand.
I would recommend fetching primitive values over the wire with Cypher. That works best, as it minimizes the transferred data and you only get what you need.
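For example, a sketch of returning plain properties instead of the node (hedged: n.email is illustrative and assumes such a property exists on your nodes):
QueryResult<Map<String, Object>> result = engine.query(
        "match (n:Person {username:'jomski2009'}) "
        + "return n.username as username, n.email as email", null);
for (Map<String, Object> row : result) {
    log.info(row.get("username") + " " + row.get("email"));
}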
I'm using Neo4j (1.9) and Lucene Core 3.5, through the support offered by the neo4j-lucene-index library.
In my code, I create a new node index like this:
HashMap<String, String> stringMap = new HashMap<String, String>();
stringMap.put("type", "fulltext");
stringMap.put(IndexManager.PROVIDER, "lucene");
stringMap.put("to_lower_case", "false");
stringMap.put("similarity", MySimilarity.class.getName());
stringMap.put("analyzer", MyAnalyzer.class.getName());
simpleIndex = (LuceneIndex<Node>) template.getGraphDatabaseService()
.index().forNodes("simple-index", stringMap);
and query the node index by means of a multi-field query:
Analyzer analyzer = new MyAnalyzer();
Query query = null;
try {
    query = MultiFieldQueryParser.parse(Version.LUCENE_35,
            queriesArray, fieldsArray, flagsArray, analyzer);
} catch (ParseException e) {
    e.printStackTrace();
    return;
}
if (query == null)
    return;
QueryContext qc = new QueryContext(query).sortByScore();
IndexHits<Node> hits = simpleIndex.query(qc);
How can I change the Similarity implementation used for sorting by score to my MySimilarity class (which extends DefaultSimilarity and overrides some of its methods)? The reported query always uses DefaultSimilarity, and I was not able to find any hook to set the similarity used by the searcher activated during my query.
The fact that the supplied similarity isn't used during queries is a bug and should be fixed. Created https://github.com/neo4j/neo4j/issues/375 to track it.