The user logs in on the website and creates different events. Each event is saved in the Neo4j database as a node, and I create an "EVENT_CREATOR" relationship between the user node and the event node.
I am trying to implement pagination for all of a user's events on my website (built with the Play 2 framework): when the user opens the first page I load the first ten events, the second page loads the next ten, and so on.
This is my query:
MATCH (n)
...
RETURN n
SKIP k
LIMIT 10
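Written out against the data model above, the query I am aiming for would look roughly like this (the labels, the relationship direction, and the parameter names are placeholders, not code from the project):

MATCH (u:User {username: {username}})-[:EVENT_CREATOR]->(e:Event)
RETURN e
SKIP {skip}
LIMIT {limit}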
At the moment I get all the events created by the user and add them to an array list.
private static List<PublicEvent> getEvents(int page, int pageSize) {
    List<PublicEvent> events = new ArrayList<PublicEvent>();
    GraphDatabaseService db = Neo4JHelper.getDatabase();
    try (Transaction tx = db.beginTx()) {
        // Look up the currently logged-in user in the legacy node index
        Index<Node> userIndex = db.index().forNodes(ModelIndex.Users);
        IndexHits<Node> userNodes = userIndex.get(ModelGraphProperty.UserProfile.UserName, SessionUtilities.getCurrentUser());
        Node me = userNodes.next(); // current logged-in user
        // Page over all EVENT_CREATOR relationships of this user
        PagingIterator<Relationship> paginator = new PagingIterator<Relationship>(
                me.getRelationships(GraphRelation.RelTypes.EVENT_CREATOR).iterator(), pageSize);
        paginator.page(page);
        // Add every event on the requested page to the result list
        while (paginator.hasNext() && events.size() < pageSize) {
            Relationship eventCreator = paginator.next();
            Node event = eventCreator.getOtherNode(me);
            events.add(new PublicEvent(event));
        }
        tx.success();
    }
    db.shutdown();
    return events;
}
I want to update the code to run Cypher queries, so I added the following lines of code (following the example at https://www.tutorialspoint.com/neo4j/neo4j_cypher_api_example.htm):
GraphDatabaseService db = Neo4JHelper.getDatabase();
ExecutionEngine execEngine = new ExecutionEngine(db); //HERE I GET AN ERROR
ExecutionResult execResult = execEngine.execute("MATCH (n) RETURN n");
String results = execResult.dumpToString();
System.out.println(results);
It is expecting a second parameter, a logger. What does this error mean, or is there anything I am doing wrong?
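For what it's worth, on embedded Neo4j 2.2+ the deprecated ExecutionEngine wrapper is not needed at all; the query can be run directly against the database. A minimal sketch, assuming Neo4JHelper.getDatabase() returns an embedded GraphDatabaseService:

GraphDatabaseService db = Neo4JHelper.getDatabase();
try (Transaction tx = db.beginTx()) {
    // org.neo4j.graphdb.Result is an Iterator<Map<String, Object>>, one map per returned row
    Result execResult = db.execute("MATCH (n) RETURN n");
    System.out.println(execResult.resultAsString());
    tx.success();
}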
RestGraphDatabase db = (RestGraphDatabase) Neo4JHelper.getDatabase();
RestCypherQueryEngine engine = new RestCypherQueryEngine(db.getRestAPI());

Map<String, Object> params = new HashMap<String, Object>();
params.put("id", eventId);

String query = "match (s) where id(s) = {id} return s;";
QueryResult<Map<String, Object>> result = engine.query(query, params);
if (result.iterator().hasNext()) {
    // HERE PUT WHATEVER YOU NEED
}
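The same engine works for the paging described in the question. Here is a rough sketch, where the :User label, the relationship direction, and the zero-based page arithmetic are assumptions about the data model rather than code from the project:

Map<String, Object> params = new HashMap<String, Object>();
params.put("username", SessionUtilities.getCurrentUser());
params.put("skip", page * pageSize); // assumes page is zero-based
params.put("limit", pageSize);

String query = "match (u:User {username: {username}})-[:EVENT_CREATOR]->(e) "
             + "return e skip {skip} limit {limit}";

QueryResult<Map<String, Object>> result = engine.query(query, params);
List<PublicEvent> events = new ArrayList<PublicEvent>();
for (Map<String, Object> row : result) {
    // each row holds the event node under the alias "e"
    events.add(new PublicEvent((Node) row.get("e")));
}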
Take a look at the documentation:
https://neo4j.com/docs/java-reference/current/
Related
I have a Java method in my code in which I am using the following line of code to fetch data from Azure Cosmos DB:
Iterable<FeedResponse<Object>> feedResponseIterator =
cosmosContainer
.queryItems(sqlQuery, queryOptions, Object.class)
.iterableByPage(continuationToken, pageSize);
Now the whole method looks like this:
public List<LinkedHashMap> getDocumentsFromCollection(
String containerName, String partitionKey, String sqlQuery) {
List<LinkedHashMap> documents = new ArrayList<>();
String continuationToken = null;
do {
CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
CosmosContainer cosmosContainer = createContainerIfNotExists(containerName, partitionKey);
Iterable<FeedResponse<Object>> feedResponseIterator =
cosmosContainer
.queryItems(sqlQuery, queryOptions, Object.class)
.iterableByPage(continuationToken, pageSize);
int pageCount = 0;
for (FeedResponse<Object> page : feedResponseIterator) {
long startTime = System.currentTimeMillis();
// Access all the documents in this result page
page.getResults().forEach(document -> documents.add((LinkedHashMap) document));
// Along with page results, get a continuation token
// which enables the client to "pick up where it left off"
// in accessing query response pages.
continuationToken = page.getContinuationToken();
pageCount++;
log.info(
"Cosmos Collection {} deleted {} page with {} number of records in {} ms time",
containerName,
pageCount,
page.getResults().size(),
(System.currentTimeMillis() - startTime));
}
} while (continuationToken != null);
log.info(containerName + " Collection has been collected successfully");
return documents;
}
My question is: can we use the same line of code to execute a delete query such as DELETE * FROM c? If yes, what would the Iterable<FeedResponse<Object>> feedResponseIterator object return in that case?
SQL statements can only be used for reads. Delete operations must be done using DeleteItem().
Here are Java SDK samples (sync and async) for all document operations in Cosmos DB.
Java v4 SDK Document Samples
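In other words, a bulk delete ends up being a read followed by per-item deletes. A rough sketch against the Java v4 sync SDK, reusing the container from the question; the id and partition-key field names are assumptions about the documents:

CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
// read back only what is needed to address each document
Iterable<FeedResponse<LinkedHashMap>> pages = cosmosContainer
    .queryItems("SELECT c.id, c.partitionKey FROM c", queryOptions, LinkedHashMap.class)
    .iterableByPage(100);

for (FeedResponse<LinkedHashMap> page : pages) {
    for (LinkedHashMap document : page.getResults()) {
        String id = (String) document.get("id");
        String pk = (String) document.get("partitionKey"); // assumed partition key path /partitionKey
        // each document has to be deleted individually
        cosmosContainer.deleteItem(id, new PartitionKey(pk), new CosmosItemRequestOptions());
    }
}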
How not to clear the JPA cache?
The code searches for destination, poi2dIcon, poi3dIcon, and destinationCategory entities with findByCode and findByName.
However, when findByCode("A") is called with the same parameter inside the for statement, a select statement is executed by the repository every time.
I know that the value corresponding to findByCode("A") exists in the JPA cache.
So inside the for statement the query for findByCode("A") should be executed only once, but in reality as many queries as rows.size() are run.
I would like to know how to keep the result of findByCode("A") in the cache, and why it is being lost in the current code.
List<Destination> result = new ArrayList<>();
int headerLength = rows.get(0).size();
for (int i = 1; i < rows.size(); i++) {
    Map<String, String> destinationMap = new HashMap<>();
    makeDestinaionMap(destinationMap);
    Destination findDestination = destinationRepository.findByCode(destinationMap.get("Identification Code"));
    Poi2dIcon poi2dIcon = poi2dIconService.findByName(destinationMap.get("2DIcon")).get(0);
    Poi3dIcon poi3dIcon = poi3dIconService.findByName(destinationMap.get("3DIcon")).get(0);
    DestinationCategory destinationCategory = destinationCategoryService.findByName(destinationMap.get("Category")).get(0);
    // builder -> save
    Destination destination = Destination.builder()
            ...build();
    if (findDestination != null) {
        destination.setId(findDestination.getId());
    }
    result.add(destination);
}
destinationRepository.saveAll(result);
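One way to guarantee a single lookup per distinct code inside this loop, regardless of how the JPA first-level cache behaves for repository query methods, is a local map; a minimal sketch reusing the repository and types from the snippet above:

// local memoisation for the duration of this loop (sketch, not the JPA cache)
Map<String, Destination> destinationsByCode = new HashMap<>();
for (int i = 1; i < rows.size(); i++) {
    Map<String, String> destinationMap = new HashMap<>();
    makeDestinaionMap(destinationMap);
    String code = destinationMap.get("Identification Code");
    Destination findDestination;
    if (destinationsByCode.containsKey(code)) {
        findDestination = destinationsByCode.get(code);
    } else {
        findDestination = destinationRepository.findByCode(code);
        destinationsByCode.put(code, findDestination); // cache even a null result
    }
    // ... rest of the loop body unchanged ...
}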
Best Regards
I am using the latest version of Alfresco, 5.1.
One of my requirements is to create properties (key/value pairs) where the user enters both the key and the value.
So I have done that like this:
Map<QName, Serializable> props = new HashMap<QName, Serializable>();
props.put(QName.createQName("customProp1"), "prop1");
props.put(QName.createQName("customProp2"), "prop2");
ChildAssociationRef associationRef = nodeService.createNode(nodeService.getRootNode(storeRef), ContentModel.ASSOC_CHILDREN, QName.createQName(GUID.generate()), ContentModel.TYPE_CMOBJECT, props);
Now I want to search for the nodes with these newly created properties. I was able to search for the newly created property like this:
public List<NodeRef> findNodes() throws Exception {
authenticate("admin", "admin");
StoreRef storeRef = new StoreRef(StoreRef.PROTOCOL_WORKSPACE, "SpacesStore");
List<NodeRef> nodeList = null;
Map<QName, Serializable> props = new HashMap<QName, Serializable>();
props.put(QName.createQName("customProp1"), "prop1");
props.put(QName.createQName("customProp2"), "prop2");
ChildAssociationRef associationRef = nodeService.createNode(nodeService.getRootNode(storeRef), ContentModel.ASSOC_CHILDREN, QName.createQName(GUID.generate()), ContentModel.TYPE_CMOBJECT, props);
NodeRef nodeRef = associationRef.getChildRef();
String query = "#cm\\:customProp1:\"prop1\"";
SearchParameters sp = new SearchParameters();
sp.addStore(storeRef);
sp.setLanguage(SearchService.LANGUAGE_LUCENE);
sp.setQuery(query);
try {
ResultSet results = serviceRegistry.getSearchService().query(sp);
nodeList = new ArrayList<NodeRef>();
for (ResultSetRow row : results) {
nodeList.add(row.getNodeRef());
System.out.println(row.getNodeRef());
}
System.out.println(nodeList.size());
} catch (Exception e) {
e.printStackTrace();
}
return nodeList;
}
The alfresco-global.properties indexer configuration is
index.subsystem.name=buildonly
index.recovery.mode=AUTO
dir.keystore=${dir.root}/keystore
Now my questions are:
Is it possible to achieve the same using the solr4 indexer?
Or is there any way to use the buildonly indexer for a particular query?
In your query
String query = "#cm\\:customProp1:\"prop1\"";
remove cm, because you are building the QName on the fly, so it does not fall under the cm (ContentModel) namespace. Your query will then be:
String query = "#\\:customProp1:\"prop1\"";
Hope this works for you.
First, double check if you're simply experiencing eventual consistency, as described below. If you are, and if this presents a problem for you, consider switching to CMIS queries while staying on SOLR.
http://docs.alfresco.com/5.1/concepts/solr-event-consistency.html
Other than this, check if the node has been indexed at all. If it has, take a closer look at how you build your query.
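If you do switch to CMIS, the query can still be issued through the same SearchService; a rough sketch, assuming the property is declared in a registered content model under a prefix (here my:) rather than created as an ad-hoc QName:

SearchParameters sp = new SearchParameters();
sp.addStore(storeRef);
sp.setLanguage(SearchService.LANGUAGE_CMIS_ALFRESCO);
// CMIS only sees properties that belong to a defined type or aspect
sp.setQuery("SELECT * FROM my:customType WHERE my:customProp1 = 'prop1'");
ResultSet results = serviceRegistry.getSearchService().query(sp);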
This is the DAO I have created:
public Poll updatePoll(int id) {
    Session s = factory.getCurrentSession();
    Transaction t = s.beginTransaction();
    Poll poll = (Poll) s.get(Poll.class, id);
    Citizen citizen = (Citizen) s.get(Citizen.class, 1);
    List<Poll> list = citizen.getPolledList();
    boolean alreadyVoted = list.contains(poll);
    if (!alreadyVoted) {
        // bind the id as a parameter instead of comparing poll_id with itself
        Query q = s.createSQLQuery("update Poll set poll_count = poll_count + 1 where poll_id = :id");
        q.setParameter("id", id);
        q.executeUpdate();
        s.refresh(poll); // re-read so the returned entity reflects the new count
    }
    t.commit();
    return poll;
}
This is the Action created:
public String submitVote(){
ServletContext ctx = ServletActionContext.getServletContext();
ProjectDAO dao = (ProjectDAO)ctx.getAttribute("DAO");
Poll poll = dao.updatePoll(poll_id);
String flag = "error";
if (poll != null){
ServletActionContext.getRequest().getSession(true).setAttribute("POLL", poll);
flag = "voted";
}
return flag;
}
I know I have been going horribly wrong and the code I'm posting might be utter rubbish, but I hope the intent is clear, so please lend me a helping hand if possible. My project is mainly JSP (Struts 2), jQuery and MySQL 5.1, so please do not suggest PHP code as I have found in earlier answers.
The framework wraps the servlet API away from you, so you should use its features instead of writing something like
ServletActionContext.getRequest().getSession(true)
Use this instead:
Map m = ActionContext.getContext().getSession();
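Applied to the submitVote action from the question, that could look roughly like this (a sketch, keeping the same "POLL" session key):

public String submitVote() {
    ServletContext ctx = ServletActionContext.getServletContext();
    ProjectDAO dao = (ProjectDAO) ctx.getAttribute("DAO");
    Poll poll = dao.updatePoll(poll_id);
    if (poll != null) {
        // let the framework hand you the session as a plain map
        Map<String, Object> session = ActionContext.getContext().getSession();
        session.put("POLL", poll);
        return "voted";
    }
    return "error";
}

Alternatively, the action can implement org.apache.struts2.interceptor.SessionAware so the session map is injected for you.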
When executing queries on a standalone Neo4j server using the RestCypherQueryEngine, what is the best practice for retrieving a collection of nodes?
I have this code snippet running:
public DbService() {
gd = new RestGraphDatabase("http://neo4jbox:7474/db/data/");
engine = new RestCypherQueryEngine(gd.getRestAPI());
}
public String testData() {
try (Transaction tx = gd.beginTx()) {
QueryResult<Map<String, Object>> result;
result = engine.query(
"match (n:Person{username:'jomski2009'}) return n ",
null);
Iterator<Map<String, Object>> itr = result.iterator();
while (itr.hasNext()) {
Map<String, Object> item = itr.next();
log.info(item.get("n"));
}
tx.success();
return result.toString();
}
}
When I run the code, I get the following result...
services.DbService : http://neo4jbox:7474/db/data/node/177
which is a link to the node rather than the node itself. I know that if I return just a subset of the node's properties in the same query, that works well. What I'd like to know is: how do I retrieve the complete node object without necessarily specifying the properties in the query?
Thanks for your help guys.
It is just the to-string representation of a RestNode; it still has the properties. The relationships are not fetched eagerly, though; they will be fetched on demand.
I would recommend fetching primitive values over the wire with Cypher. That works best, as it minimizes the transferred data and you only get what you need.
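To illustrate both options, here is a sketch that reads the properties off the node returned in the loop above and, alternatively, returns only the needed values (the property names are assumptions):

// Option 1: the value under "n" is a RestNode, which implements org.neo4j.graphdb.Node
Node node = (Node) item.get("n");
for (String key : node.getPropertyKeys()) {
    log.info(key + " = " + node.getProperty(key));
}

// Option 2: let Cypher return only the values you actually need
result = engine.query(
        "match (n:Person {username:'jomski2009'}) return n.username as username, n.email as email",
        null);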