Neo4j Plugin Recursive - java

I'm trying to create my own function for Neo4j that recursively walks the graph and returns any nodes and edges reached through a relationship whose long property is greater than 100.
I know there is a simple Cypher query for this, but by implementing it myself I'll learn how to tackle more complex cases on my own.
Pseudocode:
1. Get all relationships of type 'TypeExample' from the node matching the given id.
2. If a relationship has a long property "Count" and Count > 100, follow it to the next node and repeat from step 1.
3. If 5 nodes deep, stop and return the list of nodes and edges (with the interface IPath).
package example;
import java.util.Iterator;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.ResourceIterator;
import org.neo4j.logging.Log;
import org.neo4j.procedure.*;
import org.neo4j.procedure.Description;
import org.neo4j.procedure.Name;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.neo4j.graphdb.index.Index;
import org.neo4j.graphdb.index.IndexManager;
public class NodeFinder {

    @Context
    public GraphDatabaseService db;

    @Context
    public Log log;

    @Procedure
    @Description("finds Nodes one step away")
    public Stream<SomeList> GetRelations(@Name("nodeId") long nodeId, @Name("depth") long depth, @Name("rel") String relType) {
        Recursive(nodeId);
        // TODO: return list of Nodes and Edges (see the sketch below)
    }

    private void Recursive(long id) {
        Node node = db.getNodeById(id); // was db.getNodeById(nodeId), which does not compile here
        Iterable<Relationship> rels = node.getRelationships();
        for (Relationship rel : rels) {
            long c = (long) rel.getProperty("Count");
            if (c > 100) {
                Recursive(rel.getEndNodeId());
            }
        }
    }
}
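For reference, here is a minimal sketch of how the procedure could be completed. This is my own assumption of the intent, not the original code: it uses the Neo4j 3.x embedded API already imported above, NodeEdgeResult is a hypothetical wrapper class (procedures must return a Stream of a class with public fields), and it additionally needs java.util.ArrayList, org.neo4j.graphdb.RelationshipType and org.neo4j.procedure.Mode.

    // Hypothetical result wrapper, nested inside NodeFinder.
    public static class NodeEdgeResult {
        public Node node;
        public Relationship relationship;
        public NodeEdgeResult(Node node, Relationship relationship) {
            this.node = node;
            this.relationship = relationship;
        }
    }

    @Procedure(name = "example.getRelations", mode = Mode.READ)
    @Description("Recursively follows relationships of the given type whose Count property is > 100, up to maxDepth hops")
    public Stream<NodeEdgeResult> getRelations(@Name("nodeId") long nodeId,
                                               @Name("depth") long maxDepth,
                                               @Name("rel") String relType) {
        List<NodeEdgeResult> results = new ArrayList<>();
        collect(db.getNodeById(nodeId), RelationshipType.withName(relType), maxDepth, results);
        return results.stream();
    }

    private void collect(Node node, RelationshipType type, long remainingDepth, List<NodeEdgeResult> results) {
        if (remainingDepth <= 0) {
            return; // stop once the requested depth is reached
        }
        for (Relationship rel : node.getRelationships(type)) {
            Object count = rel.getProperty("Count", null);
            if (count instanceof Long && (Long) count > 100) {
                Node next = rel.getOtherNode(node);
                results.add(new NodeEdgeResult(next, rel));
                collect(next, type, remainingDepth - 1, results);
            }
        }
    }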

Related

How to iterate a List of models in Java and update the values of some model elements in the list

I have an ArrayList of a model class that has multiple String variables.
The list is populated from a JdbcTemplate result set.
Now I want to iterate this list and update some of the model elements based on certain conditions.
My Model Class:
import lombok.Getter;
import lombok.Setter;
@Getter
@Setter
public class WADataModel {
    public String STATUS;
    public String AUTO_DATE;
    public String RECORD_TYPE;
    public String VENDOR_NAME;
    public String CREATED_DATE;
    public String ACTION_CODE;
    public String CITY;
    public String GROUP_NUMBER;
    public String GROUP_POLICY_NUMBER;
    public String SUBGROUP_NUMBER;
    public String SUBGROUP_POLICY_NUMBER;
    public String SYSTEM;
    public String PLAN_NUMBER;
}
My DAO Class:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.jdbc.core.BeanPropertyRowMapper;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.stereotype.Component;
@Component
public class PlanCrosswalks {

    @Autowired
    @Qualifier("nemisNamedJdbcTemplate")
    private NamedParameterJdbcTemplate nemisJdbcTemplate;

    @Autowired
    private FetchDatafFromProp fetchDatafFromProp;

    @Value("${query.state}")
    private String queryState;

    public List<WADataModel> doWACrosswalk(List<WADataModel> claimDataList) throws ApplicationExceptions {
        RowMapper<WACrosswalkModel> rowMapper = new BeanPropertyRowMapper<>(WACrosswalkModel.class);
        List<WACrosswalkModel> statusResult = new ArrayList<>();
        Map<String, String> crosswalkQueryMap = new HashMap<>();
        Map<String, String> paramMap = new HashMap<>();
        crosswalkQueryMap = fetchDatafFromProp.getDataForGeneration();
        statusResult = nemisJdbcTemplate.query(crosswalkQueryMap.get(queryState + Constants.UNDERSCORE + Constants.FETCH_SUB_GROUP_POLICY), paramMap, rowMapper);
        for (WADataModel model : claimDataList) {
            // Here I want to update claimDataList elements like SUBGROUP_POLICY_NUMBER and GROUP_POLICY_NUMBER
            // based upon some conditions while iterating the whole list.
        }
        return claimDataList;
    }
}
I want to iterate "claimDataList", check whether PLAN_NUMBER is null, and update SUBGROUP_POLICY_NUMBER accordingly based on the value of PLAN_NUMBER.
I can iterate the list of models but don't know how to update the values inside it.
Please help me update the values in "claimDataList".
Write a method to update one model.
Then you can iterate over the list of models, filter them, and call this method for the remaining models.
I would prefer a stream:
claimDataList.stream()
    .filter(model -> model.PLAN_NUMBER == null)
    .forEach(this::planNumberNull);

private void planNumberNull(WADataModel model) {
    model.SUBGROUP_POLICY_NUMBER = ...
}
for (WADataModel model : claimDataList) {
    if (model.PLAN_NUMBER == null) {
        model.SUBGROUP_POLICY_NUMBER = <>
    }
}
If I have understood correctly, you want to check that PLAN_NUMBER is not null and then fill SUBGROUP_POLICY_NUMBER.
You can do this with lambdas (available since Java 8), which keeps it concise.
claimDataList.stream()
    .filter(x -> x.getPLAN_NUMBER() != null)
    .forEach(y -> y.setSUBGROUP_POLICY_NUMBER(y.getPLAN_NUMBER()));
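For completeness, here is a minimal self-contained sketch of the update, assuming the Lombok-generated getters/setters from the model above and a hypothetical fallback value. Because the list holds references to the model objects, mutating a model through its setter is enough; there is no need to put it back into the list.

    import java.util.List;

    public class ClaimDataUpdater {

        // Updates each model in place: when PLAN_NUMBER is null, fill SUBGROUP_POLICY_NUMBER
        // with a hypothetical fallback; otherwise derive it from PLAN_NUMBER.
        public void updateClaimData(List<WADataModel> claimDataList) {
            for (WADataModel model : claimDataList) {
                if (model.getPLAN_NUMBER() == null) {
                    model.setSUBGROUP_POLICY_NUMBER("DEFAULT_SUBGROUP"); // hypothetical fallback value
                } else {
                    model.setSUBGROUP_POLICY_NUMBER(model.getPLAN_NUMBER());
                }
            }
        }
    }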

XIRR implementation in Java

I tried using https://github.com/RayDeCampo/java-xirr but in some cases it throws an exception. If anyone has already solved this issue, that would be super helpful.
Non-working example:
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.List;
import org.decampo.xirr.NewtonRaphson;
import org.decampo.xirr.Transaction;
import org.decampo.xirr.Xirr;
public class XIRR {

    private static List<Transaction> txns = new ArrayList<>();

    public static void main(String[] args) {
        // txns.add(new Transaction(-100.0, "2022-05-29"));
        // txns.add(new Transaction(-100.0, "2022-05-29"));
        txns.add(new Transaction(-300.0, "2022-05-29"));
        txns.add(new Transaction(295.47, "2022-05-31"));
        double xirr = Xirr.builder()
                .withTransactions(txns)
                .withNewtonRaphsonBuilder(NewtonRaphson.builder().withIterations(10000).withTolerance(0.000001))
                .withGuess(0.1)
                .xirr() * 100;
        System.out.println("xirr = " + xirr);
    }
}
It is throwing an exception like
Exception in thread "main" org.decampo.xirr.OverflowException: Candidate overflow: {guess=0.1, iteration=140, candidate=-Infinity, value=-14359.837609828492, derivative=-4.844899455942689E-307}
at org.decampo.xirr.NewtonRaphson$Calculation.setCandidate(NewtonRaphson.java:166)
at org.decampo.xirr.NewtonRaphson$Calculation.solve(NewtonRaphson.java:213)
at org.decampo.xirr.NewtonRaphson.inverse(NewtonRaphson.java:89)
at org.decampo.xirr.NewtonRaphson.findRoot(NewtonRaphson.java:70)
at org.decampo.xirr.NewtonRaphson$Builder.findRoot(NewtonRaphson.java:136)
at org.decampo.xirr.Xirr.xirr(Xirr.java:155)
at org.decampo.xirr.Xirr$Builder.xirr(Xirr.java:262)
at com.app.experiments.XIRR.main(XIRR.java:27)
From Excel, the correct XIRR is -0.937760641.
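No accepted fix is given in the post, but one workaround worth trying (my own assumption, not a confirmed solution): with a -300 outflow and only 295.47 returned two days later, the annualized rate is close to -100%, and Newton-Raphson starting from the guess of 0.1 can overshoot past -1 and overflow. Seeding the solver with a guess near the expected result, using the same builder API as in the question, sometimes lets it converge:

    // Same transactions as above; only the initial guess changes (a hypothetical value near the Excel result).
    double xirr = Xirr.builder()
            .withTransactions(txns)
            .withNewtonRaphsonBuilder(NewtonRaphson.builder().withIterations(10000).withTolerance(0.000001))
            .withGuess(-0.9) // start close to the expected ~ -0.9378 instead of 0.1
            .xirr();
    System.out.println("xirr = " + xirr);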

Difference between map and Flow

From reading a Google Groups post from 2016: “.map() is converted to a .via()”
src: https://groups.google.com/g/akka-user/c/EzHygZpcCHg
Are the following lines of code equivalent:
Source.repeat(json).take(3).via(mapToDtoFlow).to(printSink).run(actorSystem);
Source.repeat(json).take(3).map(x -> mapper.readValue(x, RequestDto.class)).to(printSink).run(actorSystem);
Are there scenarios when a map should be used instead of a flow?
src:
RequestDto:
import com.fasterxml.jackson.annotation.JsonFormat;
import lombok.Builder;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
import lombok.extern.jackson.Jacksonized;
import java.util.Date;
@Getter
@Setter
@Builder
@ToString
@Jacksonized
public class RequestDto {

    @JsonFormat(pattern = "yyyy-MM-dd HH:mm:sss")
    private final Date datePurchased;
}
StreamManager (contains the main method):
import akka.Done;
import akka.NotUsed;
import akka.actor.typed.ActorSystem;
import akka.actor.typed.javadsl.Behaviors;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.concurrent.CompletionStage;
public class StreamManager {

    final static ObjectMapper mapper = new ObjectMapper();

    private static final Flow<String, RequestDto, NotUsed> mapToDtoFlow = Flow.of(String.class)
            .map(input -> mapper.readValue(input, RequestDto.class))
            .log("error");

    public static void main(String[] args) {
        final ActorSystem actorSystem = ActorSystem.create(Behaviors.empty(), "actorSystem");
        final Sink<RequestDto, CompletionStage<Done>> printSink = Sink.foreach(System.out::println);
        final String json = "{\"datePurchased\":\"2022-03-03 21:32:017\"}";
        Source.repeat(json).take(3).via(mapToDtoFlow).to(printSink).run(actorSystem);
        Source.repeat(json).take(3).map(x -> mapper.readValue(x, RequestDto.class)).to(printSink).run(actorSystem);
    }
}
map is converted to a via, but not to exactly the same via you'd get from Flow.of().map().
The first would translate to a .via(Map(f)), where Map is a GraphStage which implements the map operation.
In the second case, the mapToDtoFlow (ignoring the log) would itself be (in Scala notation) Flow[String].via(Map(f)) so you'd be adding another layer of via: .via(Flow[String].via(Map(f))).
For all intents and purposes, they're the same (I suspect that the materializer, when it comes time to interpret the RunnableGraph you've built, will treat them identically).
Taking the .log into account, mapToDtoFlow is equivalent to (again in Scala notation):
Flow[String]
.via(Map(f))
.via(Log(...))
There are basically three levels of defining streams in Akka Streams, from highest level to lowest level:
the Java/Scala DSLs
the Java/Scala Graph DSLs
GraphStages
The DSLs merely specify succinct ways of building GraphStages and the fundamental way to link GraphStages with Flow shape is through the via operation.
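As a rough illustration (my own example, not from the answer): a named Flow is mainly useful when you want to reuse or test a transformation independently, while an inline .map reads more directly for a one-off step; both build the same Map stage underneath.

    // Reusable, independently testable piece of pipeline (same operators as in the question).
    Flow<String, RequestDto, NotUsed> parseDto = Flow.of(String.class)
            .map(input -> mapper.readValue(input, RequestDto.class));

    // The named flow can be plugged into several streams:
    Source.repeat(json).take(3).via(parseDto).to(printSink).run(actorSystem);
    Source.single(json).via(parseDto).to(printSink).run(actorSystem);

    // A one-off transformation is just as clear with an inline map:
    Source.repeat(json).take(3)
            .map(x -> mapper.readValue(x, RequestDto.class))
            .to(printSink).run(actorSystem);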

Multiple queries in the same view

I would like to run multiple queries and then show the results in a page such as:
https://adminlte.io/themes/v3/index.html
I created a first controller query:
package controllers;
import models.Sysuser;
import play.mvc.Controller;
import play.mvc.Result;
import play.mvc.Security;
import views.html.sitemap.index;
import javax.inject.*;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import play.libs.concurrent.HttpExecutionContext;
import static java.util.concurrent.CompletableFuture.supplyAsync;
import play.db.*;
import io.ebean.*;
import play.Logger;
import java.util.List;
import java.util.ArrayList;
import models.LocationExtractedData;
@Security.Authenticated(Secured.class)
public class SiteMap extends Controller {

    private Database db;
    private final HttpExecutionContext httpExecutionContext;
    private static final Logger.ALogger logger = Logger.of(SiteMap.class);

    @Inject
    public SiteMap(Database db, HttpExecutionContext httpExecutionContext) {
        this.db = db;
        this.httpExecutionContext = httpExecutionContext;
    }

    public CompletionStage<Result> index() {
        return SearchSomething().thenApplyAsync((List<LocationExtractedData> infos) -> {
            return ok(views.html.sitemap.index.render(Sysuser.findByUserName(request().username()), infos));
        }, httpExecutionContext.current());
    }

    public CompletionStage<List<LocationExtractedData>> SearchSomething() {
        return CompletableFuture.supplyAsync(() -> {
            return db.withConnection(connection -> {
                // Imagine this is a complex query (to be extended later)
                final String sql = "SELECT sysuser_id, role_id "
                        + "from sysuser_role "
                        + "where sysuser_id = '1' "
                        + "and role_id in ('1','2','3','4','5') ";
                final RawSql rawSql = RawSqlBuilder.parse(sql).create();
                Query<LocationExtractedData> query = Ebean.find(LocationExtractedData.class);
                query.setRawSql(rawSql);
                List<LocationExtractedData> list = query.findList();
                return list;
            });
        }, httpExecutionContext.current());
    }
}
Can you tell me how to run multiple, optimized queries at the same time for my page full of dashboards, charts and tables?
If I create multiple Ebean lists (queries), will this affect the loading of my page?
If not, what should I do?
Thank you in advance.
Typically, in an application similar to the one you have linked, you create reusable APIs following the MVC design pattern. Querying the database from the controller goes very much against that pattern.
Each API should be atomic; creating a single API that runs one query to fetch all of the data for that page is not the correct approach.
If you are looking for performance, you should get familiar with asynchronous programming. Running your APIs asynchronously allows your back end to process multiple front-end requests at the same time, greatly improving performance.
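As a rough sketch of running two queries concurrently for one page, building on the CompletionStage style the controller already uses (searchLocations and searchUsers are hypothetical query methods, and views.html.dashboard is a hypothetical template):

    // Kick off both queries concurrently, then combine the results into a single view render.
    public CompletionStage<Result> dashboard() {
        CompletionStage<List<LocationExtractedData>> locationsFuture = searchLocations();
        CompletionStage<List<Sysuser>> usersFuture = searchUsers();
        return locationsFuture.thenCombineAsync(usersFuture,
                (locations, users) -> ok(views.html.dashboard.render(locations, users)),
                httpExecutionContext.current());
    }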

Spring Boot JPA Hibernate JVM heap is not released

The following is a simplification of a more complicated setup. I describe this simple case to show the effect.
I start Spring Boot and call a method. In this method all the contents of a MySQL database table are read via
Iterable<myPojo> myPojos = myPojoRepository.findAll();
Afterwards I leave this method.
After finishing this method I get the message
Started Application in 70.893 seconds (JVM running for 72.899)
So Spring Boot is idling afterwards.
But still the memory is not released.
How can I get the JVM heap to be released after the application has gone idle?
This is the VisualVM heap histogram after the application has gone idle:
Class name                                                            Size in bytes (%)      Instances (%)
char[]                                                                1.080.623.712 (21.0%)  24.040.578 (23.4%)
byte[]                                                                1.034.070.352 (20.1%)  17.280.824 (16.8%)
java.lang.String                                                        768.935.872 (14.9%)  24.029.246 (23.4%)
java.lang.Object[]                                                      556.181.104 (10.8%)   5.320.276 (5.1%)
org.hibernate.engine.internal.MutableEntityEntry                        231.287.232 (4.5%)    2.628.264 (2.5%)
org.hibernate.engine.spi.EntityKey                                      224.752.040 (4.3%)    5.618.801 (5.4%)
byte[][]                                                                212.407.904 (4.1%)    3.318.832 (3.2%)
hello.web.model.MyPojo                                                  185.852.968 (3.6%)    3.318.803 (3.2%)
java.util.HashMap$Node                                                  145.238.976 (2.8%)    3.025.812 (2.9%)
com.mysql.jdbc.ByteArrayRow                                             132.752.120 (2.5%)    3.318.803 (3.2%)
org.hibernate.engine.internal.EntityEntryContext$ManagedEntityImpl     126.156.720 (2.4%)    2.628.265 (2.5%)
hello.web.model.MyPojoCompoundKey                                       120.376.680 (2.3%)    3.009.417 (2.9%)
java.util.HashMap$Node[]                                                108.307.328 (2.1%)       16.558 (0.0%)
java.lang.Float                                                          79.651.320 (1.5%)    3.318.805 (3.2%)
int[]                                                                    41.885.056 (0.8%)       54.511 (0.0%)
java.util.LinkedHashMap$Entry                                            15.519.616 (0.3%)      242.494 (0.2%)
java.io.File                                                             11.323.392 (0.2%)      235.904 (0.2%)
org.springframework.boot.devtools.filewatch.FileSnapshot                 10.550.400 (0.2%)      219.800 (0.2%)
java.lang.String[]                                                        8.018.808 (0.1%)       52.031 (0.0%)
java.lang.reflect.Method                                                  6.015.040 (0.1%)       37.594 (0.0%)
java.io.File[]                                                            2.283.528 (0.0%)       16.746 (0.0%)
My effective POM shows that Hibernate 5.0.9.Final is used.
The table my_pojo contains 3.3 million entries.
MyPojoRepository:
package hello.web.model;
import com.querydsl.core.types.Predicate;
import org.springframework.data.querydsl.QueryDslPredicateExecutor;
import org.springframework.data.repository.PagingAndSortingRepository;
import java.util.List;
public interface MyPojoRepository
extends PagingAndSortingRepository<MyPojo, Long>,
QueryDslPredicateExecutor<MyPojo> {
List<MyPojo> findAll(Predicate predicate);
}
MyPojo:
package hello.web.model;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import org.apache.commons.beanutils.BeanComparator;
import org.apache.commons.collections.comparators.ComparatorChain;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.IdClass;
import java.io.Serializable;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
@Data
@Entity
@Builder
@AllArgsConstructor
@IdClass(MyPojoCompoundKey.class)
public class MyPojo implements Serializable, Comparable<MyPojo> {

    public MyPojo() { }

    @Id
    private String myId1;

    @Id
    private String myId2;

    @Id
    private String myId3;

    private Float myId4;

    private String myId5;

    @Override
    public int compareTo(MyPojo o) {
        return this.getMyId3().compareTo(o.getMyId3());
    }

    protected boolean canEqual(Object other) {
        return other instanceof MyPojo;
    }

    public static void sortByMyId1MyId3(List<MyPojo> myPojos) {
        ComparatorChain chain = new ComparatorChain(Arrays.asList(
                new BeanComparator("myId1"),
                new BeanComparator("myId3")
        ));
        Collections.sort(myPojos, chain);
    }
}
myId1-3 and myId5 have a length of 10 characters on average.
So again: how can I get the JVM heap to be released after the application has gone idle?
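There is no answer in the post, but one common approach (my own suggestion, not from the original question) is to avoid materializing all 3.3 million rows in a single persistence context: findAll() keeps every loaded entity managed by Hibernate, which is what the MutableEntityEntry / EntityKey / ManagedEntityImpl entries in the histogram suggest is still reachable. A sketch that reads the table page by page and detaches the entities between pages, using the PagingAndSortingRepository the repository already extends (processPage is a hypothetical callback), could look like this:

    // Reads the table in pages instead of all at once, clearing the persistence
    // context after each page so Hibernate can release the managed entities.
    @Service
    public class MyPojoBatchReader {

        @Autowired
        private MyPojoRepository myPojoRepository;

        @PersistenceContext
        private EntityManager entityManager;

        @Transactional(readOnly = true)
        public void processAll(int pageSize) {
            Pageable pageable = PageRequest.of(0, pageSize); // new PageRequest(0, pageSize) on older Spring Data
            Page<MyPojo> page;
            do {
                page = myPojoRepository.findAll(pageable);
                processPage(page.getContent()); // hypothetical per-page processing
                entityManager.clear();          // detach the entities loaded for this page
                pageable = pageable.next();
            } while (page.hasNext());
        }

        private void processPage(List<MyPojo> pojos) {
            // placeholder for whatever work needs the data
        }
    }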
