I tried using https://github.com/RayDeCampo/java-xirr, but in some cases it throws an exception. If anyone has already solved this issue, any help would be appreciated.
Non-working example:
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.List;
import org.decampo.xirr.NewtonRaphson;
import org.decampo.xirr.Transaction;
import org.decampo.xirr.Xirr;
public class XIRR {

    private static List<Transaction> txns = new ArrayList<>();

    public static void main(String[] args) {
        //txns.add(new Transaction(-100.0, "2022-05-29"));
        //txns.add(new Transaction(-100.0, "2022-05-29"));
        txns.add(new Transaction(-300.0, "2022-05-29"));
        txns.add(new Transaction(295.47, "2022-05-31"));

        double xirr = Xirr.builder()
                .withTransactions(txns)
                .withNewtonRaphsonBuilder(NewtonRaphson.builder().withIterations(10000).withTolerance(0.000001))
                .withGuess(0.1)
                .xirr() * 100;

        System.out.println("xirr = " + xirr);
    }
}
It throws an exception like this:
Exception in thread "main" org.decampo.xirr.OverflowException: Candidate overflow: {guess=0.1, iteration=140, candidate=-Infinity, value=-14359.837609828492, derivative=-4.844899455942689E-307}
at org.decampo.xirr.NewtonRaphson$Calculation.setCandidate(NewtonRaphson.java:166)
at org.decampo.xirr.NewtonRaphson$Calculation.solve(NewtonRaphson.java:213)
at org.decampo.xirr.NewtonRaphson.inverse(NewtonRaphson.java:89)
at org.decampo.xirr.NewtonRaphson.findRoot(NewtonRaphson.java:70)
at org.decampo.xirr.NewtonRaphson$Builder.findRoot(NewtonRaphson.java:136)
at org.decampo.xirr.Xirr.xirr(Xirr.java:155)
at org.decampo.xirr.Xirr$Builder.xirr(Xirr.java:262)
at com.app.experiments.XIRR.main(XIRR.java:27)
From Excel, the correct XIRR is -0.937760641.
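Newton-Raphson can diverge when the root is far from the starting guess, and the stack trace shows the candidate running off to -Infinity from a guess of 0.1. Since Excel reports roughly -0.94, one thing worth trying is seeding the solver with a guess near that value. A minimal sketch, assuming the same builder API as in the code above and that -0.9 is close enough to converge:

// Hedged sketch: start the solver near the expected result (about -0.94 per Excel)
// instead of 0.1, which diverges for this cash flow.
double xirr = Xirr.builder()
        .withTransactions(txns)
        .withGuess(-0.9)   // assumption: a starting guess close to the Excel value
        .xirr() * 100;
System.out.println("xirr = " + xirr);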
From reading a Google Groups post from 2016: “.map() is converted to a .via()”
Source: https://groups.google.com/g/akka-user/c/EzHygZpcCHg
Are the following lines of code equivalent?
Source.repeat(json).take(3).via(mapToDtoFlow).to(printSink).run(actorSystem);
Source.repeat(json).take(3).map(x -> mapper.readValue(x, RequestDto.class)).to(printSink).run(actorSystem);
Are there scenarios where a map should be used instead of a Flow?
Source code:
RequestDto:
import com.fasterxml.jackson.annotation.JsonFormat;
import lombok.Builder;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
import lombok.extern.jackson.Jacksonized;
import java.util.Date;
@Getter
@Setter
@Builder
@ToString
@Jacksonized
public class RequestDto {

    @JsonFormat(pattern = "yyyy-MM-dd HH:mm:sss")
    private final Date datePurchased;
}
StreamManager (contains the main method):
import akka.Done;
import akka.NotUsed;
import akka.actor.typed.ActorSystem;
import akka.actor.typed.javadsl.Behaviors;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.concurrent.CompletionStage;
public class StreamManager {

    static final ObjectMapper mapper = new ObjectMapper();

    private static final Flow<String, RequestDto, NotUsed> mapToDtoFlow = Flow.of(String.class)
            .map(input -> mapper.readValue(input, RequestDto.class))
            .log("error");

    public static void main(String[] args) {
        final ActorSystem<Void> actorSystem = ActorSystem.create(Behaviors.empty(), "actorSystem");
        final Sink<RequestDto, CompletionStage<Done>> printSink = Sink.foreach(System.out::println);
        final String json = "{\"datePurchased\":\"2022-03-03 21:32:017\"}";

        Source.repeat(json).take(3).via(mapToDtoFlow).to(printSink).run(actorSystem);
        Source.repeat(json).take(3).map(x -> mapper.readValue(x, RequestDto.class)).to(printSink).run(actorSystem);
    }
}
map is converted to a via, but not to exactly the same via you'd get from Flow.of().map().
The first would translate to a .via(Map(f)), where Map is a GraphStage which implements the map operation.
In the second case, the mapToDtoFlow (ignoring the log) would itself be (in Scala notation) Flow[String].via(Map(f)) so you'd be adding another layer of via: .via(Flow[String].via(Map(f))).
For all intents and purposes, they're the same (I suspect that the materializer, when it comes time to interpret the RunnableGraph you've built, will treat them identically).
Taking the .log into account, mapToDtoFlow is equivalent to (again in Scala notation):
Flow[String]
.via(Map(f))
.via(Log(...))
There are basically three levels of defining streams in Akka Streams, from highest level to lowest level:
the Java/Scala DSLs
the Java/Scala Graph DSLs
GraphStages
The DSLs merely provide succinct ways of building GraphStages, and the fundamental way to link GraphStages with Flow shape is the via operation.
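To make the "map vs. Flow" part of the question concrete, here is a small sketch reusing the mapper, json, printSink and actorSystem names from the question: since both forms build the same Map stage, a named Flow mainly pays off when you want to reuse it or compose more operators onto it (logging, buffering, and so on).

// A named Flow is a reusable, composable piece; the inline map builds the same
// Map stage each time it is written out.
Flow<String, RequestDto, NotUsed> parseDto =
        Flow.of(String.class).map(s -> mapper.readValue(s, RequestDto.class));

// reuse the same stage in two different streams
Source.repeat(json).take(3).via(parseDto).to(printSink).run(actorSystem);
Source.single(json).via(parseDto).to(printSink).run(actorSystem);

// inline form: the same Map stage, just not shareable by name
Source.repeat(json).take(3)
        .map(s -> mapper.readValue(s, RequestDto.class))
        .to(printSink)
        .run(actorSystem);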
I'm trying to create my own procedure for Neo4j that recursively walks the graph and returns any nodes and edges reached over an edge whose long property is greater than 100.
I know there is a simple Cypher query for it, but by doing this myself I can learn how to proceed with more complex things on my own.
Pseudocode:
1. Get all relationships of type 'TypeExample' from the node matching the given id.
2. If a relationship has a long property "Count" and Count > 100, go to 1 for its end node.
3. If 5 nodes deep, stop; return the list of nodes and edges with the IPath interface.
package example;

import java.util.stream.Stream;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.logging.Log;
import org.neo4j.procedure.Context;
import org.neo4j.procedure.Description;
import org.neo4j.procedure.Name;
import org.neo4j.procedure.Procedure;

public class NodeFinder {

    @Context
    public GraphDatabaseService db;

    @Context
    public Log log;

    @Procedure
    @Description("finds Nodes one step away")
    public Stream<SomeList> GetRelations(@Name("nodeId") long nodeId, @Name("depth") long depth, @Name("rel") String relType) {
        Recursive(nodeId);
        // return list of Nodes and Edges -- SomeList is a placeholder; this is the part I don't know how to do
    }

    private void Recursive(long id) {
        Node node = db.getNodeById(id);
        Iterable<Relationship> rels = node.getRelationships();
        for (Relationship rel : rels) {
            long c = (long) rel.getProperty("Count");
            if (c > 100) {
                Recursive(rel.getEndNodeId());
            }
        }
    }
}
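A minimal sketch of one way to fill in the missing pieces, assuming the Neo4j 3.x embedded API that the code above already uses (GraphDatabaseService injected via @Context). HitResult, getRelations and collect are names I made up, the result is streamed as node/relationship pairs rather than an IPath, and it additionally needs imports for java.util.ArrayList, java.util.List, org.neo4j.graphdb.Direction and org.neo4j.graphdb.RelationshipType:

// Result wrapper: the procedure streams back one row per object with public fields.
public static class HitResult {
    public Node node;
    public Relationship relationship;

    public HitResult(Node node, Relationship relationship) {
        this.node = node;
        this.relationship = relationship;
    }
}

@Procedure
@Description("finds nodes reachable over relationships whose Count > 100, up to 5 hops")
public Stream<HitResult> getRelations(@Name("nodeId") long nodeId, @Name("rel") String relType) {
    List<HitResult> hits = new ArrayList<>();
    collect(db.getNodeById(nodeId), relType, 5, hits);
    return hits.stream();
}

private void collect(Node node, String relType, int remainingDepth, List<HitResult> hits) {
    if (remainingDepth == 0) {
        return; // 5 nodes deep: stop
    }
    for (Relationship rel : node.getRelationships(RelationshipType.withName(relType), Direction.OUTGOING)) {
        // follow only relationships whose Count property exceeds 100
        if (rel.hasProperty("Count") && (long) rel.getProperty("Count") > 100) {
            Node end = rel.getEndNode();
            hits.add(new HitResult(end, rel));
            collect(end, relType, remainingDepth - 1, hits);
        }
    }
}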
The following is a simplification of a more complicated setup. I describe this simple case to show the effect.
I start Spring Boot and call a method. In this method all the contents of a MySQL database table are read via
Iterable<MyPojo> myPojos = myPojoRepository.findAll();
Afterwards I leave this method.
After finishing this method I get the message
Started Application in 70.893 seconds (JVM running for 72.899)
So Spring Boot is idling afterwards.
But still the memory is not released.
How can I ensure that the JVM heap is released once the application is idle?
This is the class histogram from VisualVM after the application has been idle (each line shows the class, its size in bytes with the share of the heap, and the instance count with its share of all instances):
char[] 1.080.623.712 (21.0%) 24.040.578 (23.4%)
byte[] 1.034.070.352 (20.1%) 17.280.824 (16.8%)
java.lang.String 768.935.872 (14.9%) 24.029.246 (23.4%)
java.lang.Object[] 556.181.104 (10.8%) 5.320.276 (5.1%)
org.hibernate.engine.internal.MutableEntityEntry 231.287.232 (4.5%) 2.628.264 (2.5%)
org.hibernate.engine.spi.EntityKey 224.752.040 (4.3%) 5.618.801 (5.4%)
byte[][] 212.407.904 (4.1%) 3.318.832 (3.2%)
hello.web.model.MyPojo 185.852.968 (3.6%) 3.318.803 (3.2%)
java.util.HashMap$Node 145.238.976 (2.8%) 3.025.812 (2.9%)
com.mysql.jdbc.ByteArrayRow 132.752.120 (2.5%) 3.318.803 (3.2%)
org.hibernate.engine.internal.EntityEntryContext$ManagedEntityImpl 126.156.720 (2.4%) 2.628.265 (2.5%)
hello.web.model.MyPojoCompoundKey 120.376.680 (2.3%) 3.009.417 (2.9%)
java.util.HashMap$Node[] 108.307.328 (2.1%) 16.558 (0.0%)
java.lang.Float 79.651.320 (1.5%) 3.318.805 (3.2%)
int[] 41.885.056 (0.8%) 54.511 (0.0%)
java.util.LinkedHashMap$Entry 15.519.616 (0.3%) 242.494 (0.2%)
java.io.File 11.323.392 (0.2%) 235.904 (0.2%)
org.springframework.boot.devtools.filewatch.FileSnapshot 10.550.400 (0.2%) 219.800 (0.2%)
java.lang.String[] 8.018.808 (0.1%) 52.031 (0.0%)
java.lang.reflect.Method 6.015.040 (0.1%) 37.594 (0.0%)
java.io.File[] 2.283.528 (0.0%) 16.746 (0.0%)
My effective POM shows that Hibernate 5.0.9.Final is used.
The table my_pojo contains 3.3 million entries.
MyPojoRepository:
package hello.web.model;
import com.querydsl.core.types.Predicate;
import org.springframework.data.querydsl.QueryDslPredicateExecutor;
import org.springframework.data.repository.PagingAndSortingRepository;
import java.util.List;
public interface MyPojoRepository
extends PagingAndSortingRepository<MyPojo, Long>,
QueryDslPredicateExecutor<MyPojo> {
List<MyPojo> findAll(Predicate predicate);
}
MyPojo:
package hello.web.model;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import org.apache.commons.beanutils.BeanComparator;
import org.apache.commons.collections.comparators.ComparatorChain;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.IdClass;
import java.io.Serializable;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
@Data
@Entity
@Builder
@AllArgsConstructor
@IdClass(MyPojoCompoundKey.class)
public class MyPojo implements Serializable, Comparable<MyPojo> {

    public MyPojo() { }

    @Id
    private String myId1;

    @Id
    private String myId2;

    @Id
    private String myId3;

    private Float myId4;

    private String myId5;

    @Override
    public int compareTo(MyPojo o) {
        return this.getMyId3().compareTo(o.getMyId3());
    }

    protected boolean canEqual(Object other) {
        return other instanceof MyPojo;
    }

    public static void sortByMyId1MyId3(List<MyPojo> myPojos) {
        ComparatorChain chain = new ComparatorChain(Arrays.asList(
                new BeanComparator("myId1"),
                new BeanComparator("myId3")
        ));
        Collections.sort(myPojos, chain);
    }
}
myId1-myId3 and myId5 have an average length of 10 characters.
So again:
How can I ensure that the JVM heap is released once the application is idle?
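One hedged sketch of how the retained heap could be kept small, assuming the pre-2.0 Spring Data API implied by QueryDslPredicateExecutor: page through the table instead of materializing all 3.3 million entities at once, so each chunk becomes unreachable (and collectable) once it has been processed. processChunk is a hypothetical placeholder for whatever the method does with the rows. Note that even then the JVM may keep already-committed heap reserved rather than handing it back to the operating system, so the process size can stay high while the used heap shown in VisualVM drops.

// Hedged sketch (imports assumed: org.springframework.data.domain.Page,
// PageRequest, Pageable). 10_000 is an arbitrary page size.
Pageable pageable = new PageRequest(0, 10_000);
Page<MyPojo> page;
do {
    page = myPojoRepository.findAll(pageable);
    processChunk(page.getContent());   // hypothetical per-chunk processing
    pageable = pageable.next();
} while (page.hasNext());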
I am digging into Jackson 2 and I want to know where and how the getter method name gets converted into a property name.
I have tried:
PropertyName foo = new PropertyName("getKarli");
System.out.println(foo.getSimpleName());
I have also found BeanProperty.Std, but it has a lot of weird constructors. The API is bigger than expected :-) Is there a Jackson class and method where I can just pass the method and get back the property name used in the JSON?
EDIT:
I have also tried the following, but it gives me a NullPointerException:
import com.fasterxml.jackson.databind.BeanProperty;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.introspect.Annotated;
import com.fasterxml.jackson.databind.introspect.AnnotatedMethod;

public class Test {

    public String getKarli() {
        return null;
    }

    public static void main(String[] a) throws Exception {
        Annotated aa = new AnnotatedMethod(Test.class.getMethod("getKarli"), null, null);
        System.out.println(
                new ObjectMapper().getSerializationConfig().getAnnotationIntrospector().findNameForSerialization(aa)
        );
        // new BeanProperty.Std()
    }
}
Found it.
String name = BeanUtil.okNameForRegularGetter(p, p.getName(), true);
if(name == null) name = BeanUtil.okNameForIsGetter(p, p.getName(), true);
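As a cross-check, the same getter-to-property-name mapping can also be observed through Jackson's public bean introspection instead of calling BeanUtil directly. A minimal sketch; IntrospectExample is a name I made up, and Test is the class with getKarli() from above:

import com.fasterxml.jackson.databind.BeanDescription;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.introspect.BeanPropertyDefinition;

public class IntrospectExample {
    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();
        JavaType type = mapper.getTypeFactory().constructType(Test.class);
        BeanDescription description = mapper.getSerializationConfig().introspect(type);

        // Print each getter method name next to the JSON property name Jackson derives from it.
        for (BeanPropertyDefinition property : description.findProperties()) {
            if (property.hasGetter()) {
                System.out.println(property.getGetter().getName() + " -> " + property.getName());
                // expected output: getKarli -> karli
            }
        }
    }
}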
I am running my code with the Mockito framework. Mockito is creating a mocked object for one implementation, but apparently not for the other object, and because of that a NullPointerException is thrown. Here are my code and the output:
package com.sohi;
import java.io.IOException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
public class HbaseExample {

    private HTablePool pool;

    private static final String HTABLE_NAME = "table1";

    public String getValue(String rowKey, String columnFamily, String columnName) throws IOException {
        HTableInterface table = pool.getTable(HTABLE_NAME);
        Get get = new Get(Bytes.toBytes(rowKey)).addColumn(Bytes.toBytes(columnFamily), Bytes.toBytes(columnName));
        System.out.println("Is table Null ? " + (table == null));
        Result result = table.get(get);
        System.out.println("is result null ? " + (result == null));
        byte[] val = result.value();
        return Bytes.toString(val);
    }
}
My Mockito test class is:
import static org.junit.Assert.*;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.runners.MockitoJUnitRunner;
import com.sohi.HbaseExample;
@RunWith(MockitoJUnitRunner.class)
public class HbaseExampleTest {

    @Mock
    HTablePool pool;

    @Mock
    HTable hTable;

    @Mock
    Result result;

    @InjectMocks
    HbaseExample hbase = new HbaseExample();

    private static final String HTABLE_NAME = "table1";
    private static final String ROW_KEY = "k1";
    private static final String COLUMN_FAMILY = "col1";
    private static final String COLUMN_NAME = "c1";
    private static final String CELL_VALUE = "v1";

    @Test
    public void test1() throws Exception {
        Get get1 = new Get(Bytes.toBytes(ROW_KEY)).addColumn(Bytes.toBytes(COLUMN_FAMILY), Bytes.toBytes(COLUMN_NAME));
        Mockito.when(pool.getTable(HTABLE_NAME)).thenReturn(hTable);
        Mockito.when(hTable.get(get1)).thenReturn(result);
        Mockito.when(result.value()).thenReturn(Bytes.toBytes(CELL_VALUE));
        String str = hbase.getValue(ROW_KEY, COLUMN_FAMILY, COLUMN_NAME);
        assertEquals(str, CELL_VALUE);
    }
}
The output is:
Is table Null ? false
is result null ? true
It also throws a NullPointerException at result.value(); only the table object is getting mocked.
The problem is here:
Mockito.when(hTable.get(get1)).thenReturn(result);
This does not match your actual call, because your get1 is not equal to the Get object that is actually passed. (It looks the same, but Get does not override equals() and so uses the default behaviour of treating any two different objects as being unequal.)
I suggest that you use an ArgumentCaptor to capture the Get object and add assertions to verify that the correct information is present. (I think this is a better way to write this sort of test anyway: it keeps all the assertions together and leads to better error messages if you pass the wrong thing.)
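A minimal sketch of that suggestion against the test class from the question; it additionally needs imports for org.mockito.ArgumentCaptor and the static org.junit.Assert.assertArrayEquals:

@Test
public void test1() throws Exception {
    Mockito.when(pool.getTable(HTABLE_NAME)).thenReturn(hTable);
    // Use a matcher so the stub fires for whatever Get the production code builds.
    Mockito.when(hTable.get(Mockito.any(Get.class))).thenReturn(result);
    Mockito.when(result.value()).thenReturn(Bytes.toBytes(CELL_VALUE));

    String str = hbase.getValue(ROW_KEY, COLUMN_FAMILY, COLUMN_NAME);
    assertEquals(CELL_VALUE, str);

    // Capture the Get that was actually passed and assert on its contents.
    ArgumentCaptor<Get> captor = ArgumentCaptor.forClass(Get.class);
    Mockito.verify(hTable).get(captor.capture());
    assertArrayEquals(Bytes.toBytes(ROW_KEY), captor.getValue().getRow());
}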