TL;DR: How do I implement this function?
public static <T, R> Function<T, R> cachedRecursive(final BiFunction<T, Function<T,R>, R> bifunc) {
}
I need to somehow extract the second argument from the BiFunction so I can return a proper result for the function.
This project is for learning purposes, but I'm stuck on the last part of my task.
The first part of the task is to create a Cache class extending LinkedHashMap; this is my implementation:
public class Cache<K, V> extends LinkedHashMap<K, V> {

    private static int MaxSize;

    public Cache(int maxSize) {
        super(maxSize, 1f, false);
        MaxSize = maxSize;
    }

    public Cache() {
        super();
    }

    public int getMaximalCacheSize() {
        return MaxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MaxSize;
    }
}
The second part is to create a class to which the function definitions will be added:
public class FunctionCache {

    private static class Pair<T, U> {
        private T stored_t;
        private U stored_u;

        public Pair(T t, U u) {
            stored_t = t;
            stored_u = u;
        }

        public boolean equals(Object t) {
            if (t == this) {
                return true;
            }
            return t == stored_t;
        }

        public int hashCode() {
            return stored_t.hashCode();
        }

        public T get_first() {
            return stored_t;
        }

        public U get_second() {
            return stored_u;
        }
    }

    private final static int DEFAULT_CACHE_SIZE = 10000;

    public static <T, R> Function<T, R> cached(final Function<T, R> func, int maximalCacheSize) {
        Cache<T, R> cache = new Cache<T, R>(maximalCacheSize);
        return input -> cache.computeIfAbsent(input, func);
    }

    public static <T, R> Function<T, R> cached(final Function<T, R> func) {
        Cache<T, R> cache = new Cache<T, R>(DEFAULT_CACHE_SIZE);
        return input -> cache.computeIfAbsent(input, func);
    }

    public static <T, U, R> BiFunction<T, U, R> cached(BiFunction<T, U, R> bifunc, int maximalCacheSize) {
        Cache<T, R> cache = new Cache<T, R>(maximalCacheSize);
        return (t, u) -> {
            Pair<T, U> pairKey = new Pair<T, U>(t, u);
            Function<Pair<T, U>, R> something = input -> {
                return bifunc.apply(input.get_first(), input.get_second());
            };
            if (!cache.containsKey(pairKey.get_first())) {
                R result = something.apply(pairKey);
                cache.put(pairKey.get_first(), result);
                return result;
            } else {
                return cache.get(pairKey.get_first());
            }
        };
    }

    public static <T, U, R> BiFunction<T, U, R> cached(BiFunction<T, U, R> bifunc) {
        Cache<T, R> cache = new Cache<T, R>(DEFAULT_CACHE_SIZE);
        return (t, u) -> {
            Pair<T, U> pairKey = new Pair<T, U>(t, u);
            Function<Pair<T, U>, R> something = input -> {
                return bifunc.apply(input.get_first(), input.get_second());
            };
            if (!cache.containsKey(pairKey.get_first())) {
                R result = something.apply(pairKey);
                cache.put(pairKey.get_first(), result);
                return result;
            } else {
                return cache.get(pairKey.get_first());
            }
        };
    }

    public static <T, R> Function<T, R> cachedRecursive(final BiFunction<T, Function<T, R>, R> bifunc) {
    }
}
This is my problem:
public static <T, R> Function<T, R> cachedRecursive(final BiFunction<T, Function<T,R>, R> bifunc) {
}
I have absolutely no idea how to implement the cachedRecursive function. The previous functions work perfectly with a simple Fibonacci test; however, the goal of this task is to implement the cachedRecursive function, which takes a BiFunction whose first argument is the input and whose second argument is a function. Just to complete the code, this is the main class I used for testing:
public class cachedFunction extends FunctionCache {
    public static void main(String[] args) {
        @SuppressWarnings({ "rawtypes", "unchecked" })
        BiFunction<BigInteger, BiFunction, BigInteger> fibHelper = cached((n, f) -> {
            if (n.compareTo(BigInteger.TWO) <= 0) return BigInteger.ONE;
            return ((BigInteger) (f.apply(n.subtract(BigInteger.ONE), f)))
                    .add((BigInteger) f.apply(n.subtract(BigInteger.TWO), f));
        }, 50000);

        Function<BigInteger, BigInteger> fib = cached((n) -> fibHelper.apply(n, fibHelper));
        System.out.println(fib.apply(BigInteger.valueOf(1000L)));
    }
}
There are many drawbacks and mistakes in your code:
static size variables shared across different cache instances (therefore breaking it);
code duplication;
incorrect equals/hashCode contract implementation;
suppressing what should be fixed rather than suppressed;
the code is overly bloated;
and some minor ones (like lower-cased names containing underscores, etc.).
If you don't mind, I'll simplify it:
final class Functions {

    private Functions() {
    }

    // memoize a simple "unknown" function -- simply delegates to a private encapsulated method
    static <T, R> Function<T, R> memoize(final Function<? super T, ? extends R> f, final int maxSize) {
        return createCacheFunction(f, maxSize);
    }

    // memoize a recursive function
    // note that the bi-function can be converted to a unary function and vice versa
    static <T, R> Function<T, R> memoize(final BiFunction<? super T, ? super Function<? super T, ? extends R>, ? extends R> f, final int maxSize) {
        final Function<UnaryR<T, Function<T, R>>, R> memoizedF = memoize(unaryR -> f.apply(unaryR.t, unaryR.r), maxSize);
        return new Function<T, R>() {
            @Override
            public R apply(final T t) {
                // this is the "magic"
                return memoizedF.apply(new UnaryR<>(t, this));
            }
        };
    }

    private static <T, R> Function<T, R> createCacheFunction(final Function<? super T, ? extends R> f, final int maxSize) {
        final Map<T, R> cache = new LinkedHashMap<T, R>(maxSize, 1F, false) {
            @Override
            protected boolean removeEldestEntry(final Map.Entry<T, R> eldest) {
                return size() > maxSize;
            }
        };
        return t -> cache.computeIfAbsent(t, f);
    }

    // these Lombok annotations generate a proper `equals` and `hashCode`, and a to-string implementation to simplify debugging
    @EqualsAndHashCode
    @ToString
    private static final class UnaryR<T, R> {

        @EqualsAndHashCode.Include
        private final T t;

        @EqualsAndHashCode.Exclude
        private final R r;

        private UnaryR(final T t, final R r) {
            this.t = t;
            this.r = r;
        }
    }
}
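For completeness, a plain usage sketch of the recursive memoize, without the mocking machinery of the test below (hedged: it assumes the Functions class above is on the classpath together with Lombok):

    // each distinct argument is computed exactly once, then served from the cache
    Function<BigInteger, BigInteger> fib = Functions.memoize(
            (n, self) -> n.compareTo(BigInteger.valueOf(2)) <= 0
                    ? BigInteger.ONE
                    : self.apply(n.subtract(BigInteger.ONE))
                            .add(self.apply(n.subtract(BigInteger.valueOf(2)))),
            10_000);
    System.out.println(fib.apply(BigInteger.valueOf(1000)));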
And the test, which checks both the results and the memoization contract ("no recalculation if memoized"):
public final class FunctionsTest {

    @Test
    public void testMemoizeRecursive() {
        final BiFunction<BigInteger, Function<? super BigInteger, ? extends BigInteger>, BigInteger> fib =
                (n, f) -> n.compareTo(BigInteger.valueOf(2)) <= 0
                        ? BigInteger.ONE
                        : f.apply(n.subtract(BigInteger.ONE)).add(f.apply(n.subtract(BigInteger.valueOf(2))));
        @SuppressWarnings("unchecked")
        final BiFunction<BigInteger, Function<? super BigInteger, ? extends BigInteger>, BigInteger> mockedFib =
                Mockito.mock(BiFunction.class, AdditionalAnswers.delegatesTo(fib));
        final Function<BigInteger, BigInteger> memoizedFib = Functions.memoize(mockedFib, 1000);
        final BigInteger memoizedResult = memoizedFib.apply(BigInteger.valueOf(120));
        Mockito.verify(mockedFib, Mockito.times(120))
                .apply(Matchers.any(), Matchers.any());
        Assertions.assertEquals("5358359254990966640871840", memoizedResult.toString());
        Assertions.assertEquals(memoizedResult, memoizedFib.apply(BigInteger.valueOf(120)));
        Mockito.verifyNoMoreInteractions(mockedFib);
    }
}
Problem
I am writing a Result type in Java, and I have found a need for it to have a method that performs an operation which may fail, and then encapsulates the value or exception in a new Result object.
I had hoped this would work:
@FunctionalInterface
public interface ThrowingSupplier<R, E extends Throwable>
{
    R get() throws E;
}

public class Result<E extends Throwable, V>
{
    ...

    public static <E extends Throwable, V> Result<E, V> of(ThrowingSupplier<V, E> v)
    {
        try
        {
            return value(v.get());
        }
        catch(E e)
        {
            return error(e);
        }
    }

    ...
}
But Java cannot catch an exception defined by a type parameter.
I have also tried using instanceof, but that also cannot be used for generics. Is there any way I can implement this method?
Definitions
This is my result type before the addition of the of method. It's intended to be similar to both Haskell's Either and Rust's Result, while also having a meaningful bind operation:
public class Result<E extends Throwable, V>
{
    private Either<E, V> value;

    private Result(Either<E, V> value)
    {
        this.value = value;
    }

    public <T> T match(Function<? super E, ? extends T> ef, Function<? super V, ? extends T> vf)
    {
        return value.match(ef, vf);
    }

    public void match(Consumer<? super E> ef, Consumer<? super V> vf)
    {
        value.match(ef, vf);
    }

    /**
     * Mirror of Haskell's monadic (>>=)
     */
    public <T> Result<E, T> bind(Function<? super V, Result<? extends E, ? extends T>> f)
    {
        return match(
            (E e) -> cast(error(e)),
            (V v) -> cast(f.apply(v))
        );
    }

    /**
     * Mirror of Haskell's monadic (>>) or applicative (*>)
     */
    public <T> Result<E, T> then(Supplier<Result<? extends E, ? extends T>> f)
    {
        return bind((__) -> f.get());
    }

    /**
     * Mirror of Haskell's applicative (<*)
     */
    public Result<E, V> peek(Function<? super V, Result<? extends E, ?>> f)
    {
        return bind(v -> f.apply(v).then(() -> value(v)));
    }

    public <T> Result<E, T> map(Function<? super V, ? extends T> f)
    {
        return match(
            (E e) -> error(e),
            (V v) -> value(f.apply(v))
        );
    }

    public static <E extends Throwable, V> Result<E, V> error(E e)
    {
        return new Result<>(Either.left(e));
    }

    public static <E extends Throwable, V> Result<E, V> value(V v)
    {
        return new Result<>(Either.right(v));
    }

    /**
     * If the result is a value, return it.
     * If it is an exception, throw it.
     *
     * @return the contained value
     * @throws E the contained exception
     */
    public V get() throws E
    {
        boolean has = match(
            e -> false,
            v -> true
        );
        if (has)
        {
            return value.fromRight(null);
        }
        else
        {
            throw value.fromLeft(null);
        }
    }

    /**
     * Upcast the Result's type parameters
     */
    private static <E extends Throwable, V> Result<E, V> cast(Result<? extends E, ? extends V> r)
    {
        return r.match(
            (E e) -> error(e),
            (V v) -> value(v)
        );
    }
}
And the Either type, designed to closely mirror Haskell's Either:
/**
 * A container for a disjunction of two possible types.
 * By convention, the Left constructor is used to hold an error value and the Right constructor is used to hold a correct value.
 * @param <L> The left alternative type
 * @param <R> The right alternative type
 */
public abstract class Either<L, R>
{
    public abstract <T> T match(Function<? super L, ? extends T> lf, Function<? super R, ? extends T> rf);

    public abstract void match(Consumer<? super L> lf, Consumer<? super R> rf);

    public <A, B> Either<A, B> bimap(Function<? super L, ? extends A> lf, Function<? super R, ? extends B> rf)
    {
        return match(
            (L l) -> left(lf.apply(l)),
            (R r) -> right(rf.apply(r))
        );
    }

    public L fromLeft(L left)
    {
        return match(
            (L l) -> l,
            (R r) -> left
        );
    }

    public R fromRight(R right)
    {
        return match(
            (L l) -> right,
            (R r) -> r
        );
    }

    public static <L, R> Either<L, R> left(L value)
    {
        return new Left<>(value);
    }

    public static <L, R> Either<L, R> right(R value)
    {
        return new Right<>(value);
    }

    private static <L, R> Either<L, R> cast(Either<? extends L, ? extends R> either)
    {
        return either.match(
            (L l) -> left(l),
            (R r) -> right(r)
        );
    }

    static class Left<L, R> extends Either<L, R>
    {
        final L value;

        Left(L value)
        {
            this.value = value;
        }

        @Override
        public <T> T match(Function<? super L, ? extends T> lf, Function<? super R, ? extends T> rf)
        {
            return lf.apply(value);
        }

        @Override
        public void match(Consumer<? super L> lf, Consumer<? super R> rf)
        {
            lf.accept(value);
        }
    }

    static class Right<L, R> extends Either<L, R>
    {
        final R value;

        Right(R value)
        {
            this.value = value;
        }

        @Override
        public <T> T match(Function<? super L, ? extends T> lf, Function<? super R, ? extends T> rf)
        {
            return rf.apply(value);
        }

        @Override
        public void match(Consumer<? super L> lf, Consumer<? super R> rf)
        {
            rf.accept(value);
        }
    }
}
Example Usage
The main use of this is to convert exception-throwing operations into monadic ones. This allows for (checked) exception-throwing methods to be used in streams and other functional contexts, and also allows for pattern matching and binding on the return type.
private static void writeFiles(List<String> filenames, String content)
{
    filenames.stream()
             .map(
                 (String s) -> Result.of(
                     () -> new FileWriter(s)     // Open file for writing
                 ).peek(
                     (FileWriter f) -> Result.of(
                         () -> f.write(content)  // Write file contents
                     )
                 ).peek(
                     (FileWriter f) -> Result.of(
                         () -> f.close()         // Close file
                     )
                 )
             ).forEach(
                 r -> r.match(
                     (IOException e) -> System.out.println("exception writing to file: " + e),        // Log exception
                     (FileWriter f) -> System.out.println("successfully written to file '" + f + "'") // Log success
                 )
             );
}
Just use the optimistic assumption that the interface fulfills the contract, as ordinary Java code will always do (enforced by the compiler). If someone bypasses this exception-checking, it’s not your responsibility to fix that:
public static <E extends Exception, V> Result<E, V> of(ThrowingSupplier<V, E> v) {
    try {
        return value(v.get());
    }
    catch(RuntimeException|Error x) {
        throw x; // unchecked throwables
    }
    catch(Exception ex) {
        @SuppressWarnings("unchecked") E e = (E)ex;
        return error(e);
    }
}
Note that even the Java programming language agrees that it is okay to proceed with this assumption, e.g.
public static <E extends Exception, V> Result<E, V> of(ThrowingSupplier<V, E> v) throws E {
    try {
        return value(v.get());
    }
    catch(RuntimeException|Error x) {
        throw x; // unchecked throwables
    }
    catch(Exception ex) {
        throw ex; // can only be E
    }
}
is valid Java code, as under normal circumstances, the get method can only throw E or unchecked throwables, so it is valid to rethrow ex here, when throws E has been declared. We only have to circumvent a deficiency of the Java language when we want to construct a Result parameterized with E.
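As a small illustration (hedged; the file name is made up): with the of variant above, E is inferred from the target type, so the checked IOException declared by the FileWriter constructor lands in the Result's type parameter:

    Result<IOException, FileWriter> r = Result.of(() -> new FileWriter("out.txt"));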
You need access to the exception's class, and can then use some generics in the catch block.
One simple way is to pass the Class<E> object to the Result.of method:
public static <E extends Throwable, V> Result<E, V> of(
        ThrowingSupplier<V, E> v,
        Class<E> errorType) {
    try {
        return value(v.get());
    } catch (Throwable e) {
        if (errorType.isInstance(e)) {
            return error(errorType.cast(e));
        }
        throw new RuntimeException(e); // rethrow as runtime?
    }
}
Usage:
Result.of(() -> new FileWriter(s), IOException.class)
Class.isInstance is the dynamic equivalent of the static instanceof operator, while Class.cast is the same as statically casting, (E) e, except that we don't get a warning from the compiler.
EDIT: You need to think about what to do when the caught Throwable is not of the exception type you are expecting. I've wrapped it in a RuntimeException and rethrown it. This keeps the fluent style of your monad, but it is no longer transparent, as any unexpected exception is now wrapped in an unchecked one. Maybe you could add a third argument to Result.of to handle this specific case...
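For what it's worth, a hedged sketch of that third argument (the parameter name and wrapping strategy are mine, not part of the original suggestion):

    public static <E extends Throwable, V> Result<E, V> of(
            ThrowingSupplier<V, E> v,
            Class<E> errorType,
            Function<? super Throwable, ? extends RuntimeException> onUnexpected) {
        try {
            return value(v.get());
        } catch (Throwable e) {
            if (errorType.isInstance(e)) {
                return error(errorType.cast(e));
            }
            throw onUnexpected.apply(e); // the caller decides how to wrap
        }
    }

Usage would then look like Result.of(() -> new FileWriter(s), IOException.class, IllegalStateException::new).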
Update: this seems not to work at all. I'm keeping it here for now because I've linked to it elsewhere, and because it uses a method provided in other accepted answers, which I would like to continue to investigate.
Using Federico's answer and the answer linked in the comment, I have deduced a solution with the same method signature as the original problem, and I have created a class which encapsulates this functionality for future use.
The Result implementation:
public class Result<E extends Exception, V>
{
    ...

    public static <E extends Exception, V> Result<E, V> of(ThrowingSupplier<V, E> v)
    {
        try
        {
            return value(v.get());
        }
        catch(Exception e)
        {
            Class<E> errType = Reflector.getType();
            if (errType.isInstance(e))
            {
                return error(errType.cast(e));
            }
            else
            {
                throw (RuntimeException) e;
            }
        }
    }

    ...
}
And the Reflector:
import java.lang.reflect.ParameterizedType;

/**
 * This class only exists to provide a generic superclass to {@link Reflector}
 * @param <E> The type for the subclass to inspect
 */
abstract class Reflected<E>
{ }

/**
 * This class provides the ability to obtain information about its generic type parameter.
 * @param <E> The type to inspect
 */
@Deprecated
public class Reflector<E> extends Reflected<E>
{
    /**
     * Returns the class corresponding to the type {@code <E>}.
     * @param <E> The type to inspect
     * @return The class corresponding to the type {@code <E>}
     */
    public static <E> Class<E> getType()
    {
        return new Reflector<E>().getParameterType();
    }

    private Reflector() {}

    private Class<E> getParameterType()
    {
        final ParameterizedType type = (ParameterizedType) this.getClass().getGenericSuperclass();
        return (Class<E>) type.getActualTypeArguments()[0];
    }
}
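A likely reason this cannot work: new Reflector<E>() does not capture a concrete type for E. Its generic superclass is Reflected<E> with E still a type variable, so getActualTypeArguments()[0] yields a TypeVariable rather than a Class, and the cast fails at runtime. Super-type-token tricks like this only work when a subclass supplies a concrete type argument, e.g. new Reflector<IOException>() { }, which the of method cannot do here because E is erased by the time it runs.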
I built a simple document store. There are entities that have fields of different types; I have Float, Int, and String types. Each entity contains an array list of values. If someone updates the schema of an entity, I would like to be able to try to convert the values to the new type.
public interface FieldType<T> {
    ArrayList<T> getValues(); // sketch: the entity's list of values
}

public class FloatField implements FieldType<Float> {
}

public class StringField implements FieldType<String> {
}

I have thought about using an abstract class with methods as below:

public abstract class Field<T> implements FieldType<T> {
    public abstract T castFromString(String value);
    public abstract T castFromFloat(Float value);
    public abstract T castFromInt(Integer value);
}

public class FloatField extends Field<Float> {
    @Override
    public Float castFromString(String value) {
        Float castValue = null;
        try {
            castValue = Float.parseFloat(value);
        } catch (Exception e) {
        }
        return castValue;
    }
}
I did not really like this solution as I would have to add a new abstract method each time I added an extra type to the system.
Any ideas how I could implement this better?
Maybe you could use the Function<T, R> interface?
public abstract class Field<T> implements FieldType<T> {
    ...
    public <F> T convert(F value, Function<F, T> converter) {
        try {
            return converter.apply(value);
        } catch (Exception e) {
            return null;
        }
    }
    ...
}
And then specify the converter using a lambda expression or a method reference:
field.convert("1234", BigDecimal::new); //with a method reference
field.convert("1234", s -> new BigDecimal(s)) //with a lambda
This would replace all of your convertXXX methods with a single method, since the return type is inferred from the passed Function.
EDIT:
If you want automatic conversion, you would of course have to hard-code the converters, since you wouldn't want to write conversion methods for all 4,240 classes in the Java API. This gets messy, though. Maybe something like this in a static helper class, or in FieldType itself?
public class WhereverYouWantThis {

    private static HashMap<Class<?>, HashMap<Class<?>, Function<?, ?>>> converters = new HashMap<>();

    static {
        putConverter(String.class, Float.class, Float::parseFloat);
    }

    private static <T, R> void putConverter(Class<T> t, Class<R> r, Function<T, R> func) {
        HashMap<Class<?>, Function<?, ?>> map = converters.get(t);
        if (map == null) converters.put(t, map = new HashMap<>());
        map.put(r, func);
    }

    public static <T, R> Function<T, R> getConverter(Class<T> t, Class<R> r) {
        HashMap<Class<?>, Function<?, ?>> map = converters.get(t);
        if (map == null) return null;
        @SuppressWarnings("unchecked")
        Function<T, R> func = (Function<T, R>) map.get(r);
        return func;
    }

    public static <T, R> R convert(T o, Class<R> to) {
        @SuppressWarnings("unchecked")
        Function<T, R> func = (Function<T, R>) getConverter(o.getClass(), to);
        return func == null ? null : func.apply(o);
    }
}
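A quick usage sketch of the registry above (hedged: only the String-to-Float converter is registered in the static initializer):

    Float f = WhereverYouWantThis.convert("3.14", Float.class);   // 3.14
    Integer i = WhereverYouWantThis.convert("42", Integer.class); // null: no such converter registered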
I don't think you need generics for this. Instead, just try to create a Float from the input String and return null if there is a problem:
public Float castFromString(String value) {
    Float castValue = null;
    try {
        castValue = Float.parseFloat(value);
    } catch (Exception e) {
        // log here
    }
    return castValue;
}
The reason I don't think generics are needed is that the types involved in the conversion are named/known in your helper methods.
I may be asking for something impossible; I'll delete the question if it is.
I have this method:
public Object convertBy(Function... functions) {
}
and those functions are:
interface FLines extends Function {
    @Override
    default Object apply(Object t) {
        return null;
    }

    public List<String> getLines(String fileName);
}

interface Join extends Function {
    @Override
    default Object apply(Object t) {
        return null;
    }

    public String join(List<String> lines); // lines to join
}

interface CollectInts extends Function {
    @Override
    default Object apply(Object t) {
        return null;
    }

    public List<Integer> collectInts(String s);
}

interface Sum<T, R> extends Function<T, R> {
    @Override
    default Object apply(Object t) {
        return null;
    }

    public R sum(T list); // list of Integers
}
Abstract methods in those interfaces return values of different types. I pass lambdas to my convertBy method.
I would like convertBy's return type to be the same as the return type of functions[functions.length - 1].
Is this possible?
EDIT:
I've changed the signature of the method and the signatures of the methods inside the interfaces. It works, but only if I cast in the marked places in the main method posted below. The weird thing is that it needs a cast in only 3 out of 4 invocations; I would like to get rid of the casts in main altogether.
import java.util.List;
import java.util.function.Function;

public class InputConverter<T> {
    private T value;

    public InputConverter(T value) {
        this.value = value;
    }

    public <T, R> R convertBy(Function<T, R> special, Function... functions) {
        if (functions.length == 0) {
            FLines flines = (FLines) special;
            return (R) flines.getLines((value instanceof String) ? (String) value : null);
        } else if (functions.length == 1) {
            FLines flines = (FLines) functions[0];
            Join join = (Join) special;
            return (R) join.join(flines.getLines((String) value));
        } else if (functions.length == 2) {
            if (functions[0] instanceof FLines) {
                FLines flines = (FLines) functions[0];
                Join join = (Join) functions[1];
                CollectInts collectInts = (CollectInts) special;
                return (R) collectInts.collectInts(join.join(flines.getLines((String) value)));
            } else {
                Join join = (Join) functions[0];
                CollectInts collectInts = (CollectInts) functions[1];
                Sum sum = (Sum) special;
                return (R) sum.sum(collectInts.collectInts(join.join((List<String>) value)));
            }
        } else {
            FLines flines = (FLines) functions[0];
            Join join = (Join) functions[1];
            CollectInts collectInts = (CollectInts) functions[2];
            Sum sum = (Sum) special;
            return (R) sum.sum(collectInts.collectInts(join.join(flines.getLines((String) value))));
        }
    }

/*  public Integer convertBy(Join join, CollectInts collectInts, Sum sum) {
        return sum.sum(collectInts.collectInts(join.join((List<String>) value)));
    }*/
}
interface FLines<T, R> extends Function {
    @Override
    default Object apply(Object t) {
        return null;
    }

    public R getLines(T fileName);
    // public List<String> getLines(String fileName);
}

interface Join<T, R> extends Function {
    @Override
    default Object apply(Object t) {
        return null;
    }

    public R join(T lines); // lines to join
    // public String join(List<String> lines);
}

interface CollectInts<T, R> extends Function {
    @Override
    default Object apply(Object t) {
        return null;
    }

    public R collectInts(T t);
    // public List<Integer> collectInts(String s);
}

interface Sum<T, R> extends Function<T, R> {
    @Override
    default Object apply(Object t) {
        return null;
    }

    public R sum(T list); // list of Integers
}
The main method:
FLines<String, List<String>> flines ....
Join<List<String>, String> join ...
CollectInts<String, List<Integer>> collectInts ...
Sum<List<Integer>, Integer> sum ...
String fname =/* System.getProperty("user.home") + "/*/ "LamComFile.txt";
InputConverter<String> fileConv = new InputConverter<>(fname);
List<String> lines = fileConv.convertBy(flines);//cannot cast from Object to List<String>
String text = fileConv.convertBy( join, flines);//cannot cast from Object to String
List<Integer> ints = fileConv.convertBy(collectInts,flines, join);//cannot cast from Object to List<Integer>
Integer sumints = fileConv.convertBy(sum, flines, join, collectInts);//works without cast!
I don't understand why the compiler understands what sum returns but doesn't infer what, for instance, collectInts returns.
It seems you have some misunderstanding about generic type hierarchies. When you want to extend a generic type, you have to make a fundamental decision about the actual types of the extended class or interface. You may specify exact types like in
interface StringTransformer extends Function<String,String> {}
(here we create a type that extends a generic type but is not generic itself)
or you can create a generic type which uses its own type parameter for specifying the actual type argument of the super class:
interface NumberFunc<N extends Number> extends Function<N,N> {}
Note how we create a new type parameter N with its own constraints and use it to parametrize the superclass, requiring its type parameters to match ours.
In contrast, when you declare a class like
interface FLines<T, R> extends Function
you are extending the raw type Function and creating new type parameters <T, R>, which are entirely useless in your scenario.
To stay with the above examples, you may implement them as
StringTransformer reverse = s -> new StringBuilder(s).reverse().toString();
NumberFunc<Integer> dbl = i -> i*2;
and since they inherit properly typed methods, you may use these to combine the functions:
Function<String,Integer> f = reverse.andThen(Integer::valueOf).andThen(dbl);
System.out.println(f.apply("1234"));
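(This prints 8642: "1234" is reversed to "4321", parsed to an Integer, and doubled.)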
Applying this to your scenario, you could define the interfaces like
interface FLines extends Function<String, List<String>> {
    @Override default List<String> apply(String fileName) {
        return getLines(fileName);
    }

    public List<String> getLines(String fileName);
}

interface Join extends Function<List<String>, String> {
    @Override default String apply(List<String> lines) {
        return join(lines);
    }

    public String join(List<String> lines);
}

interface CollectInts extends Function<String, List<Integer>> {
    @Override default List<Integer> apply(String s) {
        return collectInts(s);
    }

    public List<Integer> collectInts(String s);
}

interface Sum extends Function<List<Integer>, Integer> {
    @Override default Integer apply(List<Integer> list) {
        return sum(list);
    }

    public Integer sum(List<Integer> list);
}
and redesign your InputConverter to accept only one function which may be a combined function:
public class InputConverter<T> {
    private T value;

    public InputConverter(T value) {
        this.value = value;
    }

    public <R> R convertBy(Function<? super T, ? extends R> f) {
        return f.apply(value);
    }
}
This can be used in a type-safe manner:
FLines flines = name -> {
    try { return Files.readAllLines(Paths.get(name)); }
    catch(IOException ex) { throw new UncheckedIOException(ex); }
};
Join join = list -> String.join(",", list);
CollectInts collectInts =
    s -> Arrays.stream(s.split(",")).map(Integer::parseInt).collect(Collectors.toList());
Sum sum = l -> l.stream().reduce(0, Integer::sum);

InputConverter<String> fileConv = new InputConverter<>("LamComFile.txt");
List<String> lines = fileConv.convertBy(flines);
String text = fileConv.convertBy(flines.andThen(join));
List<Integer> ints = fileConv.convertBy(flines.andThen(join).andThen(collectInts));
Integer sumints = fileConv.convertBy(
    flines.andThen(join).andThen(collectInts).andThen(sum)
);
You have to change the method signature and pull the last vararg value out as a separate parameter.
If you keep this parameter as the last one, you won't be able to use a vararg parameter, as varargs must always come last; a parameter in any other position would have to be an array:
public <T, R> R convertBy(Function[] functions, Function<T, R> special) { }
If you, however, insist to use varargs, then you can move the "special" Function as first parameter:
public <T, R> R convertBy(Function<T, R> special, Function... functions) { }
Thanks to all of you who elaborated on the subject; your solutions are much better in the real world.
As the author, I would like to post my solution, which did not require changing the invocations of convertBy() in main() one bit. It is very short and ugly, but it works.
Main:
Function<String, List<String>> flines ... lambda here
Function<List<String>, String> join ... lambda here
Function<String, List<Integer>> collectInts ... lambda here
Function<List<Integer>, Integer> sum ... lambda here
String fname = System.getProperty("user.home") + "/LamComFile.txt";
InputConverter<String> fileConv = new InputConverter<>(fname);
List<String> lines = fileConv.convertBy(flines);
String text = fileConv.convertBy(flines, join);
List<Integer> ints = fileConv.convertBy(flines, join, collectInts);
Integer sumints = fileConv.convertBy(flines, join, collectInts, sum);
System.out.println(lines);
System.out.println(text);
System.out.println(ints);
System.out.println(sumints);
List<String> arglist = Arrays.asList(args);
InputConverter<List<String>> slistConv = new InputConverter<>(arglist);
sumints = slistConv.convertBy(join, collectInts, sum);
System.out.println(sumints);
The InputConverter class:
public class InputConverter<T> {
    private T value;

    public InputConverter(T value) {
        this.value = value;
    }

    public <T, R> R convertBy(Function... functions) {
        Object result = value;
        for (int i = 0; i < functions.length; i++) {
            result = functions[i].apply(result);
        }
        return (R) result;
    }
}
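(The unchecked cast to R compiles only because of erasure; nothing is verified at runtime, so the caller is trusted to line up the function chain with the expected result type.)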
I know this is heresy, but I tried to translate the examples from http://www.haskell.org/haskellwiki/Memoization to Java. So far I have:
public abstract class F<A, B> {
    public abstract B f(A a);
}

...

public static <A, B> F<A, B> memoize(final F<A, B> fn) {
    return new F<A, B>() {
        private final Map<A, B> map = new HashMap<A, B>();

        public B f(A a) {
            B b = map.get(a);
            if (b == null) {
                b = fn.f(a);
                map.put(a, b);
            }
            return b;
        }
    };
}
//usage:
private class Cell<X> {
    public X value = null;
}

...

final Cell<F<Integer, BigInteger>> fibCell = new Cell<F<Integer, BigInteger>>();
fibCell.value = memoize(new F<Integer, BigInteger>() {
    public BigInteger f(Integer a) {
        return a <= 1 ? BigInteger.valueOf(a) : fibCell.value.f(a - 1).add(fibCell.value.f(a - 2));
    }
});
System.out.println(fibCell.value.f(1000));
That works fine. Now I tried to implement the memoFix combinator defined as
memoFix :: ((a -> b) -> (a -> b)) -> a -> b
memoFix f =
let mf = memoize (f mf) in mf
But I got stuck. Does this even make sense in Java, especially concerning its inherent lack of laziness?
The Guava library actually implements something similar with its MapMaker:
final Map<Integer, String> memoizingMap = new MapMaker().makeComputingMap(
    new Function<Integer, String>() {
        @Override
        public String apply(final Integer input) {
            System.out.println("Calculating ...");
            return Integer.toHexString(input.intValue());
        }
    });

System.out.println(memoizingMap.get(1));
System.out.println(memoizingMap.get(100));
System.out.println(memoizingMap.get(100000));
System.out.println("The following should not calculate:");
System.out.println(memoizingMap.get(1));
Output:
Calculating ...
1
Calculating ...
64
Calculating ...
186a0
The following should not calculate:
1
The nice thing is that you can fine-tune the generated map for different aspects such as expiration, concurrency level, etc.
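Note that makeComputingMap was removed in later Guava versions; the rough equivalent there is the CacheBuilder/LoadingCache API. A hedged sketch (assuming a reasonably recent Guava on the classpath):

    LoadingCache<Integer, String> memoizingCache = CacheBuilder.newBuilder()
        .maximumSize(10_000) // the same kind of fine-tuning hooks: size, expiration, concurrency
        .build(CacheLoader.from(input -> {
            System.out.println("Calculating ...");
            return Integer.toHexString(input);
        }));

    System.out.println(memoizingCache.getUnchecked(1)); // calculates
    System.out.println(memoizingCache.getUnchecked(1)); // served from the cache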
Okay, this has convinced me that functional programming is usually a bad idea with Java. Lack of laziness can be worked around using a reference object (which essentially implements laziness). Here's a solution:
public static class FunctionRef<A, B> {
    private F<A, B> func;

    public void set(F<A, B> f) { func = f; }
    public F<A, B> get() { return func; }
}

public static class Pair<A, B> {
    public final A first;
    public final B second;

    public Pair(A a, B b) {
        this.first = a;
        this.second = b;
    }
}

public static <A, B> F<A, B> memoFix(final F<Pair<FunctionRef<A, B>, A>, B> func) {
    final FunctionRef<A, B> y = new FunctionRef<A, B>();
    y.set(
        memoize(new F<A, B>() {
            @Override
            public B f(A a) {
                return func.f(new Pair<FunctionRef<A, B>, A>(y, a));
            }
        })
    );
    return y.get();
}

//Test that it works
public static void main(String[] args) {
    F<Pair<FunctionRef<Integer, Integer>, Integer>, Integer> fib = new F<Pair<FunctionRef<Integer, Integer>, Integer>, Integer>() {
        @Override
        public Integer f(Pair<FunctionRef<Integer, Integer>, Integer> a) {
            int value = a.second;
            System.out.println("computing fib of " + value);
            if (value == 0) return 0;
            if (value == 1) return 1;
            return a.first.get().f(value - 2) + a.first.get().f(value - 1);
        }
    };

    F<Integer, Integer> memoized = memoFix(fib);
    System.out.println(memoized.f(10));
}
Note that when the program is run, it only outputs "computing fib of" once for each value!
The memoFix solution by Joe K was really impressive :-)
For practical purposes, this seems to be the most elegant solution for recursive (and non-recursive) functions, as it avoids the need for a reference variable:
import java.util.HashMap;
import java.util.Map;

public abstract class MemoF<A, B> extends F<A, B> {

    private final Map<A, B> map = new HashMap<A, B>();

    @Override
    public B f(A a) {
        B b = map.get(a);
        if (b == null) {
            b = func(a);
            map.put(a, b);
        }
        return b;
    }

    public abstract B func(A a);
}
Now you have to implement func as usual, except that you never call it recursively, but call f instead:
F<Integer, BigInteger> memoFib = new MemoF<Integer, BigInteger>() {
    public BigInteger func(Integer a) {
        return a <= 1 ? BigInteger.valueOf(a) : f(a - 1).add(f(a - 2));
    }
};

System.out.println(memoFib.f(100));
//--> 354224848179261915075
Why are you stuck? It looks like you're done.
You've successfully memoized calls to a function using a Map.
Here is a snippet from my recent solution for the exact same problem:
private final static class MutableFunction<A, B> implements Function<A, B> {
    public Function<A, B> f;

    @Override
    public B apply(A argument) {
        return f.apply(argument);
    }
}

/**
 * Computes the fixed point of function f.
 * Only terminates successfully if f is non-strict (that is, returns without calling its argument).
 */
public static <A, B, R extends Function<A, B>> R fix(final Function<? super Function<A, B>, ? extends R> f) {
    MutableFunction<A, B> mutable = new MutableFunction<A, B>();
    R result = f.apply(mutable);
    mutable.f = result;
    return result;
}
A memoFix of f is then just fix(composition(memoize, f))!
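In Java, that composition can be spelled out directly. A hedged sketch (assuming java.util.function.Function, java.util.HashMap, and a map-backed memoize in the style of the earlier answers, since neither helper is defined in this snippet):

    // the plain map-backed memoizer; safe for recursive calls, unlike HashMap.computeIfAbsent
    static <A, B> Function<A, B> memoize(Function<A, B> fn) {
        Map<A, B> cache = new HashMap<>();
        return a -> {
            B b = cache.get(a);
            if (b == null) {
                b = fn.apply(a);
                cache.put(a, b);
            }
            return b;
        };
    }

    // memoFix: wrap the unfolding of f in the memoizer before tying the knot
    static <A, B> Function<A, B> memoFix(
            Function<? super Function<A, B>, ? extends Function<A, B>> f) {
        return fix(self -> memoize(f.apply(self)));
    }

    // usage: the recursive step never names itself; `self` is the memoized fixed point
    Function<Integer, BigInteger> fib = memoFix(self -> n ->
            n <= 1 ? BigInteger.valueOf(n) : self.apply(n - 1).add(self.apply(n - 2)));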