Kotlin Destructuring when/if statement - java

So I have a String I want to check: if it contains a delimiter I should split it into two parts, otherwise return some default values. Like this:
val myString = "firstPart-secondPart"
val (first, second) = when (myString.contains("-")) {
    true -> myString.split('-', limit = 2)
    else -> ?? // <-- How would I return ("Default1", "Default2") so the destructuring still works?
}
So my question is, how do I return two default strings, so that the destructuring still works? I've used String.split() before in order to destructure and it's really nice.

How to return 2 values for destructuring
You need to return a value of the same type as the other branch. split returns a List<String>, so you could use this:
listOf("Default1", "Default2")
Full code
val myString = "firstPart-secondPart"
val (first, second) = when (myString.contains("-")) {
    true -> myString.split('-', limit = 2)
    else -> listOf("Default1", "Default2")
}
Why this works
Because both branches return a List<String>, the whole when expression is of type List<String>, so it can be destructured to get the values out of it.
Possible cleanup
val myString = "firstPart-secondPart"
val (first, second) = when {
    myString.contains("-") -> myString.split('-', limit = 2)
    else -> listOf("Default1", "Default2")
}
This form may read better if you are going to add more conditions; otherwise an if expression may be the simpler choice.

As an alternative to jrtapsell's good and correct answer, you could use a destructured Pair:
val (first, second) = when (myString.contains("-")) {
    true -> myString.split('-', limit = 2).let { it[0] to it[1] }
    else -> "Default1" to "Default2"
}
Note 1: The resulting list with two elements is transformed to a Pair with the help of let.
Note 2: The infix function to is used to create Pairs here.

Related

How do I turn this expression into a lambda expression?

I'd like to turn what I'm doing into a lambda expression. I iterate over one list (listRegistrationTypeWork); for each element I check whether its child list (getRegistrationTypeWorkAuthors) is not null, and if so I iterate over it looking for an authorCoautor equal to type, incrementing a counter, to find out how many records within the lists have this same type.
public int qtyMaximumWorksByAuthorCoauthor(AuthorCoauthor type) {
    int count = 0;
    for (RegistrationTypeWork tab : listRegistrationTypeWork) {
        if (CollectionUtils.isNotEmpty(tab.getRegistrationTypeWorkAuthors())) {
            for (RegistrationTypeWorkAuthors author : tab.getRegistrationTypeWorkAuthors()) {
                if (author.getAuthorCoauthor().equals(type))
                    count++;
            }
        }
    }
    return count;
}
Your statement is not entirely clear on what transforming to a lambda expression would mean, but I am assuming you would like to turn your imperative loops into a functional stream-and-lambda based version.
This should be straightforward using:
filter to filter out the unwanted values from both of your collections
flatMap to flatten all inner collections into a single stream so that you can operate your count on it as a single source
public int qtyMaximumWorksByAuthorCoauthor(AuthorCoauthor type) {
    // count() returns a long, so cast back to int to keep the original signature
    return (int) listRegistrationTypeWork.stream()
            .filter(tab -> tab.getRegistrationTypeWorkAuthors() != null)
            .flatMap(tab -> tab.getRegistrationTypeWorkAuthors().stream())
            .filter(author -> type.equals(author.getAuthorCoauthor()))
            .count();
}
In addition to Thomas' fine comment, I think you would want to write your stream something like this.
long count = listRegistrationTypeWork.stream()
        // map each RegistrationTypeWork to an Optional of its list of
        // RegistrationTypeWorkAuthors, so that lists which are actually
        // null become empty Optionals
        .map(registrationTypeWork -> Optional.ofNullable(registrationTypeWork.getRegistrationTypeWorkAuthors()))
        // this removes all empty Optionals from the stream (Optional::stream requires Java 9+)
        .flatMap(Optional::stream)
        // this turns the stream of lists of RegistrationTypeWorkAuthors into a stream of plain RegistrationTypeWorkAuthors
        .flatMap(Collection::stream)
        // this filters out RegistrationTypeWorkAuthors which are of a different type
        .filter(registrationTypeWorkAuthors -> type.equals(registrationTypeWorkAuthors.getAuthorCoauthor()))
        .count();
// count() returns a long, so you either need to return a long from your method
// signature or cast the long to an int.
return (int) count;

array list stream find index and update a value

How can I update an item in the ArrayList using a stream? For example, I want to update item 1 from 100 to 1000:
List<PayList> payList = new ArrayList<>();
payList.add(new PayList(1, 100));
payList.add(new PayList(2, 200));
I am able to find the index, but how can I find the index and then update the value?
int indexOf = IntStream.range(0, payList.size())
        .filter(p -> trx.getTransactionId() == payList.get(p).getTransactionId())
        .findFirst()
        .orElse(-1);
A couple of options:
Using the index you have already obtained, you can call payList.get(indexOf). However, you must handle the -1 (missing) case:
int indexOf = IntStream.range(0, payList.size())
        .filter(p -> trx.getTransactionId() == payList.get(p).getTransactionId())
        .findFirst()
        .orElse(-1);
if (indexOf == -1) {
    // handle missing
} else {
    payList.get(indexOf).setAmount(1000);
}
If you don't need to handle the missing case, you could stream over the list like this:
final Stream<PayList> filtered = payList.stream().filter(p -> trx.getTransactionId() == p.getTransactionId());
final Optional<PayList> first = filtered.findFirst();
first.ifPresent(i -> i.setAmount(1000));
Or you could use similar orElse, ifPresentOrElse, orElseThrow type logic on the optional to handle the missing case.
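A minimal, self-contained sketch of the ifPresentOrElse variant (Java 9+); the PayList class here is a stand-in reconstructed from the question's constructor calls, with assumed getter/setter names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class IfPresentOrElseDemo {
    // minimal stand-in for the PayList class from the question
    static class PayList {
        private final int transactionId;
        private int amount;
        PayList(int transactionId, int amount) { this.transactionId = transactionId; this.amount = amount; }
        int getTransactionId() { return transactionId; }
        int getAmount() { return amount; }
        void setAmount(int amount) { this.amount = amount; }
    }

    public static void main(String[] args) {
        List<PayList> payList = new ArrayList<>();
        payList.add(new PayList(1, 100));
        payList.add(new PayList(2, 200));

        int wantedId = 1; // stands in for trx.getTransactionId()
        Optional<PayList> first = payList.stream()
                .filter(p -> wantedId == p.getTransactionId())
                .findFirst();

        // Java 9+: handle the found and the missing case in one call
        first.ifPresentOrElse(
                p -> p.setAmount(1000),
                () -> System.out.println("no matching transaction"));

        System.out.println(payList.get(0).getAmount()); // prints 1000
    }
}
```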

Understanding reduction operation of accumulator - java 8

I'm trying to understand the reduction accumulator operation in the example below:
List<String> letters = Arrays.asList("a", "bb", "ccc");
String result123 = letters
        .stream()
        .reduce((partialString, element) ->
                partialString.length() < element.length()
                        ? partialString
                        : element
        ).get();
System.out.println(result123);
Is partialString initialized to an empty string? Since it's a fold operation, I assumed the result should be the empty string, but it's printing "a". Can someone please explain how this accumulator works?
The corresponding for-loop code for the reduce operation looks like this:
boolean seen = false;
String acc = null;
for (String letter : letters) {
    if (!seen) {
        seen = true;
        acc = letter;
    } else {
        acc = acc.length() < letter.length() ? acc : letter;
    }
}
The first pair of elements to reduce is (firstElt, secondElt); there is no empty initial element.
If you print each step of the reduce:
letters.stream()
        .reduce((partialString, element) -> {
            System.out.println(partialString + " " + element);
            return partialString.length() < element.length() ? partialString : element;
        }).get();
// output
// a bb
// a ccc
If you read the documentation, i.e. the javadoc of reduce(), you will learn that partialString is initialized to the first value, and reduce() is only called to combine values, aka to reduce them.
Is partialString initialized to empty string?
No. If you wanted that, you would need to use the other reduce() method, in which case you wouldn't need to call get():
String result123 = letters
        .stream()
        .reduce("", (partialString, element) ->
                partialString.length() < element.length()
                        ? partialString
                        : element
        );
Of course, that doesn't make any sense, because partialString is now always a string with length() = 0, so the result of the expression is always an empty string. You might as well just write String result123 = ""; and save all the CPU time.
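To make that concrete, here is a runnable check of the identity overload with the same letters list; since "" is shorter than every element, the accumulator keeps the identity on every step:

```java
import java.util.Arrays;
import java.util.List;

public class ReduceIdentityDemo {
    public static void main(String[] args) {
        List<String> letters = Arrays.asList("a", "bb", "ccc");

        // The accumulator starts with the identity "", and since "" is
        // shorter than every element, it is kept at every step.
        String result = letters.stream()
                .reduce("", (partialString, element) ->
                        partialString.length() < element.length()
                                ? partialString
                                : element);

        System.out.println("\"" + result + "\""); // prints ""
    }
}
```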

How to extract only one allowed element from a stream?

I have a list of elements and want to extract the value of the field property.
Problem: all elements should have the same property value.
Can I do this better or more elegantly than the following?
Set<String> matches = fields.stream().map(f -> f.getField()).collect(Collectors.toSet());
if (matches.size() != 1) throw new IllegalArgumentException("could not match one exact element");
String distinctVal = matches.iterator().next(); // continue to use the value
Is this possible directly using the stream methods, eg using reduce?
Your current solution is good. You can also try this way, to avoid collecting into a Set.
Use distinct() then count()
if (fields.stream().map(f -> f.getField()).distinct().count() != 1)
    throw new IllegalArgumentException("could not match one exact element");
To get the value:
String distinctVal = fields.get(0).getField();
Well, you could certainly do this in several ways, but which is more elegant can vary from person to person.
Anyway, if you were to attempt this via streams, this is how I would have done it.
With a slight modification to my answer here you could do:
boolean result = fields.stream()
        .map(f -> f.getField())
        .distinct()
        .limit(2) // enable short-circuiting
        .count() != 1;
if (result) throw new IllegalArgumentException("could not match one exact element");
String distinctVal = fields.get(0).getField();
The benefit of this approach is basically utilising limit(2) to enable optimisation where possible.
Conclusion: your current approach is actually quite good, so I wouldn't be surprised if you were to stick to it, but you also have the choice of this approach, where you can short-circuit the pipeline.
That would be the reduce solution.
Optional<String> distinctVal = fields.stream()
        .map(f -> f.getField())
        .reduce((a, b) -> {
            // compare with equals(), not !=, which would compare references
            if (!a.equals(b)) throw new IllegalArgumentException("could not match one exact element");
            return a;
        });
Depending on the frequency of invocation and the size of your set, the iterative code can be significantly faster.
public boolean allEqual(Collection<Field> fields) {
    if (fields.size() > 1) {
        String last = null;
        boolean first = true;
        for (Field field : fields) {
            String thisString = field.getField();
            if (first) {
                last = thisString;
                first = false;
            } else {
                if (!StringUtils.equals(last, thisString)) {
                    return false;
                }
            }
        }
    }
    return true;
}
While this is not a streaming solution, it aborts when the first mismatch is found, which - depending on the input - can be significantly faster.
Similar to this (after distinct(), the accumulator only ever runs if there is more than one distinct value, so reaching it means the input was not uniform):
String distinctVal = fields.stream()
        .map(f -> f.getField())
        .distinct()
        .reduce((a, b) -> {
            throw new IllegalArgumentException("could not match one exact element");
        })
        .get();
As others have said, it’s largely a matter of taste. Here’s mine.
String distinctVal = fields.iterator().next().getField();
if (fields.stream().map(Field::getField).anyMatch(e -> !e.equals(distinctVal))) {
    throw new IllegalArgumentException("could not match one exact element");
}
// continue to use the value
(Code is not tested; forgive typos.)
I didn’t particularly code with performance efficiency in mind, but the code will only search until the first non-matching string, so should be efficient.

Limit a stream and find out if there are pending elements

I have the following code that I want to translate to Java 8 streams:
public ReleaseResult releaseReources() {
    List<String> releasedNames = new ArrayList<>();
    Stream<SomeResource> stream = this.someResources();
    Iterator<SomeResource> it = stream.iterator();
    while (it.hasNext() && releasedNames.size() < MAX_TO_RELEASE) {
        SomeResource resource = it.next();
        if (!resource.isTaken()) {
            resource.release();
            releasedNames.add(resource.getName());
        }
    }
    return new ReleaseResult(releasedNames, it.hasNext(), MAX_TO_RELEASE);
}
Method someResources() returns a Stream<SomeResource> and ReleaseResult class is as follows:
public class ReleaseResult {
    private int releasedCount;
    private List<String> releasedNames;
    private boolean hasMoreItems;
    private int releaseLimit;

    public ReleaseResult(List<String> releasedNames,
                         boolean hasMoreItems, int releaseLimit) {
        this.releasedNames = releasedNames;
        this.releasedCount = releasedNames.size();
        this.hasMoreItems = hasMoreItems;
        this.releaseLimit = releaseLimit;
    }

    // getters & setters
}
My attempt so far:
public ReleaseResult releaseReources() {
    List<String> releasedNames = this.someResources()
            .filter(resource -> !resource.isTaken())
            .limit(MAX_TO_RELEASE)
            .peek(SomeResource::release)
            .map(SomeResource::getName)
            .collect(Collectors.toList());
    return new ReleaseResult(releasedNames, ???, MAX_TO_RELEASE);
}
The problem is that I can't find a way to know if there are pending resources to process. I've thought of using releasedNames.size() == MAX_TO_RELEASE, but this doesn't take into account the case where the stream of resources has exactly MAX_TO_RELEASE elements.
Is there a way to do the same with Java 8 streams?
Note: I'm not looking for answers like "you don't have to do everything with streams" or "using loops and iterators is fine". I'm OK if using an iterator and a loop is the only way or just the best way. It's just that I'd like to know if there's a non-murky way to do the same.
Since you don’t wanna hear that you don’t need streams for everything and loops and iterators are fine, let’s demonstrate it by showing a clean solution, not relying on peek:
public ReleaseResult releaseReources() {
    return this.someResources()
            .filter(resource -> !resource.isTaken())
            .limit(MAX_TO_RELEASE + 1)
            .collect(
                    () -> new ReleaseResult(new ArrayList<>(), false, MAX_TO_RELEASE),
                    (result, resource) -> {
                        List<String> names = result.getReleasedNames();
                        if (names.size() == MAX_TO_RELEASE) result.setHasMoreItems(true);
                        else {
                            resource.release();
                            names.add(resource.getName());
                        }
                    },
                    (r1, r2) -> {
                        List<String> names = r1.getReleasedNames();
                        names.addAll(r2.getReleasedNames());
                        if (names.size() > MAX_TO_RELEASE) {
                            r1.setHasMoreItems(true);
                            names.remove(MAX_TO_RELEASE);
                        }
                    }
            );
}
This assumes that // getters & setters includes getters and setters for all non-final fields of your ReleaseResult. And that getReleasedNames() returns the list by reference. Otherwise you would have to rewrite it to provide a specialized Collector having special non-public access to ReleaseResult (implementing another builder type or temporary storage would be an unnecessary complication, it looks like ReleaseResult is already designed exactly for that use case).
We could conclude that for any nontrivial loop code that doesn’t fit into the stream’s intrinsic operations, you can find a collector solution that basically does the same as the loop in its accumulator function, but suffers from the requirement of always having to provide a combiner function. Ok, in this case we can prepend a filter(…).limit(…) so it’s not that bad…
I just noticed, if you ever dare to use that with a parallel stream, you need a way to reverse the effect of releasing the last element in the combiner in case the combined size exceeds MAX_TO_RELEASE. Generally, limits and parallel processing never play well.
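As a minimal illustration of the three-argument collect(supplier, accumulator, combiner) shape used in the answer above (this is just the pattern on a toy stream, not the answer's code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class ThreeArgCollectDemo {
    public static void main(String[] args) {
        // The supplier creates the mutable container, the accumulator folds one
        // element into it, and the combiner merges two partial containers
        // (the combiner only matters for parallel streams).
        List<String> upper = Stream.of("a", "b", "c")
                .collect(ArrayList::new,
                        (list, s) -> list.add(s.toUpperCase()),
                        ArrayList::addAll);

        System.out.println(upper); // prints [A, B, C]
    }
}
```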
I don't think there's a nice way to do this. I've found a hack that does it lazily. What you can do is convert the Stream to an Iterator, convert the Iterator back to another Stream, do the Stream operations, then finally test the Iterator for a next element!
Iterator<SomeResource> it = this.someResources().iterator();
List<String> list = StreamSupport.stream(
        Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED), false)
        .filter(resource -> !resource.isTaken())
        .limit(MAX_TO_RELEASE)
        .peek(SomeResource::release)
        .map(SomeResource::getName)
        .collect(Collectors.toList());
return new ReleaseResult(list, it.hasNext(), MAX_TO_RELEASE);
The only thing I can think of is
List<SomeResource> list = someResources().collect(Collectors.toList()); // a List, rather than a Stream, is required
List<Integer> indices = IntStream.range(0, list.size())
        .filter(i -> !list.get(i).isTaken())
        .limit(MAX_TO_RELEASE)
        .boxed() // IntStream has no collect(Collector), so box to Stream<Integer> first
        .collect(Collectors.toList());
List<String> names = indices.stream()
        .map(list::get)
        .peek(SomeResource::release)
        .map(SomeResource::getName)
        .collect(Collectors.toList());
Then (I think) there are unprocessed elements if
names.size() == MAX_TO_RELEASE
        && (indices.isEmpty() || indices.get(indices.size() - 1) < list.size() - 1)
