Externalized Configuration does not work for Scala's Map - java

I tried to load property values from an external file, where the properties are stored in key-value format (a Map). It works properly if I use Java's Map, as in this code:
import java.util.{Map, HashMap}

@Component
@ConfigurationProperties(prefix = "config")
class ConfigProperties {
  val corporationSecrets: Map[String, String] = new HashMap[String, String]
}
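For reference, an external file for this prefix might look like the following (an application.properties sketch; the keys and values are placeholders):
config.corporation-secrets.ab=secret-for-AB
config.corporation-secrets.cd=secret-for-CD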
But when I simply change the map to Scala's Map, I cannot get any values from it; the map is empty.
import java.util.HashMap

import scala.collection.JavaConverters._
import scala.collection.mutable

@Component
@ConfigurationProperties(prefix = "config")
class ConfigProperties {
  val corporationSecrets: mutable.Map[String, String] = new HashMap[String, String].asScala
}
I tried both a mutable map and an immutable map, but neither case works.
Does this mean I cannot use Scala's Map in this case?

Yes, Spring Boot doesn't know how to handle Scala collections. But you could use Java collections internally and add methods returning the Scala versions. Of course, they'll need to have different names. E.g.
import java.util.{HashMap => JHashMap, Map => JMap}

import org.springframework.boot.context.properties.ConfigurationProperties
import org.springframework.stereotype.Component

import scala.beans.BeanProperty
import scala.collection.JavaConverters._

@Component
@ConfigurationProperties(prefix = "security-util")
class SecurityUtilProperties {
  @BeanProperty
  val corporationSecrets: JMap[String, String] = new JHashMap[String, String]

  def corporationSecretsScala = corporationSecrets.asScala
}
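With a property such as security-util.corporation-secrets.ab=secretAB in application.properties, Spring Boot's relaxed binding fills the Java map, and other beans can consume the Scala view. A minimal consumer sketch (the SecretUser class is hypothetical):
import org.springframework.stereotype.Component

@Component
class SecretUser(props: SecurityUtilProperties) {
  // Look up a secret through the Scala view defined on the properties bean
  def secretFor(code: String): Option[String] = props.corporationSecretsScala.get(code)
}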

Related

Java Mutiny Multi from CompletableFuture<List<T>>

say you have a method:
public CompletableFuture<List<Integer>> getStuffAsync()
I want the same stream as:
Multi<Integer> stream = Multi
    .createFrom().completionStage(() -> getStuffAsync())
    .onItem().transformToIterable(Function.identity())
which is a stream of each integer returned by the list from the method specified at the beginning...
but without the onItem().transformToIterable(), hopefully something like:
Multi.createFrom().completionStageIterable(() -> getStuffAsync())
purely for aesthetic reasons and to save on valuable characters
You can use Multi.createFrom().iterable() and pass in the result of the CompletableFuture (note that get() blocks until the future completes):
Multi<Integer> stream = Multi
    .createFrom()
    .iterable(getStuffAsync().get());
Alternatively, use a utility method to create it.
Create this class Multi.java (or another name) in a package such as util:
package util;

import io.smallrye.mutiny.groups.MultiCreate;

import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public interface Multi<T> extends io.smallrye.mutiny.Multi<T> {
    static <T> io.smallrye.mutiny.Multi<T> createFromCompletionStageIterable(
            CompletableFuture<? extends Iterable<T>> completableFuture) {
        return MultiCreate.INSTANCE
                .completionStage(completableFuture)
                .onItem().transformToIterable(Function.identity());
    }
}
Then you can use your custom method like this:
Multi<Integer> stream = util.Multi
        .createFromCompletionStageIterable(getStuffAsync());

How to stream response body in Quarkus with Kotlin

Goal
Build a service that streams a zip file built on the fly. The zip file contains files downloaded via HTTP. The files are big enough that they don't fit into RAM. Using the filesystem should be avoided, since it's unnecessary.
Problems
JAX-RS defines StreamingOutput, which seems to be the answer. However, it's not supported by Quarkus Reactive; it just returns ZipResource$ZipStreamingOutput@50a05849 as the response body.
Using Multi (from Mutiny) as the return type does not allow bridging it with Kotlin coroutines, because coroutines must be called from a suspend function. And if the method is marked with suspend, then it just returns io.smallrye.mutiny.operators.multi.builders.EmitterBasedMulti@24ac00e7 as the response body.
Example code:
@GET
@Produces("application/zip")
suspend fun download(): Multi<String> {
    return flow { emit("hello") }.asMulti()
}
Also, even if it worked, I don't see any option to specify response headers.
Criteria
Non-blocking streaming, so that concurrency is not limited by the number of threads
Kotlin coroutines are preferable, since they're native to the language rather than framework-specific
Context
Kotlin 1.5.30
Quarkus 2.2.2.Final
Here is a simplified thread-blocking implementation, which I want to implement in a non-thread-blocking (reactive) way.
import java.io.IOException
import java.io.OutputStream
import java.net.URL
import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream
import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.Produces
import javax.ws.rs.WebApplicationException
import javax.ws.rs.core.Response
import javax.ws.rs.core.StreamingOutput

data class Entry(val url: String, val name: String)

@Path("/zip")
class ZipResource {

    @GET
    @Produces("application/zip")
    fun download(): Response? {
        val entries = listOf(
            Entry("http://link-to-a-source-file1", "file1.txt"),
            Entry("http://link-to-a-source-file2", "file2.txt"),
        )
        val contentDisposition = """attachment; filename="test.zip""""
        return Response.ok(ZipStreamingOutput(entries))
            .header("Content-Disposition", contentDisposition)
            .build()
    }

    class ZipStreamingOutput(private val entries: List<Entry>) : StreamingOutput {
        @Throws(IOException::class, WebApplicationException::class)
        override fun write(output: OutputStream) {
            val zipOut = ZipOutputStream(output)
            for (entry in entries) {
                zipOut.putNextEntry(ZipEntry(entry.name))
                val downloadStream = URL(entry.url).openStream()
                downloadStream.transferTo(zipOut)
                downloadStream.close()
            }
            zipOut.close()
        }
    }
}

Apache Spark: Count of records by a specific field in Java RDD

I want to count the different types of records in a Java RDD, based on a field of the object.
I have an Entity class with name and state as member variables. The Entity class looks like this:
import java.io.Serializable;

import lombok.AllArgsConstructor;
import lombok.Getter;

@Getter
@AllArgsConstructor
public class Entity implements Serializable {
    private final String name;
    private final String state;
}
I have a JavaRDD of Entity objects, and I want to determine how many objects are present for each state in this RDD.
My current approach is to use a LongAccumulator. The idea is to iterate through each record in the RDD, parse the state field, and increment the count of the corresponding accumulator. The code I have tried so far is:
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.LongAccumulator;

import lombok.extern.slf4j.Slf4j;

@Slf4j
public class CountRDD {
    public static void main(String[] args) {
        String applicationName = CountRDD.class.getName();
        SparkConf sparkConf = new SparkConf().setAppName(applicationName).setMaster("local");
        JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);
        javaSparkContext.setLogLevel("INFO");

        Entity entity1 = new Entity("a1", "s1");
        Entity entity2 = new Entity("a2", "s2");
        Entity entity3 = new Entity("a3", "s1");
        Entity entity4 = new Entity("a4", "s2");
        Entity entity5 = new Entity("a5", "s1");

        List<Entity> entityList = new ArrayList<Entity>();
        entityList.add(entity1);
        entityList.add(entity2);
        entityList.add(entity3);
        entityList.add(entity4);
        entityList.add(entity5);

        JavaRDD<Entity> entityJavaRDD = javaSparkContext.parallelize(entityList, 1);

        LongAccumulator s1Accumulator = javaSparkContext.sc().longAccumulator("s1");
        LongAccumulator s2Accumulator = javaSparkContext.sc().longAccumulator("s2");

        entityJavaRDD.foreach(entity -> {
            if (entity != null) {
                String state = entity.getState();
                if ("s1".equalsIgnoreCase(state)) {
                    s1Accumulator.add(1);
                } else if ("s2".equalsIgnoreCase(state)) {
                    s2Accumulator.add(1);
                }
            }
        });

        log.info("Final values for input entity RDD are following");
        log.info("s1Accumulator = {} ", s1Accumulator.value());
        log.info("s2Accumulator = {} ", s2Accumulator.value());
    }
}
The above code works and produces the output s1Accumulator = 3 and s2Accumulator = 2.
The limitation of the above code is that we need to know all the permissible values of state before execution and maintain a corresponding accumulator for each. This would make the code too big when there are many distinct states.
Another approach I can think of is to create a new pair RDD of String (state) and Integer (count): apply the mapToPair transformation on the input RDD, and get the counts from this newly created RDD.
Any other thoughts on how I can approach this problem?
As mentioned in the comments, you can groupBy on the state field and then call count on it; this will give you the count for each state. You don't need accumulators.
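If you want to stay at the RDD level, here is a minimal sketch of the same idea using countByValue (a close cousin of groupBy-and-count), reusing entityJavaRDD, Entity, and log from the question:
import java.util.Map;

// Extract the state field, then count occurrences of each distinct value.
// countByValue returns the result to the driver as a java.util.Map<String, Long>.
Map<String, Long> countsByState = entityJavaRDD
        .map(Entity::getState)
        .countByValue();
countsByState.forEach((state, count) -> log.info("{} = {}", state, count));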
As a side note, jobs run with significantly better performance if you avoid lambda functions and use DataFrames (which are Dataset<Row>). DataFrames provide better query optimization and code generation capabilities than RDDs, and have vectorized (meaning: very fast) functions for most processing use cases.
The DataSet API javadoc has a DataFrame groupBy example in the description: https://spark.apache.org/docs/2.4.5/api/java/org/apache/spark/sql/Dataset.html
It is preferable to read data as DataFrames to begin with, but you can convert RDDs and JavaRDDs with SparkSession.createDataFrame.
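For illustration, a hedged sketch of the DataFrame route, reusing entityList and the Entity bean from the question (assumes spark-sql is on the classpath; the app name is arbitrary):
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
        .appName("CountByState")
        .master("local")
        .getOrCreate();

// Build a DataFrame from the bean list (Lombok's getters drive schema inference),
// then count records per distinct state.
Dataset<Row> df = spark.createDataFrame(entityList, Entity.class);
df.groupBy("state").count().show();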

JSON mapping unexpected case

Earlier I used com.fasterxml.jackson in my application.
Because I use akka-http, I wanted to try living with Marshal/Unmarshal and spray.json's toJson.compactPrint, without the extra package (com.fasterxml.jackson) dependency.
But I got stuck on a simple case.
old working code:
...
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule
val obj: AnyRef = new Object()
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)
val json = mapper.writeValueAsString(obj)
new code:
import spray.json._
val obj: AnyRef = new Object()
val json = obj.toJson.compactPrint
This causes an exception:
Cannot find JsonWriter or JsonFormat type class for AnyRef on
obj.toJson.compactPrint
Help please!
Update:
Here is the real part of the code, for a better understanding of what I need.
It works well: the com.fasterxml.jackson mapper has no restriction on writing an AnyRef to a JSON string.
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

object mapper {
  private val _mapper = new ObjectMapper()
  _mapper.registerModule(DefaultScalaModule)
  def get: ObjectMapper = _mapper
}

import akka.actor.{Actor, Props, ReceiveTimeout}
import akka.http.scaladsl.model.{ContentTypes, HttpEntity, HttpResponse}

object RequestHandler {
  def props(ctx: ImperativeRequestContext): Props = Props(new RequestHandler(ctx))
}

class RequestHandler(ctx: ImperativeRequestContext) extends Actor {
  import context._
  import concurrent.duration._

  setReceiveTimeout(30.second)

  def receive: Receive = {
    case ReceiveTimeout =>
      ctx.complete(HttpResponse(500, entity = "timeout"))
      stop(self)
    case x: AnyRef =>
      ctx.complete(HttpEntity(ContentTypes.`application/json`, mapper.get.writeValueAsString(x)))
      stop(self)
  }
}
I'm not sure what exactly you're complaining about. The Spray JSON library, as is typical for a Scala library, is more typesafe than Jackson. In particular, toJson works based on the compile-time type rather than the run-time type. And obviously there is no safe way to transform an arbitrary AnyRef (aka Object) to JSON, because there might be an object of any type behind that variable. Consider the following example:
val obj: AnyRef = new Thread()
val json = obj.toJson.compactPrint
To Spray JSON this code looks exactly the same as your example. Obviously you can't write a Thread to JSON. The difference is that with Jackson you will get some kind of runtime error, while with Spray JSON it will not even compile. If you use specific types that can be converted to JSON safely, Spray JSON will work for you.
If the question is that you want some generic method that accepts arguments and, as one of its steps, converts them to JSON, then you should use generics with a constraint to specify that type, as in:
def foo[T: JsonWriter](bar: T) = {
  // do something
  bar.toJson.compactPrint
  // do something more
}
Update: more realistic example
Here is an example that compiles for me and runs as I would expect:
import spray.json._
import DefaultJsonProtocol._

case class Foo(i: Int)
implicit val fooFormat = jsonFormat1(Foo)

case class Bar(ii: Int, ss: String)
implicit val barFormat = jsonFormat2(Bar)

def printJson[T: JsonWriter](obj: T): Unit = {
  println(obj.toJson.compactPrint)
}

printJson(Foo(42))
printJson(Bar(123, "this works!"))
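With spray-json on the classpath, this should print JSON along the lines of:
{"i":42}
{"ii":123,"ss":"this works!"}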

Extracting a value from a HashMap with a dynamic key

I have a Java web application with a HashMap class that stores around 20 different web sites, where the key is a specific code:
e.g. code: AB, website: http://www.somewebsiteforAB.com
I generate the code (the HashMap key) via another Java class, and it is surfaced in the JSP for user display.
I am trying to understand how I can pass this 'dynamic' variable from the JSP to the HashMap to return the associated value.
My Java class is:
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;

public class FaMap {
    // Initialises a static, immutable map containing the relevant web sites
    private static final Map<String, String> fMap;

    static {
        /* Declare the HashMap */
        Map<String, String> aMap = new HashMap<String, String>();
        /* Add elements to the HashMap */
        aMap.put("AB", "https://www.somewebsiteforAB.com/");
        aMap.put("CD", "https://www.somewebsiteforCD.com/");
        aMap.put("EF", "https://www.somewebsiteforEF.com/");
        aMap.put("GH", "https://www.somewebsiteforGH.com/");
        fMap = Collections.unmodifiableMap(aMap);

        /* Display the content using an Iterator */
        Set<Entry<String, String>> set = fMap.entrySet();
        Iterator<Entry<String, String>> iterator = set.iterator();
        while (iterator.hasNext()) {
            Entry<String, String> mentry = iterator.next();
            System.out.println("Key: " + mentry.getKey() + ", Value: " + mentry.getValue());
        }
    }
}
The above class will print the keys and values, for all or any specified key, using System.out.println statements within the class. But how do I pass the map a dynamically generated key to extract the relevant value and pass it back to the JSP?
Do I need to write another method that accepts the key as a parameter and passes it to the map?
First of all, your dynamic variable needs to be created, for example:
<c:set var="myVar" value="AB"/>
Once you have this, and have an instance of your map (let's call it fMap), you can simply call it as you would in Java, for example:
<c:set var="myWebsite" value="${fMap.get(myVar)}"/>
And you also need a public Java method that allows access to the map, for example:
public Map<String, String> getMap() {
    return fMap;
}
Yes, you can just create a new method that receives the key being displayed in your JSP.
Something like this:
public String getValueWithKey(String keyFromJSP) {
    return fMap.get(keyFromJSP);
}
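Assuming the JSP has access to an instance of FaMap (the bean name faMap below is illustrative) and the container supports EL 2.2+ method calls, the lookup from the page would then be:
<c:set var="myWebsite" value="${faMap.getValueWithKey(myVar)}"/>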
