I parse a CSV file and create domain objects using Super CSV. My domain object has one enum field, e.g.:
public class TypeWithEnum {
private Type type;
public TypeWithEnum(Type type) {
this.type = type;
}
public Type getType() {
return type;
}
public void setType(Type type) {
this.type = type;
}
}
My enum looks like this:
public enum Type {
CANCEL, REFUND
}
Trying to create beans out of this CSV file:
final String[] header = new String[]{ "type" };
ICsvBeanReader inFile = new CsvBeanReader(new FileReader(
getFilePath(this.getClass(), "learning/enums.csv")), CsvPreference.STANDARD_PREFERENCE);
final CellProcessor[] processors =
new CellProcessor[]{ TODO WHAT TO PUT HERE? };
TypeWithEnum myEnum = inFile.read(
TypeWithEnum.class, header, processors);
This fails with:
Error while filling an object context: null offending processor: null
at org.supercsv.io.CsvBeanReader.fillObject(Unknown Source)
at org.supercsv.io.CsvBeanReader.read(Unknown Source)
Any hints on parsing enums? Should I write my own processor for this?
I already tried to write my own processor, something like this:
class MyCellProcessor extends CellProcessorAdaptor {
public Object execute(Object value, CSVContext context) {
Type type = Type.valueOf(value.toString());
return next.execute(type, context);
}
}
but it dies with the same exception.
The content of my enums.csv file is simple:
CANCEL
REFUND
The exception you're getting is because CsvBeanReader cannot instantiate your TypeWithEnum class, as it doesn't have a default (no arguments) constructor. It's probably a good idea to print the stack trace so you can see the full details of what went wrong.
Super CSV relies on your having supplied a valid Java bean, i.e. a class with a default constructor and public getters/setters for each of its fields.
So you can fix the exception by adding the following to TypeWithEnum:
public TypeWithEnum(){
}
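If you want to see the root cause yourself, a minimal sketch (just wrapping the read call from the question) would be:
try {
    TypeWithEnum myEnum = inFile.read(TypeWithEnum.class, header, processors);
} catch (SuperCSVException e) {
    // printing the stack trace reveals the underlying cause,
    // e.g. the missing default constructor
    e.printStackTrace();
}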
As for hints on parsing enums, the two easiest options are:
1. Using the HashMapper processor
@Test
public void hashMapperTest() throws Exception {
// two lines of input
String input = "CANCEL\nREFUND";
// you could also put the header in the CSV file
// and use inFile.getCSVHeader(true)
final String[] header = new String[] { "type" };
// map from enum name to enum
final Map<Object, Object> typeMap = new HashMap<Object, Object>();
for( Type t : Type.values() ) {
typeMap.put(t.name(), t);
}
// HashMapper will convert from the enum name to the enum
final CellProcessor[] processors =
new CellProcessor[] { new HashMapper(typeMap) };
ICsvBeanReader inFile =
new CsvBeanReader(new StringReader(input),
CsvPreference.STANDARD_PREFERENCE);
TypeWithEnum myEnum;
while((myEnum = inFile.read(TypeWithEnum.class, header, processors)) !=null){
System.out.println(myEnum.getType());
}
}
2. Creating a custom CellProcessor
Create your processor
package org.supercsv;
import org.supercsv.cellprocessor.CellProcessorAdaptor;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.exception.SuperCSVException;
import org.supercsv.util.CSVContext;
public class TypeProcessor extends CellProcessorAdaptor {
public TypeProcessor() {
super();
}
public TypeProcessor(CellProcessor next) {
super(next);
}
public Object execute(Object value, CSVContext context) {
if (!(value instanceof String)){
throw new SuperCSVException("input should be a String!");
}
// parse the String to a Type
Type type = Type.valueOf((String) value);
// execute the next processor in the chain
return next.execute(type, context);
}
}
Use it!
@Test
public void customProcessorTest() throws Exception {
// two lines of input
String input = "CANCEL\nREFUND";
final String[] header = new String[] { "type" };
// TypeProcessor will convert from the enum name to the enum
final CellProcessor[] processors =
new CellProcessor[] { new TypeProcessor() };
ICsvBeanReader inFile =
new CsvBeanReader(new StringReader(input),
CsvPreference.STANDARD_PREFERENCE);
TypeWithEnum myEnum;
while((myEnum = inFile.read(TypeWithEnum.class, header, processors)) !=null){
System.out.println(myEnum.getType());
}
}
I'm working on an upcoming release of Super CSV. I'll be sure to update the website to make it clear that you have to have a valid Java bean - and maybe a description of the available processors, for those not inclined to read Javadoc.
Here is a generic cell processor for enums:
/** A cell processor to convert strings to enums. */
public class EnumCellProcessor<T extends Enum<T>> implements CellProcessor {
private Class<T> enumClass;
private boolean ignoreCase;
/**
* @param enumClass the enum class used for conversion
*/
public EnumCellProcessor(Class<T> enumClass) {
this.enumClass = enumClass;
}
/**
* @param enumClass the enum class used for conversion
* @param ignoreCase if true, the conversion is made case insensitive
*/
public EnumCellProcessor(Class<T> enumClass, boolean ignoreCase) {
this.enumClass = enumClass;
this.ignoreCase = ignoreCase;
}
@Override
public Object execute(Object value, CsvContext context) {
if (value == null)
return null;
String valueAsStr = value.toString();
for (T s : enumClass.getEnumConstants()) {
if (ignoreCase ? s.name().equalsIgnoreCase(valueAsStr) : s.name().equals(valueAsStr)) {
return s;
}
}
throw new SuperCsvCellProcessorException(valueAsStr + " cannot be converted to enum " + enumClass.getName(), context, this);
}
}
and you would use it like this:
new EnumCellProcessor<Type>(Type.class);
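For example, wired into the reader from the question (a sketch using the case-insensitive variant):
final CellProcessor[] processors =
    new CellProcessor[] { new EnumCellProcessor<Type>(Type.class, true) };
TypeWithEnum bean = inFile.read(TypeWithEnum.class, header, processors);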
I tried to reproduce your error, but everything works for me. I'm using Super CSV 1.52:
private enum ENUMS_VALUES{TEST1, TEST2, TEST3};
@Test
public void testEnum3() throws IOException
{
String testInput = new String("TEST1\nTEST2\nTEST3");
ICsvBeanReader reader = new CsvBeanReader(new StringReader(testInput), CsvPreference.EXCEL_NORTH_EUROPE_PREFERENCE);
final String[] header = new String[] {"header"};
reader.read(this.getClass(), header, new CellProcessor[] {new CellProcessorAdaptor() {
@Override
public Object execute(Object pValue, CSVContext pContext)
{
return next.execute(ENUMS_VALUES.valueOf((String)pValue), pContext);
}}});
}
@Test
public void testEnum4() throws IOException
{
String testInput = new String("TEST1\nTEST2\nTEST3");
ICsvBeanReader reader = new CsvBeanReader(new StringReader(testInput), CsvPreference.EXCEL_NORTH_EUROPE_PREFERENCE);
final String[] header = new String[] {"header"};
reader.read(this.getClass(), header, new CellProcessor[] {new CellProcessorAdaptor()
{
@Override
public Object execute(Object pValue, CSVContext pContext)
{
return ENUMS_VALUES.valueOf((String)pValue);
}}});
}
public void setHeader(ENUMS_VALUES value)
{
System.out.println(value);
}
Related
I need to get the enum name based on its value. I am given an enum class and a value, and need to pick the corresponding name at run time.
I have a class called Information as below.
class Information {
private String value;
private String type;
private String cValue;
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
public String getType() {
return type;
}
public void setType(String type) {
this.type = type;
}
public String getcValue() {
return cValue;
}
public void setcValue(String cValue) {
this.cValue = cValue;
}
public static void main(String args[]) {
Information inf = new Information();
inf.setType("com.abc.SignalEnum");
inf.setValue("1");
}
}
enum SignalEnum {
RED("1"), GREEN("2"), ORANGE("3");
private String sign;
SignalEnum(String pattern) {
this.sign = pattern;
}
}
enum MobileEnum {
SAMSUNG("1"), NOKIA("2"), APPLE("3");
private String mobile;
MobileEnum(String mobile) {
this.mobile = mobile;
}
}
At run time I will know the enum class name from the type attribute of the Information class, and I will also have the value. I need to figure out the corresponding enum constant in order to set the cValue attribute of the Information class.
Just as an example I have provided two enums, SignalEnum and MobileEnum, but in my actual case I will get one among some 100 enum types, so I don't want to check and cast each type explicitly. I am looking for a solution that uses reflection to set the cValue.
Here is a simple resolver for any enum class.
Since reflection operations are expensive, it's better to prepare all required data once and then just query for it.
class EnumResolver {
private Map<String, Enum> map = new ConcurrentHashMap<>();
public EnumResolver(String className) {
try {
Class enumClass = Class.forName(className);
// look for backing property field, e.g. "sign" in SignalEnum
Field accessor = Arrays.stream(enumClass.getDeclaredFields())
.filter(f -> f.getType().equals(String.class))
.findFirst()
.orElseThrow(() -> new NoSuchFieldException("Not found field to access enum backing value"));
accessor.setAccessible(true);
// populate map with pairs like ["1" => SignalEnum.RED, "2" => SignalEnum.GREEN, etc]
for (Enum e : getEnumValues(enumClass)) {
map.put((String) accessor.get(e), e);
}
accessor.setAccessible(false);
} catch (ReflectiveOperationException e) {
throw new RuntimeException(e);
}
}
public Enum resolve(String backingValue) {
return map.get(backingValue);
}
private <E extends Enum> E[] getEnumValues(Class<E> enumClass) throws ReflectiveOperationException {
Field f = enumClass.getDeclaredField("$VALUES");
f.setAccessible(true);
Object o = f.get(null);
f.setAccessible(false);
return (E[]) o;
}
}
And here is a simple JUnit test:
public class EnumResolverTest {
@Test
public void testSignalEnum() {
EnumResolver signalResolver = new EnumResolver("com.abc.SignalEnum");
assertEquals(SignalEnum.RED, signalResolver.resolve("1"));
assertEquals(SignalEnum.GREEN, signalResolver.resolve("2"));
assertEquals(SignalEnum.ORANGE, signalResolver.resolve("3"));
}
@Test
public void testMobileEnum() {
EnumResolver mobileResolver = new EnumResolver("com.abc.MobileEnum");
assertEquals(MobileEnum.SAMSUNG, mobileResolver.resolve("1"));
assertEquals(MobileEnum.NOKIA, mobileResolver.resolve("2"));
assertEquals(MobileEnum.APPLE, mobileResolver.resolve("3"));
}
}
And again, for performance's sake, you can instantiate these various resolvers once and put them into a separate Map:
Map<String, EnumResolver> resolverMap = new ConcurrentHashMap<>();
resolverMap.put("com.abc.MobileEnum", new EnumResolver("com.abc.MobileEnum"));
resolverMap.put("com.abc.SignalEnum", new EnumResolver("com.abc.SignalEnum"));
// etc
Information inf = new Information();
inf.setType("com.abc.SignalEnum");
inf.setValue("1");
SignalEnum red = (SignalEnum) resolverMap.get(inf.getType()).resolve(inf.getValue());
I want to develop a component that filters fields of a DTO before it is serialized to JSON, based on an exclusion list. A field is matched by the name given in the Jackson annotation @JsonProperty if it's present, otherwise by the name of the field itself (mapping without annotation).
How can I do dynamic, exclusion-based filtering with annotations? Are there useful resources (code, tutorials, ...)?
Class JacksonFieldFilter
public class JacksonFieldFilter {
public <T> T filter(T input, List<String> toExclude, Function<Object, Object> fun) throws IllegalArgumentException, IllegalAccessException {
Field[] fields = input.getClass().getFields();
for (Field field : fields) {
// check is not elementary type.
// if not ==> get its annotation
Annotation[] annotations = field.getAnnotations();
/// Filter on Jackson annotations only, with name == JsonProperty
Annotation ja = getJackson(annotations);
/// get annotation value as String ==> annotationNameValue.
String annotationNameValue = null;
if (toExclude.contains(annotationNameValue)) {
/// found the name in excluded values list
Object prev = field.get(input);
field.set(input, fun.apply(prev));
}
}
return input;
}
Annotation getJackson(Annotation[] annotations) {
for (Annotation annotation : annotations) {
if (annotation.annotationType().isAssignableFrom(JsonProperty.class)) {
return annotation;
}
}
return null;
}
// Test
public static void main(String[] args) throws IllegalArgumentException, IllegalAccessException {
JacksonFieldFilter filter = new JacksonFieldFilter();
Item item = new Item();
item.setField1("London");
item.setField2("Paris");
Item clone = null; // item.
clone = filter.filter(clone, Arrays.asList(new String[] { "field_1" }), p -> {
System.err.println("Erasing " + p);
return null;
});
// OUTPUT ==> {"field_2":"Paris"}
System.out.println(clone);
}
}
Class Item
public class Item {
@JsonProperty("field_1")
private String field1;
@JsonProperty("field_2")
private String field2;
public String getField1() {
return field1;
}
public void setField1(String field1) {
this.field1 = field1;
}
public String getField2() {
return field2;
}
public void setField2(String field2) {
this.field2 = field2;
}
}
There are a few problems with your code:
You do not pass the item object to the filter function, so you cannot copy the field values into clone.
The fields in the Item class are private, so you have to use getClass().getDeclaredFields() and setAccessible() to access/write them.
Try the code below :
public static class JacksonFieldFilter {
public <T> T filter(T src, T dest, Collection<String> toExclude, Function<String, Void> callback)
throws IllegalArgumentException, IllegalAccessException {
Field[] fields = src.getClass().getDeclaredFields();
for (Field field : fields) {
JsonProperty property = field.getAnnotation(JsonProperty.class);
if (property != null) {
String value = property.value();
if (toExclude.contains(value)) {
callback.apply(value);
} else {
// Write the value from "src" to "dest"
field.setAccessible(true); // Without this we can not write into a private field
field.set(dest, field.get(src));
}
}
}
return dest;
}
}
// ...
public static void main(String[] args) throws Exception {
Item item = new Item();
item.setField1("London");
item.setField2("Paris");
Item clone = new JacksonFieldFilter().filter(item, new Item(), Arrays.asList(new String[]{"field_1"}), (f) -> {
System.out.println("Erasing " + f);
return null;
});
System.out.println(clone);
}
Output:
Erasing field_1
Item{field1=null, field2=Paris}
I want to create a custom Spark Transformer in Java.
The Transformer is a text preprocessor which acts like a Tokenizer. It takes an input column and an output column as parameters.
I looked around and found two Scala traits, HasInputCol and HasOutputCol.
How can I create a class that extends Transformer and implements HasInputCol and HasOutputCol?
My goal is to have something like this.
// Dataset that has a String column named "text"
Dataset<Row> dataset;
CustomTransformer customTransformer = new CustomTransformer();
customTransformer.setInputCol("text");
customTransformer.setOutputCol("result");
// result has two String columns named "text" and "result"
Dataset<Row> result = customTransformer.transform(dataset);
As SergGr suggested, you can extend UnaryTransformer. However, it is quite tricky.
NOTE: All the below comments apply to Spark version 2.2.0.
To address the issue described in SPARK-12606, where they were getting "...Param null__inputCol does not belong to...", you should implement String uid() like this:
@Override
public String uid() {
return getUid();
}
private String getUid() {
if (uid == null) {
uid = Identifiable$.MODULE$.randomUID("mycustom");
}
return uid;
}
Apparently they were initializing uid in the constructor. But the thing is that UnaryTransformer's inputCol (and outputCol) is initialized before uid is initialized in the inheriting class. See HasInputCol:
final val inputCol: Param[String] = new Param[String](this, "inputCol", "input column name")
This is how Param is constructed:
def this(parent: Identifiable, name: String, doc: String) = this(parent.uid, name, doc)
Thus, when parent.uid is evaluated, the custom uid() implementation is called and at this point uid is still null. By implementing uid() with lazy evaluation you make sure uid() never returns null.
In your case though:
Param d7ac3108-799c-4aed-a093-c85d12833a4e__inputCol does not belong to fe3d99ba-e4eb-4e95-9412-f84188d936e3
it seems to be a bit different. Because "d7ac3108-799c-4aed-a093-c85d12833a4e" != "fe3d99ba-e4eb-4e95-9412-f84188d936e3", it looks like your implementation of the uid() method returns a new value on each call. Perhaps in your case it was implemented like this:
@Override
public String uid() {
return Identifiable$.MODULE$.randomUID("mycustom");
}
By the way, when extending UnaryTransformer, make sure the transform function is Serializable.
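One way to satisfy that requirement (my own sketch, not part of Spark's API) is a small helper that combines AbstractFunction1 with Serializable, and to return an instance of it from createTransformFunc:
import java.io.Serializable;

// Hypothetical helper: a Scala Function1 that is also Serializable,
// so Spark can ship the transform function to the executors.
public abstract class SerializableFunction1<T, R>
        extends scala.runtime.AbstractFunction1<T, R>
        implements Serializable {
}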
You probably want to inherit your CustomTransformer from org.apache.spark.ml.UnaryTransformer. You may try something like this:
import org.apache.spark.ml.UnaryTransformer;
import org.apache.spark.ml.util.Identifiable$;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;
import scala.Function1;
import scala.collection.JavaConversions$;
import scala.collection.immutable.Seq;
import java.util.Arrays;
public class MyCustomTransformer extends UnaryTransformer<String, scala.collection.immutable.Seq<String>, MyCustomTransformer>
{
private final String uid = Identifiable$.MODULE$.randomUID("mycustom");
@Override
public String uid()
{
return uid;
}
@Override
public Function1<String, scala.collection.immutable.Seq<String>> createTransformFunc()
{
// can't use lambda syntax :(
return new scala.runtime.AbstractFunction1<String, Seq<String>>()
{
@Override
public Seq<String> apply(String s)
{
// do the logic
String[] split = s.toLowerCase().split("\\s");
// convert to Scala type
return JavaConversions$.MODULE$.iterableAsScalaIterable(Arrays.asList(split)).toList();
}
};
}
@Override
public void validateInputType(DataType inputType)
{
super.validateInputType(inputType);
if (inputType != DataTypes.StringType)
throw new IllegalArgumentException("Input type must be string type but got " + inputType + ".");
}
@Override
public DataType outputDataType()
{
return DataTypes.createArrayType(DataTypes.StringType, true); // or false? depends on your data
}
}
I'm a bit late to the party, but I have a few examples of custom Java Spark transforms here: https://github.com/dafrenchyman/spark/tree/master/src/main/java/com/mrsharky/spark/ml/feature
Here's an example with just an input column, but you can easily add an output column following the same patterns. This doesn't implement the readers and writers though. You'll need to check the link above to see how to do that.
public class DropColumns extends Transformer implements Serializable,
DefaultParamsWritable {
private StringArrayParam _inputCols;
private final String _uid;
public DropColumns(String uid) {
_uid = uid;
}
public DropColumns() {
_uid = DropColumns.class.getName() + "_" +
UUID.randomUUID().toString();
}
// Getters
public String[] getInputCols() { return get(_inputCols).get(); }
// Setters
public DropColumns setInputCols(String[] columns) {
_inputCols = inputCols();
set(_inputCols, columns);
return this;
}
public DropColumns setInputCols(List<String> columns) {
String[] columnsString = columns.toArray(new String[columns.size()]);
return setInputCols(columnsString);
}
public DropColumns setInputCols(String column) {
String[] columns = new String[]{column};
return setInputCols(columns);
}
// Overrides
@Override
public Dataset<Row> transform(Dataset<?> data) {
List<String> dropCol = new ArrayList<String>();
Dataset<Row> newData = null;
try {
for (String currColumn : this.get(_inputCols).get() ) {
dropCol.add(currColumn);
}
Seq<String> seqCol = JavaConverters.asScalaIteratorConverter(dropCol.iterator()).asScala().toSeq();
newData = data.drop(seqCol);
} catch (Exception ex) {
ex.printStackTrace();
}
return newData;
}
@Override
public Transformer copy(ParamMap extra) {
DropColumns copied = new DropColumns();
copied.setInputCols(this.getInputCols());
return copied;
}
@Override
public StructType transformSchema(StructType oldSchema) {
StructField[] fields = oldSchema.fields();
List<StructField> newFields = new ArrayList<StructField>();
List<String> columnsToRemove = Arrays.asList( get(_inputCols).get() );
for (StructField currField : fields) {
String fieldName = currField.name();
if (!columnsToRemove.contains(fieldName)) {
newFields.add(currField);
}
}
StructType schema = DataTypes.createStructType(newFields);
return schema;
}
@Override
public String uid() {
return _uid;
}
@Override
public MLWriter write() {
return new DropColumnsWriter(this);
}
@Override
public void save(String path) throws IOException {
write().saveImpl(path);
}
public static MLReader<DropColumns> read() {
return new DropColumnsReader();
}
public StringArrayParam inputCols() {
return new StringArrayParam(this, "inputCols", "Columns to be dropped");
}
public DropColumns load(String path) {
return ( (DropColumnsReader) read()).load(path);
}
}
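As noted above, an output column parameter can be added to your own transformer following the same pattern; here is a minimal sketch (mine, not from the linked repo), using Spark's Param<String>, the single-value analogue of StringArrayParam:
private Param<String> _outputCol;

public Param<String> outputCol() {
    return new Param<String>(this, "outputCol", "Name of the column to create");
}

public DropColumns setOutputCol(String column) {
    _outputCol = outputCol();
    set(_outputCol, column);
    return this;
}

public String getOutputCol() {
    return get(_outputCol).get();
}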
Even later to the party, I have another update. I had a hard time finding information on extending Spark Transformers in Java, so I am posting my findings here.
I have also been working on custom transformers in Java. At the time of writing, it is a little easier to include save/load functionality. You can create writable parameters by implementing DefaultParamsWritable. Implementing DefaultParamsReadable, however, still results in an exception for me, but there is a simple work-around.
Here is the basic implementation of a column renamer:
public class ColumnRenamer extends Transformer implements DefaultParamsWritable {
/**
* A custom Spark transformer that renames the inputCols to the outputCols.
*
* We would also like to implement DefaultParamsReadable<ColumnRenamer>, but
* there appears to be a bug in DefaultParamsReadable when used in Java, see:
* https://issues.apache.org/jira/browse/SPARK-17048
**/
private final String uid_;
private StringArrayParam inputCols_;
private StringArrayParam outputCols_;
private HashMap<String, String> renameMap;
public ColumnRenamer() {
this(Identifiable.randomUID("ColumnRenamer"));
}
public ColumnRenamer(String uid) {
this.uid_ = uid;
init();
}
@Override
public String uid() {
return uid_;
}
@Override
public Transformer copy(ParamMap extra) {
return defaultCopy(extra);
}
/**
* The below method is a work around, see:
* https://issues.apache.org/jira/browse/SPARK-17048
**/
public static MLReader<ColumnRenamer> read() {
return new DefaultParamsReader<>();
}
public Dataset<Row> transform(Dataset<?> dataset) {
Dataset<Row> transformedDataset = dataset.toDF();
// Check schema.
transformSchema(transformedDataset.schema(), true); // logging = true
// Rename columns.
for (Map.Entry<String, String> entry: renameMap.entrySet()) {
String inputColName = entry.getKey();
String outputColName = entry.getValue();
transformedDataset = transformedDataset
.withColumnRenamed(inputColName, outputColName);
}
return transformedDataset;
}
@Override
public StructType transformSchema(StructType schema) {
// Validate the parameters here...
String[] inputCols = getInputCols();
String[] outputCols = getOutputCols();
// Create rename mapping.
renameMap = new HashMap<> ();
for (int i = 0; i < inputCols.length; i++) {
renameMap.put(inputCols[i], outputCols[i]);
}
// Rename columns.
ArrayList<StructField> fields = new ArrayList<> ();
for (StructField field: schema.fields()) {
String columnName = field.name();
if (renameMap.containsKey(columnName)) {
columnName = renameMap.get(columnName);
}
fields.add(new StructField(
columnName, field.dataType(), field.nullable(), field.metadata()
));
}
// Return as StructType.
return new StructType(fields.toArray(new StructField[0]));
}
private void init() {
inputCols_ = new StringArrayParam(this, "inputCols", "input column names");
outputCols_ = new StringArrayParam(this, "outputCols", "output column names");
}
public StringArrayParam inputCols() {
return inputCols_;
}
public ColumnRenamer setInputCols(String[] value) {
set(inputCols_, value);
return this;
}
public String[] getInputCols() {
return getOrDefault(inputCols_);
}
public StringArrayParam outputCols() {
return outputCols_;
}
public ColumnRenamer setOutputCols(String[] value) {
set(outputCols_, value);
return this;
}
public String[] getOutputCols() {
return getOrDefault(outputCols_);
}
}
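A usage sketch (the column names here are illustrative):
ColumnRenamer renamer = new ColumnRenamer()
    .setInputCols(new String[] { "text" })
    .setOutputCols(new String[] { "sentence" });
Dataset<Row> renamed = renamer.transform(dataset);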
My company has an application server that receives sets of instructions in its own bespoke XTML syntax. As this is limited, there's a special "drop to Java" command that sends arguments to a JVM (1.6.0_39). Arguments are passed as "in" only or "in/out", where the special "in/out" variables are a library of mutables for use with this platform.
Previously the only way to receive external configuration was to use a different special command to read from an XTML file. For reasons not worth delving into, this method of configuration is difficult to scale, so I'm working on a way to do this with Java.
The syntax for this configuration was two-tuples of (String,T) where String was the property name in the XTML file, and T was the in/out mutable that the application server would assign the property value to.
I'm attempting to make this transition as seamless as possible, and not have to do annoying string parsing in the application server.
I already have a function
public String[] get(String ... keys)
that retrieves the values for the application server's keys, but what I really need is a function
public static void get(T ... args)
that accepts the two-tuples. However, note that it needs to be static in order to be called from the application server, and my understanding is that T can't be used in a static context.
I'm at a loss for how to approach this problem in a way that doesn't require (at least) two steps, and there is no way to loop over the arguments in the application server.
I know I'm working within a tight set of constraints here, so if the answer is "you have to do some messed-up stuff", that's fine - I'd just like any insight into another way.
-- edit --
Editing a more specific example.
The configuration is a set of key-value pairs, and can be in a database or a file. The get function is:
public JSONObject get(String ... keys) throws ClassNotFoundException, SQLException, KeyNotFoundException, FileNotFoundException, IOException {
JSONObject response = new JSONObject();
if(this.isDatabase) {
for(int i=0;i<keys.length;i++){
PreparedStatement statement = this.prepare("SELECT value FROM "+this.databaseSchema+"."+this.settingsTableName+" WHERE key = ? LIMIT 1");
statement.setString(1, keys[i]);
ResultSet results = statement.executeQuery();
boolean found = false;
while(results.next()){
String value = results.getString("value");
value = value.replace("\"","");
response.put(keys[i], value);
found = true;
}
if(!found){
throw new KeyNotFoundException(keys[i]);
}
}
} else if (this.isFile) {
boolean[] found = new boolean[keys.length];
BufferedReader br = new BufferedReader(new FileReader(this.settingsFile));
String line;
while((line = br.readLine()) != null ){
String key;
String value;
for(int i=0;i<line.length();i++){
if(line.charAt(i) == '='){
key = line.substring(0,i);
value = line.substring(i+1,line.length());
if(indexOfString(keys,key) != -1){
value = value.replace("\"","");
found[indexOfString(keys,key)] = true;
response.put(key,value);
if(allFound(found)==-1){
return response;
}
}
break;
}
}
}
if(allFound(found)!=-1){
throw new KeyNotFoundException(keys[allFound(found)]);
}
}
return response;
}
If I had my way, it would look like ...
// ConfigurationReader.java
public class ConfigurationReader{
public ConfigurationReader( ... ){}
public static JSONObject get(String key){
// Get the key
}
}
// ConfigurationInterface.java
public static void get(T ... args){
ConfigurationReader cfgReader = new ConfigurationReader( ... );
for(int i=0;i<args.length;i+=2){
in = args[i];
out = args[i+1];
out = cfgReader.get(in);
}
}
You can use generic types in a static context. Your question is somewhat vague/unclear about how you intend to do this, but consider the example below:
public class Example {
public static void main(String[] args) {
Type t1 = new Type("foo");
Type t2 = new Type("bar");
Type t3 = new Type("baz");
Printer.<Type> printNames(t1, t2, t3);
}
public static class Printer {
@SafeVarargs
public static <T extends Type> void printNames(T... objs) {
for (T obj : objs) {
System.out.println(obj);
}
}
}
public static class Type {
private final String name;
public Type(String name) {
this.name = name;
}
@Override
public final String toString() {
return name;
}
}
}
Printer.<Type> printNames(t1, t2, t3) makes a static reference to the printNames method, parameterized with the Type generic type.
Note that this is type-safe. Attempting to pass an object of a different type into that parameterized method will fail at compile-time (assuming the type is known to be different at that point):
Example.java:8: error: method printNames in class Printer cannot be applied to given types;
Printer.<Type> printNames(t1, t2, t3, "test");
^
required: T[]
found: Type,Type,Type,String
reason: varargs mismatch; String cannot be converted to Type
where T is a type-variable:
T extends Type declared in method <T>printNames(T...)
Edit
Based on your comment, the issue isn't that you're trying to use a generic type for your method argument (in the Java sense of the word generic, anyway); you're simply looking for any non-specific parent class that both String and your custom type inherit from. There's only one such class: Object.
I'd strongly recommend reconsidering your design if you have any flexibility, since this will make for poor API design. However, you can have your method accept an arbitrary number of arbitrarily-typed objects using Object... objs.
For example:
public class Example {
public static void main(String[] args) {
Printer.printNames("a", "b", new Type("foo"), new Type("bar"));
}
public static class Printer {
public static void printNames(Object... objs) {
for (Object obj : objs) {
if (obj instanceof String) {
System.out.println(((String) obj).toUpperCase());
}
else if (obj instanceof Type) {
System.out.println(obj);
}
}
}
}
public static class Type {
private final String name;
public Type(String name) { this.name = name; }
public final String toString() { return name; }
}
}
Based on @nbrooks' work, I found a solution. I made a temporary MutableString (to be replaced by the classes provided by the library).
public static class MutableString {
public String value;
public MutableString(){}
}
// One for every mutable type
public static void Pair(String key, MutableString mutable, ApplicationConfiguration appConfig) throws Exception{
mutable.value = appConfig.get(key).toString();
}
public static void Retrieve(Object ... args) throws Exception {
ApplicationConfiguration appConfig = new ApplicationConfiguration( ##args## );
for(int i=0;i<args.length;i+=2){
if(args[i+1].getClass().equals(new MutableString().getClass())){
ApplicationConfiguration.Pair( (String) args[i], (MutableString) args[i+1], appConfig);
} // One for every mutable type
}
}
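From the application server side, a call would then look roughly like this (the key names, and whatever the ApplicationConfiguration constructor needs, are illustrative):
MutableString region = new MutableString();
MutableString locale = new MutableString();
Retrieve("app.region", region, "app.locale", locale);
// region.value and locale.value now hold the configured strings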
I'm trying to serialize/deserialize an object that involves polymorphism into JSON using Gson.
This is my code for serializing:
ObixBaseObj lobbyObj = new ObixBaseObj();
lobbyObj.setIs("obix:Lobby");
ObixOp batchOp = new ObixOp();
batchOp.setName("batch");
batchOp.setIn("obix:BatchIn");
batchOp.setOut("obix:BatchOut");
lobbyObj.addChild(batchOp);
Gson gson = new Gson();
System.out.println(gson.toJson(lobbyObj));
Here's the result:
{"obix":"obj","is":"obix:Lobby","children":[{"obix":"op","name":"batch"}]}
The serialization mostly works, except it's missing the contents of inherited members (in particular, the obix:BatchIn and obix:BatchOut strings are missing).
Here's my base class:
public class ObixBaseObj {
protected String obix;
private String display;
private String displayName;
private ArrayList<ObixBaseObj> children;
public ObixBaseObj()
{
obix = "obj";
}
public void setName(String name) {
this.name = name;
}
...
}
Here's what my inherited class (ObixOp) looks like:
public class ObixOp extends ObixBaseObj {
private String in;
private String out;
public ObixOp() {
obix = "op";
}
public ObixOp(String in, String out) {
obix = "op";
this.in = in;
this.out = out;
}
public String getIn() {
return in;
}
public void setIn(String in) {
this.in = in;
}
public String getOut() {
return out;
}
public void setOut(String out) {
this.out = out;
}
}
I realize I could use an adapter for this, but the problem is that I'm serializing a collection of the base class type ObixBaseObj. There are about 25 classes that inherit from it. How can I make this work elegantly?
There's a simple solution: Gson's RuntimeTypeAdapterFactory (from com.google.code.gson:gson-extras:$gsonVersion). You don't have to write any serializer; this class does all the work for you. Try this with your code:
ObixBaseObj lobbyObj = new ObixBaseObj();
lobbyObj.setIs("obix:Lobby");
ObixOp batchOp = new ObixOp();
batchOp.setName("batch");
batchOp.setIn("obix:BatchIn");
batchOp.setOut("obix:BatchOut");
lobbyObj.addChild(batchOp);
RuntimeTypeAdapterFactory<ObixBaseObj> adapter =
RuntimeTypeAdapterFactory
.of(ObixBaseObj.class)
.registerSubtype(ObixBaseObj.class)
.registerSubtype(ObixOp.class);
Gson gson2=new GsonBuilder().setPrettyPrinting().registerTypeAdapterFactory(adapter).create();
Gson gson = new Gson();
System.out.println(gson.toJson(lobbyObj));
System.out.println("---------------------");
System.out.println(gson2.toJson(lobbyObj));
Output:
{"obix":"obj","is":"obix:Lobby","children":[{"obix":"op","name":"batch","children":[]}]}
---------------------
{
"type": "ObixBaseObj",
"obix": "obj",
"is": "obix:Lobby",
"children": [
{
"type": "ObixOp",
"in": "obix:BatchIn",
"out": "obix:BatchOut",
"obix": "op",
"name": "batch",
"children": []
}
]
}
EDIT: Better working example.
You said that there are about 25 classes that inherit from ObixBaseObj.
We start by writing a new class, GsonUtils:
public class GsonUtils {
private static final GsonBuilder gsonBuilder = new GsonBuilder()
.setPrettyPrinting();
public static void registerType(
RuntimeTypeAdapterFactory<?> adapter) {
gsonBuilder.registerTypeAdapterFactory(adapter);
}
public static Gson getGson() {
return gsonBuilder.create();
}
}
Every time we need a Gson object, instead of calling new Gson(), we will call
GsonUtils.getGson()
We add this code to ObixBaseObj:
public class ObixBaseObj {
protected String obix;
private String display;
private String displayName;
private String name;
private String is;
private ArrayList<ObixBaseObj> children = new ArrayList<ObixBaseObj>();
// new code
private static final RuntimeTypeAdapterFactory<ObixBaseObj> adapter =
RuntimeTypeAdapterFactory.of(ObixBaseObj.class);
private static final HashSet<Class<?>> registeredClasses= new HashSet<Class<?>>();
static {
GsonUtils.registerType(adapter);
}
private synchronized void registerClass() {
if (!registeredClasses.contains(this.getClass())) {
registeredClasses.add(this.getClass());
adapter.registerSubtype(this.getClass());
}
}
public ObixBaseObj() {
registerClass();
obix = "obj";
}
Why? Because every time this class, or a child class of ObixBaseObj, is instantiated, the class gets registered in the RuntimeTypeAdapterFactory.
In the child classes, only a minimal change is needed:
public class ObixOp extends ObixBaseObj {
private String in;
private String out;
public ObixOp() {
super();
obix = "op";
}
public ObixOp(String in, String out) {
super();
obix = "op";
this.in = in;
this.out = out;
}
Working example:
public static void main(String[] args) {
ObixBaseObj lobbyObj = new ObixBaseObj();
lobbyObj.setIs("obix:Lobby");
ObixOp batchOp = new ObixOp();
batchOp.setName("batch");
batchOp.setIn("obix:BatchIn");
batchOp.setOut("obix:BatchOut");
lobbyObj.addChild(batchOp);
Gson gson = GsonUtils.getGson();
System.out.println(gson.toJson(lobbyObj));
}
Output:
{
"type": "ObixBaseObj",
"obix": "obj",
"is": "obix:Lobby",
"children": [
{
"type": "ObixOp",
"in": "obix:BatchIn",
"out": "obix:BatchOut",
"obix": "op",
"name": "batch",
"children": []
}
]
}
I hope it helps.
I think that a custom serializer/deserializer is the only way to proceed, and I have tried to propose the most compact way of realizing it that I have found. I apologize for not using your classes, but the idea is the same (I just wanted at least one base class and two extended classes).
BaseClass.java
public class BaseClass{
@Override
public String toString() {
return "BaseClass [list=" + list + ", isA=" + isA + ", x=" + x + "]";
}
public ArrayList<BaseClass> list = new ArrayList<BaseClass>();
protected String isA="BaseClass";
public int x;
}
ExtendedClass1.java
public class ExtendedClass1 extends BaseClass{
@Override
public String toString() {
return "ExtendedClass1 [total=" + total + ", number=" + number
+ ", list=" + list + ", isA=" + isA + ", x=" + x + "]";
}
public ExtendedClass1(){
isA = "ExtendedClass1";
}
public Long total;
public Long number;
}
ExtendedClass2.java
public class ExtendedClass2 extends BaseClass{
@Override
public String toString() {
return "ExtendedClass2 [total=" + total + ", list=" + list + ", isA="
+ isA + ", x=" + x + "]";
}
public ExtendedClass2(){
isA = "ExtendedClass2";
}
public Long total;
}
CustomDeserializer.java
public class CustomDeserializer implements JsonDeserializer<List<BaseClass>> {
private static Map<String, Class> map = new TreeMap<String, Class>();
static {
map.put("BaseClass", BaseClass.class);
map.put("ExtendedClass1", ExtendedClass1.class);
map.put("ExtendedClass2", ExtendedClass2.class);
}
public List<BaseClass> deserialize(JsonElement json, Type typeOfT,
JsonDeserializationContext context) throws JsonParseException {
List list = new ArrayList<BaseClass>();
JsonArray ja = json.getAsJsonArray();
for (JsonElement je : ja) {
String type = je.getAsJsonObject().get("isA").getAsString();
Class c = map.get(type);
if (c == null)
throw new RuntimeException("Unknow class: " + type);
list.add(context.deserialize(je, c));
}
return list;
}
}
CustomSerializer.java
public class CustomSerializer implements JsonSerializer<ArrayList<BaseClass>> {
private static Map<String, Class> map = new TreeMap<String, Class>();
static {
map.put("BaseClass", BaseClass.class);
map.put("ExtendedClass1", ExtendedClass1.class);
map.put("ExtendedClass2", ExtendedClass2.class);
}
@Override
public JsonElement serialize(ArrayList<BaseClass> src, Type typeOfSrc,
JsonSerializationContext context) {
if (src == null)
return null;
else {
JsonArray ja = new JsonArray();
for (BaseClass bc : src) {
Class c = map.get(bc.isA);
if (c == null)
throw new RuntimeException("Unknow class: " + bc.isA);
ja.add(context.serialize(bc, c));
}
return ja;
}
}
}
and now this is the code I executed to test the whole thing:
public static void main(String[] args) {
BaseClass c1 = new BaseClass();
ExtendedClass1 e1 = new ExtendedClass1();
e1.total = 100L;
e1.number = 5L;
ExtendedClass2 e2 = new ExtendedClass2();
e2.total = 200L;
e2.x = 5;
BaseClass c2 = new BaseClass();
c1.list.add(e1);
c1.list.add(e2);
c1.list.add(c2);
List<BaseClass> al = new ArrayList<BaseClass>();
// this is the instance of BaseClass before serialization
System.out.println(c1);
GsonBuilder gb = new GsonBuilder();
gb.registerTypeAdapter(al.getClass(), new CustomDeserializer());
gb.registerTypeAdapter(al.getClass(), new CustomSerializer());
Gson gson = gb.create();
String json = gson.toJson(c1);
// this is the corresponding json
System.out.println(json);
BaseClass newC1 = gson.fromJson(json, BaseClass.class);
System.out.println(newC1);
}
This is my execution:
BaseClass [list=[ExtendedClass1 [total=100, number=5, list=[], isA=ExtendedClass1, x=0], ExtendedClass2 [total=200, list=[], isA=ExtendedClass2, x=5], BaseClass [list=[], isA=BaseClass, x=0]], isA=BaseClass, x=0]
{"list":[{"total":100,"number":5,"list":[],"isA":"ExtendedClass1","x":0},{"total":200,"list":[],"isA":"ExtendedClass2","x":5},{"list":[],"isA":"BaseClass","x":0}],"isA":"BaseClass","x":0}
BaseClass [list=[ExtendedClass1 [total=100, number=5, list=[], isA=ExtendedClass1, x=0], ExtendedClass2 [total=200, list=[], isA=ExtendedClass2, x=5], BaseClass [list=[], isA=BaseClass, x=0]], isA=BaseClass, x=0]
Some explanations: the trick is done by delegating back to the Gson context inside the serializer/deserializer. I use just the isA field to spot the right class. To go faster, I use a map to associate each isA string with the corresponding class. Then I do the proper serialization/deserialization through the context, passing the concrete class. The maps are declared static so you won't slow down serialization/deserialization with repeated allocations.
Pro
You actually do not write more code than this; you let Gson do all the work. You just have to remember to put a new subclass into the maps (the exception reminds you of that).
Cons
You have two maps. I think that my implementation can be refined a bit to avoid the map duplication, but I leave that to you (or to a future editor, if any).
If you want to unify serialization and deserialization into a single object, you should check the TypeAdapter class, or experiment with an object that implements both interfaces.
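For reference, such a unified object would extend Gson's TypeAdapter; a bare skeleton (the read logic would have to dispatch on the isA property itself, which is left out here):
public class BaseClassAdapter extends TypeAdapter<BaseClass> {
    @Override
    public void write(JsonWriter out, BaseClass value) throws IOException {
        // stream value out as JSON, including its isA discriminator
    }

    @Override
    public BaseClass read(JsonReader in) throws IOException {
        // parse the JSON stream and dispatch on the isA discriminator
        return null; // placeholder
    }
}
It would be registered with gb.registerTypeAdapter(BaseClass.class, new BaseClassAdapter());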
I appreciate the other answers here that led me on my path to solving this issue. I used a combination of RuntimeTypeAdapterFactory with Reflection.
I also created a helper class to make sure a properly configured Gson was used.
Within a static block inside the GsonHelper class, I have the following code go through my project to find and register all of the appropriate types. All of my objects that will go through JSON-ification are a subtype of Jsonable.
You will want to change the following:
my.project in Reflections should be your package name.
Jsonable.class is my base class. Substitute yours.
I like having the field show the full canonical name, but clearly, if you don't want or need it, you can leave out that part of the call to register the subtype. The same thing goes for className in the RuntimeTypeAdapterFactory; I have data items already using the type field.
private static final GsonBuilder gsonBuilder = new GsonBuilder()
.setDateFormat("yyyy-MM-dd'T'HH:mm:ssZ")
.excludeFieldsWithoutExposeAnnotation()
.setPrettyPrinting();
static {
Reflections reflections = new Reflections("my.project");
Set<Class<? extends Jsonable>> allTypes = reflections.getSubTypesOf(Jsonable.class);
for (Class< ? extends Jsonable> serClass : allTypes){
Set<?> subTypes = reflections.getSubTypesOf(serClass);
if (subTypes.size() > 0){
RuntimeTypeAdapterFactory<?> adapterFactory = RuntimeTypeAdapterFactory.of(serClass, "className");
for (Object o : subTypes ){
Class c = (Class)o;
adapterFactory.registerSubtype(c, c.getCanonicalName());
}
gsonBuilder.registerTypeAdapterFactory(adapterFactory);
}
}
}
public static Gson getGson() {
return gsonBuilder.create();
}
I created a type adapter factory that uses an annotation and ClassGraph to discover subclasses, and supports multiple serialization styles (Type Property, Property, Array). See GitHub for the source code and Maven coordinates.
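The subclass discovery itself is analogous to the Reflections example above; a minimal sketch with ClassGraph (the package and base class are illustrative, and this is not the library's actual code):
// imports assumed: io.github.classgraph.ClassGraph, io.github.classgraph.ScanResult
RuntimeTypeAdapterFactory<Jsonable> factory =
    RuntimeTypeAdapterFactory.of(Jsonable.class, "type");
try (ScanResult scan = new ClassGraph()
        .enableClassInfo()
        .acceptPackages("my.project")
        .scan()) {
    for (Class<?> sub : scan.getSubclasses(Jsonable.class.getName()).loadClasses()) {
        factory.registerSubtype(sub.asSubclass(Jsonable.class), sub.getSimpleName());
    }
}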