I want to use the map function (or a stream?) to make this code tighter.
var allObjectNames = new ArrayList<String>();
for (Object object : allObjects) {
allObjectNames.add(object.name);
}
I thought about:
var allObjectNames = new ArrayList<String>();
allObjectNames.addAll(map(allObjects.name));
Or something like this.
Minimalistic Example:
package Mapping;
import java.util.ArrayList;
import java.util.stream.Collectors;
import java.util.stream.Stream;
public class FunctionMapping {
public static void main(String[] args) {
var output1 = new Output("start");
var output2 = new Output("success");
var output3 = new Output("failure");
ArrayList<Output> rootOutput = new ArrayList<>();
rootOutput.add(output1);
rootOutput.add(output2);
rootOutput.add(output3);
var outputNames = rootOutput.stream().map(o -> o.outputName).collect(Collectors.toList());
}
static class Output {
String outputName;
Output (String name) {
this.outputName = name;
}
}
}
var allObjectNames = allObjects.stream().map(o -> o.name).collect(Collectors.toList());
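On Java 16 and later you can also finish the pipeline with Stream.toList(), which returns an unmodifiable list; a minimal sketch, assuming the element type exposes the name field (or a getter, in which case a method reference works; MyObject is a placeholder name):
// Java 16+: toList() replaces collect(Collectors.toList()) and returns an unmodifiable List
var allObjectNames = allObjects.stream()
        .map(o -> o.name)   // or .map(MyObject::getName) if a getter exists
        .toList();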
I'm trying to create a TestNG file at runtime, so that I can execute only the classes and methods I want. But while running I'm getting a cast error. Please help me with that.
I've tried all combinations I can and have come up with the following code
package org.exela.mtclaims;
import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.List;
import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlInclude;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;
public class CreateTestNGXML extends BaseClass {
public static FileInputStream fis = null;
public static XSSFWorkbook workBook = null;
public static XSSFSheet sheet = null;
public XSSFRow row = null;
public XSSFCell cell = null;
public static int rowCount = 0;
static ArrayList<String> methods = new ArrayList<String>();
static ArrayList<String> classes = new ArrayList<String>();
public static String excelPath = "D:\\Sai\\Documents\\Workspace\\IBM_MTClaims_Automation\\Data Repository\\Data - MT Claims.xlsx";
public static List<XmlInclude> constructIncludes(String... methodNames)
{
List<XmlInclude> includes = new ArrayList<XmlInclude>();
for (String eachMethod : methodNames)
{
includes.add(new XmlInclude(eachMethod));
}
return includes;
}
public static List<XmlClass> constructClasses(ArrayList<String> classNames)
{
List<XmlClass> includes = new ArrayList<XmlClass>();
//String className = "org.exela.mtclaims.";
for (String eachClass : classNames)
{
//String text = className.concat(eachClass);
if(includes.isEmpty())
{
if(!includes.contains("[XmlClass class=org.exela.mtclaims." + eachClass+"]"))
{
includes.add(new XmlClass("org.exela.mtclaims." + eachClass));
}
}
else
{
for (XmlClass xmlClass : includes)
{
if(includes.contains(xmlClass))
{
continue;
}
else
{
includes.add(new XmlClass("org.exela.mtclaims." + eachClass));
}
}
}
/*if (includes.contains("org.exela.mtclaims." + eachClass))
{
includes.add(new XmlClass("org.exela.mtclaims." + eachClass));
}
else
{
includes.add(new XmlClass("org.exela.mtclaims." + eachClass));
}*/
}
return includes;
}
@SuppressWarnings("unchecked")
public static void main(String[] args) throws Exception {
// Creating TestNG Suite
XmlSuite suite = new XmlSuite();
suite.setName("Test Suite");
// Creating TestNG Tests
XmlTest test = new XmlTest(suite);
test.setName("Tests");
// Creating TestNG Classes and Includes to include methods
//List<XmlClass> classesToRun = new List<XmlClass>();
List<XmlClass> classesToRun = new ArrayList<XmlClass>();
List<XmlInclude> methodsToRun = new ArrayList<XmlInclude>();
// Reading data from excel
fis = new FileInputStream(excelPath);
workBook = new XSSFWorkbook(fis);
sheet = workBook.getSheet("TestCases");
rowCount = sheet.getLastRowNum();
// Including methods that should run
for (int i = 2; i < rowCount; i++)
{
if (readDataFromCell(excelPath, "TestCases", "Should Run", i).equals("Yes"))
{
methods.add(readDataFromCell(excelPath, "TestCases", "Methods", i));
}
}
methodsToRun = constructIncludes(new String[] { String.join(",", methods) });
// Including classes that should run
for (int i = 2; i < rowCount; i++)
{
if (readDataFromCell(excelPath, "TestCases", "Should Run", i).equals("Yes")) {
classes.add(readDataFromCell(excelPath, "TestCases", "Classes", i));
}
}
//classesToRun = constructClasses(new String[] { String.join(",", classes) });
classesToRun = constructClasses(classes);
//test.setXmlClasses (Arrays.asList (new XmlClass[] { classesToRun }));
test.setXmlClasses (classesToRun);
// Adding Include to classes
((XmlClass) classesToRun).setIncludedMethods(methodsToRun);
// Adding Classes to tests
test.setXmlClasses((List<XmlClass>) classesToRun);
// Adding suites
List<XmlSuite> suites = new ArrayList<XmlSuite>();
suites.add(suite);
// Printing the created suite file
System.out.println("Printing TestNG Suite Xml");
System.out.println(suite.toXml());
// Creating and running the TestNG file
TestNG tng = new TestNG();
tng.setXmlSuites(suites);
tng.run();
}
}
The problem is at the line ((XmlClass) classesToRun).setIncludedMethods(methodsToRun);. I don't understand what's wrong. Can someone provide a solution for this?
classesToRun is a List<XmlClass>; it can't be cast to a single XmlClass. You need to iterate over the list:
for (XmlClass xmlClass : classesToRun) {
xmlClass.setIncludedMethods(methodsToRun);
}
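For completeness, a minimal sketch of how the pieces fit together once the includes are attached to each XmlClass instead of casting the list. It reuses constructClasses and constructIncludes from the question; the method names passed to constructIncludes are placeholders:
XmlSuite suite = new XmlSuite();
suite.setName("Test Suite");
XmlTest test = new XmlTest(suite);
test.setName("Tests");

List<XmlClass> classesToRun = constructClasses(classes);
List<XmlInclude> methodsToRun = constructIncludes("someTestMethod", "anotherTestMethod"); // placeholder names

// attach the includes to every class, then hand the whole list to the test
for (XmlClass xmlClass : classesToRun) {
    xmlClass.setIncludedMethods(methodsToRun);
}
test.setXmlClasses(classesToRun);

List<XmlSuite> suites = new ArrayList<XmlSuite>();
suites.add(suite);
TestNG tng = new TestNG();
tng.setXmlSuites(suites);
tng.run();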
I'm trying to write a Java application with web3j that can read an arbitrary ABI file, show the list of AbiDefinitions to the user, and let them call a constant function of their choice. How do I compute the outTypes below?
AbiDefinition functionDef = ...; // found at runtime
List<Type> args = ...; // I know how to do this
List<NamedType> outputs = functionDef.getOutputs(); // list of output parameters
List<TypeReference<?>> outTypes = ????;
Function function = new Function(functionDef.getName(), args, outTypes);
The TypeReference class uses tricks with generic types that work when the generic type is hardcoded in the source code like this:
new TypeReference.StaticArrayTypeReference<StaticArray<Int256>>(2){}
This is what the generated contract wrapper would do.
For simple types, I can do this:
Class<Type> type = (Class<Type>)AbiTypes.getType(typeName);
TypeReference<?> typeRef = TypeReference.create(type);
For array types like "int256[2]", what should I do?
An example calling the UniswapV2 router function:
function getAmountsOut(uint amountIn, address[] calldata path) external view returns (uint[] memory amounts)
package com.test;
import org.web3j.abi.FunctionEncoder;
import org.web3j.abi.FunctionReturnDecoder;
import org.web3j.abi.TypeReference;
import org.web3j.abi.datatypes.Address;
import org.web3j.abi.datatypes.DynamicArray;
import org.web3j.abi.datatypes.Function;
import org.web3j.abi.datatypes.Type;
import org.web3j.abi.datatypes.generated.Uint256;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.core.methods.request.Transaction;
import org.web3j.protocol.core.methods.response.EthCall;
import org.web3j.protocol.http.HttpService;
import java.io.IOException;
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
public class main {
private static String EMPTY_ADDRESS = "0x0000000000000000000000000000000000000000";
static Web3j web3j;
static String usdt = "0x55d398326f99059fF775485246999027B3197955";
static String weth = "0xbb4CdB9CBd36B01bD1cBaEBF2De08d9173bc095c";
static String pancakeRouter = "0x10ed43c718714eb63d5aa57b78b54704e256024e";
public static void main(String[] args) throws IOException {
web3j = Web3j.build(new HttpService("https://bsc-dataseed.binance.org/"));
String s = web3j.netVersion().send().getNetVersion();
System.out.println(s);
List<BigInteger> list1 = getAmountsOut(usdt, weth);
System.out.println("result::: ");
for (BigInteger a : list1) {
System.out.println(a);
}
}
public static List<BigInteger> getAmountsOut(String tokenInAddr, String tokenOutAddr) {
String methodName = "getAmountsOut";
String fromAddr = EMPTY_ADDRESS;
List<Type> inputParameters = new ArrayList<Type>();
Uint256 inAmount = new Uint256(new BigInteger("1000000000000000000"));
Address inAddr = new Address(tokenInAddr);
Address outAddr = new Address(tokenOutAddr);
DynamicArray<Address> addrArr = new DynamicArray<Address>(inAddr, outAddr);
inputParameters.add(inAmount);
inputParameters.add(addrArr);
List<TypeReference<?>> outputParameters = new ArrayList<TypeReference<?>>();
TypeReference<DynamicArray<Uint256>> oa = new TypeReference<DynamicArray<Uint256>>() {
};
outputParameters.add(oa);
Function function = new Function(methodName, inputParameters, outputParameters);
String data = FunctionEncoder.encode(function);
Transaction transaction = Transaction.createEthCallTransaction(fromAddr, pancakeRouter, data);
EthCall ethCall;
try {
ethCall = web3j.ethCall(transaction, DefaultBlockParameterName.LATEST).sendAsync().get();
List<Type> results = FunctionReturnDecoder.decode(ethCall.getValue(), function.getOutputParameters());
System.out.println(results);
List<BigInteger> resultArr = new ArrayList<>();
if (results.size() > 0) {
for (Type tt : results) {
DynamicArray<Uint256> da = (DynamicArray<Uint256>) tt;
List<Uint256> lu = da.getValue();
if (lu.size() > 0) {
for (Uint256 n : lu) {
resultArr.add((BigInteger) n.getValue());
}
}
}
return resultArr;
}
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
}
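Back to the generic case from the question: recent web3j 4.x releases ship a TypeReference.makeTypeReference(String) factory that parses a Solidity type string, including array types such as "int256[2]" or "uint256[]". Whether your web3j version has it is an assumption you should verify; a sketch:
// Sketch: assumes org.web3j.abi.TypeReference.makeTypeReference exists in your web3j version
List<TypeReference<?>> outTypes = new ArrayList<>();
for (AbiDefinition.NamedType output : functionDef.getOutputs()) {
    try {
        // handles plain types as well as array types like "int256[2]" or "uint256[]"
        outTypes.add(TypeReference.makeTypeReference(output.getType()));
    } catch (ClassNotFoundException e) {
        throw new IllegalArgumentException("Unsupported ABI type: " + output.getType(), e);
    }
}
Function function = new Function(functionDef.getName(), args, outTypes);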
How do you read an ORC file in Java? I want to read in a small file for some unit test output verification, but I can't find a solution.
I came across this and implemented one myself recently.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;
import org.apache.hadoop.hive.ql.io.orc.Reader;
import org.apache.hadoop.hive.ql.io.orc.RecordReader;
import org.apache.hadoop.hive.serde2.objectinspector.StructField;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import java.util.List;
public class OrcFileDirectReaderExample {
public static void main(String[] argv)
{
try {
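// HdfsFactory.getFileSystem() is the answer author's own helper; any org.apache.hadoop.fs.FileSystem obtained via FileSystem.get(conf) works here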
Reader reader = OrcFile.createReader(HdfsFactory.getFileSystem(), new Path("/user/hadoop/000000_0"));
StructObjectInspector inspector = (StructObjectInspector)reader.getObjectInspector();
System.out.println(reader.getMetadata());
RecordReader records = reader.rows();
Object row = null;
//These objects are the metadata for each column. They give you the type of each column and can parse it unless you
//want to parse each column yourself
List fields = inspector.getAllStructFieldRefs();
for(int i = 0; i < fields.size(); ++i) {
System.out.print(((StructField)fields.get(i)).getFieldObjectInspector().getTypeName() + '\t');
}
while(records.hasNext())
{
row = records.next(row);
List value_lst = inspector.getStructFieldsDataAsList(row);
StringBuilder builder = new StringBuilder();
//iterate over the fields
//Also, a field can be null if a null was written for that column when the file was created
for(Object field : value_lst) {
if(field != null)
builder.append(field.toString());
builder.append('\t');
}
//this writes out the row as it would appear in a tab-separated text file
System.out.println(builder.toString());
}
}catch (Exception e)
{
e.printStackTrace();
}
}
}
As per the Apache wiki, the ORC file format was introduced in Hive 0.11, so you will need the Hive packages on your project classpath to read ORC files. The relevant classes are:
org.apache.hadoop.hive.ql.io.orc.Reader
org.apache.hadoop.hive.ql.io.orc.OrcFile
A test case that reads ORC:
@Test
public void read_orc() throws Exception {
//todo do kerberos auth
String orcPath = "hdfs://user/hive/warehouse/demo.db/orc_path";
//load hdfs conf
Configuration conf = new Configuration();
conf.addResource(getClass().getResource("/hdfs-site.xml"));
conf.addResource(getClass().getResource("/core-site.xml"));
FileSystem fs = FileSystem.get(conf);
// custom read column
List<String> columns = Arrays.asList("id", "title");
final List<Map<String, Object>> maps = OrcUtil.readOrcFile(fs, orcPath, columns);
System.out.println(new Gson().toJson(maps));
}
OrcUtil reads an ORC path with only the specified columns:
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;
import org.apache.hadoop.hive.ql.io.orc.OrcInputFormat;
import org.apache.hadoop.hive.ql.io.orc.OrcSerde;
import org.apache.hadoop.hive.ql.io.orc.OrcSplit;
import org.apache.hadoop.hive.ql.io.orc.OrcStruct;
import org.apache.hadoop.hive.ql.io.orc.Reader;
import org.apache.hadoop.hive.serde2.SerDeException;
import org.apache.hadoop.hive.serde2.objectinspector.StructField;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.InputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
public class OrcUtil {
public static List<Map<String, Object>> readOrcFile(FileSystem fs, String orcPath, List<String> readColumns)
throws IOException, SerDeException {
JobConf jobConf = new JobConf();
for (Map.Entry<String, String> entry : fs.getConf()) {
jobConf.set(entry.getKey(), entry.getValue());
}
FileInputFormat.setInputPaths(jobConf, orcPath);
FileInputFormat.setInputPathFilter(jobConf, ((PathFilter) path1 -> true).getClass());
InputSplit[] splits = new OrcInputFormat().getSplits(jobConf, 1);
InputFormat<NullWritable, OrcStruct> orcInputFormat = new OrcInputFormat();
List<Map<String, Object>> rows = new ArrayList<>();
for (InputSplit split : splits) {
OrcSplit orcSplit = (OrcSplit) split;
System.out.printf("read orc split %s%n", ((OrcSplit) split).getPath());
StructObjectInspector inspector = getStructObjectInspector(orcSplit.getPath(), jobConf, fs);
List<? extends StructField> readFields = inspector.getAllStructFieldRefs()
.stream().filter(e -> readColumns.contains(e.getFieldName())).collect(Collectors.toList());
// 49B file is empty
if (orcSplit.getLength() > 49) {
RecordReader<NullWritable, OrcStruct> recordReader = orcInputFormat.getRecordReader(orcSplit, jobConf, Reporter.NULL);
NullWritable key = recordReader.createKey();
OrcStruct value = recordReader.createValue();
while (recordReader.next(key, value)) {
Map<String, Object> entity = new HashMap<>();
for (StructField field : readFields) {
entity.put(field.getFieldName(), inspector.getStructFieldData(value, field));
}
rows.add(entity);
}
}
}
return rows;
}
private static StructObjectInspector getStructObjectInspector(Path path, JobConf jobConf, FileSystem fs)
throws IOException, SerDeException {
OrcFile.ReaderOptions readerOptions = OrcFile.readerOptions(jobConf);
readerOptions.filesystem(fs);
Reader reader = OrcFile.createReader(path, readerOptions);
String typeStruct = reader.getObjectInspector().getTypeName();
System.out.println(typeStruct);
List<String> columnList = parseColumnAndType(typeStruct);
String[] fullColNames = new String[columnList.size()];
String[] fullColTypes = new String[columnList.size()];
for (int i = 0; i < columnList.size(); ++i) {
String[] temp = columnList.get(i).split(":");
fullColNames[i] = temp[0];
fullColTypes[i] = temp[1];
}
Properties p = new Properties();
p.setProperty("columns", StringUtils.join(fullColNames, ","));
p.setProperty("columns.types", StringUtils.join(fullColTypes, ":"));
OrcSerde orcSerde = new OrcSerde();
orcSerde.initialize(jobConf, p);
return (StructObjectInspector) orcSerde.getObjectInspector();
}
private static List<String> parseColumnAndType(String typeStruct) {
int startIndex = typeStruct.indexOf("<") + 1;
int endIndex = typeStruct.lastIndexOf(">");
typeStruct = typeStruct.substring(startIndex, endIndex);
List<String> columnList = new ArrayList<>();
List<String> splitList = Arrays.asList(typeStruct.split(","));
Iterator<String> it = splitList.iterator();
while (it.hasNext()) {
StringBuilder current = new StringBuilder(it.next());
String currentStr = current.toString();
boolean left = currentStr.contains("(");
boolean right = currentStr.contains(")");
if (!left && !right) {
columnList.add(currentStr);
continue;
}
if (left && right) {
columnList.add(currentStr);
continue;
}
if (left && !right) {
while (it.hasNext()) {
String next = it.next();
current.append(",").append(next);
if (next.contains(")")) {
break;
}
}
columnList.add(current.toString());
}
}
return columnList;
}
}
Try this for getting the ORC file row count:
private long getRowCount(FileSystem fs, String fName) throws Exception {
long tempCount = 0;
Reader rdr = OrcFile.createReader(fs, new Path(fName));
StructObjectInspector insp = (StructObjectInspector) rdr.getObjectInspector();
Iterable<StripeInformation> iterable = rdr.getStripes();
for(StripeInformation stripe:iterable){
tempCount = tempCount + stripe.getNumberOfRows();
}
return tempCount;
}
//fName is hdfs path to file.
long rowCount = getRowCount(fs,fName);
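If you'd rather not pull in the Hive ql classes at all, the standalone ORC library (orc-core) exposes a vectorized reader. A minimal sketch, assuming a file whose first column is a long (other types use the matching ColumnVector subclasses); the path reuses the example file from the first answer:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.RecordReader;
public class OrcCoreExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Reader reader = OrcFile.createReader(new Path("/user/hadoop/000000_0"), OrcFile.readerOptions(conf));
        System.out.println(reader.getSchema()); // prints the struct<...> type description
        RecordReader rows = reader.rows();
        VectorizedRowBatch batch = reader.getSchema().createRowBatch();
        while (rows.nextBatch(batch)) {
            // assumes column 0 is a long; strings would be BytesColumnVector, doubles DoubleColumnVector, etc.
            LongColumnVector col0 = (LongColumnVector) batch.cols[0];
            for (int r = 0; r < batch.size; r++) {
                System.out.println(col0.vector[r]);
            }
        }
        rows.close();
    }
}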
I'm a new French user on Stack Overflow and I have a problem ^^
I use the Jsoup HTML parser to parse an HTML page. That part works, but I can't parse several URLs at the same time.
This is my code:
First class, for parsing a web page:
package test2;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
public final class Utils {
public static Map<String, String> parse(String url){
Map<String, String> out = new HashMap<String, String>();
try
{
Document doc = Jsoup.connect(url).get();
doc.select("img").remove();
Elements denomination = doc.select(".AmmDenomination");
Elements composition = doc.select(".AmmComposition");
Elements corptexte = doc.select(".AmmCorpTexte");
for(int i = 0; i < denomination.size(); i++)
{
out.put("denomination" + i, denomination.get(i).text());
}
for(int i = 0; i < composition.size(); i++)
{
out.put("composition" + i, composition.get(i).text());
}
for(int i = 0; i < corptexte.size(); i++)
{
out.put("corptexte" + i, corptexte.get(i).text());
System.out.println(corptexte.get(i));
}
} catch(IOException e){
e.printStackTrace();
}
return out;
}// end of the parse method
public static void excelizer(int fileId, Map<String, String> values){
try
{
FileOutputStream out = new FileOutputStream("C:/Documents and Settings/c.bon/git/clinsearch/drugs/src/main/resources/META-INF/test/fichier2.xls" );
Workbook wb = new HSSFWorkbook();
Sheet mySheet = wb.createSheet();
Row row1 = mySheet.createRow(0);
Row row2 = mySheet.createRow(1);
String entete[] = {"CIS", "Denomination", "Composition", "Form pharma", "Indication therapeutiques", "Posologie", "Contre indication", "Mise en garde",
"Interraction", "Effet indesirable", "Surdosage", "Pharmacodinamie", "Liste excipients", "Incompatibilité", "Duree conservation",
"Conservation", "Emballage", "Utilisation Manipulation", "TitulaireAMM"};
for (int i = 0; i < entete.length; i++)
{
row1.createCell(i).setCellValue(entete[i]);
}
Set<String> set = values.keySet();
int rowIndexDenom = 1;
int rowIndexCompo = 1;
for(String key : set)
{
if(key.contains("denomination"))
{
mySheet.createRow(1).createCell(1).setCellValue(values.get(key));
rowIndexDenom++;
}
else if(key.contains("composition"))
{
row2.createCell(2).setCellValue(values.get(key));
rowIndexDenom++;
}
}
wb.write(out);
out.close();
}
catch(Exception e)
{
e.printStackTrace();
}
}
}
Second class:
package test2;
public final class Task extends Thread {
private static int fileId = 0;
private int id;
private String url;
public Task(String url)
{
this.url = url;
id = fileId;
fileId++;
}
@Override
public void run()
{
Utils.excelizer(id, Utils.parse(url));
}
}
The main class (entry point):
package test2;
import java.util.ArrayList;
public class Main {
public static void main(String[] args)
{
ArrayList<String> urls = new ArrayList<String>();
urls.add("http://base-donnees-publique.medicaments.gouv.fr/affichageDoc.php?specid=61266250&typedoc=R");
urls.add("http://base-donnees-publique.medicaments.gouv.fr/affichageDoc.php?specid=66207341&typedoc=R");
for(String url : urls)
{
new Task(url).run();
}
}
}
When the data is copied to my Excel file, the second URL doesn't work.
Can you help me solve my problem, please?
Thanks
I think it's because your main() exits before your second thread has a chance to do its job. You should wait for all spawned threads to complete using Thread.join(). Or better yet, create an ExecutorService and use awaitTermination(...) to block until all URLs are parsed.
EDIT: See some examples here: http://www.javacodegeeks.com/2013/01/java-thread-pool-example-using-executors-and-threadpoolexecutor.html
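A minimal sketch of that ExecutorService approach, as a reworked main that reuses the Task class from the question (the pool size and timeout are arbitrary choices):
import java.util.ArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class Main {
    public static void main(String[] args) throws InterruptedException {
        ArrayList<String> urls = new ArrayList<String>();
        urls.add("http://base-donnees-publique.medicaments.gouv.fr/affichageDoc.php?specid=61266250&typedoc=R");
        urls.add("http://base-donnees-publique.medicaments.gouv.fr/affichageDoc.php?specid=66207341&typedoc=R");
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (String url : urls) {
            pool.submit(new Task(url)); // Task extends Thread, so it is a Runnable the pool can execute
        }
        pool.shutdown();                            // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.MINUTES); // block until all URLs are parsed (or the timeout elapses)
    }
}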
Here are two ways of doing string substitution:
name = "Tshepang"
"my name is {}".format(name)
"my name is " + name
How do I do something similar to the first method, using Java?
String name = "Paŭlo";
MessageFormat f = new MessageFormat("my name is {0}");
f.format(new Object[]{name});
Or shorter:
MessageFormat.format("my name is {0}", name);
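One MessageFormat quirk to keep in mind: a single quote in the pattern starts a quoted section, so a literal apostrophe has to be doubled.
MessageFormat.format("my name is {0} and it''s a pleasure", name); // -> "my name is Paŭlo and it's a pleasure"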
String s = String.format("my name is %s", name);
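String.format uses printf-style conversions (%s, %d, ...), so Python's {} placeholder roughly maps to %s here.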
Underscore-java has a format() static method.
import com.github.underscore.Underscore;
public class Main {
public static void main(String[] args) {
String name = "Tshepang";
String formatted = Underscore.format("my name is {}", name);
// my name is Tshepang
}
}
You can try this
package template.fstyle;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.collect.Lists.newArrayList;
import static com.google.common.collect.Maps.newHashMap;
import static java.util.Objects.nonNull;
import java.lang.reflect.InvocationTargetException;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import org.apache.commons.beanutils.BeanUtils;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.tuple.Pair;
import lombok.AccessLevel;
import lombok.NoArgsConstructor;
import lombok.extern.slf4j.Slf4j;
@Slf4j
@NoArgsConstructor(access = AccessLevel.PRIVATE)
public class FStyleFinal {
private static final String PLACEHOLDERS_KEY = "placeholders";
private static final String VARIABLE_NAMES_KEY = "variableNames";
private static final String PLACEHOLDER_PREFIX = "{{";
private static final String PLACEHOLDER_SUFFIX = "}}";
private static final Pattern PLACEHOLDER_PATTERN = Pattern.compile("\\{\\{\\s*([\\S]+)\\s*}}");
private static Map<String, List<String>> toPlaceholdersAndVariableNames(String rawTemplate) {
List<String> placeholders = newArrayList();
List<String> variableNames = newArrayList();
Matcher matcher = PLACEHOLDER_PATTERN.matcher(rawTemplate);
while (matcher.find()) {
for (int j = 0; j <= matcher.groupCount(); j++) {
String s = matcher.group(j);
if (StringUtils.startsWith(s, PLACEHOLDER_PREFIX) && StringUtils.endsWith(s, PLACEHOLDER_SUFFIX)) {
placeholders.add(s);
} else if (!StringUtils.startsWith(s, PLACEHOLDER_PREFIX) && !StringUtils.endsWith(s, PLACEHOLDER_SUFFIX)) {
variableNames.add(s);
}
}
}
checkArgument(CollectionUtils.size(placeholders) == CollectionUtils.size(variableNames), "template engine error");
Map<String, List<String>> map = newHashMap();
map.put(PLACEHOLDERS_KEY, placeholders);
map.put(VARIABLE_NAMES_KEY, variableNames);
return map;
}
private static String toJavaTemplate(String rawTemplate, List<String> placeholders) {
String javaTemplate = rawTemplate;
for (String placeholder : placeholders) {
javaTemplate = StringUtils.replaceOnce(javaTemplate, placeholder, "%s");
}
return javaTemplate;
}
private static Object[] toJavaTemplateRenderValues(Map<String, String> context, List<String> variableNames, boolean allowNull) {
return variableNames.stream().map(name -> {
String value = context.get(name);
if (!allowNull) {
checkArgument(nonNull(value), name + " should not be null");
}
return value;
}).toArray();
}
private static Map<String, String> fromBeanToMap(Object bean, List<String> variableNames) {
return variableNames.stream().distinct().map(name -> {
String value = null;
try {
value = BeanUtils.getProperty(bean, name);
} catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException e) {
log.debug("fromBeanToMap error", e);
}
return Pair.of(name, value);
}).filter(p -> nonNull(p.getRight())).collect(Collectors.toMap(Pair::getLeft, Pair::getRight));
}
public static String render(String rawTemplate, Map<String, String> context, boolean allowNull) {
Map<String, List<String>> templateMeta = toPlaceholdersAndVariableNames(rawTemplate);
List<String> placeholders = templateMeta.get(PLACEHOLDERS_KEY);
List<String> variableNames = templateMeta.get(VARIABLE_NAMES_KEY);
// transform template to java style template
String javaTemplate = toJavaTemplate(rawTemplate, placeholders);
Object[] renderValues = toJavaTemplateRenderValues(context, variableNames, allowNull);
return String.format(javaTemplate, renderValues);
}
public static String render(String rawTemplate, Object bean, boolean allowNull) {
Map<String, List<String>> templateMeta = toPlaceholdersAndVariableNames(rawTemplate);
List<String> variableNames = templateMeta.get(VARIABLE_NAMES_KEY);
Map<String, String> mapContext = fromBeanToMap(bean, variableNames);
return render(rawTemplate, mapContext, allowNull);
}
public static void main(String[] args) {
String template = "hello, my name is {{ name }}, and I am {{age}} years old, a null value {{ not_exists }}";
Map<String, String> context = newHashMap();
context.put("name", "felix");
context.put("age", "18");
String s = render(template, context, true);
log.info("{}", s);
try {
render(template, context, false);
} catch (IllegalArgumentException e) {
log.error("error", e);
}
}
}
Sample output:
[main] INFO template.fstyle.FStyleFinal - hello, my name is felix, and I am 18 years old, a null value null
[main] ERROR template.fstyle.FStyleFinal - error
java.lang.IllegalArgumentException: not_exists should not be null
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
at template.fstyle.FStyleFinal.lambda$toJavaTemplateRenderValues$0(FStyleFinal.java:69)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545)
at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438)
at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:444)
at template.fstyle.FStyleFinal.toJavaTemplateRenderValues(FStyleFinal.java:72)
at template.fstyle.FStyleFinal.render(FStyleFinal.java:93)
at template.fstyle.FStyleFinal.main(FStyleFinal.java:113)