I'm trying to convert the Velocity macro below into a Velocity Java directive, as I need to add some bells and whistles around the rendering logic:
#macro(renderModules $modules)
#if($modules)
#foreach($module in $modules)
#if(${module.template})
#set($moduleData = $module.data)
#parse("${module.template}.vm")
#end
#end
#end
#end
My equivalent Java Directive:
import org.apache.velocity.context.InternalContextAdapter;
import org.apache.velocity.exception.MethodInvocationException;
import org.apache.velocity.exception.ParseErrorException;
import org.apache.velocity.exception.ResourceNotFoundException;
import org.apache.velocity.runtime.directive.Directive;
import org.apache.velocity.runtime.parser.node.ASTBlock;
import org.apache.velocity.runtime.parser.node.Node;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.Writer;
import java.util.List;
public class RenderModulesDirective extends Directive {
private static final Logger LOGGER = LoggerFactory.getLogger(RenderModulesDirective.class);
@Override
public String getName() {
return "renderModules";
}
@Override
public int getType() {
return LINE;
}
@Override
public boolean render(InternalContextAdapter context, Writer writer, Node node) throws IOException, ResourceNotFoundException, ParseErrorException, MethodInvocationException {
for(int i=0; i<node.jjtGetNumChildren(); i++) {
Node modulesNode = node.jjtGetChild(i);
if (modulesNode != null) {
if(!(modulesNode instanceof ASTBlock)) {
if(i == 0) {
// This should be the list of modules
List<Module> modules = (List<Module>) modulesNode.value(context);
if(modules != null) {
for (Module module : modules) {
context.put("moduleData", module.getData());
String templateName = module.getTemplate() + ".vm";
try {
// ??? How to parse the template here ???
} catch(Exception e) {
LOGGER.error("Encountered an error while rendering the Module {}", module, e);
}
}
break;
}
}
}
}
}
return true;
}
}
So, I'm stuck at the point where I need the Java equivalent of the #parse("<template_name>.vm") call. Is this the right approach? Would it help to instead extend from the Parse directive?
I believe
Template template = Velocity.getTemplate("path/to/template.vm");
template.merge(context, writer);
will accomplish what you're looking to do.
If you have access to RuntimeServices, you could call createNewParser() and then call parse(Reader reader, String templateName) on the parser; the SimpleNode that comes out has a render() method, which I think is what you're looking for.
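Putting those pieces together inside the directive, here's a minimal sketch of the missing render step, assuming the module templates resolve through your configured resource loaders (rsvc is the protected RuntimeServices field every Directive inherits, and Template is org.apache.velocity.Template):
// Inside the for-loop of render(), replacing the "??? How to parse the template here ???" TODO:
String templateName = module.getTemplate() + ".vm";
try {
    // rsvc was handed to the directive in init(); getTemplate() runs the
    // normal resource-loader lookup, i.e. roughly what #parse does internally
    Template template = rsvc.getTemplate(templateName);
    template.merge(context, writer);
} catch (ResourceNotFoundException e) {
    LOGGER.error("Template {} not found for module {}", templateName, module, e);
}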
I am trying to return a list of files from a directory. Here's my code:
package com.demo.web.api.file;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import com.demo.core.Logger;
import io.swagger.v3.oas.annotations.Operation;
@RestController
@RequestMapping(value = "/files")
public class FileService {
private static final Logger logger = Logger.factory(FileService.class);
@Value("${file-upload-path}")
public String DIRECTORY;
@Value("${file-upload-check-subfolders}")
public boolean CHECK_SUBFOLDERS;
@GetMapping(value = "/list")
@Operation(summary = "Get list of Uploaded files")
public ResponseEntity<List<File>> list() {
List<File> files = new ArrayList<>();
if (CHECK_SUBFOLDERS) {
// Recursive check
try (Stream<Path> walk = Files.walk(Paths.get(DIRECTORY))) {
List<Path> result = walk.filter(Files::isRegularFile).collect(Collectors.toList());
for (Path p : result) {
files.add(p.toFile().getAbsoluteFile());
}
} catch (Exception e) {
logger.error(e.getMessage());
}
} else {
// Checks the root directory only.
try (Stream<Path> walk = Files.walk(Paths.get(DIRECTORY), 1)) {
List<Path> result = walk.filter(Files::isRegularFile).collect(Collectors.toList());
for (Path p : result) {
files.add(p.toFile().getAbsoluteFile());
}
} catch (Exception e) {
logger.error(e.getMessage());
}
}
return ResponseEntity.ok().body(files);
}
}
As seen in the code, I am trying to return a list of files.
However, when I test in Postman, I get a list of strings instead.
How can I make it return the File objects instead of the file path strings? I need the file attributes (size, date, etc.) to display in my view.
I would recommend that you change your ResponseEntity<> to return not a List of File but instead, a List of Resource, which you can then use to obtain the file metadata that you need.
public ResponseEntity<List<Resource>> list() {}
You can also try specifying a produces = MediaType... parameter in your @GetMapping annotation to tell the HTTP marshaller which kind of content to expect, along the lines of the sketch below.
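A minimal illustration (the media type here is an assumption; use whichever matches your payload):
import org.springframework.core.io.Resource;
import org.springframework.http.MediaType;

// Illustrative only: declare the response content type explicitly.
@GetMapping(value = "/list", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<List<Resource>> list() {
    List<Resource> resources = new ArrayList<>(); // populate from your directory walk
    return ResponseEntity.ok(resources);
}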
You'd have to create a separate payload class with the details you want to respond with.
public class FilePayload {
private String id;
private String name;
private String size;
public static FilePayload fromFile(File file) {
// create a FilePayload from the File object here, e.g.:
FilePayload payload = new FilePayload();
payload.name = file.getName();
payload.size = String.valueOf(file.length());
return payload;
}
}
And convert it using a mapper from your internal DTO objects to payload ones.
final List<FilePayload> payload = files.stream().map(FilePayload::fromFile).collect(Collectors.toList());
return new ResponseEntity<>(payload, HttpStatus.OK);
I think you should not return a body in this case, as you may be unaware of its size.
Better to have another endpoint, GET /files/{id}, to fetch the details of a single file; a hypothetical sketch follows.
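Something like this (the name-based lookup is an assumption, not from the original code; the controller is already mapped to /files):
// Hypothetical: return metadata for a single uploaded file.
@GetMapping("/{id}")
public ResponseEntity<FilePayload> getFile(@PathVariable String id) {
    File file = new File(DIRECTORY, id); // assumes the id maps to a file name
    if (!file.exists()) {
        return ResponseEntity.notFound().build();
    }
    return ResponseEntity.ok(FilePayload.fromFile(file));
}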
I gave this another thought. What I actually needed was just the filename, size, and date of each file. From there, I can derive the file extension and make my list display look good.
Here's the refactored method:
@GetMapping(value = "/list")
@Operation(summary = "Get list of Uploaded files")
public ResponseEntity<String> list() {
JSONObject responseObj = new JSONObject();
List<JSONObject> files = new ArrayList<>();
// If CHECK_SUBFOLDERS is true, pass Integer.MAX_VALUE to recurse into all
// sub-folders. Otherwise, pass 1 to use the root directory only.
try (Stream<Path> walk = Files.walk(Paths.get(DIRECTORY), CHECK_SUBFOLDERS ? Integer.MAX_VALUE : 1)) {
List<Path> result = walk.filter(Files::isRegularFile).collect(Collectors.toList());
for (Path p : result) {
JSONObject file = new JSONObject();
file.put("name", p.toFile().getName());
file.put("size", p.toFile().length());
file.put("lastModified", p.toFile().lastModified());
files.add(file);
}
responseObj.put("data", files);
} catch (Exception e) {
String errMsg = CoreUtils.formatString("%s: Error reading files from the directory: \"%s\"",
e.getClass().getName(), DIRECTORY);
logger.error(e, errMsg);
responseObj.put("errors", errMsg);
}
return ResponseEntity.ok().body(responseObj.toString());
}
The above is what I ended up doing: I created a JSONObject with the properties I need and return the error if reading the directory does not succeed. This made it a lot better for me.
I am using the Java class below, which uses Sardine. I am only getting the list of resources (zip files) in the directory; what should I use to download the zip files?
package com.download;
import java.util.List;
import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;
import com.github.sardine.DavResource;
import com.github.sardine.Sardine;
import com.github.sardine.SardineFactory;
public class filesdownload implements Callable {
@Override
public Object onCall(MuleEventContext eventContext) throws Exception {
Sardine sardine = SardineFactory.begin("***","***");
List<DavResource> resources = sardine.list("http://hfus.com/vsd");
for (DavResource res : resources)
{
System.out.println(res);
}
return sardine;
}
}
You need to use the sardine.get() method (see the method documentation).
Don't forget to use the absolute path to your file, for example: http://hfus.com/vsd/file.zip.
Code sample:
package com.download;
import java.util.List;
import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;
import com.github.sardine.DavResource;
import com.github.sardine.Sardine;
import com.github.sardine.SardineFactory;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.io.IOUtils;
public class filesdownload implements Callable {
@Override
public Object onCall(MuleEventContext eventContext) throws Exception {
Sardine sardine = SardineFactory.begin("***","***");
List<DavResource> resources = sardine.list(serverUrl()+"/vsd");
for (DavResource res : resources) {
if(res.getName().endsWith(".zip")) {
downloadFile(sardine, res);
}
}
return sardine;
}
private void downloadFile(Sardine sardine, DavResource resource) {
// TODO: handle same file name in subdirectories
try (InputStream in = sardine.get(serverUrl() + resource.getPath());
OutputStream out = new FileOutputStream(resource.getName())) {
IOUtils.copy(in, out);
} catch (IOException ex) {
// TODO: handle exception
}
}
private String serverUrl() {
return "http://hfus.com";
}
}
I'm trying to write a large number of rows (~2 million) from a database to a CSV file using SuperCSV. I need to perform validation on each cell as it is written, and the built-in CellProcessors do very nicely. I want to capture all the exceptions that are thrown by the CellProcessors so that I can go back to the source data and make changes.
The problem is that when there are multiple errors in a single row (e.g. The first value is out of range, the second value is null but shouldn't be), only the first CellProcessor will execute, and so I'll only see one of the errors. I want to process the whole file in a single pass, and have a complete set of exceptions at the end of it.
This is the kind of approach I'm trying:
for (Row row : rows) {
try {
csvBeanWriter.write(row, HEADER_MAPPINGS, CELL_PROCESSORS);
} catch (SuperCsvCellProcessorException e) {
log(e);
}
}
How can I achieve this? Thanks!
EDIT: Here is the code I wrote that's similar to Hound Dog's, in case it helps anyone:
import java.util.List;
import org.supercsv.cellprocessor.CellProcessorAdaptor;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.exception.SuperCsvCellProcessorException;
import org.supercsv.util.CsvContext;
public class ExceptionCapturingCellProcessor extends CellProcessorAdaptor {
private final List<Exception> exceptions;
private final CellProcessor current;
public ExceptionCapturingCellProcessor(CellProcessor current, CellProcessor next, List<Exception> exceptions) {
super(next);
this.exceptions = exceptions;
this.current = current;
}
@Override
public Object execute(Object value, CsvContext context) {
// Check input is not null
try {
validateInputNotNull(value, context);
} catch (SuperCsvCellProcessorException e) {
exceptions.add(e);
}
// Execute wrapped CellProcessor
try {
current.execute(value, context);
} catch (SuperCsvCellProcessorException e) {
exceptions.add(e);
}
return next.execute(value, context);
}
}
I'd recommend writing a custom CellProcessor to achieve this. The following processor can be placed at the start of each CellProcessor chain - it will simply delegate to the processor chained after it, and will suppress any cell processing exceptions.
package example;
import java.util.ArrayList;
import java.util.List;
import org.supercsv.cellprocessor.CellProcessorAdaptor;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.exception.SuperCsvCellProcessorException;
import org.supercsv.util.CsvContext;
public class SuppressException extends CellProcessorAdaptor {
public static List<SuperCsvCellProcessorException> SUPPRESSED_EXCEPTIONS =
new ArrayList<SuperCsvCellProcessorException>();
public SuppressException(CellProcessor next) {
super(next);
}
public Object execute(Object value, CsvContext context) {
try {
// attempt to execute the next processor
return next.execute(value, context);
} catch (SuperCsvCellProcessorException e) {
// save the exception
SUPPRESSED_EXCEPTIONS.add(e);
// and suppress it (null is written as "")
return null;
}
}
}
And here it is in action:
package example;
import java.io.StringWriter;
import java.util.Arrays;
import java.util.List;
import org.supercsv.cellprocessor.constraint.NotNull;
import org.supercsv.cellprocessor.constraint.StrMinMax;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.exception.SuperCsvCellProcessorException;
import org.supercsv.io.CsvBeanWriter;
import org.supercsv.io.ICsvBeanWriter;
import org.supercsv.prefs.CsvPreference;
public class TestSuppressExceptions {
private static final CellProcessor[] PROCESSORS = {
new SuppressException(new StrMinMax(0, 4)),
new SuppressException(new NotNull()) };
private static final String[] HEADER = { "name", "age" };
public static void main(String[] args) throws Exception {
final StringWriter stringWriter = new StringWriter();
ICsvBeanWriter beanWriter = null;
try {
beanWriter = new CsvBeanWriter(stringWriter,
CsvPreference.STANDARD_PREFERENCE);
beanWriter.writeHeader(HEADER);
// set up the data
Person valid = new Person("Rick", 43);
Person nullAge = new Person("Lori", null);
Person totallyInvalid = new Person("Shane", null);
Person valid2 = new Person("Carl", 12);
List<Person> people = Arrays.asList(valid, nullAge, totallyInvalid,
valid2);
for (Person person : people) {
beanWriter.write(person, HEADER, PROCESSORS);
if (!SuppressException.SUPPRESSED_EXCEPTIONS.isEmpty()) {
System.out.println("Suppressed exceptions for row "
+ beanWriter.getRowNumber() + ":");
for (SuperCsvCellProcessorException e :
SuppressException.SUPPRESSED_EXCEPTIONS) {
System.out.println(e);
}
// clear ready for next row
SuppressException.SUPPRESSED_EXCEPTIONS.clear();
}
}
} finally {
beanWriter.close();
}
// CSV will have empty columns for invalid data
System.out.println(stringWriter);
}
}
Here's the suppressed exceptions output (row 4 has two exceptions, one for each column):
Suppressed exceptions for row 3:
org.supercsv.exception.SuperCsvConstraintViolationException: null value
encountered processor=org.supercsv.cellprocessor.constraint.NotNull
context={lineNo=3, rowNo=3, columnNo=2, rowSource=[Lori, null]}
Suppressed exceptions for row 4:
org.supercsv.exception.SuperCsvConstraintViolationException: the length (5)
of value 'Shane' does not lie between the min (0) and max (4) values (inclusive)
processor=org.supercsv.cellprocessor.constraint.StrMinMax
context={lineNo=4, rowNo=4, columnNo=2, rowSource=[Shane, null]}
org.supercsv.exception.SuperCsvConstraintViolationException: null value
encountered processor=org.supercsv.cellprocessor.constraint.NotNull
context={lineNo=4, rowNo=4, columnNo=2, rowSource=[Shane, null]}
And the CSV output
name,age
Rick,43
Lori,
,
Carl,12
Notice how the invalid values were written as "" because the SuppressException processor returned null for those values (not that you'd use the CSV output anyway, as it's not valid!).
Hello,
I'm writing Java code for Nutch (the open source search engine) to remove the diacritics (harakat) from Arabic words in the indexer.
I don't know what the error in it is.
This is the code:
package com.mycompany.nutch.indexing;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.log4j.Logger;
import org.apache.nutch.crawl.CrawlDatum;
import org.apache.nutch.crawl.Inlinks;
import org.apache.nutch.indexer.IndexingException;
import org.apache.nutch.indexer.IndexingFilter;
import org.apache.nutch.indexer.NutchDocument;
import org.apache.nutch.parse.getData().parse.getData();
public class InvalidUrlIndexFilter implements IndexingFilter {
private static final Logger LOGGER =
Logger.getLogger(InvalidUrlIndexFilter.class);
private Configuration conf;
public void addIndexBackendOptions(Configuration conf) {
// NOOP
return;
}
public NutchDocument filter(NutchDocument doc, Parse parse, Text url,
CrawlDatum datum, Inlinks inlinks) throws IndexingException {
if (url == null) {
return null;
}
char[] parse.getData() = input.trim().toCharArray();
for(int p=0;p<parse.getData().length;p++)
if(!(parse.getData()[p]=='َ'||parse.getData()[p]=='ً'||parse.getData()[p]=='ُ'||parse.getData()[p]=='ِ'||parse.getData()[p]=='ٍ'||parse.getData()[p]=='ٌ' ||parse.getData()[p]=='ّ'||parse.getData()[p]=='ْ' ||parse.getData()[p]=='"' ))
new String.append(parse.getData()[p]);
return doc;
}
public Configuration getConf() {
return conf;
}
public void setConf(Configuration conf) {
this.conf = conf;
}
}
I think the error is in using parse.getData(), but I don't know what I should use instead.
The line
char[] parse.getData() = input.trim().toCharArray();
will give you a compile error because the left-hand side is not a variable. Replace parse.getData() with a unique variable name (e.g. parsedData) on this line and the following lines.
Second, the import
import org.apache.nutch.parse.getData().parse.getData();
will also fail to compile. This looks a lot like a text-replace issue.
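For what it's worth, here is a sketch of what the corrected filter body might look like once the renames are applied. Two assumptions, since the original never shows them: the text to clean comes from parse.getText(), and the cleaned value is written back into the document's content field:
// Strip Arabic diacritics (harakat) and quotes from the parsed text.
char[] parsedData = parse.getText().trim().toCharArray();
StringBuilder cleaned = new StringBuilder(parsedData.length);
for (int p = 0; p < parsedData.length; p++) {
    char c = parsedData[p];
    if (!(c == 'َ' || c == 'ً' || c == 'ُ' || c == 'ِ' || c == 'ٍ' || c == 'ٌ'
            || c == 'ّ' || c == 'ْ' || c == '"')) {
        cleaned.append(c);
    }
}
doc.add("content", cleaned.toString()); // assumption: where the result should go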
What options do I have to profile a page request in a Spring MVC app?
I want to get a breakdown of how long the page request takes, along with the various stages, like how long it takes to render the FreeMarker template, how long the Hibernate DB calls take, etc.
We just accomplished something similar with an interceptor and a custom tag. This solution is "light" enough to be used in production, presents its data as HTML comments at the bottom of the response, and allows you to opt into the more verbose logging with a request parameter. You apply the interceptor below to all request paths you want to profile, and you add the custom tag to the bottom of the desired pages. The placement of the custom tag is important; it should be invoked as close to the end of request processing as possible, as it's only aware of time spent (and objects loaded) prior to its invocation.
package com.foo.web.interceptor;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;
public class PageGenerationTimeInterceptor extends HandlerInterceptorAdapter {
public static final String PAGE_START_TIME = "page_start_time";
public static final String PAGE_GENERATION_TIME = "page_generation_time";
public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
Object handler) throws Exception {
request.setAttribute(PAGE_START_TIME, Long.valueOf(System.currentTimeMillis()));
return true;
}
public void postHandle(HttpServletRequest request, HttpServletResponse response,
Object handler, ModelAndView modelAndView) throws Exception {
Long startTime = (Long) request.getAttribute(PAGE_START_TIME);
if (startTime != null) {
request.setAttribute(PAGE_GENERATION_TIME, Long.valueOf(System.currentTimeMillis() - startTime.longValue()));
}
}
}
The custom tag looks for the request attributes, and uses them to compute the handler time, the view time, and the total time. It can also query the current Hibernate session for first-level cache statistics, which can shed some light on how many objects were loaded by the handler and view. If you don't need the Hibernate information, you can delete the big if block.
package com.foo.web.taglib;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.SortedSet;
import java.util.TreeSet;
import javax.servlet.ServletContext;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.JspWriter;
import javax.servlet.jsp.tagext.Tag;
import javax.servlet.jsp.tagext.TryCatchFinally;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.engine.CollectionKey;
import org.hibernate.engine.EntityKey;
import org.hibernate.stat.SessionStatistics;
import org.springframework.beans.factory.BeanFactoryUtils;
import org.springframework.context.ApplicationContext;
import org.springframework.web.bind.ServletRequestUtils;
import org.springframework.web.context.support.WebApplicationContextUtils;
import org.springframework.web.servlet.tags.RequestContextAwareTag;
import com.foo.web.interceptor.PageGenerationTimeInterceptor;
public class PageInfoTag extends RequestContextAwareTag implements TryCatchFinally {
private static final long serialVersionUID = -8448960221093136401L;
private static final Logger LOGGER = LogManager.getLogger(PageInfoTag.class);
public static final String SESSION_STATS_PARAM_NAME = "PageInfoTag.SessionStats";
@Override
public int doStartTagInternal() throws JspException {
try {
JspWriter out = pageContext.getOut();
Long startTime = (Long)pageContext.getRequest().getAttribute(PageGenerationTimeInterceptor.PAGE_START_TIME);
Long handlerTime = (Long)pageContext.getRequest().getAttribute(PageGenerationTimeInterceptor.PAGE_GENERATION_TIME);
if (startTime != null && handlerTime != null) {
long responseTime = System.currentTimeMillis() - startTime.longValue();
long viewTime = responseTime - handlerTime;
out.append(String.format("<!-- total: %dms, handler: %dms, view: %dms -->", responseTime, handlerTime, viewTime));
}
if (ServletRequestUtils.getBooleanParameter(pageContext.getRequest(), SESSION_STATS_PARAM_NAME, false)) {
//write another long HTML comment with information about contents of Hibernate first-level cache
ServletContext servletContext = pageContext.getServletContext();
ApplicationContext context = WebApplicationContextUtils.getRequiredWebApplicationContext(servletContext);
String[] beans = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context,
SessionFactory.class, false, false);
if (beans.length > 0) {
SessionFactory sessionFactory = (SessionFactory) context.getBean(beans[0]);
Session session = sessionFactory.getCurrentSession();
SessionStatistics stats = session.getStatistics();
Map<String, NamedCount> entityHistogram = new HashMap<String, NamedCount>();
out.append("\n<!-- session statistics:\n");
out.append("\tObject keys (").append(String.valueOf(stats.getEntityCount())).append("):\n");
for (Object obj: stats.getEntityKeys()) {
EntityKey key = (EntityKey)obj;
out.append("\t\t").append(key.getEntityName()).append("#").append(key.getIdentifier().toString()).append("\n");
increment(entityHistogram, key.getEntityName());
}
out.append("\tObject key histogram:\n");
SortedSet<NamedCount> orderedEntityHistogram = new TreeSet<NamedCount>(entityHistogram.values());
for (NamedCount entry: orderedEntityHistogram) {
out.append("\t\t").append(entry.name).append(": ").append(String.valueOf(entry.count)).append("\n");
}
Map<String, NamedCount> collectionHistogram = new HashMap<String, NamedCount>();
out.append("\tCollection keys (").append(String.valueOf(stats.getCollectionCount())).append("):\n");
for (Object obj: stats.getCollectionKeys()) {
CollectionKey key = (CollectionKey)obj;
out.append("\t\t").append(key.getRole()).append("#").append(key.getKey().toString()).append("\n");
increment(collectionHistogram, key.getRole());
}
out.append("\tCollection key histogram:\n");
SortedSet<NamedCount> orderedCollectionHistogram = new TreeSet<NamedCount>(collectionHistogram.values());
for (NamedCount entry: orderedCollectionHistogram) {
out.append("\t\t").append(entry.name).append(": ").append(String.valueOf(entry.count)).append("\n");
}
out.append("-->");
}
}
} catch (IOException e) {
LOGGER.error("Unable to write page info tag");
throw new RuntimeException(e);
}
return Tag.EVAL_BODY_INCLUDE;
}
protected void increment(Map<String, NamedCount> histogram, String key) {
NamedCount count = histogram.get(key);
if (count == null) {
count = new NamedCount(key);
histogram.put(key, count);
}
count.count++;
}
class NamedCount implements Comparable<NamedCount> {
public String name;
public int count;
public NamedCount(String name) {
this.name = name;
count = 0;
}
@Override
public int compareTo(NamedCount other) {
//descending count, ascending name
int compared = other.count - this.count;
if (compared == 0) {
compared = this.name.compareTo(other.name);
}
return compared;
}
}
}
Take a look here:
Profiling with Eclipse and remote profile agents on Linux
Tutorial: Profiling with TPTP and Tomcat
An introduction to profiling Java applications using TPTP
TPTP = Eclipse Test and Performance Tools Platform
More links on this topic:
Open Source Profilers in Java