I am working on Spring with Hibernate and I want to import an Excel file. I want to check the extension so that no one can upload a file other than Excel, i.e. restrict the import to files with an .xls or .xlsx extension. My code is here:
public class ImportCandidatesFormController extends BNUAbstractFormController {
private ImportCandidatesBL importCandidatesBL;
private ExcelReader reader;
@Override
protected ModelAndView processFormSubmission(HttpServletRequest request,
HttpServletResponse response, Object command, BindException arg3)
throws Exception {
FileUploadVO vo = (FileUploadVO) command;
MultipartFile file = vo.getFile();
System.out.println("File Uploaded: " + file.getOriginalFilename());
boolean isSuccessful = importCandidatesBL.importAndSaveCandidates(
file.getInputStream(),
SessionUtil.getCurrentUser(request.getSession()));
return new ModelAndView(new RedirectView("importCandidates.do?s=1"));
}
public ImportCandidatesBL getImportCandidatesBL() {
return importCandidatesBL;
}
public void setImportCandidatesBL(ImportCandidatesBL importCandidatesBL) {
this.importCandidatesBL = importCandidatesBL;
}
}
You can check the extension on the original file name:
String lowerCaseFileName = file.getOriginalFilename().toLowerCase();
if (!(lowerCaseFileName.endsWith(".xls") || lowerCaseFileName.endsWith(".xlsx"))) {
    // reject file
}
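Wired into the controller, the check could look like the sketch below. This assumes BNUAbstractFormController ultimately descends from Spring's SimpleFormController, so showForm(...) is available; the message code upload.invalidType is made up for illustration:
@Override
protected ModelAndView processFormSubmission(HttpServletRequest request,
        HttpServletResponse response, Object command, BindException errors)
        throws Exception {
    FileUploadVO vo = (FileUploadVO) command;
    MultipartFile file = vo.getFile();

    // Reject anything that is not .xls or .xlsx before reading the stream
    String lowerCaseFileName = file.getOriginalFilename().toLowerCase();
    if (!(lowerCaseFileName.endsWith(".xls") || lowerCaseFileName.endsWith(".xlsx"))) {
        errors.rejectValue("file", "upload.invalidType",
                "Only .xls or .xlsx files may be imported");
        return showForm(request, response, errors);
    }

    importCandidatesBL.importAndSaveCandidates(file.getInputStream(),
            SessionUtil.getCurrentUser(request.getSession()));
    return new ModelAndView(new RedirectView("importCandidates.do?s=1"));
}
Keep in mind that an extension check only looks at the file name; anyone can rename a file to .xlsx, so if that matters you should also sniff the content (for example with Apache Tika).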
I am writing a handler in my Spring Boot application to convert an object into CSV. If the user wants only the header, only the CSV header should be returned. There are two other options: header_and_body and body_only.
For some reason, header_only returns an empty file. Why is that? I tried adding p.println() for the header_only option, but an empty file is still returned.
The code is below. Well, not complete: I kept getting the Stack Overflow error "It looks like your post is mostly code; please add some more details", so I removed some code irrelevant to the topic under discussion.
public class CsvHttpMessageConverter extends AbstractHttpMessageConverter<CsvResponse> {
@Override
protected void writeInternal(CsvResponse response, HttpOutputMessage output)
throws IOException, HttpMessageNotWritableException {
output.getHeaders().setContentType(MEDIA_TYPE);
output.getHeaders().set("Content-Disposition", "attachment; filename=\"" + response.getFilename() + "\"");
OutputStream out = output.getBody();
writeRecords(response, out, response.getOption());
out.close();
}
private void writeRecords(CsvResponse response, OutputStream out, String option) throws IOException {
PrintWriter p = new PrintWriter(out);
List<DynamicMarketData> list = response.getRecords();
String header = mdhTotemConfiguration.getColumnListForCommodities();
if (option.equals("header_only") || option.equals("header_and_body"))
{
p.print(header);
}
if (option.equals("body_only") || option.equals("header_and_body"))
{
for (DynamicMarketData elem : list) {
p.println();
Iterator<Map.Entry<String, String>> entries = elem.getDynamicMarketData().entrySet().iterator();
while (entries.hasNext()) {
    Map.Entry<String, String> entry = entries.next();
    p.print(entry.getValue());
    if (entries.hasNext()) {
        p.print(",");
    }
}
}
}
}
}
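One likely cause, for what it is worth: PrintWriter buffers its output, and writeRecords never flushes it before the underlying stream is closed, so short output can be lost entirely. A small header stays below the buffer size while a long body overflows it and gets partially flushed, which would explain why only header_only looks empty. A minimal sketch of the fix:
private void writeRecords(CsvResponse response, OutputStream out, String option) throws IOException {
    PrintWriter p = new PrintWriter(out);
    // ... write the header and/or body exactly as above ...
    p.flush(); // push buffered characters to the stream before it is closed
}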
I'm trying to upload a file to Amazon S3. Instead of uploading a local file, I want to read the data from the database using Spring Batch and write it directly into S3 storage. Is there any way we can do that?
Spring Cloud AWS adds support for the Amazon S3 service to load and write resources with the resource loader and the s3 protocol. Once you have configured the AWS resource loader, you can write a custom Spring Batch writer like:
import java.io.OutputStream;
import java.util.List;
import org.springframework.batch.item.ItemWriter;
import org.springframework.core.io.ResourceLoader;
import org.springframework.core.io.WritableResource;
public class AwsS3ItemWriter implements ItemWriter<String> {
private ResourceLoader resourceLoader;
private WritableResource resource;
public AwsS3ItemWriter(ResourceLoader resourceLoader, String resource) {
this.resourceLoader = resourceLoader;
this.resource = (WritableResource) this.resourceLoader.getResource(resource);
}
@Override
public void write(List<? extends String> items) throws Exception {
try (OutputStream outputStream = resource.getOutputStream()) {
for (String item : items) {
outputStream.write(item.getBytes());
}
}
}
}
Then you should be able to use this writer with an S3 resource like s3://myBucket/myFile.log.
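For illustration, the wiring might look like this hypothetical bean definition (bucket and file names are placeholders):
@Bean
public AwsS3ItemWriter itemWriter(ResourceLoader resourceLoader) {
    // Spring Cloud AWS resolves the s3:// scheme through the resource loader
    return new AwsS3ItemWriter(resourceLoader, "s3://myBucket/myFile.log");
}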
Please note that I did not compile/test the previous code. I just wanted to give you a starting point for how to do it.
Hope this helps.
The problem is that the OutputStream will only write the last List of items sent by the step...
I think you might need to write a temporary file on the file system and then send the whole file in a separate tasklet. See this example:
https://github.com/TerrenceMiao/AWS/blob/master/dynamodb-java/src/main/java/org/paradise/microservice/userpreference/service/writer/CSVFileWriter.java
I had the same thing to do. Because Spring has no class to write to a stream alone, I made one myself, like the example above.
You need two classes for this. A resource class which implements WritableResource and extends AbstractResource:
...
public class S3Resource extends AbstractResource implements WritableResource {
ByteArrayOutputStream resource = new ByteArrayOutputStream();
@Override
public String getDescription() {
return null;
}
@Override
public InputStream getInputStream() throws IOException {
return new ByteArrayInputStream(resource.toByteArray());
}
@Override
public OutputStream getOutputStream() throws IOException {
return resource;
}
}
And your writer, which implements ItemWriter:
public class AmazonStreamWriter<T> implements ItemWriter<T>{
private WritableResource resource;
private LineAggregator<T> lineAggregator;
private String lineSeparator;
public String getLineSeparator() {
return lineSeparator;
}
public void setLineSeparator(String lineSeparator) {
this.lineSeparator = lineSeparator;
}
AmazonStreamWriter(WritableResource resource){
this.resource = resource;
}
public WritableResource getResource() {
return resource;
}
public void setResource(WritableResource resource) {
this.resource = resource;
}
public LineAggregator<T> getLineAggregator() {
return lineAggregator;
}
public void setLineAggregator(LineAggregator<T> lineAggregator) {
this.lineAggregator = lineAggregator;
}
@Override
public void write(List<? extends T> items) throws Exception {
try (OutputStream outputStream = resource.getOutputStream()) {
StringBuilder lines = new StringBuilder();
for (T item : items) {
    lines.append(this.lineAggregator.aggregate(item)).append(this.lineSeparator);
}
outputStream.write(lines.toString().getBytes());
}
}
}
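As a hypothetical wiring example, using Spring Batch's stock DelimitedLineAggregator and BeanWrapperFieldExtractor (the Person bean and its field names are placeholders):
S3Resource resource = new S3Resource();

BeanWrapperFieldExtractor<Person> extractor = new BeanWrapperFieldExtractor<Person>();
extractor.setNames(new String[] { "id", "name" });

DelimitedLineAggregator<Person> aggregator = new DelimitedLineAggregator<Person>();
aggregator.setFieldExtractor(extractor);

AmazonStreamWriter<Person> writer = new AmazonStreamWriter<Person>(resource);
writer.setLineAggregator(aggregator);
writer.setLineSeparator(System.getProperty("line.separator"));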
With this setup you write the item information you receive from your database to your custom resource via an OutputStream. The filled resource can then be used in one of your steps to open an InputStream and upload it to S3 via the client.
I did it with: amazonS3.putObject(awsBucketName, awsBucketKey , resource.getInputStream(), new ObjectMetadata());
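For example, the upload step could be a small tasklet along these lines (just a sketch; the injected AmazonS3 client and the bucket fields are assumptions):
public class S3UploadTasklet implements Tasklet {
    private final AmazonS3 amazonS3;
    private final S3Resource resource;
    private final String awsBucketName;
    private final String awsBucketKey;

    public S3UploadTasklet(AmazonS3 amazonS3, S3Resource resource,
            String awsBucketName, String awsBucketKey) {
        this.amazonS3 = amazonS3;
        this.resource = resource;
        this.awsBucketName = awsBucketName;
        this.awsBucketKey = awsBucketKey;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        // Upload the in-memory resource that the writer step filled
        amazonS3.putObject(awsBucketName, awsBucketKey, resource.getInputStream(), new ObjectMetadata());
        return RepeatStatus.FINISHED;
    }
}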
My solution may not be the perfect approach, but from here on you can optimize it.
I am creating my first REST service, using JSON objects for the data transfer between user and server with the help of the Gson library 2.5. I am not using any frameworks like Jersey or anything like that (that was my project requirement). The Java version I use is 1.6 (part of my requirement), with a JBoss server and Eclipse as the IDE.
At the moment I have two small functions called from a simple HTML form. The first is supposed to request the data from the JSON file and the second is supposed to add new JSON information to the JSON document.
The problem is: when I try to access the JSON file, an array is returned with only the last submitted Person. When I save new Person information, that information is not saved in personsJsonFile but someplace else (I have no idea where).
My JSON file is found in the project's main folder.
Any help is deeply appreciated.
GetData class:
#Path("/data")
public class GetDataClass {
@GET
@Produces("text/plain")
public ArrayList<PersonConstructor> displayJsonFile() throws IOException{
ArrayList<PersonConstructor> newLib = new ArrayList<PersonConstructor>();
File jsonFile = new File("personsJsonFile.json");
Scanner fileInput = new Scanner(jsonFile);
Gson gson = new Gson();
while(fileInput.hasNextLine()){
String jsonLine = fileInput.nextLine();
PersonConstructor singlePerson = gson.fromJson(jsonLine, PersonConstructor.class);
newLib.add(singlePerson);
}
fileInput.close();
return newLib;
}
}
AddData Class:
#Path("/add")
public class AddPersonsClass {
@POST
public String addPersons(
#FormParam("idInput") int idInput,
#FormParam("surnameInput") String surnameInput,
#FormParam("nameInput") String nameInput
) throws IOException
{
Gson gson = new Gson();
PersonConstructor newPerson = new PersonConstructor();
newPerson.setPersonId(idInput);
newPerson.setPersonNume(nameInput);
newPerson.setPersonPrenume(surnameInput);
File jsonFile = new File("personsJsonFile.json");
FileWriter jsonWriter = new FileWriter(jsonFile);
System.out.println(newPerson);
String jsonLine = gson.toJson(newPerson);
System.out.println(newPerson);
jsonWriter.write(jsonLine+"\n");
jsonWriter.close();
return "Element: " + newPerson + "has been added";
}
}
PersonConstructor Class:
public class PersonConstructor {
private int personId;
private String personNume;
private String personPrenume;
public PersonConstructor(int personId, String personNume,String personPrenume){
this.personId = personId;
this.personPrenume = personPrenume;
this.personNume = personNume;
}
public PersonConstructor() {
}
public int getPersonId(){
return personId;
}
public void setPersonId(int personId){
this.personId = personId;
}
public String getPersonNume(){
return personNume;
}
public void setPersonNume(String personNume){
this.personNume = personNume;
}
public String getPersonPrenume(){
return personPrenume;
}
public void setPersonPrenume(String personPrenume){
this.personPrenume = personPrenume;
}
public String toString(){
return String.format("\n%s %s %s\n", this.personId, this.personNume, this.personPrenume);
}
}
The JSON file contains:
{"personId":5,"personNume":"Ursu","personPrenume":"Niculae"},
{"personId":6,"personNume":"Ivan","personPrenume":"Claudiu"},
{"personId":7,"personNume":"Hap","personPrenume":"Dorel"}
Your problem seems to be that you have not specified the path where the file should be saved. Add the path when creating the file:
final String jsonDirectory = "path to file";
File file = new File(jsonDirectory + "\\results.txt");
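Separately, note that new FileWriter(jsonFile) truncates the file on every request, which would explain why only the last submitted person survives. Opening the writer in append mode keeps the earlier entries (plain try/finally, since you are on Java 1.6):
File jsonFile = new File(jsonDirectory, "personsJsonFile.json");
FileWriter jsonWriter = new FileWriter(jsonFile, true); // true = append instead of overwrite
try {
    jsonWriter.write(gson.toJson(newPerson) + "\n");
} finally {
    jsonWriter.close();
}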
Tried the following:
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.spi.FileTypeDetector;
import org.apache.tika.Tika;
import org.apache.tika.mime.MimeTypes;
/**
*
* @author kiriti.k
*/
public class TikaFileTypeDetector {
private final Tika tika = new Tika();
public TikaFileTypeDetector() {
super();
}
public String probeContentType(Path path) throws IOException {
// Try to detect based on the file name only for efficiency
String fileNameDetect = tika.detect(path.toString());
if (!fileNameDetect.equals(MimeTypes.OCTET_STREAM)) {
return fileNameDetect;
}
// Then check the file content if necessary
String fileContentDetect = tika.detect(path.toFile());
if (!fileContentDetect.equals(MimeTypes.OCTET_STREAM)) {
return fileContentDetect;
}
// Specification says to return null if we could not
// conclusively determine the file type
return null;
}
public static void main(String[] args) throws IOException {
// expects file path as the program argument
if (args.length != 1) {
printUsage();
return;
}
Path path = Paths.get(args[0]);
TikaFileTypeDetector detector = new TikaFileTypeDetector();
// Analyse the file - first based on file name for efficiency.
// If cannot determine based on name and then analyse content
String contentType = detector.probeContentType(path);
System.out.println("File is of type - " + contentType);
}
public static void printUsage() {
System.out.print("Usage: java -classpath ... "
+ TikaFileTypeDetector.class.getName()
+ " ");
}
}
The above program checks based on the file extension only. How do I make it check the content type (MIME) as well and then determine the type? I am using tika-app-1.8.jar in NetBeans 8.0.2. What am I missing?
The code checks the file extension first and returns the MIME type based on that, if it finds a result. If you want it to check the content first, just switch the two statements:
public String probeContentType(Path path) throws IOException {
// Check contents first
String fileContentDetect = tika.detect(path.toFile());
if (!fileContentDetect.equals(MimeTypes.OCTET_STREAM)) {
return fileContentDetect;
}
// Try file name only if content search was not successful
String fileNameDetect = tika.detect(path.toString());
if (!fileNameDetect.equals(MimeTypes.OCTET_STREAM)) {
return fileNameDetect;
}
// Specification says to return null if we could not
// conclusively determine the file type
return null;
}
Be aware that this may have a huge performance impact.
You can use Files.probeContentType(path).
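For example (Java 7+; it delegates to the platform's installed FileTypeDetector implementations and may return null if the type cannot be determined):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ProbeExample {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get(args[0]);
        // Prints a MIME type string such as "application/pdf", or null if unknown
        System.out.println(Files.probeContentType(path));
    }
}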
My application creates a large CSV file (it's a report), and the idea is to deliver the contents of the CSV file without actually saving a file for it. Here's my code:
String csvData; //this is the string that contains the csv contents
byte[] csvContents = csvData.getBytes();
response.contentType = "text/csv";
response.headers.put("Content-Disposition", new Header(
"Content-Disposition", "attachment;" + "test.csv"));
response.headers.put("Cache-Control", new Header("Cache-Control",
"max-age=0"));
response.out.write(csvContents);
ok();
The CSV files that are being generated are rather large, and the error I am getting is:
org.jboss.netty.handler.codec.frame.TooLongFrameException: An HTTP line is larger than 4096 bytes.
What's the best way to overcome this issue?
My tech stack is Java 6 with Play Framework 1.2.5.
Note: the origin of the response object is play.mvc.Controller.response
Please use ServletOutputStream, like:
String csvData; //this is the string that contains the csv contents
byte[] csvContents = csvData.getBytes();
ServletOutputStream sos = response.getOutputStream();
response.setContentType("text/csv");
response.setHeader("Content-Disposition", "attachment; filename=test.csv");
sos.write(csvContents);
We use this to show the results of an action directly in the browser:
window.location='data:text/csv;charset=utf8,' + encodeURIComponent(your-csv-data);
I am not sure about the out-of-memory error, but I would at least try this:
request.format = "csv";
renderBinary(new ByteArrayInputStream(csvContents));
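A fuller action sketch (buildReport() is a hypothetical helper that assembles the CSV string):
public static void exportCsv() {
    String csvData = buildReport(); // hypothetical: builds the CSV contents
    response.setHeader("Content-Disposition", "attachment; filename=test.csv");
    request.format = "csv";
    renderBinary(new ByteArrayInputStream(csvData.getBytes()));
}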
Apparently Netty complains that the HTTP header is too long; maybe it somehow thinks that your file is part of the header. See also:
http://lists.jboss.org/pipermail/netty-users/2010-November/003596.html
As nylund states, using renderBinary should do the trick.
We use writeChunk ourselves to output large reports on the fly, like:
Controller:
public static void getReport() {
final Report report = new Report(code, from, to );
try {
while (report.hasMoreData()) {
final String data = await(report.getData());
response.writeChunk(data);
}
} catch (final Exception e) {
final Throwable cause = e.getCause();
if (cause != null && cause.getMessage().contains("HTTP output stream closed")) {
logger.warn(e, "user cancelled download");
} else {
logger.error(e, "error retrieving data");
}
}
}
In the report code:
public class Report {
public Report(final String code, final Date from, final Date to) {
}
public boolean hasMoreData() {
// find out if there is more data
}
public Future<String> getData() {
final Job<String> queryJob = new Job<String>() {
@Override
public String doJobWithResult() throws Exception {
// grab data (e.g read form db) and return it
return data;
}
};
return queryJob.now();
}
}