save multiple attachments in java

Saving a single file into the database (PostgreSQL) works fine, but I am stuck when I try to upload multiple files.
My JSP file upload field:
<input type="file" id="reqFile" name="attachments.myFile" multiple/>
This is my action method :
public String SaveAttachments() throws SQLException, IOException {
int case_id = (Integer) session.get("case_id");
System.out.println("in save attachments()");
int uploaded_by = 1;
try {
File destFile = new File(attachments.getMyFileFileName());
FileUtils.copyFile(attachments.getMyFile(), destFile);
fis = new FileInputStream(destFile);
currentCon = ConnectionManager.getConnection();
pstmt = currentCon.prepareStatement("insert into case_attachments(case_id, attachment, attachment_title, upload_date, uploaded_by) values(?,?,?,current_date,?)");
pstmt.setInt(1, case_id);
pstmt.setBinaryStream(2, fis, (int) destFile.length());
pstmt.setString(3, attachments.getMyFileFileName());
pstmt.setInt(4, uploaded_by);
pstmt.executeUpdate();
} catch (Exception e) {
System.out.println("Error From SaveAttachments :" + e);
}
return SUCCESS;
}
When I attach multiple files, they all go into the same column in the database in comma-separated format. How can I save the files in different rows when the user attaches multiple files?
Thanks In advance.

You can upload multiple files as described here; then you simply need to loop over them and insert them into the database one at a time.
You just need to decide whether to use a business layer or do the insert directly from the Action (not the best practice...), and whether to put all the files in a single transaction or each one in its own. A sketch of such a loop follows below.
Remember that in XHTML the multiple attribute must be written multiple="multiple"; plain HTML5 also allows the bare multiple form, and the majority of browsers handle both correctly.
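For example, here is a minimal sketch of such a loop, assuming the Struts2 fileUpload interceptor is in place and multiple files are posted under one field name, in which case Struts2 populates parallel lists of files and file names (the property names and the caseId/uploadedBy variables are illustrative):
// Populated by the fileUpload interceptor for a form field named "upload"
private List<File> upload;
private List<String> uploadFileName;

public String saveAttachments() throws SQLException, IOException {
    try (Connection con = ConnectionManager.getConnection();
         PreparedStatement pstmt = con.prepareStatement(
                 "insert into case_attachments(case_id, attachment, attachment_title, upload_date, uploaded_by) "
                 + "values (?, ?, ?, current_date, ?)")) {
        for (int i = 0; i < upload.size(); i++) {
            File file = upload.get(i);
            try (FileInputStream fis = new FileInputStream(file)) {
                pstmt.setInt(1, caseId);
                pstmt.setBinaryStream(2, fis, (int) file.length());
                pstmt.setString(3, uploadFileName.get(i));
                pstmt.setInt(4, uploadedBy);
                pstmt.executeUpdate(); // one row per attached file
            }
        }
    }
    return SUCCESS;
}
Each file gets its own insert, which puts each attachment in its own row instead of a comma-separated value.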

Use an array for multiple uploads:
<td><input type="file" name="file[]" class="multi" id="reqFile"></td>

Related

Parse huge CSV file

I have a huge CSV file. I need to read it, validate it, and write it to the database. After some research, I found this solution:
//configure the input format
CsvParserSettings settings = new CsvParserSettings();
//get an iterator
CsvParser parser = new CsvParser(settings);
Iterator<String[]> it = parser.iterate(new File("/path/to/your.csv"), "UTF-8").iterator();
//connect to the database and create an insert statement
Connection connection = getYourDatabaseConnectionSomehow();
final int COLUMN_COUNT = 2;
PreparedStatement statement = connection.prepareStatement("INSERT INTO some_table(column1, column2) VALUES (?,?)");
//run batch inserts of 1000 rows per batch
int batchSize = 0;
List<String[]> badDataList = new ArrayList<String[]>(); // collects rows that fail validation
while (it.hasNext()) {
//get next row from parser and set values in your statement
String[] row = it.next();
//validation
if (!row[0].matches(some regex)) { // "some regex" stands in for the real validation pattern
badDataList.add(row);
continue;
}
for(int i = 0; i < COLUMN_COUNT; i++){
if(i < row.length){
statement.setObject(i + 1, row[i]);
} else { //row in input is shorter than COLUMN_COUNT
statement.setObject(i + 1, null);
}
}
//add the values to the batch
statement.addBatch();
batchSize++;
//once 1000 rows have been added to the batch, execute it
if (batchSize == 1000) {
statement.executeBatch();
batchSize = 0;
}
}
// the last batch probably won't have 1000 rows.
if (batchSize > 0) {
statement.executeBatch();
}
// alternatively, jOOQ's Loader API can do batched loads; roughly:
// context.loadInto(table("book"))
//        .batchAfter(500)
//        .loadArrays(rows)   // rows: an Iterable<String[]> collected earlier
//        .execute();
However, it is still too slow because everything executes in a single thread. Is there any way to make it faster with multi-threading?
Instead of iterating over the records one by one, use a command such as LOAD DATA INFILE that imports the data in bulk:
JDBC: CSV raw data export/import from/to remote MySQL database using streams (SELECT INTO OUTFILE / LOAD DATA INFILE)
Note: as @XtremeBaumer said, each database vendor has its own command for bulk importing from files.
Validation can be done with different strategies; for example, if the validation is expressible in SQL, you can import the data into a temporary table and then copy only the valid rows into the target table.
Or you can validate the data in Java code and then bulk-import the validated data instead of inserting rows one by one.
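As a rough sketch with plain JDBC and MySQL (the allowLoadLocalInfile URL flag and the table/column names are assumptions for illustration; PostgreSQL has COPY as the equivalent):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BulkLoad {
    public static void main(String[] args) throws Exception {
        // One server-side bulk load instead of thousands of single-row inserts
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost/mydb?allowLoadLocalInfile=true", "user", "pass");
             Statement stmt = conn.createStatement()) {
            stmt.execute(
                "LOAD DATA LOCAL INFILE '/path/to/your.csv' " +
                "INTO TABLE some_table " +
                "FIELDS TERMINATED BY ',' " +
                "LINES TERMINATED BY '\\n' " +
                "(column1, column2)");
        }
    }
}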
First you should close the statement and the connection; use try-with-resources. Then check the (auto)commit transactionality: for bulk inserts it usually pays to disable auto-commit and commit once at the end instead of after every statement:
connection.setAutoCommit(false);
In the same category would be a database lock on the table, should the database be in use by others.
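Putting the first two points together, a minimal sketch reusing the insert from the question:
try (Connection connection = getYourDatabaseConnectionSomehow();
     PreparedStatement statement = connection.prepareStatement(
             "INSERT INTO some_table(column1, column2) VALUES (?,?)")) {
    connection.setAutoCommit(false); // no commit per statement while batching
    // ... the addBatch()/executeBatch() loop from above ...
    connection.commit(); // one commit at the end (or one per executed batch)
}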
Calling String.matches recompiles the regex on every row, which is slow; instead of:
if (!row[0].matches(some regex)) {
do
private static final Pattern VALID_ROW_PATTERN = Pattern.compile(some regex);
...
if (!VALID_ROW_PATTERN.matcher(row[0]).matches()) { badDataList.add(row); continue; }
If there is a running number like an integer ID, the batch might do better keeping the number in a long and using statement.setLong(...) instead of setObject.
If a value comes from a short, finite domain, you could map each string to one canonical instance instead of creating 1000 different String objects. I am not sure whether these two measures help.
Multithreading seems dubious and should be the last resort. You could have one thread parse the CSV into a queue while another consumes from it and writes to the database, as sketched below.
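A minimal producer/consumer sketch of that idea, reusing the parser iterator from the question; the poison-pill end marker and the method name are illustrative:
import java.util.Iterator;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

static void parseAndLoad(Iterator<String[]> it) throws InterruptedException {
    BlockingQueue<String[]> queue = new ArrayBlockingQueue<>(10_000);
    final String[] POISON = new String[0]; // end-of-input marker

    Thread producer = new Thread(() -> {
        try {
            while (it.hasNext()) {
                queue.put(it.next()); // blocks when the consumer falls behind
            }
            queue.put(POISON);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });

    Thread consumer = new Thread(() -> {
        try {
            String[] row;
            while ((row = queue.take()) != POISON) {
                // validate + addBatch()/executeBatch() as in the original loop
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });

    producer.start();
    consumer.start();
    producer.join();
    consumer.join();
}
This overlaps parsing with database I/O, but the database insert usually remains the bottleneck, so measure before adding more threads.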

What is a better way to send a response and print data in a table structure?

I am new to Ajax. I am making an Ajax call that hits a servlet, which fetches data and prints it to the JSP using out.println(). It works fine, but I feel this is not a good way. Here is my code.
Ajax call:
xmlHttpReq.open('GET', "RTMonitor?rtype=groups&groupname="+temp, true);
In the RTMonitor servlet I have:
sql ="SELECT a.vehicleno,a.lat,a.lng,a.status,a.rdate,a.rtime from latlng a,vehicle_details b where a.vehicleno=b.vehicleno and b.clientid="+clientid +" and b.groupid in(select groupid from group_details where groupname='"+gname+"' and clientid='"+clientid+"')";
resultSet = statement.executeQuery(sql);
while(resultSet.next())
{
response.setContentType("text/html");
out.println("<tr>"+
"<td>"+"&nbsp"+"&nbsp"+resultSet.getString("vehicleno")+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"&nbsp"+"<br>"+"<br>"+"</td>"+);
//other <td>s
}
I think this is not a good way, so I am thinking about returning the response as a JSON object. Tell me how to return an object as JSON and set the values in the <td>s. Is JSON a good way, or is there a better alternative? Please suggest one.
Object-to-JSON conversion is explained in this question: Converting Java Object to Json using Marshaller
Other things:
Your SQL is unsafe! Please refer to the following question that explains prepared statements and has examples too: Difference between Statement and PreparedStatement
Generally you should not write low-level Ajax code yourself unless you are aiming to learn; there are many cross-browser JavaScript libraries that provide these things in a robust manner, such as jQuery. jQuery's API has getJSON, which you will undoubtedly find very useful (API doc):
var params = {myObjectId: 1337}; // just some parameters
$.getJSON( "myUrl/myAjaxAction", params, function( data ) { /* this is the success handler*/
alert(data.myObject.name); // assuming that returned data (JSON) is: {myObject: {name: 'Hello World!'}}
});
You should avoid mixing server-side code with client-side code as much as possible.
Your client-side code should only offer a nice and rich user interface by manipulating the data provided by the server. The server-side code should only process the data, whether it comes from different calls or from storage, usually a database.
Usually the communication (asynchronous or not) between a client and a server goes like this:
the client sends a request to the server
the server processes the request and gives a response, usually some HTML or JSON/XML
the client processes the response from the server
OK, now let's turn to your specific problem.
Your Ajax call, xmlHttpReq.open('GET', "RTMonitor?rtype=groups&groupname="+temp, true);, should send the request to the servlet and expect some data back to process and render nicely to the user. Your servlet should handle the request by querying the database (you should definitely change your code to use prepared statements, as they prevent SQL injection). By doing so, you separate your client-side code from your server-side code.
private List<YourObject> loadObjectsBySomeLogic() throws SQLException{
String sql ="SELECT a.vehicleno,a.lat,a.lng,a.status,a.rdate,a.rtime FROM latlng a,vehicle_details b WHERE a.vehicleno=b.vehicleno AND b.clientid= ? AND b.groupid in(select groupid from group_details where groupname= ? and clientid= ?)";
List<YourObject> list = new ArrayList<YourObject>(); // new ArrayList<>() works on Java 7+
PreparedStatement ps = null;
ResultSet rs = null;
try{
ps = connection.prepareStatement(sql);
ps.setLong(1, clientId);
ps.setString(2, gname);
ps.setLong(3, clientId);
rs = ps.executeQuery();
while (rs.next())
{
//load data from ResultSet into an object/list of objects
}
}finally{
closeResources(rs, ps);
}
return list;
}
private static final void closeResources(final ResultSet rs , final PreparedStatement ps){
if (rs != null) {
try {
rs.close();
} catch (SQLException e) {
//nasty rs. log the exception?
LOGGER.error("Could not close the ResultSet!" , e);
}
}
if (ps != null) {
try {
ps.close();
} catch (SQLException e) {
//nasty ps. log the exception?
LOGGER.error("Could not close the PreparedStatement!" , e);
}
}
}
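As a side note, on Java 7+ try-with-resources can replace the manual closeResources helper entirely; a minimal sketch assuming the same sql, connection, clientId and gname as above:
private List<YourObject> loadObjectsBySomeLogic() throws SQLException {
    List<YourObject> list = new ArrayList<YourObject>();
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        ps.setLong(1, clientId);
        ps.setString(2, gname);
        ps.setLong(3, clientId);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // load data from the ResultSet into an object/list of objects
            }
        }
    } // ps and rs are closed automatically, even if an exception is thrown
    return list;
}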
You could delegate this method to a different object which handles the business/application domain logic, but that's not the point in this case.
You can use JSON as your data format because it has a nice, easy-to-understand structure and it is more lightweight than XML. You can use any Java library to encode data as JSON; I'll provide an example using the Gson library.
List<YourObject> list = loadObjectsBySomeLogic();
String json = new Gson().toJson(list);
response.setContentType("application/json");
response.setCharacterEncoding("UTF-8");
response.getWriter().write(json);
Now your Ajax request should handle the JSON data coming from the server (I recommend using jQuery for Ajax calls, as it is well tested and works great on all major browsers).
$.get('RTMonitor', function(responseJson) {
//handle your json response by rendering it using html + css.
});

performance optimization in spring with transaction

I am building an enterprise application in which I have to parse a CSV file that contains, say, 50,000 records. The CSV file is supplied by the end user at registration time. I wrote a program that parses the CSV file into Java objects and then saves all these objects into the database. The file contains mobile numbers, and before the CSV data is saved each mobile number is validated against the database: if it already exists, validation fails and execution stops.
Now suppose two different users, A and B, each send a registration request. The controller listens for these requests with the following code.
Controller layer
@Transactional
@RequestMapping(value = "/saveCsvData")
public ModelAndView saveVMNDataFromCsv(@ModelAttribute("vmn") @Valid VMN vmn, BindingResult result, HttpSession session, HttpServletRequest request) {
String response = parseCsvService.parseAndPersistCsv(vmn.getVmnFile(), vmn.getNumberType(), request);
// ... build and return a ModelAndView from the response (elided) ...
}
On the controller method I have used the @Transactional annotation so that the method completes its work as one unit. The controller calls the helper class to read the CSV file line by line and turn the lines into Java objects. After getting the list of VMN objects, I loop over it and call the service method, which in turn calls the DAO method for each line.
Helper class
public String parseAndPersistCsv(MultipartFile csvFile, String numberType, HttpServletRequest request) {
List<VMN> vmnList = new ArrayList<VMN>();
// ... parse csvFile into vmnList and validate; 'save' and 'response' are set here (elided) ...
if (save) {
for (VMN vmn : vmnList) {
System.out.println("Remote Host :" + request.getRemoteHost());
System.out.println("Remote Add :" + request.getRemoteAddr());
vmnService.saveVmn(vmn, numberType);
}
response = constantService.getSuccess();
}
return response;
}
Service layer
public String saveVmn(final VMN vmn, String numberType) {
return vmnDao.saveVmn(vmn, numberType); // delegate to the DAO layer (field name assumed)
}
At Dao Layer method looks like this. This method insert record into multiple tables as it can be seen in method code.
DAO layer
public String saveVmn(final VMN vmn, String numberType) {
String result = "error";
try {
final StringBuffer sql = new StringBuffer();
sql.append(constantService.getInsertInto());
sql.append(VMNTableSingleton.getInstance().getTableName());
sql.append(" (");
sql.append(VMNTableSingleton.getInstance().getVmnNo());
sql.append(constantService.getComma());
sql.append(VMNTableSingleton.getInstance().getNumberType());
sql.append(constantService.getComma());
sql.append(VMNTableSingleton.getInstance().getOperator());
sql.append(constantService.getComma());
sql.append(VMNTableSingleton.getInstance().getCircle());
sql.append(constantService.getComma());
sql.append(VMNTableSingleton.getInstance().getBuyingPrice());
sql.append(constantService.getComma());
sql.append(VMNTableSingleton.getInstance().getRecurringPrice());
sql.append(constantService.getComma());
sql.append(VMNTableSingleton.getInstance().getCreationDate());
sql.append(constantService.getComma());
sql.append(VMNTableSingleton.getInstance().getUpdationDate());
sql.append(constantService.getComma());
sql.append(VMNTableSingleton.getInstance().getActive());
sql.append(constantService.getComma());
sql.append(VMNTableSingleton.getInstance().getStatus());
sql.append(")");
sql.append(constantService.getValues());
sql.append(" (?,?,?,?,?,?,?,?,?,?)");
logger.info("Saving Vmn..." + sql);
KeyHolder keyHolder = new GeneratedKeyHolder();
int response = jdbcTemplate.update(new PreparedStatementCreator() {
@Override
public PreparedStatement createPreparedStatement(Connection conn) throws SQLException {
PreparedStatement ps = conn.prepareStatement(sql.toString(), Statement.RETURN_GENERATED_KEYS);
ps.setObject(1, vmn.getVmnNo());
ps.setObject(2, vmn.getNumberType());
ps.setObject(3, vmn.getOperator());
ps.setObject(4, vmn.getCircle());
ps.setObject(5, vmn.getBuyingPrice());
ps.setObject(6, vmn.getRecurringPrice()); // column 6 is the recurring price
ps.setObject(7, new Date());
ps.setObject(8, new Date());
ps.setObject(9, true);
ps.setObject(10, vmn.getStatus());
return ps;
}
}, keyHolder);
logger.info("Saved Successfully");
if (response == 1) {
if(vmn.getMappedVmn() != null){
Long vmnId = keyHolder.getKey().longValue();
if(vmnId > 0){
StringBuffer mappedsql = new StringBuffer();
mappedsql.append(constantService.getInsertInto());
mappedsql.append(MapDIDVMNTableSingleton.getInstance().getTableName());
mappedsql.append(" (");
mappedsql.append(MapDIDVMNTableSingleton.getInstance().getDidId());
mappedsql.append(constantService.getComma());
mappedsql.append(MapDIDVMNTableSingleton.getInstance().getMappedId());
mappedsql.append(constantService.getComma());
mappedsql.append(MapDIDVMNTableSingleton.getInstance().getType());
mappedsql.append(constantService.getComma());
mappedsql.append(MapDIDVMNTableSingleton.getInstance().getCreationDate());
mappedsql.append(constantService.getComma());
mappedsql.append(MapDIDVMNTableSingleton.getInstance().getModifiedDate());
mappedsql.append(")");
mappedsql.append(constantService.getValues());
mappedsql.append(" (?,?,?,?,?)");
logger.info("Mapping... DID with VMN");
int mappedresponse = jdbcTemplate.update(mappedsql.toString(),
new Object[] {vmn.getMappedVmn().getVmnId(),vmnId ,vmn.getNumberType(),new Date(),new Date()});
logger.info("Mapped Successfully");
if(mappedresponse == 1){
StringBuffer updatesql = new StringBuffer();
updatesql.append(constantService.getUpdate());
updatesql.append(VMNTableSingleton.getInstance().getTableName());
updatesql.append(constantService.getSet());
updatesql.append(VMNTableSingleton.getInstance().getStatus());
updatesql.append(constantService.getEqual());
updatesql.append(constantService.getQuestionMark());
updatesql.append(constantService.getComma());
updatesql.append(VMNTableSingleton.getInstance().getAllocationDateTime());
updatesql.append(constantService.getEqual());
updatesql.append(constantService.getQuestionMark());
updatesql.append(constantService.getWhere());
updatesql.append(VMNTableSingleton.getInstance().getVmnId());
updatesql.append(constantService.getEqual());
updatesql.append(constantService.getQuestionMark());
logger.info("Updating Vmn..." + updatesql);
jdbcTemplate.update(updatesql.toString(),
new Object[] { constantService.getMapped(),new Date(), vmn.getMappedVmn().getVmnId()});
logger.info("Saved Successfully");
}
}
}
result = "success";
} else {
result = "error";
}
} catch (Exception ex) {
logger.error("Error while saving VMN", ex); // don't swallow the exception silently
}
return result;
}
Now when I sent two requests to the controller for registration, I saw on the console both threads accessing the method one by one, via this code inside the for loop:
System.out.println("Remote Host :" + request.getRemoteHost()); //10.0.0.0114
System.out.println("Remote Add :" + request.getRemoteAddr());//110.0.0.115
Now this can be a dirty-read problem, because while one thread is reading data the other might be inserting. To resolve this I have used a synchronized block, like this:
synchronized (this) {
if(save){
for(VMN vmn : vmnList){
System.out.println("Remote Host :" + request.getRemoteHost());
System.out.println("Remote Add :" + request.getRemoteAddr());
vmnService.saveVmn(vmn, numberType);
}
response = constantService.getSuccess();
}
}
Now my question is: is this the right way to do it, or can it be done in some other way?
this can be dirty read problem because if one thread is reading data
then other might be inserting
If your concern is visibility, then I think this is taken care of by the configured transaction isolation (usually READ COMMITTED by default), under which a transaction can only see committed data plus the data it is itself updating.
Also, you should consider using the batchUpdate method of JdbcTemplate, which uses the JDBC batching feature: read and check the existence of the numbers in batches, then insert in batches, as sketched below.
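A minimal sketch of that batching with JdbcTemplate (the column list is shortened for illustration; the real insert has the ten columns shown above, and loadAndValidate is a hypothetical helper):
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;

// Insert the whole list in one JDBC batch instead of one statement per row.
final List<VMN> vmnList = loadAndValidate(csvFile); // hypothetical helper
jdbcTemplate.batchUpdate(
    "insert into vmn_table (vmn_no, number_type) values (?, ?)",
    new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            VMN vmn = vmnList.get(i);
            ps.setObject(1, vmn.getVmnNo());
            ps.setObject(2, vmn.getNumberType());
        }
        @Override
        public int getBatchSize() {
            return vmnList.size();
        }
    });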
Typically such requirements are handled well by the Spring Batch framework. Your use case fits the description of a Spring Batch job, which can use the built-in CSV readers, processors and chunk-based processing to achieve high-volume data processing. The job can be triggered from the UI using Spring MVC controllers. For more details you can take a look here.

struts2 file upload losing parameters

Using Struts 2.3.15.1
Implementing file upload in Struts2. This is something I've done a number of times; however, this time I'm trying to include some sanity checks (primarily a max file size). I have the fileUpload interceptor in place as the last interceptor in my stack (in struts.xml). My stack includes a few in-house interceptors as well as the validationWorkflowStack. I've set the following property in my struts.properties file:
struts.multipart.maxSize = 2000000
In addition to the file upload, I'm passing a few other params in my form. The form is defined as:
<s:form action="addResource" method="post" enctype="multipart/form-data">
<s:hidden name="rfqId" value='%{rfq.id}' />
<s:file name="uploadFile" id="uploadFile" label="File" size="40" value=""/>
....
</s:form>
As I'm sure we all know, the validationWorkflowStack includes the params interceptor, which sets the request params onto the action. Here's the issue: when the uploaded file exceeds maxSize, there are no params for the params interceptor to set. I've stepped through it and there's nothing in the ActionContext. This is not good, because I need those params to handle the INPUT result that follows.
Am I missing something?
Problem solved!
From the updated documentation, the problem can now be solved by using the new JakartaStreamMultiPartRequest:
As from Struts version 2.3.18 a new implementation of MultiPartRequest
was added - JakartaStreamMultiPartRequest. It can be used to handle
large files, see WW-3025 for more details, but you can simply set
<constant name="struts.multipart.parser" value="jakarta-stream" />
in struts.xml to start using it.
From the linked JIRA body :
When any size limits exceed, immediately a
FileUploadBase.SizeLimitExceededException or
FileUploadBase.FileSizeLimitExceededException is thrown and parsing of
the multipart request terminates without providing request parameters
for further processing.
This basically makes it impossible for any web application to handle
size limit exceeded cases gracefully.
My proposal is that request parsing should always complete to deliver
the request parameters. Size limit exceeded cases/exceptions might be
collected for later retrieval, FileSizeLimitExeededException should be
mapped to the FileItem to allow some validation on the FileItem on
application level. This would allow to mark upload input fields as
erroneous if the uploaded file was too big.
Actually I made a patch for that (see attachment). With this patch,
commons-fileupload always completes request parsing in case of size
limit exceedings and only after complete parsing will throw an
exception if one was detected.
and Chris Cranford's comment:
I am working on a new multipart parser for Struts2 I am calling
JakartaStreamMultiPartRequest.
This multi-part parser behaves identical to the existing Jakarta
multi-part parser except that it uses the Commons FileUpload Streaming
API and rather than delegating maximum request size check to the File
Upload API, it's done internally to avoid the existing problem of the
Upload API breaking the loop iteration and parameters being lost.
Awesome, thanks guys :)
Old answer
I guess it is due to the different behavior of:
a single file (or several) exceeding its maximum defined size, which can be redirected back at the end of a normal process with the INPUT result, and
a violation of the maximum size of the entire request, which will (probably?) break the parsing of every other element, because it is a security mechanism, not a feature like the file-size check.
When the files are parsed first (it should depend on their order in the page), if a file breaks the multipart request size limit, the other fields (the form fields) won't be read and hence won't come back with the INPUT result.
Struts2 uses the Jakarta implementation for the MultiPartRequestWrapper:
struts.multipart.parser - This property should be set to a class that extends MultiPartRequest. Currently, the framework ships with the Jakarta FileUpload implementation.
You can find the source code on the official Struts2 site or here (faster to google); this is what is called when posting a multipart form:
public void parse(HttpServletRequest request, String saveDir) throws IOException {
try {
setLocale(request);
processUpload(request, saveDir);
} catch (FileUploadBase.SizeLimitExceededException e) {
if (LOG.isWarnEnabled()) {
LOG.warn("Request exceeded size limit!", e);
}
String errorMessage = buildErrorMessage(e, new Object[]{e.getPermittedSize(), e.getActualSize()});
if (!errors.contains(errorMessage)) {
errors.add(errorMessage);
}
} catch (Exception e) {
if (LOG.isWarnEnabled()) {
LOG.warn("Unable to parse request", e);
}
String errorMessage = buildErrorMessage(e, new Object[]{});
if (!errors.contains(errorMessage)) {
errors.add(errorMessage);
}
}
}
then, this is where it cycles the multipart Items, both files and form fields:
private void processUpload(HttpServletRequest request, String saveDir) throws FileUploadException, UnsupportedEncodingException {
for (FileItem item : parseRequest(request, saveDir)) {
if (LOG.isDebugEnabled()) {
LOG.debug("Found item " + item.getFieldName());
}
if (item.isFormField()) {
processNormalFormField(item, request.getCharacterEncoding());
} else {
processFileField(item);
}
}
}
that will end, in the FileUploadBase, in this implementation for each item:
FileItemStreamImpl(String pName, String pFieldName,
String pContentType, boolean pFormField,
long pContentLength) throws IOException {
name = pName;
fieldName = pFieldName;
contentType = pContentType;
formField = pFormField;
final ItemInputStream itemStream = multi.newInputStream();
InputStream istream = itemStream;
if (fileSizeMax != -1) {
if (pContentLength != -1
&& pContentLength > fileSizeMax) {
FileSizeLimitExceededException e =
new FileSizeLimitExceededException(
format("The field %s exceeds its maximum permitted size of %s bytes.",
fieldName, fileSizeMax),
pContentLength, fileSizeMax);
e.setFileName(pName);
e.setFieldName(pFieldName);
throw new FileUploadIOException(e);
}
istream = new LimitedInputStream(istream, fileSizeMax) {
@Override
protected void raiseError(long pSizeMax, long pCount)
throws IOException {
itemStream.close(true);
FileSizeLimitExceededException e =
new FileSizeLimitExceededException(
format("The field %s exceeds its maximum permitted size of %s bytes.",
fieldName, pSizeMax),
pCount, pSizeMax);
e.setFieldName(fieldName);
e.setFileName(name);
throw new FileUploadIOException(e);
}
};
}
stream = istream;
}
As you can see, it handles the file-size cap and the request-size cap quite differently.
I've looked at the source for fun, but you should really confirm (or correct) these assumptions by debugging the MultiPartRequestWrapper to see whether what happens inside is what I think is going on... good luck and have fun.
Here's how I've worked around this issue; I wouldn't call it a solution.
Try putting a JavaScript check in at an early stage:
<!DOCTYPE html>
<html>
<head>
<script type="text/javascript">
function checkSize(max_img_size)
{
var input = document.getElementById("upload");
// check for browser support (may need to be modified)
if(input.files && input.files.length == 1)
{
if (input.files[0].size > max_img_size)
{
alert("The file must be less than " + (max_img_size/1024/1024) + "MB");
return false;
}
}
return true;
}
</script>
</head>
<body>
<form action="demo_post_enctype.asp" method="post" enctype="multipart/form-data"
onsubmit="return checkSize(2097152)">
<input type="file" id="upload" name="upload" />
<input type="submit" />
</form>
</body>
</html>
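On the server side, note also that the fileUpload interceptor has its own per-file maximumSize parameter, separate from the global struts.multipart.maxSize request cap; a struts.xml sketch (the surrounding interceptor stack is assumed):
<interceptor-ref name="fileUpload">
    <param name="maximumSize">2000000</param>
</interceptor-ref>
A per-file violation is reported as a field error with the INPUT result, whereas blowing the whole-request cap triggers the parameter-loss behavior discussed above.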

how to find number of records in ResultSet

I am getting a ResultSet from an Oracle query. When I iterate through the ResultSet, it goes into an infinite loop.
ResultSet rs = (ResultSet) // getting from statement
while (rs.next()) {
//
//
}
The loop does not terminate, so I tried finding the number of records using rs.getFetchSize(), and it returns the value 10.
I want to know whether this is the correct method to find the number of records in a ResultSet, and if the count is 10, why does it go into an infinite loop?
Please give your opinion.
Actually, the ResultSet doesn't have a clue about the real number of rows it will return.
In fact, with a hierarchical query or a pipelined function, the number might as well be infinite. 10 is merely the suggested number of rows the ResultSet will try to fetch in a single round trip (see the comment below).
It's best to check your query if it returns more rows than you expect.
To find the number of records present, try the following code:
ResultSet rs = // getting from statement
try {
boolean b = rs.last();
int numberOfRecords = 0;
if(b){
numberOfRecords = rs.getRow();
}
} catch (SQLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
A simple getRowCount method can look like this:
private int getRowCount(ResultSet resultSet) {
if (resultSet == null) {
return 0;
}
try {
resultSet.last();
return resultSet.getRow();
} catch (SQLException exp) {
exp.printStackTrace();
} finally {
try {
resultSet.beforeFirst();
} catch (SQLException exp) {
exp.printStackTrace();
}
}
return 0;
}
Your ResultSet must be scrollable to use this method; see the sketch below for how to request one.
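For reference, a scrollable ResultSet has to be requested when the Statement is created; a minimal sketch:
// Ask the driver for a scrollable, read-only ResultSet so that
// last(), getRow() and beforeFirst() work.
Statement stmt = connection.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE,
        ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery("select * from employee");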
Just looked: this seems to be along similar lines to this question.
When you execute a query and get a ResultSet, at that moment neither you nor the program itself knows how many results will be returned. The case is very similar to an Oracle CURSOR: it is just a declaration to Oracle that you want to run such a query, and you then have to walk the ResultSet row by row up to the last one.
As the answers above already say, rs.last() will iterate to the last row, at which point the program can tell how many rows in total were returned.
if (res.getRow() > 0)
{
// data present in the ResultSet
}
else
{
// data not present in the ResultSet
}
You can look at the snippet of code below to see how to calculate the number of loaded records in a data set. This example works with an external data set (which comes in JSON format), so you can start from it with your own. The necessary piece of code is placed in the controller script (this page is based on the AppML JavaScript library, and the controller works with loaded AppML objects); the controller code computes the number of loaded records in the data set.
<!DOCTYPE html>
<html lang="en-US">
<title>Customers</title>
<style>
body {font: 14px Verdana, sans-serif;}
h1 { color: #996600; }
table { width: 100%;border-collapse: collapse; }
th, td { border: 1px solid grey;padding: 5px;text-align: left; }
table tr:nth-child(odd) {background-color: #f1f1f1;}
</style>
<script src="http://www.w3schools.com/appml/2.0.2/appml.js"></script>
<body>
<div appml-data="http://www.w3schools.com/appml/customers.aspx" appml-controller="LukController">
<h1>Customers</h1>
<p></p>
<b>It was loaded {{totalRec}} records in total.</b>
<p></p>
<table>
<tr>
<th>Customer</th>
<th>City</th>
<th>Country</th>
</tr>
<tr appml-repeat="records">
<td>{{CustomerName}}</td>
<td>{{City}}</td>
<td>{{Country}}</td>
</tr>
</table>
</div>
<script>
function LukController($appml) {
if ($appml.message == "loaded") {
$appml.totalRec = Object.keys($appml.data.records).length;
}
}
// *****************************************************************
// Message Description
//
// ready Sent after AppML is initiated, and ready to load data.
// loaded Sent after AppML is fully loaded, ready to display data.
// display Sent before AppML displays a data item.
// done Sent after AppML is done (finished displaying).
// submit Sent before AppML submits data.
// error Sent after AppML has encountered an error.
// *****************************************************************
</script>
</body>
</html>
I got the answer. Below are the steps you need to follow:
Make sure you are using a select query (e.g. select * from employee).
Don't use a count query (e.g. select count(*) from employee).
Then use the code below:
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("select * from employee");
int rowCount = 0;
while (rs.next()) {
rowCount++;
}
return rowCount;
where rs is the ResultSet object.
This gives you the exact row count.
