Writing to an Excel file with POI - Java

I am reading a phone number from one Excel file and writing it into another Excel file using the following code:
cellph = row.getCell(3);
Object phone = cellph.getNumericCellValue();
String strphone = phone.toString();
cellfour.setCellType(cellfour.CELL_TYPE_STRING);
cellfour.setCellValue("0"+strphone);
It writes the phone number as 09.8546586. I want it written as 098546586 (without the decimal point). How can I do that?

Your problem isn't with the write; it's with the read, which is what's giving you the floating-point number.
From your code and description, it looks like your phone numbers are stored in Excel as number cells with an integer format applied to them. That means that when you retrieve the cell, you'll get a double value plus a cell format that tells you how to display it the way Excel does.
I think what you probably want to do is something more like:
DataFormatter formatter = new DataFormatter();
cellph = row.getCell(3);
String strphone = "(none available)";
if (cellph.getCellType() == Cell.CELL_TYPE_NUMERIC) {
    // Number with a format
    strphone = "0" + formatter.formatCellValue(cellph);
} else if (cellph.getCellType() == Cell.CELL_TYPE_STRING) {
    // String, e.g. they typed ' before the number
    strphone = "0" + cellph.getStringCellValue();
}
// For all other types, we'll show the "none available" message
cellfour.setCellType(Cell.CELL_TYPE_STRING);
cellfour.setCellValue(strphone);

Related

How to get an integer value from Excel and store it in a SoapUI Properties step

I have data in an Excel file that holds a numeric value, and when I send that value from Excel to SoapUI properties, it gets converted into a string like below:
In Excel:
The value of the Data column is 200
In SoapUI properties:
The value of the Data field gets changed to 200.0
Can anyone help me get the same numeric value into the SoapUI properties?
FileInputStream fis = new FileInputStream("Excel File Location")
XSSFWorkbook workbook = new XSSFWorkbook(fis)
XSSFSheet sheet = workbook.getSheet("SheetName")
int totalRows = sheet.getPhysicalNumberOfRows()
testRunner.testCase.setPropertyValue("totalRows", totalRows.toString())
int totalColumns = sheet.getRow(0).getLastCellNum()
def columns = []
def data = []
def rowIndex = testRunner.testCase.getPropertyValue("RowIndex").toInteger()
if (rowIndex < totalRows)
{
    for (int i = 0; i < totalColumns; i++)
    {
        def colName = sheet.getRow(0).getCell(i).toString()
        columns.add(colName)
        def testData = sheet.getRow(rowIndex).getCell(i).toString()
        data.add(testData)
        testRunner.testCase.getTestStepByName("Properties").setPropertyValue(columns[i], data[i])
    }
}
As Axel Richter said, use DataFormatter.formatCellValue to correct the value at the source. Alternatively, you can use an abstraction library that works well with Groovy and handles that for you.
As you said, SoapUI only uses strings for property values. If you want to correct it on the receiving side, you can use this:
def propVal = '200.0'
assert propVal instanceof java.lang.String
println propVal.toFloat().toInteger()
assert propVal.toFloat().toInteger() instanceof java.lang.Integer
It is generally better to correct the format at source (before storing it in a property).
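For reference, here is a minimal sketch of the source-side fix in plain Java (the same POI calls work from a Groovy script step). The file name, sheet name, and cell position are placeholders; the key point is that DataFormatter.formatCellValue returns the value as Excel displays it, not the raw double:
import java.io.FileInputStream;

import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelPropertyReader {
    public static void main(String[] args) throws Exception {
        FileInputStream fis = new FileInputStream("data.xlsx"); // placeholder path
        XSSFWorkbook workbook = new XSSFWorkbook(fis);
        XSSFSheet sheet = workbook.getSheet("SheetName");       // placeholder sheet name

        // formatCellValue() applies the cell's own format, so an
        // integer-formatted 200 comes back as "200", not "200.0".
        DataFormatter formatter = new DataFormatter();
        Cell cell = sheet.getRow(1).getCell(0);                  // assumed cell holding the value
        System.out.println(formatter.formatCellValue(cell));

        fis.close();
    }
}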

Android Localization query

I have been working on localisation for my app and can't seem to find any information about how to handle decimal values and dates from different locales so that they can be stored in SQLite.
for example:
German decimal 123,53
Uk decimal 123.53
So how do you convert the contents of an EditText field to a valid decimal? At the moment I have my code outputting to a TextView rather than SQL, just for testing. The code below works great with the UK decimal, but if I use the German decimal it fails!
Configuration sysConfig = getResources().getConfiguration();
Locale curLocale = sysConfig.locale;
NumberFormat nf = NumberFormat.getInstance(curLocale);
String convertedString = nf.format(Double.parseDouble(EditTextField.getText().toString()));
TextView showLocalisedNumeric;
showLocalisedNumeric = (TextView)findViewById(R.id.TestNumericValue);
showLocalisedNumeric.setText(convertedString);
I have not started on dates yet, but I am assuming converting dates is more straightforward.
After some time working on this I found the solution for localisation: take the value from the input and parse it into a format that SQLite can understand. (The original code failed because Double.parseDouble always expects '.' as the decimal separator, so German input such as 123,53 throws a NumberFormatException.) For this exercise, and to keep the code short, I have just set it to output to a TextView.
// set initial value to variables
String convertedString = "";
double parsedValue = 0.0;
//get value from text field
EditText EditTextField =(EditText)findViewById(R.id.TestNumericValueEntered);
String valueFromInput = EditTextField.getText().toString();
//Get current Locale from system
Configuration sysConfig = getResources().getConfiguration();
Locale curLocale = sysConfig.locale;
//Set number formats
NumberFormat nf_in = NumberFormat.getNumberInstance(curLocale);
NumberFormat nf_out = NumberFormat.getNumberInstance(Locale.UK);
//try to parse value, otherwise return error message
try {
parsedValue = nf_in.parse(valueFromInput).doubleValue();
// use nf_out.setMaximumFractionDigits(3) to set max number of digits allowed after decimal;
convertedString = nf_out.format(parsedValue);
}catch(ParseException e){
convertedString = "Unable to translate value";
}
//Output result
TextView showLocalisedNumeric;
showLocalisedNumeric = (TextView)findViewById(R.id.TestNumericValue);
showLocalisedNumeric.setText(convertedString);
As I was adding this answer I realised that a nice addition to the code would be to check whether the current locale is the same as the one you plan to parse into; if so, the conversion (parsing) can be skipped.
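A minimal sketch of that check, assuming (as in the code above) that Locale.UK is the storage/target format; the class and helper names are made up for illustration:
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class DecimalLocalizer {

    // Hypothetical helper: re-formats a user-entered decimal string from the
    // current locale into the storage locale, skipping the round trip when
    // the two locales are already the same.
    static String toStorageFormat(String input, Locale current, Locale storage)
            throws ParseException {
        if (current.equals(storage)) {
            return input; // nothing to convert
        }
        double value = NumberFormat.getNumberInstance(current).parse(input).doubleValue();
        return NumberFormat.getNumberInstance(storage).format(value);
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(toStorageFormat("123,53", Locale.GERMANY, Locale.UK)); // 123.53
        System.out.println(toStorageFormat("123.53", Locale.UK, Locale.UK));      // unchanged
    }
}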

How to read data from a CSV that contains more separators than expected?

I use CsvJdbc to read data from a CSV. I get the CSV from a web service request, so it is not loaded from a file. I set these properties:
Properties props = new java.util.Properties();
props.put("separator", ";"); // separator is a semicolon
props.put("fileExtension", ".txt"); // file extension is .txt
props.put("charset", "UTF-8"); // UTF-8
My sample1.txt contains this data:
code;description
c01;d01
c02;d02
My sample2.txt contains this data:
code;description
c01;d01
c02;d0;;;;;2
Deleting the headers from the CSV is an option for me, but changing the semicolon separator is not.
EDIT: My query for resultSet: SELECT * FROM myCSV
I want to read the code column in sample1.txt and sample2.txt with:
resultSet.getString(1)
and read the full description column with all of its semicolons (d0;;;;;2). Is this possible with the CsvJdbc driver, or do I need to change drivers?
Thank you for any advice!
This is a problem that occurs when you have messy, invalid input that you need to interpret, and it is being read by a too-high-level package that only handles clean input. A similar example is trying to read arbitrary HTML with an XML parser: close, but no cigar.
You can guess where I'm going: you need to pre-process your input.
The preprocessing may be very easy if you can make some assumptions about the data - for example, if there are guaranteed to be no quoted semi-colons in the first column.
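A minimal, hypothetical sketch of that preprocessing, assuming the code column never contains a semicolon and that you already have each raw line as a String before handing anything to CsvJdbc:
public class CsvPreprocessor {

    // Split a raw line into exactly two fields: the code and the full
    // description, keeping any extra semicolons inside the description.
    static String[] splitCodeAndDescription(String line) {
        return line.split(";", 2); // limit 2: split only at the first semicolon
    }

    public static void main(String[] args) {
        String[] parts = splitCodeAndDescription("c02;d0;;;;;2");
        System.out.println(parts[0]); // c02
        System.out.println(parts[1]); // d0;;;;;2
    }
}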
You could try Super CSV; we have implemented such a solution in our project. More on this can be found at http://supercsv.sourceforge.net/ and in "Using CsvBeanReader to read a CSV file with a variable number of columns".
In the end I solved this problem without the CsvJdbc or Super CSV drivers. Those drivers work fine: they let you query data from a CSV file and offer many features. In my case, though, I don't need to query data from the CSV. Unfortunately, the description column sometimes contains one or more semicolons, which is also my separator.
First I took the code from @Maher Abuthraa's answer and modified it to:
private String createDescriptionFromResult(ResultSet resultSet, int columnCount) throws SQLException {
if (columnCount > 2) {
StringBuilder data_list = new StringBuilder();
for (int ii = 2; ii <= columnCount; ii++) {
data_list.append(resultSet.getString(ii));
if (ii != columnCount)
data_list.append(";");
}
// data_list has all data from all index you are looking for ..
return data_list.toString();
} else {
// use standard way
return resultSet.getString(2);
}
}
The loop starts from 2 because column 1 is the code and only the description column contains semicolons. The CsvJdbc driver splits columns on the ; separator, so those semicolons disappear from the column data; I therefore explicitly add the semicolons back into the description, except after the last column (a trailing separator is not needed in my case).
This code works fine, but it did not solve my whole problem. When I defined only two columns in the CSV header, I got an error on any row containing more than two semicolons. So I tried either ignoring the header or adding extra column names (or simply extra ;) to the header. In Super CSV the ignore-headers option works fine.
My colleague's opinion was: you don't need a CSV driver at all, because you are trying to load a CSV that is not really a CSV when the separator sometimes appears in the data.
I think my colleague is right, and I loaded the CSV data with the following code:
InputStream in = null;
try {
    in = new ByteArrayInputStream(csvData);
    List<String> lines = IOUtils.readLines(in, "UTF-8");
    for (String line : lines) {
        String description = null;
        String code = null;
        String[] columns = line.split(";");
        if (columns.length >= 2) {
            code = columns[0];
            String[] dest = new String[columns.length - 1];
            System.arraycopy(columns, 1, dest, 0, columns.length - 1);
            description = org.apache.commons.lang.StringUtils.join(dest, ";");
            (...)
OK, my solution is to read all the remaining fields when there are more than 2 columns, like this:
int ccc = meta.getColumnCount();
if (ccc > 2) {
ArrayList<String> data_list = new ArrayList<String>();
for (int ii = 1; ii < ccc; ii++) {
data_list.add(resultSet.getString(ii));
}
//data_list has all data from all index you are looking for ..
} else {
//use standard way
resultSet.getString(1);
}
If the table is defined to have as many columns as there could be semicolons in the source, ignoring the initial column definitions, then the excess semicolons will be consumed by the database driver automatically.
The most likely reason they appear in the final column is that the parser returns the balance of the row, up to the terminator, in that field.
Simply increasing the number of columns in the table to match the maximum possible in the input will avoid the need for custom parsing in the program. Try:
code;description;dummy1;dummy2;dummy3;dummy4;dummy5
c01;d01
c02;d0;;;;;2
Then, the additional ';' delimiters will be consumed by the parser correctly.

xlsx analysis: where the XML value is different from the cell value

We are now building a function that imports large xlsx files (more than 200 MB) into a DB using apache.poi, walking through the underlying XML files to read the data.
The function is complete, but I have a question:
when I enter the value '1:16' in an xlsx cell, Excel automatically converts the stored type to a user-defined numeric format.
In the XML file you will see:
<c r="A1" s="1"><v>5.2777777777777778E-2</v></c>
and I just need to get the value '1:16' back.
How can I do that?
The "number" 1:16 is converted by Excel to a time and date/times in excel are stored as a number where the integer part is the number of days since the epoch and the decimal part is the percentage of the day.
So in your example:
= 0.0527777777777778 * 24 * 60 (hours per day * minutes per hour)
= 76 mins
= 1 hour 16 mins
With POI you will need to use a data formatter. Something like:
DataFormatter formatter = new DataFormatter(Locale.US);
if(DateUtil.isCellDateFormatted(cell)) {
String formattedData = formatter.formatCellValue(cell);
...
}
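Pulled together into a minimal, self-contained sketch (the file name and cell position are placeholders; note that the usermodel API shown here loads the whole workbook into memory, so for 200 MB+ files you would apply the same format-aware conversion in your streaming XML reader instead):
import java.io.FileInputStream;
import java.util.Locale;

import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.DateUtil;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class TimeCellReader {
    public static void main(String[] args) throws Exception {
        FileInputStream fis = new FileInputStream("times.xlsx"); // placeholder path
        XSSFWorkbook workbook = new XSSFWorkbook(fis);
        Cell cell = workbook.getSheetAt(0).getRow(0).getCell(0); // assumed cell A1 holding 1:16

        DataFormatter formatter = new DataFormatter(Locale.US);
        if (DateUtil.isCellDateFormatted(cell)) {
            // formatCellValue applies the cell's own format string,
            // so the stored 5.2777...E-2 comes back as "1:16".
            System.out.println(formatter.formatCellValue(cell));
        } else {
            System.out.println(cell.toString());
        }
        fis.close();
    }
}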

Error while grouping files based on the date field

I have a large file with 10,000 rows, and each row has a date at the end. All the fields in a row are tab-separated. There are 10 distinct dates, randomly assigned across the 10,000 rows. I am now writing Java code to write all the rows with the same date into a separate file, so that each file holds the rows for one date.
I am trying to do it using string manipulation, but when I try to compare the rows on the date, I get a compile error on the date literal saying the literal is out of range. Here is the code I used; please have a look and let me know if this is the right approach, and if not, kindly suggest a better one. I tried changing the datatype to long, but I still get the same error. A row in the file looks something like this:
Each field is tab separated and the fields are:
business id, category, city, biz.name, longitude, state, latitude, type, date
**
qarobAbxGSHI7ygf1f7a_Q ["Sandwiches","Restaurants"] Gilbert Jersey
Mike's Subs -111.8120071 AZ 3.5 33.3788385 business 06012010
**
The code is:
File f=new File(fn);
if(f.exists() && f.length()>0)
{
BufferedReader br=new BufferedReader(new FileReader(fn));
BufferedWriter bw = new BufferedWriter(new FileWriter("FilteredDate.txt"));
String s=null;
while((s=br.readLine())!=null){
String[] st=s.split("\t");
if(Integer.parseInt(st[13])==06012010){
Thanks a lot for your time..
Try this,
List<String> sampleList = new ArrayList<String>();
sampleList.add("06012012");
sampleList.add("06012013");
sampleList.add("06012014");
sampleList.add("06012015");
//
//
String[] sampleArray = s.split("\t"); // the fields are tab-separated
if (sampleArray.length > 0)
{
    String sample = sampleArray[sampleArray.length - 1]; // the date is the last field
    if (sampleList.contains(sample))
    {
        stringBuilder.append(s).append("\n"); // keep the whole matching row
    }
}
I suggest not using split, but rather:
String str = s.substring(s.lastIndexOf('\t') + 1); // + 1 to skip the tab itself
In any case, you take st[13] when I can see you only have 9 columns; you probably just need st[8].
One last thing: look at this post to learn what 06012010 really means as a Java literal (the leading 0 makes the compiler treat it as octal).
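For the original task of splitting the rows into one file per date, here is a minimal sketch (the input file name is a placeholder; the date is taken as the last tab-separated field and compared as a string, which avoids the octal-literal problem entirely):
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class SplitByDate {
    public static void main(String[] args) throws IOException {
        Map<String, BufferedWriter> writers = new HashMap<String, BufferedWriter>();
        BufferedReader br = new BufferedReader(new FileReader("input.txt")); // placeholder path
        String line;
        while ((line = br.readLine()) != null) {
            String[] fields = line.split("\t");
            String date = fields[fields.length - 1]; // the date is the last field

            // Open one output file per distinct date and reuse it for later rows.
            BufferedWriter bw = writers.get(date);
            if (bw == null) {
                bw = new BufferedWriter(new FileWriter(date + ".txt"));
                writers.put(date, bw);
            }
            bw.write(line);
            bw.newLine();
        }
        br.close();
        for (BufferedWriter bw : writers.values()) {
            bw.close();
        }
    }
}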
