DbUnit NoSuchTableException - Workaround for long table names in Oracle - java

I'm working on creating a test suite that runs on multiple databases using DbUnit XML. Unfortunately, yesterday I discovered that some table names in our schema are over 30 characters and are truncated for Oracle. For example, a table named unusually_long_table_name_error in MySQL is named unusually_long_table_name_erro in Oracle. This means that my DbUnit file contains lines like <unusually_long_table_name_error col1="value1" col2="value2" />. These lines throw a NoSuchTableException when running the tests against Oracle.
Is there a programmatic workaround for this? I'd really like to avoid generating special XML files for Oracle. I looked into a custom MetadataHandler, but it returns lots of java.sql datatypes that I don't know how to intercept/spoof. I could read the XML myself, truncate each table name to 30 characters, write that out to a temp file or StringBufferInputStream, and then use that as input to my DataSetBuilder, but that seems like a whole lot of steps to accomplish very little. Maybe there's some ninja Oracle trick with synonyms or stored procedures or goodness-knows-what-else that could help me. Is one of these ideas clearly better than the others? Is there some other approach that would blow me away with its simplicity and elegance? Thanks!

In light of the lack of answers, I ended up going with my own suggested approach, which:
Reads the .xml file
Uses a regex to extract the table name
Truncates the table name if it's over 30 characters
Appends the (potentially modified) line to a StringBuilder
Feeds that StringBuilder into a ByteArrayInputStream, suitable for passing into a DataSetBuilder
public InputStream oracleWorkaroundStream(String fileName) throws IOException
{
    String ls = System.getProperty("line.separator");
    // This pattern isolates the table name from the rest of the line
    Pattern pattern = Pattern.compile("(\\s*<)(\\w+)(.*/>)");
    FileInputStream fis = new FileInputStream(fileName);
    // Use a StringBuilder for better performance over repeated concatenation
    StringBuilder sb = new StringBuilder(fis.available() * 2);
    InputStreamReader isr = new InputStreamReader(fis, "UTF-8");
    BufferedReader buff = new BufferedReader(isr);
    // Read the source xml file line by line
    String line;
    while ((line = buff.readLine()) != null)
    {
        Matcher matcher = pattern.matcher(line);
        // See if the line contains a table name
        if (matcher.matches())
        {
            String tableName = matcher.group(2);
            if (tableName.length() > 30)
            {
                tableName = tableName.substring(0, 30);
            }
            // Append the (potentially modified) line
            sb.append(matcher.group(1));
            sb.append(tableName);
            sb.append(matcher.group(3));
        }
        else
        {
            // Some lines don't have table names (<dataset>, <?xml?>, etc.)
            sb.append(line);
        }
        sb.append(ls);
    }
    buff.close();
    return new ByteArrayInputStream(sb.toString().getBytes("UTF-8"));
}
EDIT: Switched to a StringBuilder from repeated String concatenation, which gives a huge performance boost.
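For completeness, the resulting stream can be fed straight into DbUnit. A minimal usage sketch, assuming the DataSetBuilder mentioned above is DbUnit's org.dbunit.dataset.xml.FlatXmlDataSetBuilder (DbUnit 2.4+); the file name is hypothetical:

// build(InputStream) throws DataSetException; oracleWorkaroundStream throws IOException
IDataSet dataSet = new FlatXmlDataSetBuilder()
        .build(oracleWorkaroundStream("dataset.xml"));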

Related

Importing two CSV files into Java and then parsing them. The first one works, the second doesn't

I'm working on my code where I am importing two CSV files and then parsing them.
//Importing CSV File for betreuen
String filename = "betreuen_4.csv";
File file = new File(filename);
//Importing CSV File for lieferant
String filename1 = "lieferant.csv";
File file1 = new File(filename1);
I then proceed to parse them. For the first csv file everything works fine. The code is
try {
    Scanner inputStream = new Scanner(file);
    while (inputStream.hasNext()) {
        String data = inputStream.next();
        String[] values = data.split(",");
        int PInummer = Integer.parseInt(values[1]);
        String MNummer = values[0];
        String KundenID = values[2];
        //System.out.println(MNummer);
        //create the caring object with the required parameters
        //Caring caring = new Caring(MNummer,PInummer,KundenID);
        //betreuen.add(caring);
    }
    inputStream.close();
} catch (FileNotFoundException d) {
    d.printStackTrace();
}
I then proceed to parse the other CSV file. The code is:
// parsing csv file lieferant
try {
    Scanner inputStream1 = new Scanner(file1);
    while (inputStream1.hasNext()) {
        String data1 = inputStream1.next();
        String[] values1 = data1.split(",");
        int LIDnummer = Integer.parseInt(values1[0]);
        String citynames = values1[1];
        System.out.println(LIDnummer);
        String firmanames = values1[2];
        //create the suppliers object with the required parameters
        //Suppliers suppliers = new
        //Suppliers(LIDnummer,citynames,firmanames);
        //lieferant.add(suppliers);
    }
    inputStream1.close();
} catch (FileNotFoundException d) {
    d.printStackTrace();
}
The first error I get is:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 2
at Verbindung.main(Verbindung.java:61)
So I look at my array, which is firmanames at line 61, and I think: well, it's impossible that it's out of range, since my CSV file has three columns and index 2 (which I know is the third column in the CSV file) is my list of company names. I know the array is not empty, because when I wrote
`System.out.println(firmanames)`
it would print out the first three company names. So, in order to see if something else was causing the problem, I commented line 61 out and ran the code again. I get the following error:
`Exception in thread "main" java.lang.NumberFormatException: For input
string: "Ridge"
at java.lang.NumberFormatException.forInputString(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at Verbindung.main(Verbindung.java:58)`
I googled these errors, and they say I'm trying to parse something into an Integer which cannot be an integer, but the only thing that I am trying to parse into an Integer is this line:
int LIDnummer = Integer.parseInt(values1[0]);
That is indeed a column containing only integers.
My second column is also just a column of city names in the USA. The only thing with that column is that some town names contain spaces, like Middle brook, but I don't think that would cause problems for a String. Also, in my company column there are names like AT&T, but I would think the & symbol would also not cause problems for a String. I don't know where I am going wrong here.
I can't include the CSV file, but here is a picture of part of it. The length of each column is 1000.
A picture of the CSV file
Scanner by default splits its input by whitespace (docs). Whitespace means spaces, tabs and newlines.
So your code will, I think, split the whole input file at every space and every newline, which is not what you want.
So, the first three elements your code will read are
5416499,Prairie
Ridge,NIKE
1765368,Edison,Cartier
I suggest using the readLine method of BufferedReader and then calling split on each line (a sketch follows below).
The alternative is to explicitly tell Scanner how you want it to split the input
Scanner inputStream1 = new Scanner(file1).useDelimiter("\n");
but I think this is not the best use of Scanner when a simpler class (BufferedReader) will do.
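For illustration, here is a minimal sketch of the BufferedReader approach, assuming each record sits on its own line and no field itself contains a comma; the file name and column order are taken from the question, and the class name is just for the example:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class LieferantReader {
    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader("lieferant.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // split the whole line, not individual whitespace-separated tokens
                String[] values1 = line.split(",");
                int LIDnummer = Integer.parseInt(values1[0].trim());
                String citynames = values1[1];
                String firmanames = values1[2];
                System.out.println(LIDnummer + " " + citynames + " " + firmanames);
            }
        }
    }
}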
First of all, I would highly suggest you try and use an existing CSV parser, for example this one.
But if you really want to roll your own, you are going to need to do some simple debugging. I don't know how large your file is, but the symptoms you describe lead me to believe that somewhere in the CSV there may be a missing comma or an accidental escape character. You need to find out what line it is on. So run this code and check its output before it crashes:
int line = 1;
try {
    Scanner inputStream1 = new Scanner(file1);
    while (inputStream1.hasNext()) {
        String data1 = inputStream1.next();
        String[] values1 = data1.split(",");
        int LIDnummer = Integer.parseInt(values1[0]);
        String citynames = values1[1];
        System.out.println(LIDnummer);
        String firmanames = values1[2];
        line++;
    }
} catch (ArrayIndexOutOfBoundsException e) {
    System.err.println("The issue in the csv is at line:" + line);
} catch (FileNotFoundException e) {
    e.printStackTrace();
}
Once you find what line it is, the answer should be obvious. If not, post a picture of that line and we'll see...

How to access values of a line while reading in a text file in Java

I am trying to load two files at the same time, but also access the first file, gps1. I want to go through the gps1 file line by line and, depending on the sentence type (which I will explain later), do different things with that line and then move on to the next line.
Basically, gps1 has multiple lines, and each line falls under one of a few categories, all starting with $GP (then other characters). Some of these sentence types have a timestamp which I need to collect, and some do not.
File gps1File = new File(gpsFile1);
File gps2File = new File(gpsFile2);
FileReader filegps1 = new FileReader(gpsFile1);
FileReader filegps2 = new FileReader(gpsFile2);
BufferedReader buffer1 = new BufferedReader(filegps1);
BufferedReader buffer2 = new BufferedReader(filegps2);
String gps1;
String gps2;
while ((gps1 = buffer1.readLine()) != null) {
The gps1 data file is as follows
$GPGSA,A,3,28,09,26,15,08,05,21,24,07,,,,1.6,1.0,1.3*3A
$GPRMC,151018.000,A,5225.9627,N,00401.1624,W,0.11,104.71,210214,,*14
$GPGGA,151019.000,5225.9627,N,00401.1624,W,1,09,1.0,38.9,M,51.1,M,,0000*72
$GPGSA,A,3,28,09,26,15,08,05,21,24,07,,,,1.6,1.0,1.3*3A
Thanks
I don't really understand the problem you are facing, but anyway, if you want to get at your line's content you can use a StringTokenizer:
StringTokenizer st = new StringTokenizer(gps1, ",");
And then access the tokens one by one:
while (st.hasMoreTokens()) {
    String s = st.nextToken();
    // handle each comma-separated value here
}
EDIT:
NB: the first token will be your "$GPXXX" attribute
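To tie this back to the question (branching on the sentence type and collecting the timestamp where one exists), here is a rough sketch. The file name is hypothetical, and the assumption that $GPRMC/$GPGGA carry the timestamp as their second field is based only on the sample lines above:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class GpsSentenceReader {
    public static void main(String[] args) throws IOException {
        try (BufferedReader buffer1 = new BufferedReader(new FileReader("gps1.txt"))) {
            String gps1;
            while ((gps1 = buffer1.readLine()) != null) {
                String[] fields = gps1.split(",");
                String sentenceType = fields[0]; // e.g. "$GPGSA", "$GPRMC", "$GPGGA"
                if (sentenceType.equals("$GPRMC") || sentenceType.equals("$GPGGA")) {
                    // in the sample data these sentences have a timestamp as the second field
                    String timestamp = fields[1];
                    System.out.println(sentenceType + " at " + timestamp);
                } else {
                    // other sentence types ($GPGSA etc.) have no timestamp; handle as needed
                }
            }
        }
    }
}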

Exporting CSV with really strange formatting

What am I doing? I am exporting my SQLite database into a CSV file -- at least I am trying to.
I've done this both manually and with "OpenCSV".
With both methods I get very strange results. They just don't seem well formatted. Neither the columns (which are usually separated by ','?) nor special characters (which are said to be handled within OpenCSV) look like they should. Code:
CSVWriter writer = new CSVWriter(new FileWriter(file), '\n', ',');
String[] items = new String[11];
c.moveToFirst();
while (!c.isAfterLast()) {
    items[0] = c.getString(c.getColumnIndex(BaseColumns._ID));
    items[1] = c.getString(c.getColumnIndex(DepotTableMetaData.ITEM_QRCODE));
    items[2] = c.getString(c.getColumnIndex(DepotTableMetaData.ITEM_NAME));
    items[3] = c.getString(c.getColumnIndex(DepotTableMetaData.ITEM_AMOUNT));
    items[4] = c.getString(c.getColumnIndex(DepotTableMetaData.ITEM_UNIT));
    items[5] = c.getString(c.getColumnIndex(DepotTableMetaData.ITEM_PPU));
    items[6] = c.getString(c.getColumnIndex(DepotTableMetaData.ITEM_TOTAL));
    items[7] = c.getString(c.getColumnIndex(DepotTableMetaData.ITEM_COMMENT));
    items[8] = c.getString(c.getColumnIndex(DepotTableMetaData.ITEM_SHOPPING));
    items[9] = c.getString(c.getColumnIndex(DepotTableMetaData.CREATED_DATE));
    items[10] = c.getString(c.getColumnIndex(DepotTableMetaData.MODIFIED_DATE));
    c.moveToNext();
    writer.writeNext(items);
}
writer.close();
and it all gives this as a result:
I've also done it through FileWriter and StringBuffer, but it seems to give exactly the same results... I'd love it if you could help me ;)
I have looked through Stack Overflow but couldn't find any matching question ;/
Edit: yes, I know that I use the "old, deprecated" cursor, but that's not the question here. Thanks.
Edit 2: SOLVED!
You have to specify a common encoding!
CSVWriter writer = new CSVWriter(new OutputStreamWriter(new FileOutputStream(destination+"/output.csv"),"UTF-8"));
did the job perfectly!
You use an OpenCSV writer, which takes a row of the CSV file as an array of Strings and generates the separators between columns and rows automatically, but instead of letting OpenCSV do it for you, you do it explicitly by appending all the values of a row into a single String. So, obviously, OpenCSV takes your single value and considers that it contains a single column, in which commas and newlines must be encoded.
You should call writer.writeNext() with an array of Strings, each String in the array being a single cell from the table. The writer will generate the commas and the newlines for you.
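Putting that together with the encoding fix from the asker's edit, the export loop might look roughly like this. This is only a sketch; destination, the cursor c, and the column constants are taken from the question's code:

// one String[] per row, default ',' separator, explicit UTF-8 encoding
CSVWriter writer = new CSVWriter(
        new OutputStreamWriter(new FileOutputStream(destination + "/output.csv"), "UTF-8"));
String[] items = new String[11];
c.moveToFirst();
while (!c.isAfterLast()) {
    items[0] = c.getString(c.getColumnIndex(BaseColumns._ID));
    items[1] = c.getString(c.getColumnIndex(DepotTableMetaData.ITEM_QRCODE));
    // ... fill items[2] through items[10] exactly as in the question ...
    writer.writeNext(items); // OpenCSV adds the commas and newlines for you
    c.moveToNext();
}
writer.close();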

Java - Load file, replace string, save

I have a program that loads lines from a user file and then selects the last part of the String (which would be an int).
Here's the style it's saved in:
nameOfValue = 0
nameOfValue2 = 0
and so on. I have selected the value for sure - I debugged it by printing. I just can't seem to save it back in.
if (nameOfValue.equals(type)) {
    System.out.println(nameOfValue + " equals " + type);
    value.replace(value, Integer.toString(Integer.parseInt(value) + 1));
}
How would I resave it? I've tried BufferedWriter, but it just erases everything in the file.
My suggestion is, save all the contents of the original file (either in memory or in a temporary file; I'll do it in memory) and then write it again, including the modifications. I believe this would work:
public static void replaceSelected(File file, String type) throws IOException {
    // we need to store all the lines
    List<String> lines = new ArrayList<String>();
    // first, read the file and store the changes
    BufferedReader in = new BufferedReader(new FileReader(file));
    String line = in.readLine();
    while (line != null) {
        if (line.startsWith(type)) {
            String sValue = line.substring(line.indexOf('=') + 1).trim();
            int nValue = Integer.parseInt(sValue);
            line = type + " = " + (nValue + 1);
        }
        lines.add(line);
        line = in.readLine();
    }
    in.close();
    // now, write the file again with the changes
    PrintWriter out = new PrintWriter(file);
    for (String l : lines)
        out.println(l);
    out.close();
}
And you'd call the method like this, providing the File you want to modify and the name of the value you want to select:
replaceSelected(new File("test.txt"), "nameOfValue2");
I think the most convenient way is:
Read the text file line by line using BufferedReader.
For each line, find the int part using a regular expression and replace it with your new value.
Create a new file with the newly created text lines.
Delete the source file and rename your newly created file.
A rough sketch of this follows below. Please let me know if you need the full Java program implementing the above algorithm.
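As promised, here is a rough sketch of that algorithm, assuming the nameOfValue = 0 layout shown in the question. The class and method names are made up for the example, and the regex simply targets the trailing integer on the matching line:

import java.io.*;
import java.nio.file.*;
import java.util.regex.*;

public class CounterFileUpdater {
    // Increments the trailing integer on the line that starts with the given name.
    public static void increment(File source, String type) throws IOException {
        File temp = new File(source.getAbsolutePath() + ".tmp");
        Pattern trailingInt = Pattern.compile("(\\d+)\\s*$");
        try (BufferedReader in = new BufferedReader(new FileReader(source));
             PrintWriter out = new PrintWriter(new FileWriter(temp))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.startsWith(type)) {
                    Matcher m = trailingInt.matcher(line);
                    if (m.find()) {
                        int old = Integer.parseInt(m.group(1));
                        line = line.substring(0, m.start(1)) + (old + 1);
                    }
                }
                out.println(line);
            }
        }
        // delete the source file and rename the newly created file over it
        Files.move(temp.toPath(), source.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }
}

Usage would be something like increment(new File("test.txt"), "nameOfValue2");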
Hard to answer without the complete code...
Is value a String? If so, the replace will create a new String, but you are not saving this String anywhere. Remember, Strings in Java are immutable.
You say you use a BufferedWriter; did you flush and close it? This is often the cause of values mysteriously disappearing when they should be there. This is exactly why Java has a finally keyword (see the sketch below).
Also difficult to answer without more details on your problem; what exactly are you trying to achieve? There may be simpler ways to do this that already exist.
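To illustrate the flush/close point, here is a minimal sketch of writing with an explicit close in a finally block. The file name is hypothetical, and the enclosing method is assumed to declare throws IOException:

BufferedWriter writer = null;
try {
    writer = new BufferedWriter(new FileWriter("values.txt"));
    writer.write("nameOfValue = 1");
    writer.newLine();
} finally {
    if (writer != null) {
        // close() also flushes the buffer; without it, data can silently go missing
        writer.close();
    }
}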

How to read and update a row in a file with Java

Currently I am creating a Java app and no database is required,
which is why I am using a text file instead.
The structure of the file is like this:
unique6id username identitynumber point
unique6id username identitynumber point
May I know how I could read the file, find the matching unique6id, and then update the corresponding row's point value?
Sorry for the lack of information.
Here is the part I have typed so far:
public class Cust {
    String name;
    long idenid, uniqueid;
    int pts;

    Cust() {}

    Cust(String n, long ide, long uni, int pt) {
        name = n;
        idenid = ide;
        uniqueid = uni;
        pts = pt;
    }
}

FileWriter fstream = new FileWriter("Data.txt", true);
BufferedWriter fbw = new BufferedWriter(fstream);
Cust newCust = new Cust();
newCust.name = memUNTF.getText();
newCust.idenid = Long.parseLong(memICTF.getText());
newCust.uniqueid = Long.parseLong(memIDTF.getText());
newCust.pts = points;
fbw.write(newCust.name + " " + newCust.idenid + " " + newCust.uniqueid + " " + newCust.pts);
fbw.newLine();
fbw.close();
This is the way I write the data in.
The result inside Data.txt is:
spencerlim 900419129876 448505 0
Eugene 900419081234 586026 0
When the user types in 586026, it should grab Eugene's row,
bind it into a Cust,
and update the pts (0 in this case; try to update it to another number, e.g. 30).
Thanks for replying =D
Reading is pretty easy, but updating a text file in place (i.e. without rewriting the whole file) is very awkward.
So, you have two options:
Read the whole file, make your changes, and then write the whole file to disk, overwriting the old version; this is quite easy and will be fast enough for small files, but it is not a good idea for very large files (a sketch of this option follows below).
Use a format that is not a simple text file. A database would be one option (and bear in mind that there is one, Derby, built into the JDK); there are other ways of keeping simple key-value stores on disk (like a HashMap, but in a file), but there's nothing built into the JDK.
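Here is a minimal sketch of the first option. The class, method, and file names are made up; the column order follows the sample rows in the question, where the unique6id is the third whitespace-separated column:

import java.io.*;
import java.util.*;

public class PointUpdater {
    // Rewrites the file with the point column updated for the row whose unique6id matches.
    public static void updatePoints(File file, String uniqueId, int newPoints) throws IOException {
        List<String> lines = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.split("\\s+");
                // layout from the sample data: name identitynumber unique6id point
                if (cols.length == 4 && cols[2].equals(uniqueId)) {
                    line = cols[0] + " " + cols[1] + " " + cols[2] + " " + newPoints;
                }
                lines.add(line);
            }
        }
        try (PrintWriter out = new PrintWriter(new FileWriter(file))) {
            for (String l : lines) {
                out.println(l);
            }
        }
    }
}

It could then be called like updatePoints(new File("Data.txt"), "586026", 30);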
You can use OpenCSV with custom separators.
Here's a sample method that updates the info for a specified user:
public static void updateUserInfo(
        String userId,   // user id
        String[] values  // new values
) throws IOException {
    String fileName = "yourfile.txt.csv";
    CSVReader reader = new CSVReader(new FileReader(fileName), ' ');
    List<String[]> lines = reader.readAll();
    reader.close();
    Iterator<String[]> iterator = lines.iterator();
    while (iterator.hasNext()) {
        String[] items = iterator.next();
        if (items[0].equals(userId)) {
            for (int i = 0; i < values.length; i++) {
                String value = values[i];
                if (value != null) {
                    // for every array value that's not null,
                    // update the corresponding field
                    items[i + 1] = value;
                }
            }
            break;
        }
    }
    CSVWriter writer = new CSVWriter(new FileWriter(fileName), ' ');
    writer.writeAll(lines);
    writer.close();
}
Use InputStream(s) and Reader(s) to read the file.
Here is a code snippet that shows how to read a file:
BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream("c:/myfile.txt")));
String line = null;
while ((line = reader.readLine()) != null) {
    // do something with the line.
}
Use OutputStream(s) and Writer(s) to write to the file. Although you can use random access files, i.e. write to a specific place in the file, I do not recommend doing this. A much easier and more robust way is to create a new file every time you have to write something. I know that this is probably not the most efficient way, but you do not want to use a DB for some reason... If you have to save and update partial information relatively often and perform searches over the file, I'd recommend using a DB after all. There are very lightweight implementations, including pure Java implementations (e.g. H2: http://www.h2database.com/html/main.html).
