Screen capture of Result tree
I am using the code below to compare 2 CSV files. The code keeps running in an infinite loop, whether the data matches or not. I also want to ignore the first/header row when comparing. Can you please help with this?
Thanks
Input Files:
File1
ColumnA
A
B
C
File2
ColumnA
A
B
D
Code
import java.io.File
import java.io.FileReader
import java.io.LineNumberReader

// Build both paths from the JMeter variable (previously the File objects
// used hard-coded '/path/to/...' paths instead of these variables)
String strfile1 = vars.get("thisScriptPath") + "file1.csv"
String strfile2 = vars.get("thisScriptPath") + "file2.csv"

def file1 = new File(strfile1)
def file2 = new File(strfile2)

def file1Lines = file1.readLines('UTF-8')
def file2Lines = file2.readLines('UTF-8')

if (file1Lines.size() != file2Lines.size()) {
    SampleResult.setSuccessful(false)   // was misspelled setSussessful
    SampleResult.setResponseMessage('Files size is different, omitting line-by-line compare')
} else {
    def differences = new StringBuilder()
    file1Lines.eachWithIndex { String file1Line, int number ->
        String file2Line = file2Lines.get(number)
        if (!file1Line.equals(file2Line)) {
            differences.append('Difference # ').append(number).append('. Expected: ').append(file1Line).append('. Actual: ').append(file2Line)
            differences.append(System.getProperty('line.separator'))
        }
    }
    if (differences.toString().length() > 0) {
        SampleResult.setSuccessful(false)
        SampleResult.setResponseMessage(differences.toString())
    }
}
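For the header-skipping part, here is a minimal plain-Java sketch of the same line-by-line compare that drops the first line of each file before comparing (Java rather than the JSR223 Groovy above; the sample data comes from the question, and it assumes each file has at least a header line):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CompareCsv {

    // Compares two files line by line, skipping the first (header) line of
    // each; returns a diff description, or an empty string if they match.
    static String compare(Path a, Path b) throws IOException {
        List<String> left = Files.readAllLines(a);
        List<String> right = Files.readAllLines(b);
        left = left.subList(1, left.size());     // drop header
        right = right.subList(1, right.size());  // drop header
        if (left.size() != right.size()) {
            return "Files size is different, omitting line-by-line compare";
        }
        StringBuilder diff = new StringBuilder();
        for (int i = 0; i < left.size(); i++) {
            if (!left.get(i).equals(right.get(i))) {
                diff.append("Difference # ").append(i)
                    .append(". Expected: ").append(left.get(i))
                    .append(". Actual: ").append(right.get(i))
                    .append(System.lineSeparator());
            }
        }
        return diff.toString();
    }

    public static void main(String[] args) throws IOException {
        // Sample data from the question: the files differ in the last row (C vs D)
        Path f1 = Files.createTempFile("file1", ".csv");
        Files.write(f1, List.of("ColumnA", "A", "B", "C"));
        Path f2 = Files.createTempFile("file2", ".csv");
        Files.write(f2, List.of("ColumnA", "A", "B", "D"));
        System.out.print(compare(f1, f2));
    }
}
```

With the question's data this reports `Difference # 2. Expected: C. Actual: D` (index 2 because the header is no longer counted).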
I am trying to add a column to the end of a file without changing its contents, using SuperCSV and Kotlin.
I cannot use CSVWriter due to resource limitations.
So my idea is to read the original file row by row, append each row to a string, and return the result as a byte array.
fun addColumnToCSV(csvData: ByteArray, columnName: String, columnValue: String): ByteArray {
val prefs = CsvPreference.Builder(CsvPreference.STANDARD_PREFERENCE)
.useQuoteMode(NormalQuoteMode()).build()
val mapReader = CsvMapReader(BufferedReader(InputStreamReader(csvData.inputStream())), prefs)
val readHeader = mapReader.getHeader(true)
var row = mapReader.read(*readHeader)
var csv: String = readHeader.joinToString(",", postfix = ",$columnName\n")
while (row != null) {
val rowValue=readHeader.map { header-> row.getOrDefault(header,"\\s") }
csv += rowValue.joinToString(",", postfix = ",$columnValue\n")
row = mapReader.read(*readHeader)
}
csv = csv.trim()
mapReader.close()
return csv.toByteArray()
}
So, I have an example here and have written a test for it.
#Test
fun `should add extra column in csv data when there are missing values`() {
val columnName = "ExtraColumn"
val columnValue = "Value"
val expectedCSV = "Name,LastName,$columnName\n" +
"John,Doe,$columnValue\n" +
"Jane,,$columnValue"
val csvData = "Name,LastName\n" + "John,Doe\n" + "Jane,"
val csv = addColumnToCSV(csvData.toByteArray(), columnName, columnValue)
Assertions.assertThat(String(csv)).isEqualTo(expectedCSV)
}
This test fails because the actual CSV output is:
Name,LastName,ExtraColumn
John,Doe,Value
Jane,null,Value
I want it to be this, so that I am not changing the existing values that are present in the csv file.
Name,LastName,ExtraColumn
John,Doe,Value
Jane,,Value
I have tried with row.getOrDefault(header, "") and it's still the same result. How do I achieve this?
The problem seems to be on this line:
val rowValue=readHeader.map { header-> row.getOrDefault(header,"\\s") }
Without testing this, I would say that there's a null in row at key LastName, hence the default value in getOrDefault is not applied because the map contains the key.
Please try something like this:
val rowValue=readHeader.map { header-> row.getOrDefault(header,"\\s") ?: "" }
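The underlying Map behavior is the same on the JVM whether you call it from Kotlin or Java: getOrDefault only falls back to the default when the key is absent, not when the key maps to null, so an explicit null-coalescing step is still needed. A small stand-alone demonstration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class GetOrDefaultDemo {
    public static void main(String[] args) {
        Map<String, String> row = new HashMap<>();
        row.put("Name", "Jane");
        row.put("LastName", null); // key present, but mapped to null

        // Key is present, so the default is NOT used: this returns null
        String lastName = row.getOrDefault("LastName", "");
        System.out.println(lastName); // prints "null"

        // Coalesce the null explicitly (the Java equivalent of Kotlin's `?: ""`)
        String fixed = Objects.toString(row.getOrDefault("LastName", ""), "");
        System.out.println("[" + fixed + "]"); // prints "[]"
    }
}
```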
I want to join two CSV files based on a common column. My two CSV files and the final CSV file look like this.
Here are the example files - 1st file looks like:
sno,first name,last name
--------------------------
1,xx,yy
2,aa,bb
2nd file looks like:
sno,place
-----------
1,pp
2,qq
Output:
sno,first name,last name,place
------------------------------
1,xx,yy,pp
2,aa,bb,qq
Code:
CSVReader r1 = new CSVReader(new FileReader("c:/csv/file1.csv"));
CSVReader r2 = new CSVReader(new FileReader("c:/csv/file2.csv"));
HashMap<String, String[]> dic = new HashMap<String, String[]>();
int commonCol = 1;
r1.readNext(); // skip header
String[] line = null;
while ((line = r1.readNext()) != null)
{
    dic.put(line[commonCol], line);
}
commonCol = 1;
r2.readNext();
String[] line2 = null;
while ((line2 = r2.readNext()) != null)
{
    if (dic.keySet().contains(line2[commonCol]))
    {
        // append line to existing entry
    }
    else
    {
        // create a new entry and pre-pend it with default values
        // for the columns of file1
    }
}
for (String[] joinedLine : dic.values())
{
    // write line to the output file.
}
I don't know how to proceed further to get desired output. Any help will be appreciated.
Thanks
First, you need to use zero as your commonCol value, since the first column has index zero rather than one.
if (dic.keySet().contains(line2[commonCol]))
{
    // Get the whole line from the first file. (This assumes dic is changed to a
    // HashMap<String, String>, seeded in the first loop with
    // dic.put(line[commonCol], String.join(",", line)).)
    String firstPart = dic.get(line2[commonCol]);
    // Get the line from the second file, without the common column.
    String secondPart = String.join(",", Arrays.copyOfRange(line2, 1, line2.length));
    // Join together and put in the HashMap.
    dic.put(line2[commonCol], firstPart + "," + secondPart);
}
else
{
    // Create a new entry and pre-pend it with default values
    // for the columns of file1.
    String firstPart = String.join(",", "some", "default", "values");
    String secondPart = String.join(",", Arrays.copyOfRange(line2, 1, line2.length));
    dic.put(line2[commonCol], firstPart + "," + secondPart);
}
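Putting the pieces together, here is a self-contained sketch of the whole join. It uses plain file reading and a naive String.split(",") in place of CSVReader, so it runs without the OpenCSV dependency (no quoting support), joins on column 0, and simply appends unmatched second-file rows as-is rather than padding them with defaults:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CsvJoin {
    // Joins two CSV files on their first column (index 0) and returns the
    // joined lines, header included. Naive split(",") - no quote handling.
    static List<String> join(Path first, Path second) throws IOException {
        List<String> a = Files.readAllLines(first);
        List<String> b = Files.readAllLines(second);
        Map<String, String> joined = new LinkedHashMap<>();
        // Seed the map with the full lines of the first file (including header).
        for (String line : a) {
            joined.put(line.split(",")[0], line);
        }
        // Append the second file's columns (minus the key column) to matches;
        // keys missing from the first file are inserted without default padding.
        for (String line : b) {
            String[] cols = line.split(",");
            String rest = String.join(",", Arrays.copyOfRange(cols, 1, cols.length));
            joined.merge(cols[0], rest, (left, right) -> left + "," + right);
        }
        return List.copyOf(joined.values());
    }
}
```

With the two example files from the question, this produces `sno,first name,last name,place`, `1,xx,yy,pp`, `2,aa,bb,qq`.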
Probably my question is rather obvious. I would like to look at one directory and create a List of Strings, where each string represents a file name stored in the given directory, e.g. List("file1.csv", "file2.csv", "file3.csv").
I use the function below, which creates a list, but it's a list of Files (not Strings) and includes full paths (not only the file names).
import java.io.File
def getFileNames(path: String): List[File] = {
val d = new File(path)
if (d.exists && d.isDirectory) {
d
.listFiles // create list of File
.filter(_.isFile)
.toList
.sortBy(_.getAbsolutePath().replaceAll("[^a-zA-Z0-9]",""))
} else {
Nil // return empty list
}
}
Thank you for all the ideas.
Try changing the return type of getFileNames to List[String] and use map(_.getName), like so:
def getFileNames(path: String): List[String] = {
val d = new File(path)
if (d.exists && d.isDirectory) {
d
.listFiles // create list of File
.filter(_.isFile)
.toList
.sortBy(_.getAbsolutePath().replaceAll("[^a-zA-Z0-9]",""))
.map(_.getName)
} else {
Nil // return empty list
}
}
Make sure .map(_.getName) is the last in the chain, that is, after sortBy.
better-files would simplify this to
import better.files._
import better.files.Dsl._
val file = file"."
ls(file).toList.filter(_.isRegularFile).map(_.name)
You can use the getName method. And as Tomasz pointed out, filter and map can be combined into collect, as follows:
def getFileNames(path: String): List[String] = {
  val d = new File(path)
  if (d.exists && d.isDirectory) {
    d
      .listFiles // create list of File
      .collect { case f if f.isFile => f.getName } // keep regular files, extract names <--
      .toList
      .sortBy(_.replaceAll("[^a-zA-Z0-9]", "")) // elements are now names (Strings), not Files
  } else {
    Nil // return empty list
  }
}
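Since Scala's java.io.File is just the JDK API, the same listing can be sketched in plain Java for reference (the directory path is a placeholder):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ListFileNames {
    // Returns the sorted names of the regular files in a directory,
    // or an empty list if the path is not a directory.
    static List<String> fileNames(String path) {
        File d = new File(path);
        File[] entries = d.listFiles();
        if (entries == null) {           // not a directory (or I/O error)
            return new ArrayList<>();
        }
        List<String> names = new ArrayList<>();
        for (File f : entries) {
            if (f.isFile()) {            // skip subdirectories
                names.add(f.getName());
            }
        }
        names.sort(Comparator.naturalOrder());
        return names;
    }
}
```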
I'm a junior Java developer, entrusted with a Java tool.
I have the following problem:
The tool takes 2 CSV files with specific fields as input.
It then generates 2 CSV files as output: the first and the second output.
Both output files have the same fields; the first output is based on some conditions and the second on some others.
These 2 output files contain different reconciliations for some data. Some records of the file have the same ID.
Example:
record1 = ID10-name One,
record2 = ID10-Blue Two,
record3 = ID10-name Three
One of the conditions is as follows:
if (line.getName().toLowerCase().contains("Blue".toLowerCase())
        || line.getName().equalsIgnoreCase("Orange")) {
    return true;
}
The method that implements this returns a boolean, and all the logic of the tool is based on it. The tool processes the file line by line.
Iterator<BaseElaborazione> itElab = result.iterator();
while (itElab.hasNext()) {
    BaseElaborazione riga = itElab.next();
In the SECOND output file, I find a line/record whose name begins with Blue. The tool rightly takes that line and inserts it into the second output file, because every record whose name (getName) contains Blue or Orange goes there.
Instead, I should group all the lines with the same ID, even if only one of them has a name containing Blue.
Currently the tool does this:
FIRST FILE OUTPUT
record1 = ID10-name
record3 = ID10-name Three
SECOND FILE OUTPUT
record2 = ID10-Blue Two
The expected output is
FIRST FILE OUTPUT
nothing, because the group for that ID contains a name with Blue
SECOND FILE OUTPUT
record1 = ID10-name
record2 = ID10-Blue Two
record3 = ID10-name Three
I think something like this, but it doesn't work:
if (line.getID() && line.getCollector().toLowerCase().contains("Blue".toLowerCase())
        || line.getName().equalsIgnoreCase("black")) {
    return true;
}
How can I group lines with the same ID in Java, and apply the exclusion to the output?
CODE
Output
private void creaCSVOutput() throws IOException, CsvDataTypeMismatchException, CsvRequiredFieldEmptyException, ParseException {
Writer writerOutput = new FileWriter(pathOutput);
Writer writerEsclusi = new FileWriter(pathOutputEsclusi);
StatefulBeanToCsv<BaseElaborazione> beanToCsv = new StatefulBeanToCsvBuilder<BaseElaborazione>(writerOutput)
.withSeparator(';').withQuotechar('"').build();
StatefulBeanToCsv<BaseElaborazione> beanToCsvEsclusi = new StatefulBeanToCsvBuilder<BaseElaborazione>(writerEsclusi)
.withSeparator(';').withQuotechar('"').build();
beanToCsv.write(CsvHelper.genHeaderBeanBase());
beanToCsvEsclusi.write(CsvHelper.genHeaderBeanBase());
Iterator<BaseElaborazione> itElab = result.iterator();
while (itElab.hasNext()) {
    BaseElaborazione riga = itElab.next();
    // ... conditions that decide whether the line goes to the excluded output ...
    esclusi.add(riga);
    itElab.remove();
}
for (BaseElaborazione riga : result) {
if(riga.getNota() == null || riga.getNota().isEmpty()) {
riga.setNota(mapNota.get(cuvNota.get(riga.getCuv())));
}
beanToCsv.write(riga);
}
for (BaseElaborazione riga : esclusi) {
if(riga.getNota() == null || riga.getNota().isEmpty()) {
riga.setNota(mapNota.get(cuvNota.get(riga.getCuv())));
}
beanToCsvEsclusi.write(riga);
}
writerOutput.close();
writerEsclusi.close();
}
The method for the esclusi (the second output):
private boolean checkPerimetroJunk(BaseElaborazione riga) {
    if (riga.getMercato().toLowerCase().contains("Energia Libero".toLowerCase())) {
        if (riga.getStrategia().toLowerCase().startsWith("STRATEGIA FO".toLowerCase())
                || riga.getStrategia().toLowerCase().contains("CREDITI CEDUTI".toLowerCase())
                || riga.getAttivita().equalsIgnoreCase("Proposta di Recupero Stragiudiziale FO")
                || riga.getAttivita().toLowerCase().contains("Cessione".toLowerCase())
                || riga.getLegalenome().equalsIgnoreCase("Euroservice junk STR FO")
                || riga.getLegalenome().equalsIgnoreCase("Euroservice_FO")) {
            onlyCUV = true;
        } else if (Collections.frequency(storedIds, riga.getCuv()) >= 1) {
            onlyCUV = true;
        }
        return onlyCUV;
    } else if (riga.getMercato().equals("MAGGIOR TUTELA")) {
        if (riga.getCollector().toLowerCase().contains("Cessione".toLowerCase())
                || riga.getCollector().equalsIgnoreCase("Euroservice_Fo")
                || riga.getAttivitaCrabb().toLowerCase().contains("*FO".toLowerCase())
                || riga.getaNomeCluster().equalsIgnoreCase("Full Outsourcing")) {
            onlyCUV = true;
        } else if (Collections.frequency(storedIds, riga.getCuv()) >= 1) {
            onlyCUV = true;
        }
        return onlyCUV;
    }
    return false;
}
where riga = line; Cessione etc. are just example values.
Right now the MAGGIOR TUTELA part works, but the LIBERO part doesn't, and I don't know why.
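One way to get the whole-group routing is a two-pass approach: first group all lines by ID, then send each complete group to the excluded (second) output if any member matches the condition. A hedged sketch in plain Java, with a minimal hypothetical Line record standing in for BaseElaborazione and a simplified condition:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupByIdRouting {
    // Minimal hypothetical stand-in for the tool's line bean.
    record Line(String id, String name) {}

    // Simplified per-line condition, as in the question.
    static boolean matches(Line line) {
        return line.name().toLowerCase().contains("blue");
    }

    // Routes whole ID groups: if ANY line of a group matches, the entire
    // group goes to the "second" output; otherwise the group goes to "first".
    static Map<String, List<Line>> route(List<Line> input) {
        Map<String, List<Line>> byId = new LinkedHashMap<>();
        for (Line l : input) {
            byId.computeIfAbsent(l.id(), k -> new ArrayList<>()).add(l);
        }
        List<Line> first = new ArrayList<>();
        List<Line> second = new ArrayList<>();
        for (List<Line> group : byId.values()) {
            boolean anyMatch = group.stream().anyMatch(GroupByIdRouting::matches);
            (anyMatch ? second : first).addAll(group);
        }
        Map<String, List<Line>> out = new LinkedHashMap<>();
        out.put("first", first);
        out.put("second", second);
        return out;
    }
}
```

With the question's example (three ID10 records, one of them "Blue Two"), all three ID10 records land in the second output and none in the first.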
I have the following code:
BufferedReader metaRead = new BufferedReader(new FileReader(metaFile));
String metaLine = "";
String [] metaData = new String [100000];
while ((metaLine = metaRead.readLine()) != null){
metaData = metaLine.split(",");
for (int i = 0; i < metaData.length; i++)
System.out.println(metaData[0]);
}
This is what's in the file:
testTable2 Name java.lang.Integer TRUE test
testTable2 age java.lang.String FALSE test
testTable2 ID java.lang.Integer FALSE test
I want the array to have testTable2 at metaData[0] and Name at metaData[1], but when I run it I get testtable2testtable2testtable2 at 0, NameageID at 1, and an OutOfBoundsException.
Any ideas what to do in order to get the result I want?
Just print metaData[i] instead of metaData[0] and split each string by "[ ]+" (that means "1 or more spaces"):
metaData = metaLine.split("[ ]+");
As a result, you will get the following arrays:
[testTable2, Name, java.lang.Integer, TRUE, test]
[testTable2, age, java.lang.String, FALSE, test]
[testTable2, ID, java.lang.Integer, FALSE, test]
The code snippet that produces the preceding output:
while ((metaLine = metaRead.readLine()) != null) {
metaData = metaLine.split("[ ]+");
for (int i = 0; i < metaData.length; i++)
System.out.print(metaData[i] + " ");
System.out.println();
}
Also, I've written your task by using Java 8 and Stream API:
List<String> collect = metaRead
.lines()
.flatMap(line -> Arrays.stream(line.split("[ ]+")))
.collect(Collectors.toList());
And, finally, there is the most straight-forward way:
final int LINES, WORDS;
String[] metaData = new String[LINES = 5 * (WORDS = 3)]; // I don't like it
int i = 0;
while ((metaLine = metaRead.readLine()) != null) {
for (String s : metaLine.split("[ ]+")) metaData[i++] = s;
}
Correct the following line inside the for loop:
System.out.println(metaData[0]);
to:
System.out.println(metaData[i]);
Although my answer may not fit your question completely: as I can see, your file format is TSV or CSV. Maybe you should consider using OpenCSV for your problem. The library will handle the reading and splitting for you.
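Stepping back to the root cause in the question above, a minimal check makes the delimiter difference concrete (the sample line is taken from the question's file):

```java
public class SplitDemo {
    public static void main(String[] args) {
        String metaLine = "testTable2 Name java.lang.Integer TRUE test";

        // Splitting on "," finds no delimiter, so the whole line is one element
        String[] byComma = metaLine.split(",");
        System.out.println(byComma.length);  // prints 1

        // Splitting on "[ ]+" (one or more spaces) yields the five fields
        String[] bySpaces = metaLine.split("[ ]+");
        System.out.println(bySpaces.length); // prints 5
        System.out.println(bySpaces[1]);     // prints Name
    }
}
```

This is why printing metaData[0] after split(",") concatenated whole lines: each line was a single-element array.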