Save Word Document with JACOB (Java)

I'm trying to write a simple Java program that opens an existing Word document, changes something, and saves it as an .html file.
The part that's not working is saving it as .html.
The problem is that I do get an html file, but it's only a renamed .doc file, so not really an .html file I can work with.
This is what I found with Google:
Object oWordBasic = Dispatch.call(oWord, "WordBasic").getDispatch();
Dispatch.call((Dispatch) oWordBasic, "FileSaveAs", path);
What do I have to do to get an HTML file as output?
Thank you in advance.

It's using the OLE Automation object to save the file, so you have to find the method or parameter that indicates the file type.
This is the macro I could record using Word:
ActiveDocument.SaveAs FileName:="asdd.htm", FileFormat:=wdFormatHTML, _
    LockComments:=False, Password:="", AddToRecentFiles:=True, _
    WritePassword:="", ReadOnlyRecommended:=False, EmbedTrueTypeFonts:=False, _
    SaveNativePictureFormat:=False, SaveFormsData:=False, SaveAsAOCELetter:=False
So you have to pass the FileFormat:=wdFormatHTML parameter (or its constant value) to the SaveAs method. That's left as an exercise for the reader :)

I figured it out, thanks to helios for the tip.
The correct code is:
Object oWordBasic = Dispatch.call(oWord, "WordBasic").getDispatch();
Dispatch.call((Dispatch) oWordBasic, "FileSaveAs", path, new Variant(8));
The parameter of the Variant is the output format (for example, 8 is HTML, 6 is RTF, 17 is PDF).
You can find the full list at: WdSaveFormat Enumeration
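For reference, a minimal sketch of the same idea through the regular Word object model rather than the legacy WordBasic interface. The paths and document names are placeholders; this assumes jacob.jar plus the matching JACOB DLL on the path and Word installed:

import com.jacob.activeX.ActiveXComponent;
import com.jacob.com.Dispatch;
import com.jacob.com.Variant;

public class SaveAsHtml {
    public static void main(String[] args) {
        // Start (or attach to) a Word instance via COM
        ActiveXComponent word = new ActiveXComponent("Word.Application");
        try {
            Dispatch documents = word.getProperty("Documents").toDispatch();
            Dispatch doc = Dispatch.call(documents, "Open", "C:\\tmp\\input.doc").toDispatch();
            // 8 == wdFormatHTML in the WdSaveFormat enumeration
            Dispatch.call(doc, "SaveAs", "C:\\tmp\\output.html", new Variant(8));
            Dispatch.call(doc, "Close", new Variant(false));
        } finally {
            word.invoke("Quit");
        }
    }
}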


File wildcard use *

I am trying to read a file whose name is K2ssal.timestamp.
I want to handle the timestamp part of the file name as a wildcard.
How can I achieve this?
I tried * after the file name, but it's not working.
var getK2SSal: Iterator[String] = Source.fromFile("C://Users/nrakhad/Desktop/Work/Data stage migration/Input files/K2Ssal.*").getLines()
You can use Files.newDirectoryStream with directory + glob:
import java.nio.file.{Paths, Files}

val yourFile = Files.newDirectoryStream(
  Paths.get("/path/to/the/directory"), // where is the file?
  "K2Ssal.*" // glob of the file name
).iterator.next // get first match
Misconception on your end: unless a library call is specifically implemented to do so, using a wildcard simply doesn't work the way you expect it to.
Meaning: a file system doesn't know about wildcards. It only knows about existing files and folders. The fact that you can put * in certain commands, and that the wildcard is replaced with file names, is a property of the tool(s) you are using. Most programming APIs that query the file system do not include that special wildcard handling.
In other words: there is no point in adding the asterisk like that.
You have to step back and write code that actively searches for files itself. Here are some examples in Scala.
You can read the directory and filter the files based on the name:
import java.io.File

val l = new File("""C://Users/nrakhad/Desktop/Work/Data stage migration/Input files/""").listFiles
val s = l.filter(_.toString.contains("K2Ssal."))
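For comparison, a minimal sketch of the same filtering in plain Java (the directory path is taken from the question):

import java.io.File;

public class FindK2Ssal {
    public static void main(String[] args) {
        File dir = new File("C://Users/nrakhad/Desktop/Work/Data stage migration/Input files");
        // listFiles with a FilenameFilter: keep only names starting with the fixed prefix
        File[] matches = dir.listFiles((d, name) -> name.startsWith("K2Ssal."));
        if (matches != null && matches.length > 0) {
            System.out.println(matches[0]);
        }
    }
}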

csv file showing wrong value [duplicate]

I'm developing a part of an application that's responsible for exporting some data into CSV files. The application always uses UTF-8 because of its multilingual nature at all levels. But opening such CSV files (containing e.g. diacritics, Cyrillic letters, Greek letters) in Excel does not give the expected results; it shows something like Г„/Г¤, Г–/Г¶ instead. And I don't know how to force Excel to understand that the open CSV file is encoded in UTF-8. I also tried specifying the UTF-8 BOM EF BB BF, but Excel ignores that.
Is there any workaround?
P.S. Which tools may potentially behave like Excel does?
UPDATE
I have to say that I've confused the community with the formulation of the question. When I asked this question, I asked for a way of opening a UTF-8 CSV file in Excel without any problems for a user, in a fluent and transparent way. However, I used the wrong formulation, asking for doing it automatically. That is very confusing, and it clashes with VBA macro automation. There are two answers to this question that I appreciate the most: the very first answer by Alex https://stackoverflow.com/a/6002338/166589, which I accepted; and the second one by Mark https://stackoverflow.com/a/6488070/166589, which appeared a little later. From the usability point of view, Excel seemed to lack good user-friendly UTF-8 CSV support, so I consider both answers correct, and I accepted Alex's answer first because it really stated that Excel was not able to do that transparently. That is what I confused with automatically here. Mark's answer promotes a more complicated way for more advanced users to achieve the expected result. Both answers are great, but Alex's fits my not clearly specified question a little better.
UPDATE 2
Five months after the last edit, I've noticed that Alex's answer has disappeared for some reason. I really hope it wasn't a technical issue, and I hope there is no more discussion on which answer is better now. So I'm accepting Mark's answer as the best one.
Alex is correct, but as you have to export to CSV, you can give the users this advice when opening the CSV files:
Save the exported file as a csv
Open Excel
Import the data using Data --> Import External Data --> Import Data
Select the file type of "csv" and browse to your file
In the import wizard change the File_Origin to "65001 UTF" (or choose the correct language character identifier)
Change the Delimiter to comma
Select where to import to and Finish
This way the special characters should show correctly.
The UTF-8 Byte-order mark will clue Excel 2007+ in to the fact that you're using UTF-8. (See this SO post).
In case anybody is having the same issues I was: .NET's UTF8 encoding class does not output a byte-order mark from a GetBytes() call. You need to use streams (or a workaround) to get the BOM written.
The bug with the ignored BOM seems to be fixed in Excel 2013. I had the same problem with Cyrillic letters, but adding the BOM character \uFEFF helped.
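If you generate the file from Java, a minimal sketch of that approach (file name and values are placeholders): write U+FEFF first, then the CSV content, all through a UTF-8 writer.

import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CsvWithBom {
    public static void main(String[] args) throws Exception {
        try (Writer w = Files.newBufferedWriter(Paths.get("out.csv"), StandardCharsets.UTF_8)) {
            w.write('\uFEFF');                // BOM, encoded as EF BB BF in UTF-8
            w.write("Column1;Column2\n");
            w.write("Авиабилет;Ελληνικά\n");  // non-Latin sample values
        }
    }
}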
It is incredible that there are so many answers but none answers the question:
"When I was asking this question, I asked for a way of opening a UTF-8
CSV file in Excel without any problems for a user,..."
The answer marked as accepted, with 200+ upvotes, is useless for me because I don't want to give my users a manual on how to configure Excel.
Apart from that: such a manual would apply to one Excel version, but other Excel versions have different menus and configuration dialogs. You would need a manual for each Excel version.
So the question is: how do you make Excel show UTF-8 data with a simple double click?
Well, at least in Excel 2007 this is not possible if you use CSV files, because the UTF-8 BOM is ignored and you will see only garbage. This is already part of Lyubomyr Shaydariv's question:
"I also tried specifying UTF-8 BOM EF BB BF, but Excel ignores that."
I had the same experience: writing Russian or Greek data into a UTF-8 CSV file with BOM results in garbage in Excel:
Content of the UTF-8 CSV file:
Colum1;Column2
Val1;Val2
Авиабилет;Tλληνικ
Result in Excel 2007:
A solution is to not use CSV at all. The format is implemented so stupidly by Microsoft that whether comma or semicolon is used as the separator depends on the region settings in the Control Panel. So the same CSV file may open correctly on one computer but not on another. "CSV" means "Comma Separated Values", but on a German Windows, for example, the semicolon must be used as separator by default while the comma does not work (there it should rather be called SSV = Semicolon Separated Values). CSV files cannot be interchanged between different language versions of Windows. This is an additional problem on top of the UTF-8 problem.
Excel has existed for decades. It is a shame that Microsoft was not able to implement such a basic thing as CSV import in all these years.
However, if you put the same values into an HTML file and save that file as a UTF-8 file with BOM, with the file extension XLS, you will get the correct result.
Content of the UTF-8 XLS file:
<table>
<tr><td>Colum1</td><td>Column2</td></tr>
<tr><td>Val1</td><td>Val2</td></tr>
<tr><td>Авиабилет</td><td>Tλληνικ</td></tr>
</table>
Result in Excel 2007:
You can even use colors in HTML, which Excel will show correctly.
<style>
.Head { background-color:gray; color:white; }
.Red { color:red; }
</style>
<table border=1>
<tr><td class=Head>Colum1</td><td class=Head>Column2</td></tr>
<tr><td>Val1</td><td>Val2</td></tr>
<tr><td class=Red>Авиабилет</td><td class=Red>Tλληνικ</td></tr>
</table>
Result in Excel 2007:
In this case only the table itself has a black border and lines. If you want ALL cells to display gridlines, that is also possible in HTML:
<html xmlns:x="urn:schemas-microsoft-com:office:excel">
<head>
<meta http-equiv="content-type" content="text/plain; charset=UTF-8"/>
<xml>
<x:ExcelWorkbook>
<x:ExcelWorksheets>
<x:ExcelWorksheet>
<x:Name>MySuperSheet</x:Name>
<x:WorksheetOptions>
<x:DisplayGridlines/>
</x:WorksheetOptions>
</x:ExcelWorksheet>
</x:ExcelWorksheets>
</x:ExcelWorkbook>
</xml>
</head>
<body>
<table>
<tr><td>Colum1</td><td>Column2</td></tr>
<tr><td>Val1</td><td>Val2</td></tr>
<tr><td>Авиабилет</td><td>Tλληνικ</td></tr>
</table>
</body>
</html>
This code even allows you to specify the name of the worksheet (here "MySuperSheet").
Result in Excel 2007:
We have used this workaround:
Convert CSV to UTF-16 LE
Insert BOM at beginning of file
Use tab as field separator
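A minimal Java sketch of that workaround (file name and values are placeholders): write U+FEFF and tab-separated values through a UTF-16LE writer, so the BOM comes out as FF FE.

import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Utf16leTsv {
    public static void main(String[] args) throws Exception {
        try (Writer w = Files.newBufferedWriter(Paths.get("out.csv"), StandardCharsets.UTF_16LE)) {
            w.write('\uFEFF');               // BOM, encoded as FF FE in UTF-16LE
            w.write("Column1\tColumn2\n");   // tab as the field separator
            w.write("Val1\tVal2\n");
        }
    }
}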
Had the same problems with PHP-generated CSV files.
Excel ignored the BOM when the Separator was defined via "sep=,\n" at the beginning of the content (but of course after the BOM).
So adding a BOM ("\xEF\xBB\xBF") at the beginning of the content and setting the semicolon as separator via fputcsv($fh, $data_array, ";"); does the trick.
You can convert .csv file to UTF-8 with BOM via Notepad++:
Open the file in Notepad++.
Go to menu Encoding→Convert to UTF-8-BOM.
Go to menu File→Save.
Close Notepad++.
Open the file in Excel.
Worked in Microsoft Excel 2013 (15.0.5093.1000) MSO (15.0.5101.1000) 64-bit from Microsoft Office Professional Plus 2013 on Windows 8.1 with locale for non-Unicode programs set to "German (Germany)".
Old question but heck, the simplest solution is:
Open CSV in Notepad
Save As -> select the right encoding
Open the new file
I have had the same issue in the past (how to produce files that Excel can read, and other tools can also read). I was using TSV rather than CSV, but the same problem with encodings came up.
I failed to find any way to get Excel to recognize UTF-8 automatically, and I was not willing or able to inflict complicated instructions on the consumers of the files about how to open them. So I encoded them as UTF-16LE (with a BOM) instead of UTF-8. Twice the size, but Excel can recognize the encoding. And they compress well, so the size rarely (but sadly not never) matters.
As I posted on http://thinkinginsoftware.blogspot.com/2017/12/correctly-generate-csv-that-excel-can.html:
Tell the software developer in charge of generating the CSV to correct it. As a quick workaround you can use gsed to insert the UTF-8 BOM at the beginning of the string:
gsed -i '1s/^\(\xef\xbb\xbf\)\?/\xef\xbb\xbf/' file.csv
This command inserts the UTF-8 BOM if it is not already present, so it is idempotent. Now you should be able to double-click the file and open it in Excel.
In php you just prepend $bom to your $csv_string:
$bom = sprintf( "%c%c%c", 239, 187, 191); // EF BB BF
file_put_contents( $file_name, $bom . $csv_string );
Tested with MS Excel 2016, php 7.2.4
Simple vba macro for opening utf-8 text and csv files
Sub OpenTextFile()
    Dim filetoopen As Variant
    filetoopen = Application.GetOpenFilename("Text Files (*.txt;*.csv), *.txt;*.csv")
    ' GetOpenFilename returns False when the dialog is cancelled
    If filetoopen = False Then Exit Sub
    Workbooks.OpenText Filename:=filetoopen, _
        Origin:=65001, DataType:=xlDelimited, Comma:=True
End Sub
Origin:=65001 is UTF-8.
Comma:=True splits .csv files into columns.
Save it in Personal.xlsb to have it always available.
Personalise the Excel toolbar by adding a macro call button and open files from there.
You can add more formatting to the macro, like column autofit, alignment, etc.
Just to help users who reach this thread like me and are interested in opening the file in Excel:
I used the wizard below and it worked fine for me for importing a UTF-8 file.
Not transparent, but useful if you already have the file.
Open Microsoft Excel 2007.
Click on the Data menu bar option.
Click on the From Text icon.
Navigate to the location of the file that you want to import. Click on the filename and then click on the Import button. The Text Import Wizard - Step 1 of 3 window will now appear on the screen.
Choose the file type that best describes your data - Delimited or Fixed Width.
Choose 65001: Unicode (UTF-8) from the drop-down list that appears next to File origin.
Click on the Next button to display the Text Import Wizard - Step 2 of 3 window.
Place a checkmark next to the delimiter that was used in the file you wish to import into Microsoft Excel 2007. The Data preview window will show you how your data will appear based on the delimiter that you chose.
Click on the Next button to display the Text Import Wizard - Step 3 of 3.
Choose the appropriate data format for each column of data that you want to import. You also have the option to not import one or more columns of data if you want.
Click on the Finish button to finish importing your data into Microsoft Excel 2007.
Source: https://www.itg.ias.edu/content/how-import-csv-file-uses-utf-8-character-encoding-0
A truly amazing list of answers, but since one pretty good one is still missing, I'll mention it here: open the CSV file with Google Sheets and save it back to your local computer as an Excel file.
In contrast to Microsoft, Google has managed to support UTF-8 CSV files, so it just works to open the file there. And the export to Excel format also just works. So even though this may not be the preferred solution for everyone, it is pretty fail-safe, and the number of clicks is not as high as it may sound, especially when you're already logged into Google anyway.
This is my working solution:
vbFILEOPEN = "your_utf8_file.csv"
Workbooks.OpenText Filename:=vbFILEOPEN, DataType:=xlDelimited, Semicolon:=True, Local:=True, Origin:=65001
The key is Origin:=65001
Yes, it is possible. When writing the stream that creates the CSV, the first thing to do is this:
myStream.Write(Encoding.UTF8.GetPreamble(), 0, Encoding.UTF8.GetPreamble().Length)
Yes, this is possible. As previously noted by multiple users, there seems to be a problem with Excel reading the correct byte order mark when the file is encoded in UTF-8. With UTF-16 it does not seem to have a problem, so the issue is specific to UTF-8. The solution I use for this is adding the BOM TWICE. For this I execute the following sed command twice:
sed -I '1s/^/\xef\xbb\xbf/' *.csv
The wildcard can be replaced with any file name. However, this mutates the "sep=" line at the beginning of the .csv file. The .csv file will then open normally in Excel, but with an extra row with "sep=" in the first cell.
The "sep=" can also be removed in the source .csv itself, but when opening the file with VBA the delimiter should be specified:
Workbooks.Open(name, Format:=6, Delimiter:=";", Local:=True)
Format 6 is the .csv format. Set Local to True in case there are dates in the file; if Local is not set to True, the dates will be Americanized, which in some cases will corrupt the .csv format.
This is not accurately addressing the question, but since I stumbled across this and the above solutions didn't work for me or had requirements I couldn't meet, here is another way to add the BOM when you have access to vim:
vim -e -s +"set bomb|set encoding=utf-8|wq" filename.csv
Hi, I'm using Ruby on Rails for CSV generation. In our application we plan to go multi-language (I18n), and we faced an issue while viewing I18n content in the CSV file in Windows Excel.
It was fine with Linux (Ubuntu) and Mac.
We identified that Windows Excel needs to import the data again to view the actual data. During the import we get more options to choose the character set.
But this can't be taught to each and every user, so the solution we were looking for was to open it just by double click.
Then we identified the way of showing data using the open mode and BOM in Windows Excel, with the help of aghuddleston's gist. Added in reference.
Example I18n content
In Mac and Linux
Swedish : Förnamn
English : First name
In Windows
Swedish : Förnamn
English : First name
def user_information_report(report_file_path, user_id)
  user = User.find(user_id)
  I18n.locale = user.current_lang
  open_mode = "w+:UTF-16LE:UTF-8"
  bom = "\xEF\xBB\xBF"
  body user, open_mode, bom
end

def headers
  headers = [
    "ID", "SDN ID",
    I18n.t('sys_first_name'), I18n.t('sys_last_name'), I18n.t('sys_dob'),
    I18n.t('sys_gender'), I18n.t('sys_email'), I18n.t('sys_address'),
    I18n.t('sys_city'), I18n.t('sys_state'), I18n.t('sys_zip'),
    I18n.t('sys_phone_number')
  ]
end

def body tenant, open_mode, bom
  File.open(report_file_path, open_mode) do |f|
    csv_file = CSV.generate(col_sep: "\t") do |csv|
      csv << headers
      tenant.patients.find_each(batch_size: 10) do |patient|
        csv << [
          patient.id, patient.patientid,
          patient.first_name, patient.last_name, "#{patient.dob}",
          "#{translate_gender(patient.gender)}", patient.email, "#{patient.address_1.to_s} #{patient.address_2.to_s}",
          "#{patient.city}", "#{patient.state}", "#{patient.zip}",
          "#{patient.phone_number}"
        ]
      end
    end
    f.write bom
    f.write(csv_file)
  end
end
Important things to note here are the open mode and the BOM:
open_mode = "w+:UTF-16LE:UTF-8"
bom = "\xEF\xBB\xBF"
Before writing the CSV, insert the BOM:
f.write bom
f.write(csv_file)
Windows and Mac
File can be opened directly by double clicking.
Linux (ubuntu)
While opening the file, it asks for the separator options -> choose "TAB".
Download & install LibreOffice Calc
Open the csv file of your choice in LibreOffice Calc
Thank the heavens that an import text wizard shows up...
...select your delimiter and character encoding options
Select the resulting data in Calc and copy paste to Excel
I faced the same problem a few days ago and could not find any solution, because I cannot use the import-from-CSV feature: it makes everything styled as string.
My solution was to first open the file with Notepad++ and change the encoding to ASCII.
Then I just opened the file in Excel and it worked as expected.
Working solution for Office 365:
save in UTF-16 (not LE or BE)
use \t as the separator
Code in PHP
$header = ['číslo', 'vytvořeno', 'ěščřžýáíé'];
$fileName = 'excel365.csv';
$fp = fopen($fileName, 'w');
fputcsv($fp, $header, "\t");
fclose($fp);
$handle = fopen($fileName, "r");
$contents = fread($handle, filesize($fileName));
$contents = iconv('UTF-8', 'UTF-16', $contents);
fclose($handle);
$handle = fopen($fileName, "w");
fwrite($handle, $contents);
fclose($handle);
This is an old question, but I've just encountered a similar problem, and the solution may help others:
I had the same issue where writing out CSV text data to a file and then opening the resulting .csv in Excel shifted all the text into a single column. After having a read of the above answers, I tried the following, which seems to sort the problem out.
Apply an encoding of UTF-8 when you create your StreamWriter. That's it.
Example:
using (StreamWriter output = new StreamWriter(outputFileName, false, Encoding.UTF8, 2 << 22)) {
/* ... do stuff .... */
output.Close();
}
If you want to make it fully automatic, one click, or to load automatically into Excel from, say, a web page, but can't generate proper Excel files, then I would suggest looking at the SYLK format as an alternative. OK, it is not as simple as CSV, but it is text-based, very easy to implement, and it supports UTF-8 with no issues.
I wrote a PHP class that receives the data and outputs a SYLK file which will open directly in Excel by just clicking the file (or will auto-launch Excel if you write the file to a web page with the correct MIME type). You can even add formatting (like bold, particular number formats, etc.), change column sizes, or auto-size columns to the text in them, and all in all the code is probably not more than about 100 lines.
It is dead easy to reverse-engineer SYLK: create a simple spreadsheet, save it as SYLK, and then read it with a text editor. The first block is headers and standard number formats that you will recognise (and just regurgitate in every file you create); after that, each data record is simply an X/Y coordinate and a value.
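For orientation only, a minimal, untested sketch of that record structure (a file actually saved from Excel will contain more header records before the cells):

ID;P
C;Y1;X1;K"Column1"
C;Y1;X2;K"Column2"
C;Y2;X1;K"Val1"
C;Y2;X2;K"Val2"
E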
I am generating CSV files from a simple C# application and had the same problem. My solution was to ensure the file is written with UTF-8 encoding, like so:
// Use UTF8 encoding so that Excel is ok with accents and such.
using (StreamWriter writer = new StreamWriter(path, false, Encoding.UTF8))
{
SaveCSV(writer);
}
I originally had the following code, with which accents looked fine in Notepad++ but were getting mangled in Excel:
using (StreamWriter writer = new StreamWriter(path))
{
SaveCSV(writer);
}
Your mileage may vary - I'm using .NET 4 and Excel from Office 365.
I tried everything I could find in this thread and similar ones; nothing worked fully. However, importing into Google Sheets and simply downloading as CSV worked like a charm. Try it if you reach my frustration point.
It's March 2022, and it seems we cannot use both a BOM and the sep=... line.
Adding the sep=\t or similar, makes Excel ignore the BOM.
Using a semicolon seems to be a default Excel understands, in which case we can skip the sep=... line and it works.
This is Microsoft 365 with Excel version 2110 build 14527.20276.
Found a solution for ASP.NET Core to download CSVs as UTF-8 with a BOM:
byte[] csvBytes = Encoding.Default.GetBytes(csvString);
UTF8Encoding utf8 = new UTF8Encoding(true);
byte[] bom = utf8.GetPreamble();
var result = bom.Concat(csvBytes).ToArray();
return new FileContentResult(result, MediaTypeHeaderValue.Parse("text/csv; charset=utf-8"));
Excel then recognizes the downloaded CSV file as UTF-8.
Just sharing a comprehensive function that might make your life easier when working with CSV files... please note the last function argument, which relates to this topic.
function array2csv($data, $file = '', $download = true, $mode = 'w+', $delimiter = ',', $enclosure = '"', $escape_char = "\\", $addUnicodeBom = false)
{
    $return = false;
    if ($file == '') {
        $f = fopen('php://memory', 'r+');
    } else {
        $f = fopen($file, $mode);
    }
    if ($addUnicodeBom) {
        $utf8_with_bom = chr(239) . chr(187) . chr(191); // EF BB BF
        fwrite($f, $utf8_with_bom);
    }
    foreach ($data as $line => $item) {
        fputcsv($f, $item, $delimiter, $enclosure, $escape_char);
    }
    rewind($f);
    if ($download == true) {
        $return = stream_get_contents($f);
    } else {
        $return = true;
    }
    return $return;
}
First save the Excel spreadsheet as Unicode text. Open the TXT file in Internet Explorer, click "Save as", and under TXT encoding choose the appropriate one, e.g. for Win Cyrillic 1251.

unable to recognize file type

This is my first post. I'm new to Java. I'm working on a file parser. I've tried to identify whether it is CSV or another file format, but it looks like it is not quite a standard format. I'm working on an Apache Camel solution (my first and last idea :( ), but maybe some of you recognize this kind of file format? Additionally, I've got an .imp file for my output.
Here is my example input:
NrDok:FS-2222/17/W
Data:12.02.2017
SposobPlatn:GOT
NazwaWystawcy:MAAKAI Gawron
AdresWystawcy:33-123 bABA
KodWystawcy:33-112
MiastoWystawcy:bABA
UlicaWystawcy:czysfa 8
NIPWystawcy:123-19-85-123
NazwaOdbiorcy:abc abc-HANDLOWO-USŁUGOWE
AdresOdbiorcy:33-123 fghd
KodOdbiorcy:33-123
MiastoOdbiorcy:Tdsfs
UlicaOdbiorcy:dfdfdA 39
NIPOdbiorcy:82334349
TelefonOdbiorcy:654-522-124
NrOdbiorcyWSieciSklepow:efdsS-sffgsA
IloscLinii:1
Linia:Nazwa{ĆWIARTKA KG}Kod{C1}Vat{5}Jm{kg.}Asortyment{dfgv}Sww{}PKWIU{10.12.10}Ilosc{3.40}Cena{n3.21}Wartosc{n11.83}IleWOpak{1}CenaSp{b0.00}
DoZaplaty:252.32
And here is my example output file:
FH 2015.07.31 2015.07.31 F04443 Gotowka
FO 812-123-45-11 P.a.b.Uc"fdad" abcd deffF UL.fdfgdfdA 12/33 33-123 afvdf
FS 779-19-06-082 badfdf S.A. ul. Wisniowa 89 60-003 Poznan
FP 00218746 CHRZAN TARTY EXTRA POLONAISE 180G SZT 32.00 2.21 8 10.39.17.0 32.00 5900138000055
Is there any easy way to convert the first file to second file format? Maybe you know the type of this file? In a meanwhile, I'm continuing my work with apache camel.
Thanks in advance for your time and help!
I suggest you play with https://tika.apache.org/1.1/detection.html#Mime_Magic_Detection
It's a very good library for file type recognition.
Here https://www.tutorialspoint.com/tika/tika_document_type_detection.htm is a simple example.
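A minimal sketch of such detection with the Tika facade (assumes tika-core on the classpath; the file name is a placeholder):

import java.io.File;
import org.apache.tika.Tika;

public class DetectType {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();
        // Detects the MIME type from magic bytes and the file name
        String mimeType = tika.detect(new File("input.imp"));
        System.out.println(mimeType);
    }
}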
Your file can be read as a standard Java .properties file. That format allows both = and : as the separator between key and value. Non-ISO-8859-1 characters such as the Polish Ć are only a problem if you load the file through the legacy InputStream-based Properties.load, which assumes ISO-8859-1; loading through a Reader with the correct charset avoids that.
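A minimal sketch of that, assuming the file is UTF-8 encoded (the file name is a placeholder):

import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class ReadImpFile {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Properties.load(Reader) handles non-ISO-8859-1 characters, unlike load(InputStream)
        try (Reader reader = new InputStreamReader(new FileInputStream("input.imp"), StandardCharsets.UTF_8)) {
            props.load(reader);
        }
        System.out.println(props.getProperty("NrDok")); // "FS-2222/17/W"
    }
}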
This line
Nazwa{ĆWIARTKA KG}Kod{C1}Vat{5}Jm{kg.}Asortyment{dfgv}Sww{}PKWIU{10.12.10}Ilosc{3.40}Cena{n3.21}Wartosc{n11.83}IleWOpak{1}CenaSp{b0.00}
seems to be some custom serialization format of an object, in the form
key1{value1}key2{value2}...
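If you need to split such a line yourself, a hypothetical sketch using a regular expression (the class and variable names are made up):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParseLinia {
    public static void main(String[] args) {
        String line = "Nazwa{ĆWIARTKA KG}Kod{C1}Vat{5}Jm{kg.}Ilosc{3.40}";
        // Each field is an ASCII key followed by a value in braces
        Pattern field = Pattern.compile("(\\w+)\\{([^}]*)\\}");
        Map<String, String> values = new LinkedHashMap<>();
        Matcher m = field.matcher(line);
        while (m.find()) {
            values.put(m.group(1), m.group(2));
        }
        System.out.println(values); // {Nazwa=ĆWIARTKA KG, Kod=C1, ...}
    }
}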
Your output file contains lots of data that is not present in the input, which makes me think some data is queried from external systems to build the output. You should investigate that yourself; there is no way anyone can guess the transformation from the provided input alone.

Invalid byte 1 of 1-byte UTF-8 sequence: RestTemplate [duplicate]

I am trying to fetch the XML below from the DB using a Java method, but I am getting an error.
The code used to parse the XML:
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
InputSource is = new InputSource(new ByteArrayInputStream(cond.getBytes()));
Document doc = db.parse(is);
Element elem = doc.getDocumentElement();
// here we expect a series of <data><name>N</name><value>V</value></data>
NodeList nodes = elem.getElementsByTagName("data");
TableID jobId = new TableID(_processInstanceId);
Job myJob = Job.queryByID(_clientContext, jobId, true);
if (nodes.getLength() == 0) {
    log(Level.DEBUG, "No data found on condition XML");
}
for (int i = 0; i < nodes.getLength(); i++) {
    // loop through the <data> in the XML
    Element dataTags = (Element) nodes.item(i);
    String name = getChildTagValue(dataTags, "name");
    String value = getChildTagValue(dataTags, "value");
    log(Level.INFO, "UserData/Value=" + name + "/" + value);
    myJob.setBulkUserData(name, value);
}
myJob.save();
The Data
<ContactDetails>307896043</ContactDetails>
<ContactName>307896043</ContactName>
<Preferred_Completion_Date>
</Preferred_Completion_Date>
<service_address>A-End Address: 1ST HELIERST HELIERJT2 3XP832THE CABLES 1 POONHA LANEST HELIER JE JT2 3XP</service_address>
<ServiceOrderId>315473043</ServiceOrderId>
<ServiceOrderTypeId>50</ServiceOrderTypeId>
<CustDesiredDate>2013-03-20T18:12:04</CustDesiredDate>
<OrderId>307896043</OrderId>
<CreateWho>csmuser</CreateWho>
<AccountInternalId>20100333</AccountInternalId>
<ServiceInternalId>20766093</ServiceInternalId>
<ServiceInternalIdResets>0</ServiceInternalIdResets>
<Primary_Offer_Name action='del'>MyMobile Blue £44.99 [12 month term]</Primary_Offer_Name>
<Disc_Reason action='del'>8</Disc_Reason>
<Sup_Offer action='del'>80000257</Sup_Offer>
<Service_Type action='del'>A-01-00</Service_Type>
<Priority action='del'>4</Priority>
<Account_Number action='del'>0</Account_Number>
<Offer action='del'>80000257</Offer>
<msisdn action='del'>447797142520</msisdn>
<imsi action='del'>234503184</imsi>
<sim action='del'>5535</sim>
<ocb9_ARM action='del'>false</ocb9_ARM>
<port_in_required action='del'>
</port_in_required>
<ocb9_mob action='del'>none</ocb9_mob>
<ocb9_mob_BB action='del'>
</ocb9_mob_BB>
<ocb9_LandLine action='del'>
</ocb9_LandLine>
<ocb9_LandLine_BB action='del'>
</ocb9_LandLine_BB>
<Contact_2>
</Contact_2>
<Acc_middle_name>
</Acc_middle_name>
<MarketCode>7</MarketCode>
<Acc_last_name>Port_OUT</Acc_last_name>
<Contact_1>
</Contact_1>
<Acc_first_name>.</Acc_first_name>
<EmaiId>
</EmaiId>
The ERROR
org.apache.xerces.impl.io.MalformedByteSequenceException: Invalid byte 1 of 1-byte UTF-8 sequence.
I read in some threads that it's because of some special characters in the XML.
How do I fix this issue?
How to fix this issue ?
Read the data using the correct character encoding. The error message means that you are trying to read the data as UTF-8 (either deliberately or because that is the default encoding for an XML file that does not specify <?xml version="1.0" encoding="somethingelse"?>) but it is actually in a different encoding such as ISO-8859-1 or Windows-1252.
To be able to advise on how you should do this I'd have to see the code you're currently using to read the XML.
Open the XML in Notepad.
Make sure you don't have extra spaces at the beginning and end of the document.
Select File -> Save As
Select save as type -> All files
Enter the file name as abcd.xml
Select Encoding - UTF-8 -> Click Save
Try:
InputStream inputStream= // Your InputStream from your database.
Reader reader = new InputStreamReader(inputStream,"UTF-8");
InputSource is = new InputSource(reader);
is.setEncoding("UTF-8");
saxParser.parse(is, handler);
If it's anything other than UTF-8, just change the encoding part to the correct one.
I was getting the XML as a String and using xml.getBytes(), and got this error. Changing it to xml.getBytes(Charset.forName("UTF-8")) worked for me.
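Applied to the parsing code from the question, that fix would look roughly like this (a sketch; StandardCharsets.UTF_8 is equivalent to Charset.forName("UTF-8"), and the sample string stands in for the DB value):

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import org.xml.sax.InputSource;

public class Utf8Bytes {
    public static void main(String[] args) {
        String cond = "<data><name>N</name><value>V</value></data>"; // stand-in for the DB value
        // Decode the String with an explicit charset instead of the platform default
        InputSource is = new InputSource(
                new ByteArrayInputStream(cond.getBytes(StandardCharsets.UTF_8)));
    }
}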
I had the same problem in my JSF application, which had a comment line containing some special characters in the XHTML page. When I compared with the previous version in my Eclipse, it had this comment:
//Some �  special characters found
I removed those characters and the page loaded fine. This is mostly related to XML files, so compare the file with a working version.
I had this problem, but the file was in UTF-8; it was just that somehow one character had come in that was not encoded in UTF-8. To solve the problem I did what is stated in this thread, i.e. I validated the file:
How to check whether a file is valid UTF-8?
Basically you run the command:
$ iconv -f UTF-8 your_file -o /dev/null
And if there is something that is not encoded in UTF-8 it will give you the line and row numbers so that you can find it.
I happened to run into this problem because of an Ant build.
That Ant build took files and applied the filterchain expandproperties to them. During this file filtering, my Windows machine's implicit default non-UTF-8 character encoding was used to generate the filtered files, so characters outside of its character set could not be mapped correctly.
One solution was to provide Ant with an explicit environment variable for UTF-8.
In Cygwin, before launching Ant: export ANT_OPTS="-Dfile.encoding=UTF-8".
This error comes when you try to load a JasperReports report file with the extension .jasper, for example
c://reports//EmployeeReport.jasper
while you should load the report file with the extension .jrxml, for example
c://reports//EmployeeReport.jrxml
Problem screenshot: https://i.stack.imgur.com/D5SzR.png
Solution screenshot: https://i.stack.imgur.com/VeQb9.png
I had a similar problem.
I had saved some XML in a file, and when reading it into a DOM document, it failed due to a special character. Then I used the following code to fix it:
String enco = new String(Files.readAllBytes(Paths.get(listPayloadPath+"/Payload.xml")), StandardCharsets.UTF_8);
Document doc = builder.parse(new ByteArrayInputStream(enco.getBytes(StandardCharsets.UTF_8)));
Let me know if it works for you.
I met the same problem, and after a long investigation of my XML file I found the cause: there were a few unescaped characters, like « ».
If you are like me and understand character-encoding principles, have read Joel's article (which is funny, as it contains wrong characters anyway), and still can't figure out what the heck is going on (spoiler alert: I'm a Mac user), then your solution can be as simple as removing your local repo and cloning it again.
My code base had not changed since the last time it ran OK, so it made no sense to get UTF errors, given that our build system had never complained about it... till I remembered that I had accidentally unplugged my computer a few days earlier with IntelliJ IDEA and the whole thing running (Java/Tomcat/Hibernate).
My Mac did a brilliant job of pretending nothing had happened, and I carried on business as usual, but the underlying file system was left corrupted somehow. I wasted the whole day trying to figure this one out. I hope it helps somebody.
I had the same issue. My problem was a missing "-Dfile.encoding=UTF8" argument under JAVA_OPTIONS in the startWebLogic.cmd file on the WebLogic server.
You may have a library that needs to be removed, like the following one:
implementation 'org.apache.maven.plugins:maven-surefire-plugin:2.4.3'
This error surprised me in production...
The error occurs because the character encoding is wrong, so the best solution is to implement a way to auto-detect the input charset.
This is one way to do it:
...
import org.xml.sax.InputSource;
...
InputSource inputSource = new InputSource(inputStream);
someReader(
inputSource.getByteStream(), inputSource.getEncoding()
);
Input sample:
<?xml version="1.0" encoding="utf-16"?>
<rss xmlns:dc="https://purl.org/dc/elements/1.1/" version="2.0">
<channel>
...

Merging two .odt files from code

How do you merge two .odt files? Doing it by hand, opening each file and copying the content, would work, but is unfeasible.
I have tried the odftoolkit Simple API (simple-odf-0.8.1-incubating) to achieve that task, creating an empty TextDocument and merging everything into it:
private File masterFile = new File(...);
...
TextDocument t = TextDocument.newTextDocument();
t.save(masterFile);
...
for (File f : filesToMerge) {
    joinOdt(f);
}
...
void joinOdt(File joinee) {
    TextDocument master = (TextDocument) TextDocument.loadDocument(masterFile);
    TextDocument slave = (TextDocument) TextDocument.loadDocument(joinee);
    master.insertContentFromDocumentAfter(slave, master.getParagraphByReverseIndex(0, false), true);
    master.save(masterFile);
}
And that works reasonably well; however, it loses information about fonts. The original files are a combination of Arial Narrow and Wingdings (for check boxes), while the output masterFile is all in Times New Roman. At first I suspected the last parameter of insertContentFromDocumentAfter, but changing it to false breaks (almost) all formatting. Am I doing something wrong? Is there any other way?
I think this is "works as designed".
I tried this once with a global document, which imports documents and displays them as-is... as long as the paragraph styles have different names!
Styles with the same name are overwritten with the values from the "master" document.
So I ended up cloning the standard styles with unique (per-document) names.
HTH
My case was a rather simple one: the files I wanted to merge were generated the same way and used the same basic formatting. Therefore, starting from one of my files instead of an empty document fixed my problem.
However, this question will remain open until someone comes up with a more general solution to formatting retention (possibly based on ngulam's answer and comments?).
