I'm trying to parse an InkML-like document. Each content node holds several comma-separated tuples of 6 or 7 numbers (which can be negative and decimal). While testing I noticed that SAX's characters method does not capture all the data.
The code:
public class PenParser extends DefaultHandler {
    // ... irrelevant code omitted ...
    public void characters(char ch[], int start, int length) throws SAXException {
        // begin my debug print
        StringBuilder buffer = new StringBuilder();
        for (int i = start; i < start + length; i++) {
            buffer.append(ch[i]);
        }
        System.out.println(">" + buffer);
        // end my debug print
    }
}
While debugging, I see that buffer does not contain all the numbers of the tag I'm interested in; it only holds the first 107 characters (more or less) of the tag's content (my rows are no longer than 4610 characters). This truncation by StringBuffer and SAX parsing seems strange to me.
I tried StringBuffer as well, but the problem remains.
Any suggestions?
Yes - that's pretty obvious.
characters may be called several times when one node is parsed.
You'll have to use the StringBuilder as a member, append the content in characters, and deal with the full content in endElement.
btw. you do not need to build the buffer character by character -
this is my implementation of characters (which I always use)
@Override
public void characters(char[] ch, int start, int length) throws SAXException
{
characters.append(new String(ch,start,length));
}
... and not to forget ....
@Override
public void endElement(String uri, String localName, String qName)
throws SAXException
{
final String content = characters.toString().trim();
// .... deal with content
// reset characters
characters.setLength(0);
}
private final StringBuilder characters = new StringBuilder(64);
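Putting the two callbacks together, here is a minimal, self-contained sketch of the pattern (the element name trace and the sample numbers are made up for illustration):

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Minimal sketch: accumulate chunks in characters(), consume the full text
// in endElement(), then reset the buffer for the next element.
public class TraceHandler extends DefaultHandler {
    private final StringBuilder characters = new StringBuilder(64);
    public String lastContent; // full text of the last closed element

    @Override
    public void characters(char[] ch, int start, int length) {
        characters.append(ch, start, length); // may be called several times per node
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        lastContent = characters.toString().trim(); // the complete content is only known here
        characters.setLength(0); // reset for the next element
    }

    public static void main(String[] args) throws Exception {
        String xml = "<trace>1.5,-2.0,3 4,5,6.25</trace>";
        TraceHandler h = new TraceHandler();
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new InputSource(new StringReader(xml)), h);
        System.out.println(h.lastContent); // whole content, however the parser chunked it
    }
}
```

This way it doesn't matter in how many chunks the parser delivers the tuples: the buffer always holds the complete content by the time endElement runs.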
I have a normal String property inside an object, containing accented characters.
When I debug the software (with NetBeans), the variables panel shows the string the right way.
But when I print the variable with System.out.println, I see strange things:
As you can see, every "à" becomes "a'" and so on, and this leads to a wrong character count, even in a Matcher on the string.
How can I fix this? I need the accented characters, the right character count, and to be able to use the Matcher on the string.
I tried many ways but it doesn't work; I'm surely missing something.
Thanks in advance.
This is the code:
public class TextLine {
public List<TextPosition> textPositions = null;
public String text = "";
}
public class myStripper extends PDFTextStripper {
public ArrayList<TextLine> lines = null;
boolean startOfLine = true;
public myStripper() throws IOException
{
}
private void newLine() {
startOfLine = true;
}
@Override
protected void startPage(PDPage page) throws IOException
{
newLine();
super.startPage(page);
}
@Override
protected void writeLineSeparator() throws IOException
{
newLine();
super.writeLineSeparator();
}
@Override
public String getText(PDDocument doc) throws IOException
{
lines = new ArrayList<TextLine>();
return super.getText(doc);
}
@Override
protected void writeWordSeparator() throws IOException
{
TextLine tmpline = null;
tmpline = lines.get(lines.size() - 1);
tmpline.text += getWordSeparator();
tmpline.textPositions.add(null);
super.writeWordSeparator();
}
@Override
protected void writeString(String text, List<TextPosition> textPositions) throws IOException
{
TextLine tmpline = null;
if (startOfLine) {
tmpline = new TextLine();
tmpline.text = text;
tmpline.textPositions = textPositions;
lines.add(tmpline);
} else {
tmpline = lines.get(lines.size() - 1);
tmpline.text += text;
tmpline.textPositions.addAll(textPositions);
}
if (startOfLine) {
startOfLine = false;
}
super.writeString(text, textPositions);
}
}
It is about the representation of certain Unicode characters.
What is a character? That question is hard to answer. Is à one character, or two (the a and the ` on top of each other)? It depends on what you consider to be a character.
The accent graves (`) you are seeing are actually combining diacritical marks. Combining diacritical marks are separate Unicode characters, but are combined with the previous character by many text processors. For instance, java.text.Normalizer.normalize(str, Normalizer.Form.NFC) does such a job for you.
The library you are using (Apache PDFBox) possibly normalizes the text, so diacritics are combined with the preceding character. So in your text, some TextPosition instances contain two code points (more precisely, e` and a`). So the length of the list with TextPosition instances is 65.
However, your String, which is in fact a CharSequence, holds 67 characters, because the diacritic itself takes up 1 char.
System.out.println() just prints each character of the string, and that is represented as "dere che Geova e` il Creatore e Colui che da` la vita. Probabilmen-"
Then why is the Netbeans debugger showing "dere che Geova è il Creatore e Colui che dà la vita. Probabilmen-" as value of the string?
That is simply because the Netbeans debugger displays the normalized text for you.
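If you need the composed form, the JDK's java.text.Normalizer can combine the diacritics with their base characters for you; a small sketch:

```java
import java.text.Normalizer;

public class NormalizeDemo {
    public static void main(String[] args) {
        // "a" followed by COMBINING GRAVE ACCENT (U+0300): two chars in the String
        String decomposed = "a\u0300";
        System.out.println(decomposed.length());
        // NFC composes the pair into the single precomposed char U+00E0 ("à")
        String composed = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
        System.out.println(composed.length());
        System.out.println(composed.equals("\u00E0"));
    }
}
```

After normalizing to NFC, length() and Matcher positions line up with what the NetBeans debugger shows.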
I need to truncate an HTML string that was already sanitized by my app before being stored in the DB; it contains only links, images and formatting tags. When presenting it to users, though, it needs to be truncated to give an overview of the content.
So I need to abbreviate HTML strings in Java such that
<img src="http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg" />
<br/><a href="http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg" />
when truncated does not return something like this
<img src="http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg" />
<br/><a href="htt
but instead returns
<img src="http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg" />
<br/>
Your requirements are a bit vague, even after reading all the comments. Given your example and explanations, I assume your requirements are the following:
The input is a string consisting of (x)html tags. Your example doesn't contain this, but I assume the input can contain text between the tags.
In the context of your problem, we do not care about nesting. So the input is really only text intermingled with tags, where opening, closing and self-closing tags are all considered equivalent.
Tags can contain quoted values.
You want to truncate your string such that the string is not truncated in the middle of a tag. So in the truncated string every '<' character must have a corresponding '>' character.
I'll give you two solutions, a simple one which may not be correct, depending on what the input looks like exactly, and a more complex one which is correct.
First solution
For the first solution, we first find the last '>' character before the truncate size (this corresponds to the last tag which was completely closed). After this character may come text which does not belong to any tag, so we then search for the first '<' character after the last closed tag. In code:
public static String truncate1(String input, int size)
{
if (input.length() < size) return input;
int pos = input.lastIndexOf('>', size);
int pos2 = input.indexOf('<', pos);
if (pos2 < 0 || pos2 >= size) {
return input.substring(0, size);
}
else {
return input.substring(0, pos2);
}
}
Of course this solution does not consider the quoted value strings: the '<' and '>' characters might occur inside a string, in which case they should be ignored. I mention the solution anyway because you mention your input is sanitized, so possibly you can ensure that the quoted strings never contain '<' and '>' characters.
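To illustrate the pitfall, here is a quick check (truncate1 is repeated so the snippet compiles on its own; the title attribute value is a made-up example):

```java
public class QuotedGtPitfall {
    // First solution from above, repeated so this snippet compiles on its own.
    public static String truncate1(String input, int size) {
        if (input.length() < size) return input;
        int pos = input.lastIndexOf('>', size);
        int pos2 = input.indexOf('<', pos);
        if (pos2 < 0 || pos2 >= size) {
            return input.substring(0, size);
        } else {
            return input.substring(0, pos2);
        }
    }

    public static void main(String[] args) {
        // The '>' inside the quoted title attribute is mistaken for a tag end,
        // so the result is cut in the middle of the <a> tag.
        String html = "<a title=\"5>4\">ok</a>";
        System.out.println(truncate1(html, 13));
    }
}
```

The output ends with `<a title="5>4`, i.e. inside the tag, which is exactly the case the second solution below handles.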
Second solution
To consider the quoted strings, we cannot rely on standard Java classes anymore, but we have to scan the input ourselves and remember if we are currently inside a tag and inside a string or not. If we encounter a '<' character outside of a string, we remember its position, so that when we reach the truncate point we know the position of the last opened tag. If that tag wasn't closed, we truncate before the beginning of that tag. In code:
public static String truncate2(String input, int size)
{
if (input.length() < size) return input;
int lastTagStart = 0;
boolean inString = false;
boolean inTag = false;
for (int pos = 0; pos < size; pos++) {
switch (input.charAt(pos)) {
case '<':
if (!inString && !inTag) {
lastTagStart = pos;
inTag = true;
}
break;
case '>':
if (!inString) inTag = false;
break;
case '\"':
if (inTag) inString = !inString;
break;
}
}
if (!inTag) lastTagStart = size;
return input.substring(0, lastTagStart);
}
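For example, a quick self-contained check of the second solution (the method is repeated here so the snippet compiles on its own; the image and link are made-up stand-ins for the question's input):

```java
public class Truncate2Demo {
    // Second solution from above, repeated so this snippet compiles on its own.
    public static String truncate2(String input, int size) {
        if (input.length() < size) return input;
        int lastTagStart = 0;
        boolean inString = false;
        boolean inTag = false;
        for (int pos = 0; pos < size; pos++) {
            switch (input.charAt(pos)) {
                case '<':
                    if (!inString && !inTag) {
                        lastTagStart = pos;
                        inTag = true;
                    }
                    break;
                case '>':
                    if (!inString) inTag = false;
                    break;
                case '"':
                    if (inTag) inString = !inString;
                    break;
            }
        }
        if (!inTag) lastTagStart = size;
        return input.substring(0, lastTagStart);
    }

    public static void main(String[] args) {
        // The cut point (40) lands in the middle of the <a> tag,
        // so the result stops right after the last complete tag, <br/>.
        String html = "<img src=\"a.jpg\" /><br/><a href=\"http://example.com\" />";
        System.out.println(truncate2(html, 40));
    }
}
```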
A robust way of doing it is to use the HotSAX library, which parses HTML while letting you interface with the parser through the traditional low-level SAX XML API. (Note it is not an XML parser: it parses poorly formed HTML and merely chooses to let you interface with it through a standard XML API.)
Here on github I have created a working quick-and-dirty example project which has a main class that parses your truncated example string:
XMLReader parser = XMLReaderFactory.createXMLReader("hotsax.html.sax.SaxParser");
final StringBuilder builder = new StringBuilder();
ContentHandler handler = new DoNothingContentHandler(){
StringBuilder wholeTag = new StringBuilder();
boolean hasText = false;
boolean hasElements = false;
String lastStart = "";
@Override
public void characters(char[] ch, int start, int length)
throws SAXException {
String text = (new String(ch, start, length)).trim();
wholeTag.append(text);
hasText = true;
}
@Override
public void endElement(String namespaceURI, String localName,
String qName) throws SAXException {
if( !hasText && !hasElements && lastStart.equals(localName)) {
builder.append("<"+localName+"/>");
} else {
wholeTag.append("</"+ localName +">");
builder.append(wholeTag.toString());
}
wholeTag = new StringBuilder();
hasText = false;
hasElements = false;
}
@Override
public void startElement(String namespaceURI, String localName,
String qName, Attributes atts) throws SAXException {
wholeTag.append("<"+ localName);
for( int i = 0; i < atts.getLength(); i++) {
wholeTag.append(" "+atts.getQName(i)+"='"+atts.getValue(i)+"'");
hasElements = true;
}
wholeTag.append(">");
lastStart = localName;
hasText = false;
}
};
parser.setContentHandler(handler);
//parser.parse(new InputSource( new StringReader( "<div>this is the <em>end</em> my <br> friend some link" ) ));
parser.parse(new InputSource( new StringReader( "<img src=\"http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg\" />\n<br/><a href=\"htt" ) ));
System.out.println( builder.toString() );
It outputs:
<img src='http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg'></img><br/>
It adds an </img> tag, but that's harmless for HTML, and it would be possible to tweak the code so the output exactly matches the input if you felt that necessary.
HotSAX is actually code generated by the yacc/flex compiler tools from the HtmlParser.y and StyleLexer.flex files, which define the low-level grammar of HTML. So you benefit from the work of the person who created that grammar; all you need to do is write some fairly trivial code and test cases to reassemble the parsed fragments as shown above. That's much better than trying to write your own regular expressions, or worse, a hand-coded string scanner, to interpret the string, as that is very fragile.
After understanding what you want, here is the simplest solution I could come up with.
Just work backwards from the end of your substring until you find '>'. This is the end mark of the last tag, so in the majority of cases you can be sure that you only have complete tags.
But what if the '>' is inside text?
To be sure about this, just keep searching until you find '<' and check that it actually starts a tag (do you know the set of tag names? Since you only have links, images and formatting, you can easily check this). If you find another '>' before finding a '<' that starts a tag, that '>' is the new end of your string.
Easy to do, correct, and it should work for you.
If you are not certain whether strings/attributes can contain '<' or '>', you need to check for the appearance of '"' and '="' to know whether you are inside a string or not (remember you can cut off an attribute value). But I think this is over-engineering: I have never seen an attribute with '<' or '>' in it, and within text they are usually escaped as &lt; and the like.
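A minimal sketch of this backward scan, under the stated assumption that attribute values never contain '<' or '>' (and leaving out the text-content refinement described above; the sample input is made up):

```java
public class BackwardTruncate {
    // Walk back from the cut point to the last '>', so the result never
    // ends in the middle of a tag. Assumes '>' only occurs as a tag end.
    static String truncateAtLastTagEnd(String input, int size) {
        if (input.length() <= size) return input;
        int end = input.lastIndexOf('>', size - 1);
        return end < 0 ? "" : input.substring(0, end + 1);
    }

    public static void main(String[] args) {
        // Made-up stand-in for the question's input; cutting at 40 lands
        // inside the <a> tag, so the result ends after <br/>.
        String html = "<img src=\"a.jpg\" /><br/><a href=\"http://example.com\" />";
        System.out.println(truncateAtLastTagEnd(html, 40));
    }
}
```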
I don't know the context of the problem the OP needs to solve, but I am not sure if it makes a lot of sense to truncate html code by the length of its source code instead of the length of its visual representation (which can become arbitrarily complex, of course).
Maybe a combined solution could be useful, so you don't penalize html code with a lot of markup or long links, but also set a clear total limit which cannot be exceeded. Like others already wrote, the usage of a dedicated HTML parser like JSoup allows the processing of non well-formed or even invalid HTML.
The solution is loosely based on JSoup's Cleaner. It traverses the parsed dom tree of the source code and tries to recreate a destination tree while continuously checking, if a limit has been reached.
import org.jsoup.nodes.*;
import org.jsoup.parser.*;
import org.jsoup.select.*;
String html = "<img src=\"http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg\" />" +
"<br/><a href=\"http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg\" />";
//String html = "<b>foo</b>bar<p class=\"baz\">Some <img />Long Text</p><a href='#'>hello</a>";
Document srcDoc = Parser.parseBodyFragment(html, "");
srcDoc.outputSettings().prettyPrint(false);
Document dstDoc = Document.createShell(srcDoc.baseUri());
dstDoc.outputSettings().prettyPrint(false);
Element dst = dstDoc.body();
NodeVisitor v = new NodeVisitor() {
private static final int MAX_HTML_LEN = 85;
private static final int MAX_TEXT_LEN = 40;
Element cur = dst;
boolean stop = false;
int resTextLength = 0;
@Override
public void head(Node node, int depth) {
// ignore "body" element
if (depth > 0) {
if (node instanceof Element) {
Element curElement = (Element) node;
cur = cur.appendElement(curElement.tagName());
cur.attributes().addAll(curElement.attributes());
String resHtml = dst.html();
if (resHtml.length() > MAX_HTML_LEN) {
cur.remove();
throw new IllegalStateException("html too long");
}
} else if (node instanceof TextNode) {
String curText = ((TextNode) node).getWholeText();
String resHtml = dst.html();
if (curText.length() + resHtml.length() > MAX_HTML_LEN) {
cur.appendText(curText.substring(0, MAX_HTML_LEN - resHtml.length()));
throw new IllegalStateException("html too long");
} else if (curText.length() + resTextLength > MAX_TEXT_LEN) {
cur.appendText(curText.substring(0, MAX_TEXT_LEN - resTextLength));
throw new IllegalStateException("text too long");
} else {
resTextLength += curText.length();
cur.appendText(curText);
}
}
}
}
@Override
public void tail(Node node, int depth) {
if (depth > 0 && node instanceof Element) {
cur = cur.parent();
}
}
};
try {
NodeTraversor t = new NodeTraversor(v);
t.traverse(srcDoc.body());
} catch (IllegalStateException ex) {
System.out.println(ex.getMessage());
}
System.out.println(" in='" + srcDoc.body().html() + "'");
System.out.println("out='" + dst.html() + "'");
For the given example with max length of 85, the result is:
html too long
in='<img src="http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg"><br>'
out='<img src="http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg"><br>'
It also correctly truncates within nested elements, for a max html length of 16 the result is:
html too long
in='<i>f<b>oo</b>b</i>ar'
out='<i>f<b>o</b></i>'
For a maximum text length of 2, the result of a long link would be:
text too long
in='<b>foo</b>bar'
out='<b>fo</b>'
You can achieve this with the JSoup HTML parser library.
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
public class HTMLParser
{
public static void main(String[] args)
{
String html = "<img src=\"http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg\" /><br/><a href=\"http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg\" /><img src=\"http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg\" /><br/><a href=\"http://d2qxdzx5iw7vis.cloudfront.net/34775606.jpg\" />";
Document doc = Jsoup.parse(html);
doc.select("a").remove();
System.out.println(doc.body().children());
}
}
Well, whatever you want to do, there are two libraries out there, JSoup and HtmlParser, which I tend to use. Please check them out. Also, I barely see XHTML in the wild anymore; nowadays it's all HTML5 (which does not have an XHTML counterpart).
[Update]
I mention JSoup and HtmlParser since they are fault-tolerant in the way a browser is. Please check whether they suit you, since they are very good at dealing with malformed and damaged HTML text. Create a DOM out of your HTML and write it back to a string, and you should get rid of the damaged tags; you can also filter the DOM yourself and remove even more content if you have to.
PS: I guess the XML decade is finally (and gladly) over. Today JSON is going to be overused.
A third potential answer I would consider is not to work with strings in the first place.
If I remember correctly, there are DOM tree representations that stay closely tied to the underlying string representation and are therefore character-exact. I wrote one myself, and I think JSoup has such a mode. Since there are a lot of parsers out there, you should be able to find one that does this.
With such a parser you can easily see from which string position to which a given tag runs. These parsers keep the document as a single string and, rather than copying content, store only range information like start and stop positions within the document, which avoids duplicating that information for nested nodes.
Therefore you can find the outermost node covering a given position, know exactly from where to where it runs, and easily decide whether this tag (including all its children) can be presented within your snippet. So you get the chance to print complete text nodes and the like without the risk of presenting partial tag information.
If you do not find a parser that suits you for this, you can ask me for advice.
I am using the Java SAX parser and I override:
@Override
public void characters(char ch[], int start, int length) throws SAXException {
    value = new String(ch, start, length);
}
In some cases the array ch contains the qName of the element but does not contain the entire value.
Example:
ch = [... , x, s, d, :, n, a, m, e, >, 1, 2, 3]
but the real value of xsd:name is 123456789
String responseString = Utils.getXml(url);
SAXParserFactory factory = SAXParserFactory.newInstance();
SAXParser saxParser = factory.newSAXParser();
handler = new SimpleHandler();
saxParser.parse(new InputSource(new StringReader(responseString)), handler);
List<Entit> list = handler.getList();
I have XML like this (of course the original XML is much bigger):
<root>
<el>
<xsd:name>11111111</xsd:name>
</el>
<el>
<xsd:name>22222222</xsd:name>
</el>
<el>
<xsd:name>123456789</xsd:name>
</el>
<el>
<xsd:name>333333333</xsd:name>
</el>
</root>
I get the error for just one value in the XML.
How can I fix that?
The characters method does not necessarily return the entire set of characters. You need to store the result each time characters is called, something like:
final StringBuilder sb = new StringBuilder();
@Override
public void characters(char ch[], int start, int length) throws SAXException {
sb.append(ch, start, length);
}
You then need to reset your StringBuilder (or whatever you are using) when you find an end element tag or a begin element tag or whatever the case may be.
Read the specification for characters:
"The Parser will call this method to report each chunk of character data. SAX parsers may return all contiguous character data in a single chunk, or they may split it into several chunks; however, all of the characters in any single event must come from the same external entity so that the Locator provides useful information."
Generally, what you should do is delete the text buffer when you see startElement or endElement. Usually you will do something with the current buffer when these are seen.
Any idea what I am doing wrong here?
This is the xml file
<text xml:space="preserve">{{Redirect|Anarchist|the fictional character|Anarchist (comics)}}
{{Redirect|Anarchists}}
{{Anarchism sidebar}}
{{Libertarianism sidebar}}
</text>
Now when I am parsing it with the SAX parser, this is my characters method, for example:
public void characters(char ch[], int start, int length) throws SAXException {
    System.out.println(text);
    if (text) {
        System.out.println(testData); // testData is a StringBuilder
        if (testData != null) {
            for (int j = start; j < (start + length); j++) {
                testData.append(ch[j]);
            }
        }
    }
    text = false;
}
This is my startElement method
public void startElement(String uri, String localname, String qName, Attributes attributes) throws SAXException {
if (qName.equalsIgnoreCase("text")) {
text = true;
}
}
but my characters function is called only once. I thought it would be called several times and then I could append the characters.
The "ignore whitespace" flag controls whether the XML parser considers whitespace in between XML elements to be significant or whether it should ignore it. As long as you don't have ignoreWhitespace set, the parser is quite correct in feeding all characters--whitespace or not--to your characters() method.
I'm using SAX parser in my Android application to read a few feeds a time. The script is executed as follows.
// Begin FeedLezer
try {
/** Handling XML **/
SAXParserFactory spf = SAXParserFactory.newInstance();
SAXParser sp = spf.newSAXParser();
XMLReader xr = sp.getXMLReader();
/** Send URL to parse XML Tags **/
URL sourceUrl = new URL(
BronFeeds[i]);
/** Create handler to handle XML Tags ( extends DefaultHandler ) **/
Feed_XMLHandler myXMLHandler = new Feed_XMLHandler();
xr.setContentHandler(myXMLHandler);
xr.parse(new InputSource(sourceUrl.openStream()));
} catch (Exception e) {
System.out.println("XML Parsing Exception = " + e);
}
sitesList = Feed_XMLHandler.sitesList;
String titels = sitesList.getMergedTitles();
And here are Feed_XMLHandler.java and Feed_XMLList.java, which I basically both just took from the web.
However, this code fails at times. I'll show some examples.
http://imm.io/media/2I/2IAs.jpg
It goes very well here. It even recognizes and displays apostrophes. Even when clicking the articles open, almost all of the text shows, so that's all good. The source feed is here. I can't control the feed.
http://imm.io/media/2I/2IB1.jpg Here it doesn't go so well. It does display the ï, but it chokes on the apostrophe (there's supposed to be 'NORAD' after the Waarom). The feed is here.
http://imm.io/media/2I/2IBQ.jpg This is the worst one. As you can see, the title only displays an apostrophe, whilst it is supposed to be a 'blablabla'. Also, the text ends in the middle of the line, without any special characters in the quote. The feed is here
In all cases, I have no control over the feed. I think the script does choke on special characters. How can I make sure SAX fetches all the strings correctly?
If anyone knows an answer to this, you really help me out a LOT :D
Thanks in advance.
This is from the FAQ of Xerces.
Why does the SAX parser lose some character data or why is the data split into several chunks? If you read the SAX documentation, you will find that SAX may deliver contiguous text as multiple calls to characters, for reasons having to do with parser efficiency and input buffering. It is the programmer's responsibility to deal with that appropriately, e.g. by accumulating text until the next non-characters event.
Your code is very well adapted from one of many XML parsing tutorials (like this one here). Now, the tutorial is good and all, but they fail to mention something very important...
Notice this part here...
public void characters(char[] ch, int start, int length)
        throws SAXException {
    if (in_ThisTag) {
        myobj.setName(new String(ch, start, length));
    }
}
I bet at this point you're checking booleans to mark which tag you're under and then setting a value in some kind of class you made, or something like that...
But the problem is, the SAX parser (which is buffered) will not necessarily give you all the characters between a tag's boundaries in one go. Say you have <tag>Lorem Ipsum... a really long sentence...</tag>; your SAX parser may call the characters function in chunks.
So the trick here is to keep appending the values to a string variable and then actually set (or commit) it to your structure when the tag ends (i.e. in endElement).
Example
@Override
public void endElement(String uri, String localName, String qName)
throws SAXException {
currentElement = false;
/** set value */
if (localName.equalsIgnoreCase("tag"))
{
sitesList.setName(currentValue);
currentValue = ""; //reset the currentValue
}
}
@Override
public void characters(char[] ch, int start, int length)
throws SAXException {
if (in_Tag) {
currentValue += new String(ch, start, length); //keep appending string, don't set it right here....maybe there's more to come.
}
}
Also, it would be better if you use StringBuilder for the appending, since that'll be more efficient....
Hope it makes sense! If it didn't, check this and here.