In our Cocoon environment we have a few forms with textareas. Once the user submits a form, an overview is displayed before the final submit is done.
Therefore, each form object's data is stored in POJOs.
If the user is on that overview page and decides to go back to the form, the form is filled with the already submitted data read from the POJOs. However, when filling the textarea with data from the Java object, some linebreaks and whitespaces are added to the data.
I checked the POJO's data for these linebreaks, but the String looks clean. Each whitespace character entered by the user is character 32, a simple space.
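For reference, this is roughly how I checked the stored value (pojo.getText() is just a stand-in for however the value is actually read from the POJO):

// Diagnostic sketch: print the code point of every character in the stored value.
// "pojo.getText()" is a placeholder, not the real accessor.
String formText = pojo.getText();
for (int i = 0; i < formText.length(); i++) {
    char c = formText.charAt(i);
    System.out.printf("%d: %d (%s)%n", i, (int) c, Character.getName(c));
}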
I also checked the Serializer (we use a custom one that extends Cocoon's AbstractSerializer), but no linebreaks or whitespaces are added by accident there.
When I use JavaScript to output the current content of that textarea, though, it contains linebreak characters ('\n') as well as the aforementioned additional whitespace.
My suspicion is that the conversion from Java's space character to HTML whitespace somehow fails.
These linebreaks appear instead of spaces, not inside a single word. They also change position depending on the textarea's size. They are not at the end of a line, so they can't be caused by wrapping.
Example:
User input "test test test test test" becomes "test\n [36x Space] test test test test"
Here's a thought... What do you use to actually output the page to the client? I'm not entirely familiar with the Cocoon environment but I assume you're using some sort of a "templating" engine (JSP? Velocity?). I'm talking about the actual file, on the server side, that has the textarea element; paste here the snippet of code that involves the textarea element and we'll see.
These extra linebreaks and whitespaces are typical of XSL transformations that were developed without awareness of such linebreak/whitespace issues.
It is likely that you use XSLT in your Cocoon application, and your stylesheets should probably be checked for this.
There are a number of well-known precautions you can take. You can start on SO (XSLT - remove whitespace from template) to get an idea of them.
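To check whether the whitespace comes from the stylesheet rather than from Cocoon itself, you can run the same stylesheet through a plain JAXP transform with output indentation explicitly switched off. A rough diagnostic sketch (the file names are made up):

import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class TransformCheck {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("form.xsl"));   // your stylesheet
        // Explicitly switch off output indentation, so any remaining whitespace
        // must come from the stylesheet templates or the source document.
        t.setOutputProperty(OutputKeys.INDENT, "no");
        t.transform(new StreamSource("form.xml"), new StreamResult(System.out));
    }
}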
Related
I am trying to display ŵ on my JSF page but am unable to do so. The text with the special characters is read from a properties file, but on my application screen it becomes something else. I tried to use entity values but am not succeeding. For example, if the original text is:
ŵyhsne klqdw dwql
then after replacing it with the entity or hex values:
&wcirc ;yhsne klqdw dwql, but on my page it is displayed as-is (the literal entity text).
I can only guess at your question. Please edit it and improve it.
If you are displaying it on the web, you should use &wcirc; (note: without spaces), but this also requires a font on the client side that supports this character.
If the string is in your code: replace the character with \u0175.
But probably the best way is to just use ŵ, whether in code, on the web, or in any file, and to ensure that such files (or sources) are interpreted as UTF-8 and that the pages you deliver are UTF-8. If you are not using UTF-8, check in a similar way that you are consistently using the correct encoding.
And sending a character doesn't mean it can be displayed. There is always the possibility that a font will not have all "special" characters in it.
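To illustrate the two options above, here is a minimal sketch showing the \u0175 escape and the equivalent numeric character reference (class and variable names are mine; writing to System.out with an explicit UTF-8 writer stands in for whatever output your page actually uses):

import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

public class WcircDemo {
    public static void main(String[] args) {
        // \u0175 is the Java escape for the character in question
        String text = "\u0175yhsne klqdw dwql";

        // Numeric character reference, safe in HTML regardless of page encoding
        String asReference = "&#" + (int) text.charAt(0) + ";" + text.substring(1);

        PrintWriter out = new PrintWriter(
                new OutputStreamWriter(System.out, StandardCharsets.UTF_8), true);
        out.println(text);         // the raw character; relies on UTF-8 end to end
        out.println(asReference);  // &#373;yhsne klqdw dwql
    }
}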
In my Java webapp, I create summary text from long HTML text. In the process of truncation, the HTML fragments in the string often break, producing an HTML string with invalid, broken fragments, like this example HTML string:
Visit this link <img src="htt
Is there any Java library that deals with this better, so that broken fragments like the one above are avoided?
Or could I let this be included in the HTML pages and somehow deal with it using client-side code?
Browsers will usually be able to deal with almost any garbage you feed them (as long as it isn't XHTML...). If the only thing that actually happens to the input (assuming it's valid HTML of some kind) is being sliced, then the main thing you have to worry about is getting rid of invalid opening tags; you won't be able to distinguish broken 'endings' of tags, since they aren't, in themselves, special in any way. I'd take the slice I've generated and parse it from the end: if I encounter a stray '<', I'd get rid of everything after it. Likewise, I'd keep track of the last opened tag; if the next closing tag after it doesn't close that exact tag, the closing tag was probably cut off, so I'd insert it.
This would still generate a lot of garbage, but would at least fix some rudimentary problems.
A better way would be to manage a stack of opened/closed tags and generate/remove the needed/broken/unnecessary ones as they emerge. A stack is the proper structure since HTML tags mustn't 'cross' (by the spec; AFAIR it has been this way since HTML 4), i.e. <span><div></span></div> isn't valid.
A much better way would be to splice the document after first parsing it as SGML/HTML/XML (depending on the exact HTML doctype); then you could just remove the nodes without damaging the structure.
Note that you can't actually know whether a tag is correct without knowing the exact algorithm used to generate this 'garbled' content.
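As a rough sketch of the stack idea (deliberately naive, not the exact algorithm above: it ignores comments, attribute values containing '>', and crossed tags):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagBalancer {

    // Void elements that never need a closing tag (subset)
    private static final Set<String> VOID_TAGS =
            Set.of("br", "img", "hr", "input", "meta", "link");

    private static final Pattern TAG = Pattern.compile("<(/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*>");

    public static String balance(String truncated) {
        // 1. Drop a stray, unterminated tag at the very end (e.g. <img src="htt)
        int lastOpen = truncated.lastIndexOf('<');
        if (lastOpen >= 0 && truncated.indexOf('>', lastOpen) < 0) {
            truncated = truncated.substring(0, lastOpen);
        }

        // 2. Track opened tags; whatever is still open at the end gets closed
        Deque<String> open = new ArrayDeque<>();
        Matcher m = TAG.matcher(truncated);
        while (m.find()) {
            String name = m.group(2).toLowerCase();
            if (VOID_TAGS.contains(name)) {
                continue;
            }
            if (m.group(1).isEmpty()) {
                open.push(name);
            } else if (!open.isEmpty() && open.peek().equals(name)) {
                open.pop();
            }
        }

        StringBuilder out = new StringBuilder(truncated);
        while (!open.isEmpty()) {
            out.append("</").append(open.pop()).append('>');
        }
        return out.toString();
    }
}

For the example in the question, balance() would drop the dangling <img src="htt fragment, and if an <a> had been left open it would append the missing </a>.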
I used owasp-java-html-sanitizer to fix those broken fragments and generate safe HTML markup from Java.
import org.owasp.html.PolicyFactory;
import org.owasp.html.Sanitizers;
PolicyFactory html_sanitize_policy = Sanitizers.LINKS.and(Sanitizers.IMAGES);
String safeHTML = html_sanitize_policy.sanitize(htmlString);
This seemed to be the easiest of all the solutions I came across.
In our application we have a textfield that is controlled by TinyMCE. If the customer pastes text from Word into the textfield, Oracle balks when we try to store this text in our database:
ORA-01461: can bind a LONG value only for insert into a LONG column
Cleaning the text in, say, Notepad does not produce any problems, so my guess is that the input string contains some kind of binary junk that Oracle interprets as a delimiter between the values used in the SQL insert string.
Upgrading our ancient TinyMCE will probably fix the problem, but I also want to ensure the text really is clean when it is passed to the lower layers. So I thought that I might check that the text is true ASCII, and if not, clean everything that does not pass as ASCII by looping through the lines in the input and doing the following:
line.replaceAll("[^\\p{ASCII}]", "")
Is this a viable solution, and if not, what are the pitfalls?
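For reference, the cleanup loop I have in mind would look roughly like this (just a sketch of the idea):

import java.util.ArrayList;
import java.util.List;

public class AsciiCleaner {

    // Strips every non-ASCII character, line by line
    // (note: this also drops accented letters and Word's typographic quotes/dashes).
    public static String cleanToAscii(String input) {
        List<String> cleaned = new ArrayList<>();
        for (String line : input.split("\r?\n", -1)) {
            cleaned.add(line.replaceAll("[^\\p{ASCII}]", ""));
        }
        return String.join("\n", cleaned);
    }
}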
What about cleaning the pasted content like I described here?
This might also remove junk.
Bit of a random one: I want to have a play with some NLP stuff, and I would like to:
Get all the text that will be displayed to the user in a browser from HTML.
My ideal output would not have any tags in it and would only have full stops (and any other punctuation used) and newline characters, though I can tolerate a fairly reasonable amount of failure in this (random other stuff ending up in the output).
If there were a way of inserting a newline or full stop in situations where the content is likely not to continue on, that would be considered an added bonus, e.g.:
items in a ul or option tag could be separated by full stops (or, to be honest, just ignored).
I am working in Java, but would be interested in seeing any code that does this.
I can (and will, if required) come up with something to do this; I just wondered if there was anything like this out there already, as it would probably be better than what I could come up with in an afternoon ;-).
An example of the code I might write if I do end up doing this would be to use a SAX parser to find content in p tags, strip it of any span or strong etc. tags, and add a full stop if I hit a div or another p without having had a full stop.
Any pointers or suggestions very welcome.
Hmmm ... almost any HTML parser could be used to create the effect you want -- just run through all of the tags and emit only the text elements, and emit a LF for the closing tag of every block element. As you say, a SAX implementation would be simple and straightforward.
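A sketch of such a handler using plain SAX callbacks might look like the following; it needs to be fed by an HTML-tolerant SAX parser (e.g. NekoHTML, mentioned in another answer), because real pages are rarely well-formed XML, and the set of block elements here is only an illustrative subset:

import java.util.Set;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Collects text content and emits a newline after block-level elements.
public class TextExtractor extends DefaultHandler {

    private static final Set<String> BLOCK =
            Set.of("p", "div", "li", "tr", "h1", "h2", "h3", "h4", "h5", "h6", "br");

    private final StringBuilder out = new StringBuilder();
    private boolean skip; // true while inside <script> or <style>

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        String name = qName.toLowerCase();
        if (name.equals("script") || name.equals("style")) {
            skip = true;
        }
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        String name = qName.toLowerCase();
        if (name.equals("script") || name.equals("style")) {
            skip = false;
        }
        if (BLOCK.contains(name)) {
            out.append('\n');
        }
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        if (!skip) {
            out.append(ch, start, length);
        }
    }

    public String getText() {
        return out.toString();
    }
}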
I would just strip out everything in <> tags, and if you want a full stop at the end of every sentence, check for closing tags and place a full stop there.
If you have
<strong> test </strong>
(and other tags that only change the look of the text), you could add conditions to not place a full stop there.
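As a rough sketch of that approach (regex-based, so it will mishandle scripts, comments and attribute values containing '>'):

public class NaiveStripper {

    public static String strip(String html) {
        return html
                // turn closing block-level tags into full stops, as suggested above
                .replaceAll("(?i)</(p|div|li|h[1-6])\\s*>", ". ")
                // drop every remaining tag, including <strong> and friends
                .replaceAll("<[^>]+>", "")
                // collapse the leftover whitespace
                .replaceAll("\\s+", " ")
                .trim();
    }
}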
HTML parsers seem to be a reasonable starting point for this.
There are a number of them; for example, HtmlCleaner and NekoHTML seem to work fine.
They are good because they fix up the tags, which lets you process them more consistently, even if you are just removing them.
But as it turns out, you probably want to get rid of script tags, metadata, etc., and in that case you are better off working with well-formed XML, which these libraries produce for you from "wild" HTML.
There are many SO questions relating to this (like this one); you should search for "HTML parsing" though ;-)
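As a starting point, extracting the visible text with HtmlCleaner is roughly this (method names quoted from memory of the HtmlCleaner API, so double-check them against the current docs):

import org.htmlcleaner.HtmlCleaner;
import org.htmlcleaner.TagNode;

public class CleanAndExtract {
    public static String textOf(String wildHtml) {
        HtmlCleaner cleaner = new HtmlCleaner();
        TagNode root = cleaner.clean(wildHtml); // fixes up the "wild" HTML
        return root.getText().toString();       // concatenated text content
    }
}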
I have built a web application that can be seen as an overcomplicated application form. There are a bunch of text areas with a given character limit.
The text is queried from the DB and inserted into the PDF template created in iReports. This works fine, but the major pain is overflowing text.
The maximum number of characters is set based on 'average' text. But sometimes people prefer to write in CAPS or add plenty of linefeeds to format their text. These then cause the user's text to overflow the space given in the PDF. Unfortunately, the PDF document must look like a real application form, so I cannot allow unlimited space.
What kinds of approaches have you used to tackle this?
Clean/restrict user input?
Calculate the space requirement of the text based on font metrics?
Provide preview of the PDF? (too bad users are not allowed to change their input after submission...)
Ideally, calculate the requirement based on metrics. I don't know how iReports handles text, but with iText it lays everything out itself; you just present the data as a streaming document, so you don't have to worry about overflowing text.
However, iReport may not support that, or you may need to have the PDF layout fit within certain bounds. I'd try to clean the input (i.e. if it's all caps, convert it to lowercase/sentence case/proper case) and strip extra whitespace. If cleaning the input can't be done reliably, or people are still getting past that, I'd also restrict it.
As a last resort, I'd present the PDF for the user to authorize. Really, users shouldn't be given more work to do, and they're not going to do it anyway.
Your own suggested solutions to your problem are all good. Probably the most important question to have answered is: what should your PDF look like when the data to be displayed in a field won't fit? Do you ever need the "full answer" for anything else? When you know the answers to these, you'll have your options reduced.
For example, if a field must be limited to half a page and users sometimes enter more than half a page of text, you can either:
1) limit the user input - on submission, calculate the size (using font metrics as you said; see the sketch after this list) and reject the submission until it is corrected. This assumes you can legitimately force the user to reduce their data entry.
2) accept the user input and truncate in the display of this report. Some systems use "..." to indicate data has been truncated, and can provide a hyperlink (even within the PDF) to get more information.
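For option 1), one way to estimate whether text fits a fixed box is AWT font metrics with a crude greedy wrap. This is only an approximation (the wrapping and font rendering inside the PDF engine will not match AWT exactly), but it is usually good enough for a reject-or-accept check:

import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class TextFitChecker {

    // Returns true if the text, greedily word-wrapped, fits in the given box (pixels).
    public static boolean fits(String text, Font font, int boxWidth, int boxHeight) {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        try {
            FontMetrics fm = g.getFontMetrics(font);
            int lines = 0;
            for (String paragraph : text.split("\n", -1)) {
                lines++;
                int lineWidth = 0;
                for (String word : paragraph.split(" ")) {
                    int w = fm.stringWidth(word + " ");
                    if (lineWidth + w > boxWidth && lineWidth > 0) {
                        lines++;          // wrap to a new line
                        lineWidth = w;
                    } else {
                        lineWidth += w;
                    }
                }
            }
            return lines * fm.getHeight() <= boxHeight;
        } finally {
            g.dispose();
        }
    }
}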
Providing a preview would work really well, but only if the users are good at checking and correcting and your system can handle the extra load this will generate.
Do you have control of the font that is used when generating the PDF? If so, I would look for a font in the monospace family. This will give you a consistent length for a given number of chars, regardless of punctuation, capitalization, etc.