After submitting an input form (JSP) on one server I can correctly see umlaut characters, while on the other they are replaced with strange ones. web.xml correctly sets the encoding filter for UTF-8. What can be the issue? Are maybe JVM variables in play here? The app is running on Pivotal Cloud Foundry.
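For reference, a typical UTF-8 filter declaration in web.xml looks roughly like this (a sketch using Spring's CharacterEncodingFilter; the filter name is illustrative):

<filter>
    <filter-name>encodingFilter</filter-name>
    <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
    <init-param>
        <param-name>encoding</param-name>
        <param-value>UTF-8</param-value>
    </init-param>
    <init-param>
        <param-name>forceEncoding</param-name>
        <param-value>true</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>encodingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

If both servers run the same artifact with this filter, compare the JVM default encoding next; on Pivotal Cloud Foundry with the Java buildpack it can typically be set via the JAVA_OPTS environment variable (e.g. -Dfile.encoding=UTF-8).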
Related
I'm developing a web application with the Spring Framework under the Wildfly 10 application server. I use the Eclipse IDE.
I ran into character encoding problems which I wasn't able to solve: some of the characters don't appear properly when I send them from the server side to the client side.
The test String I use is "árvíztűrő tükörfúrógép".
When I read the value from database and send it to the client side, everything appears fine.
When I set the String value in the Controller as a ModelAndView object, it appears as '??rv?zt?±r?? t??k?¶rf??r??g?©p'
When I send the value from the client side via AJAX as a POST variable and send it back to the client side, it appears as 'árvízt?r? tükörfúrógép'.
I set all the .jsp files' encoding to UTF-8: <%@ page contentType="text/html" pageEncoding="UTF-8" %>
In Eclipse I set all the Maven modules' text file encoding to UTF-8. All the files are in UTF-8.
What did I miss? What else should I set to get the String value right on client side? Should I set the character encoding in Wildfly 10 somehow?
Could somebody help me? If you need further information, please don't hesitate to ask. Thank you.
EDIT: Setting the character encoding as a Maven property solved the second case. Now I only have problems with the third case:
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
After nearly two months of searching I was able to find a solution to my problem. In addition to configuring the server and Spring, I needed to add two more things:
On the web module of my Maven project I had to set the character encoding of the source:
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
Also, when I sent a JSONObject to the client side, I had to set the character encoding:
@RequestMapping(value = "/historyContent.do", produces = { "application/json; charset=UTF-8" })
And finally I can see what I wanted.
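Putting those two pieces together, the handler might look roughly like this (a sketch assuming org.json's JSONObject; the method name and payload are illustrative):

import org.json.JSONObject;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@RequestMapping(value = "/historyContent.do", produces = { "application/json; charset=UTF-8" })
@ResponseBody
public String historyContent() {
    // With charset=UTF-8 in produces, Spring writes the response bytes as UTF-8
    JSONObject json = new JSONObject();
    json.put("text", "árvíztűrő tükörfúrógép");
    return json.toString();
}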
I deployed an existing Maven project to my Tomcat server in a Windows 7 environment. I'm using Tomcat 7 and spring-security-core 3.1.0.
However, every time I log in to my webapp, I receive an error:
java.lang.IllegalArgumentException: Non-hex character in input
The code works perfectly fine in a Linux environment, so I was thinking it's because I'm using Windows 7 in my local environment. Looking around on the internet, I saw that it's an encoding issue between Linux and Windows.
I tried setting
JAVA_TOOL_OPTIONS -Dfile.encoding=UTF8
but that didn't help. Please help me out. Thanks in advance!
Most likely, when you log in, events happen in this order:
Spring selects an entity from the DB by username.
Spring must check the entered password for a match with the stored encoded password.
To check for a match, Spring uses the PasswordEncoder you have most likely configured.
Your password encoder expects the stored encoded password to be a hexadecimal char sequence (previously encoded by this PasswordEncoder). Thus, it tries to decode the CharSequence into a byte[], but fails (source).
The solution is to persist users with a previously encoded password, e.g. one produced by BCryptPasswordEncoder.
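A minimal sketch of that, assuming BCryptPasswordEncoder (part of spring-security-crypto) and illustrative values:

import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;

public class EncodeBeforePersist {
    public static void main(String[] args) {
        PasswordEncoder encoder = new BCryptPasswordEncoder();
        // Persist this encoded value, never the raw password
        String encoded = encoder.encode("rawPassword");
        System.out.println(encoded);
        // At login, Spring Security calls matches() with the submitted
        // raw password and the stored encoded value
        System.out.println(encoder.matches("rawPassword", encoded)); // true
    }
}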
Alex Derkach's answer is right for me!
In my case I have a DB with passwords stored in plain text (development), e.g. user=root, psw=root.
So when I commented out (deleted) .passwordEncoder(new StandardPasswordEncoder("53c433t")); it worked!
But this is wrong: passwords must be stored in encrypted form!
A possible reason for this is mixing password encoders. There are different implementations of PasswordEncoder, and if you use, for example, SymmetricPasswordEncoder for encoding and StandardPasswordEncoder for matching, you may get this exception.
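As an illustration of the mismatch (a sketch; the exact exception message varies by version): a BCrypt hash contains non-hex characters such as '$', so StandardPasswordEncoder fails while trying to hex-decode it during matches():

import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.StandardPasswordEncoder;

public class EncoderMismatch {
    public static void main(String[] args) {
        // Stored value was produced by one encoder...
        String stored = new BCryptPasswordEncoder().encode("secret");
        // ...but matched with another: StandardPasswordEncoder hex-decodes
        // the stored value and throws IllegalArgumentException here
        new StandardPasswordEncoder("53c433t").matches("secret", stored);
    }
}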
I have a JSP which has an option for uploading a file. In my case I uploaded a file whose name is a combination of English and umlaut characters; the name is displayed on the next JSP. It displays properly (for example üß_file.xls) locally, but the same code displays it as ?_file.xls in the higher environment, i.e. the test environment. I have tried three options:
I set the encoding to UTF-8 in the page directive on the first line of my JSP.
I changed the html:form attribute (accept-charset) to UTF-8.
I included a SetCharacterEncoding servlet filter which sets the response content type to UTF-8 and calls request.setCharacterEncoding("UTF-8"), including the web.xml change with the param that forces the JSP patterns to UTF-8 encoding (a minimal sketch of such a filter follows this list).
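For reference, a minimal version of such a filter (a sketch; Tomcat's examples ship a similar SetCharacterEncodingFilter):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class SetCharacterEncodingFilter implements Filter {

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        // Must run before any request parameter is read, or it has no effect
        request.setCharacterEncoding("UTF-8");
        response.setContentType("text/html; charset=UTF-8");
        chain.doFilter(request, response);
    }

    public void destroy() {}
}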
Please suggest some solutions to this issue in the test environment (it works fine in DEV and local environments).
Have you checked the encoding of the servlet container? E.g. Tomcat might use the platform (OS) encoding, which might not be UTF-8.
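If the container is the culprit, on Tomcat the URI encoding can be pinned explicitly on the connector in server.xml (the other attributes shown are the stock defaults):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           URIEncoding="UTF-8" />

Note that URIEncoding only covers URI/query-string parameters; POST bodies still need request.setCharacterEncoding, e.g. via the filter described above.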
I am using a Java library (edtftpj) to FTP a file from a web app hosted on a Tomcat server to an MVS system.
The FTP transfer mode is ASCII and transfer is done using FTP streams. The data from a String variable is stored into an MVS dataset.
The problem is that all the ampersand characters get converted to &amp;. I have tried various escape characters, including \&, ^& and X'50' (the hex value), but none of them helped.
Does anyone have any idea how to escape the ampersands, please?
Nothing in the FTP protocol would cause this encoding behavior.
Representing & as &amp; is an XML-based escaping representation. Other systems might use the same scheme, but as a standard, this is an XML standard encoding.
Something in the reading of the data and writing of the data thinks it should be escaping this information and is doing the encoding.
If anything on the MVS system is using Java it is probably communicating via SOAP with some other connector, which implies XML, which could be causing the escape sequence to be happening.
Either way, the FTP protocol itself is not part of the problem. ASCII transfer should only translate things like line endings; & is already a valid ASCII character and would not be affected. It is the MVS system that is doing this escaping, if anything.
Binary transfer is preferred in almost every case, since it doesn't do any interpretation or encoding of the raw bytes.
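A sketch of forcing binary mode, assuming edtftpj's FTPClient API (host, credentials and dataset name are placeholders):

import com.enterprisedt.net.ftp.FTPClient;
import com.enterprisedt.net.ftp.FTPTransferType;

public class BinaryUpload {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.setRemoteHost("mvs.example.com");
        ftp.connect();
        ftp.login("user", "password");
        // Binary mode: bytes pass through untouched, with no ASCII/EBCDIC
        // translation and no chance of escaping on the wire
        ftp.setType(FTPTransferType.BINARY);
        ftp.put("local.txt", "REMOTE.DATASET");
        ftp.quit();
    }
}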
Using FTP in ASCII mode to/from MVS (z/OS) will always perform code page conversions (i.e. ASCII <-> EBCDIC) on the data connection. Thus it's very important to set up the connection with the appropriate parameters depending on the dataset type and code pages. Example:
site SBD=(IBM-037,ISO8859-1)
site TRAck
site RECfm=FB
site LRECL=80
site PRImary=5
site SECondary=5
site BLKsize=6233
site Directory=50
As an alternative, use BINARY mode and manually perform the conversions with some of the standard tools or libraries on the receiving end, as sketched below.
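A minimal sketch of that manual conversion in Java, using the IBM-037 charset that ships with standard JDKs (dataset and record handling omitted):

import java.nio.charset.Charset;

public class CodePageConvert {
    public static void main(String[] args) {
        // The code page named in the site command above: IBM-037 (EBCDIC)
        Charset ebcdic = Charset.forName("IBM037");
        String text = "HELLO & GOODBYE";
        byte[] hostBytes = text.getBytes(ebcdic);         // encode for MVS
        String roundTrip = new String(hostBytes, ebcdic); // decode back
        System.out.println(roundTrip);
    }
}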
Ref links:
1. Preset commands to tackle codepage problem.
2. Converting ASCII to EBCDIC via FTP on MVS Host.
3. Transferring Files to and from MVS.
4. FTP code page conversion.
5. FTP File Transfer Protocol and Z/OS (pdf).
On the web site I am trying to help with, users can type a URL into the browser containing Chinese characters, like the following:
http://localhost:8080?a=测试
On server, we get
GET /?a=%E6%B5%8B%E8%AF%95 HTTP/1.1
As you can see, it's UTF-8 encoded, then URL-encoded. We can handle this correctly by setting the encoding to UTF-8 in Tomcat.
However, sometimes we get Latin-1 encoding from certain browsers:
http://localhost:8080?a=ß
turns into
GET /?a=%DF HTTP/1.1
Is there any way to handle this correctly in Tomcat? It looks like the server has to do some intelligent guessing. We don't expect to handle Latin-1 correctly 100% of the time, but anything is better than what we are doing now, which is assuming everything is UTF-8.
The server is Tomcat 5.5. The supported browsers are IE 6+, Firefox 2+ and Safari on iPhone.
Unfortunately, UTF-8 encoding is a "should" in the URI specification, which seems to assume that the origin server will generate all URLs in such a way that they will be meaningful to the destination server.
There are a couple of techniques that I would consider; all involve parsing the query string yourself (although you may know better than I whether setting the request encoding affects the query string to parameter mapping or just the body).
First, examine the query string for lone "high bytes": in valid UTF-8, any byte above 0x7F is part of a multi-byte sequence of two or more bytes (the Wikipedia entry has a nice table of valid and invalid bytes).
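A sketch of that check, assuming the raw query-string bytes are available: decode strictly as UTF-8 and fall back to Latin-1 when the decoder reports malformed input:

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CodingErrorAction;

public class CharsetGuess {

    static String decode(byte[] raw) {
        try {
            // Strict UTF-8 decode: malformed sequences throw instead of
            // being silently replaced with U+FFFD
            return Charset.forName("UTF-8").newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT)
                    .decode(ByteBuffer.wrap(raw))
                    .toString();
        } catch (CharacterCodingException e) {
            // A lone high byte like 0xDF (ß) is invalid UTF-8
            return new String(raw, Charset.forName("ISO-8859-1"));
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(decode(new byte[] { (byte) 0xDF })); // ß
        System.out.println(decode("测试".getBytes("UTF-8")));    // 测试
    }
}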
Less reliable would be to look at the "Accept-Charset" header in the request. I don't think this header is required (I haven't checked the HTTP spec to verify), and I know that Firefox, at least, will send a whole list of acceptable values. Picking the first value in the list might work, or it might not.
Finally, have you done any analysis on the logs, to see if a particular user-agent will consistently use this encoding?