Rendering the US (unit separator) character on a Unix machine - Java

I have a Java application that produces a file on a Unix machine; each string contains multiple US (unit separator, 0x1F) characters.
Locally, when I run it in Eclipse on a Windows machine, it displays fine on the console:
1▼somedata▼somedata▼0▼635064▼0▼somedata▼6
But when I run the program on the Unix machine, the content of the file appears as:
1â¼N/Aâ¼somedataoâ¼somedataâ¼somedata
Changing the LANG variable to any of the values listed by locale -a does not seem to help.

Looks like a character set mismatch. On Linux you most probably have UTF-8, while Java uses UTF-16 internally. Try converting from UTF-16 to UTF-8 with iconv and see how it looks on Linux:
cat file | iconv -f UTF-16 -t UTF-8
But actually the output would look much worse if the file really were UTF-16, so maybe it is simply a font mismatch. Still, you can experiment with the character encoding (find out what the source encoding is and convert to UTF-8) if that is the issue. Or maybe your source is UTF-8 and the destination uses some local encoding.
This makes sense because your special character appears as two characters on the Unix machine, which suggests the source is most likely UTF-8 and the Unix side is using an encoding where each byte is a single character.
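If the file is supposed to end up as UTF-8 regardless of platform, one way to take the default charset out of the equation is to write it from Java with an explicit encoding. A minimal sketch, assuming a made-up output path and record contents:

import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class UnitSeparatorWriter {
    // ASCII 0x1F, the US (unit separator) control character
    private static final String US = "\u001F";

    public static void main(String[] args) throws IOException {
        String record = String.join(US, "1", "somedata", "somedata", "0", "635064");

        // Explicit charset: the bytes on disk no longer depend on
        // the platform default (file.encoding / LANG).
        try (Writer out = Files.newBufferedWriter(Paths.get("out.txt"), StandardCharsets.UTF_8)) {
            out.write(record);
        }
    }
}

Note that 0x1F itself is plain ASCII (a single byte in both UTF-8 and Latin-1), so two-character garbage like â¼ suggests the file may actually contain the glyph ▼ (U+25BC) rather than the control character; cat -v on the Unix side shows a real 0x1F as ^_.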

Related

Which encoding shall I specify to `InputStreamReader()` and `OutputStreamWriter()` for a Windows text file?

When using Java (on Linux or Windows) to read or write a Windows text file (where, for example, a new line is encoded as \r\n), which encoding shall I specify to InputStreamReader() and OutputStreamWriter(), so that the conversion between the newline character in Java and \r\n in the Windows text file (and possibly other conversions) is done correctly?
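Worth separating the two concerns here: the charset you pass only controls the byte-to-character mapping; \r\n handling is done by line-oriented readers and writers. A sketch assuming the file uses the common Windows encoding windows-1252 (an assumption - verify against your actual files):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Paths;

public class WindowsTextFile {
    public static void main(String[] args) throws IOException {
        Charset cs = Charset.forName("windows-1252"); // assumed legacy Windows encoding

        try (BufferedReader in = Files.newBufferedReader(Paths.get("in.txt"), cs);
             BufferedWriter out = Files.newBufferedWriter(Paths.get("out.txt"), cs)) {
            String line;
            // readLine() accepts \n, \r, or \r\n, so reading is line-ending agnostic.
            while ((line = in.readLine()) != null) {
                out.write(line);
                out.write("\r\n"); // write Windows line endings explicitly
            }
        }
    }
}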

Java - working with different cmd charsets

I want to read a file path from the user in a Java console application;
some of the file paths may contain Hebrew characters.
How can I read the input from the command line when I don't know the encoding charset?
I have spent some time on the web and haven't found any solution that works dynamically on every platform.
(Screenshot: the program running in the console.)
If you are using Windows, you need to check the terminal encoding first to make sure it supports Hebrew.
To do this, just type chcp in the console;
the output shows the active code page number.
If you see a number other than 28598, type chcp 28598.
Now your console encoding is set to Hebrew and you should be able to enter the path in Hebrew without problems.
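On the Java side, a hedged sketch: java.io.Console (available when the program is attached to a real terminal) decodes input with the console's own encoding, which avoids guessing the charset; the ISO-8859-8 fallback below is an assumption matching code page 28598:

import java.io.BufferedReader;
import java.io.Console;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;

public class ReadPath {
    public static void main(String[] args) throws IOException {
        Console console = System.console();
        String path;
        if (console != null) {
            // Console uses the terminal's encoding (e.g. cp28598 after chcp 28598)
            path = console.readLine("Enter file path: ");
        } else {
            // No attached console (IDE, redirected input): fall back to an explicit charset.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(System.in, Charset.forName("ISO-8859-8")));
            path = in.readLine();
        }
        System.out.println("Got: " + path);
    }
}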

Cannot read special characters from the filename

I have a situation where a Linux-mounted NAS contains filenames with Scandinavian characters like ä, ö, and å. When I list the files with ls, I see all those characters as question marks (?). If I run ls -b, I see an escaped version of the filename, with characters like this: \303\205
I need to read those files and their filenames from my Java code, but I'm not able to. If I use File.listFiles to list the files, I get question marks instead of the correct characters. If I convert the File to a Path, I get an exception:
java.nio.file.InvalidPathException: Malformed input or input contains unmappable characters
I'm able to get rid of the exception if I set -Dsun.jnu.encoding=UTF-8 when running it, but then I still get question marks instead of ä, ö, or å.
I tried mounting the NAS differently, with settings like check=relaxed, but no luck there.
All help is appreciated.
OK, solved this one. If I log in to the server (which I use to run the code) from the Linux machine, it DOES NOT set LC_CTYPE, BUT if I log in from my Mac it DOES set it to UTF-8. So how the application behaves on the server depends on the SSH client I use to run it...
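For diagnosing this kind of problem, a small sketch that prints the settings the JVM uses when decoding filenames and then lists a directory (note sun.jnu.encoding is an internal, undocumented property, so treat it as an implementation detail):

import java.io.File;
import java.nio.charset.Charset;

public class EncodingProbe {
    public static void main(String[] args) {
        System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
        System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
        System.out.println("defaultCharset   = " + Charset.defaultCharset());

        // List the directory given as the first argument, if any,
        // to see how its filenames survive the decoding.
        if (args.length > 0) {
            File[] files = new File(args[0]).listFiles();
            if (files != null) {
                for (File f : files) {
                    System.out.println(f.getName());
                }
            }
        }
    }
}

Run it from both SSH sessions; if LC_CTYPE differs between them, the printed encodings should differ too.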

Java argument with Greek characters in Windows

I have created a simple .jar file which takes as an argument a string with Greek characters and prints it to a file.
However, I have the following issue:
When I execute the jar file from my local Windows machine, the string is properly passed as an argument to the jar file and the output file contains the Greek characters I entered.
When I try to execute the same jar file on a Windows VM, the Greek characters are not properly encoded and the output file contains unreadable characters.
I have even set the command prompt in the VM to code page 1253 (chcp 1253) and set the environment variable JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8, with no luck...
Any suggestion?
Running chcp 1253 sets your console code page to Windows-1253, and yet you set Java not to use it...
If you are running your program via a batch script, save it as UTF-8 and add -Dfile.encoding=UTF-8 to parameters for the java command.
If you are running your program via the console, run chcp 65001 to switch the console to UTF-8. Also, you set the variable correctly and can leave it that way, but you can also run Java with the option set explicitly:
chcp 65001
java -Dfile.encoding=UTF-8 -jar binks.jar
EDIT: If Windows is still complaining and/or messing things up, try changing 65001 to 1253 and UTF-8 to Windows-1253. You'll lose support for most of Unicode, but there's a greater chance it will work.
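The output half of the problem can be pinned down in code regardless of console settings: write the arguments with an explicit charset, so only the console-to-JVM argument decoding remains platform-dependent. A sketch (the output filename is made up):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class GreekArgs {
    public static void main(String[] args) throws IOException {
        // Explicit UTF-8 bytes: the file contents no longer depend on file.encoding.
        Files.write(Paths.get("out.txt"),
                String.join(" ", args).getBytes(StandardCharsets.UTF_8));
    }
}

Bear in mind the arguments themselves are decoded by the JVM before main() runs, using the native code page, so a wrong console code page can corrupt them before any of this code sees them.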

Junk character output even after encoding

So, I have basically been trying to use Spanish characters in my program, but wherever I used them, Java would print '?'.
I am using Slackware, and executing my code there.
I updated lang.sh, and added: export JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8
After this, when I tried printing, it no longer printed question marks, but other junk characters instead. I printed the default Charset to the screen, and it had been set successfully, but the text is still not printed properly.
Help?
Thanks!
EDIT: I'm writing the code on Windows in NetBeans and executing the .class or .jar on Slackware.
Further, I cannot seem to execute the locale command; I get the error "bash: locale: command not found".
This is what confuses me: when I echo any special characters in the Slackware console, they are displayed perfectly, but when I run a Java program that simply prints its command line arguments (and I enter the special characters as command line input), it outputs garbage.
If you are using an SSH client such as PuTTY, check that it is using a UTF-8 charset as well.
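If the terminal is confirmed to be UTF-8, another option is to force standard output to UTF-8 from inside the program, independent of file.encoding; a sketch with made-up sample text:

import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class Utf8Out {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Replace stdout with a UTF-8 PrintStream (autoflush enabled).
        System.setOut(new PrintStream(System.out, true, "UTF-8"));
        System.out.println("añejo piñata");
        for (String arg : args) {
            System.out.println(arg); // echo command line arguments as received
        }
    }
}

As with the Greek-arguments question above, command-line arguments are decoded before main() runs, so if they were mangled at that stage, re-encoding the output will not repair them.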
