So, I have basically been trying to use Spanish characters in my program, but wherever I use them, Java prints out '?'.
I am using Slackware, and executing my code there.
I updated lang.sh, and added: export JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8
After this, when I tried printing, it no longer printed question marks, but other junk characters instead. I printed the default charset on screen, and it has been successfully set, but the output is still not printed properly.
Help?
Thanks!
EDIT: I'm writing the code in NetBeans on Windows, and executing the .class or .jar on Slackware.
Further, I cannot seem to execute the locale command. I get the error "bash: locale: command not found".
This is what confuses me: when I echo any special characters on the Slackware console, they are displayed perfectly, but when I run a Java program that simply prints its command-line arguments (and I enter the special characters as command-line input), it outputs garbage.
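For reference, here is a minimal sketch of the kind of test program described above. It prints the default charset and each argument's Unicode code points, so you can tell whether the characters were already mangled before Java decoded them (the class and method names are just placeholders):

```java
import java.nio.charset.Charset;

// Diagnostic sketch: show the JVM's default charset and the exact code
// points Java received for each command-line argument.
public class ArgDump {

    // Render an argument followed by its code points, e.g. "ñ -> U+00F1".
    static String dump(String arg) {
        StringBuilder sb = new StringBuilder(arg).append(" ->");
        for (int i = 0; i < arg.length(); i++) {
            sb.append(String.format(" U+%04X", (int) arg.charAt(i)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println("file.encoding   = " + System.getProperty("file.encoding"));
        System.out.println("default charset = " + Charset.defaultCharset());
        for (String arg : args) {
            System.out.println(dump(arg));
        }
    }
}
```

If ñ shows up as two code points (U+00C3 U+00B1), the argument bytes were UTF-8 but the JVM decoded them with a single-byte charset, which points at the locale rather than at your code.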
If you are using an ssh client such as PuTTY, check that it is using a UTF-8 charset as well.
Related
I want to read a file path from the user in a Java console application,
and some of the file paths may contain Hebrew characters.
How can I read the input from the command line when I don't know the encoding charset?
I have spent some time on the web and have not found any relevant solution that works on every platform.
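One partial option (it does not solve the "unknown encoding" problem, only makes the choice explicit) is to read stdin through a reader with a stated charset instead of the platform default; the charset could then come from a system property or argument. A sketch, where the class name and the UTF-8 default are my assumptions:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class PathReader {

    // Read one line (the path) from the given stream with an explicit charset,
    // instead of relying on whatever the platform default happens to be.
    static String readPath(InputStream in, Charset cs) throws IOException {
        return new BufferedReader(new InputStreamReader(in, cs)).readLine();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Enter path:");
        String path = readPath(System.in, StandardCharsets.UTF_8);
        System.out.println("Got: " + path);
    }
}
```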
[Screenshot of the program running in the console]
If you are using Windows, you first need to check the terminal encoding to make sure it supports Hebrew.
To do this, just type chcp in the console.
The output should report code page 28598.
If you see a different number, run chcp 28598.
Now your console encoding is set to Hebrew (ISO 8859-8) and you should be able to type the path in Hebrew without getting any exception.
I am trying to print the "white smiling face" to the console window using the following line of code in Java:
System.out.println( '\u263A' );
I do not get the smiley but some other character that looks a little like a question mark.
I am running Windows 7 Pro with JDK and JRE version 1.8.0_66. Any hints as to why?
Note: I am using the Consolas font in the console window, which does map that code point to the smiley glyph according to the Character Map dialog.
This is not really a problem in your code. As commenters have pointed out, there is a difference between writing a Unicode code point and how your applications or OS choose to render a sequence of bytes as a character. Here is what I get on Mac:
> javac TestWhiteSmilingFace.java && java TestWhiteSmilingFace
☺
The Windows console does not support Unicode output though. Instead, it operates on Windows Code Pages.
If you are willing to pipe output to a separate file and then open it in Notepad, then here is an approach that has worked successfully for me.
Start cmd.exe with the /U option. As discussed in the cmd documentation, this option forces command output redirected to a file to be in Unicode.
Redirect the command output to a file, i.e. java TestWhiteSmilingFace > TestWhiteSmilingFace.txt.
Open the file in Notepad, i.e. notepad TestWhiteSmilingFace.txt.
This prior answer discusses the Windows console Unicode limitation in more detail and also suggests using the PowerShell Integrated Scripting Environment as a potential workaround.
Printing Unicode characters to the PowerShell prompt
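If redirecting to a file is acceptable, you can also take the platform default charset out of the equation on the Java side by writing through a PrintStream with an explicit UTF-8 encoding. This only controls the bytes Java emits, not how the console renders them; the helper name below is a placeholder:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;

public class SmileyOut {

    // Write the smiley through a PrintStream with an explicit UTF-8 charset,
    // so the bytes produced do not depend on the platform default.
    static void printSmiley(OutputStream out) throws IOException {
        PrintStream ps = new PrintStream(out, true, "UTF-8");
        ps.println('\u263A');
    }

    public static void main(String[] args) throws IOException {
        printSmiley(System.out); // redirect to a file and open it in Notepad
    }
}
```

U+263A encodes to the UTF-8 byte sequence E2 98 BA, which Notepad will render correctly when it detects the file as UTF-8.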
I have a situation where a Linux-mounted NAS contains filenames with Scandinavian characters like ä, ö, and å. When I list the files with ls, I see all those characters as question marks (?). If I run ls -b, I see an encoded version of the filename, with characters like this: \303\205
I need to read those files and their filenames from my Java code but I'm not able to. If I use File.listFiles to list files I'm getting question marks instead of correct characters. If I convert File to Path I'm getting exception:
java.nio.file.InvalidPathException: Malformed input or input contains unmappable characters
I am able to get rid of the exception if I set -Dsun.jnu.encoding=UTF-8 when running it, but then again I get question marks instead of ä, ö, or å.
I tried mounting the NAS differently, with settings like check=relaxed, but no luck there.
All help is appreciated.
OK, solved this one. If I log in to the server (which I use to run the code) from Linux, it DOES NOT set LC_CTYPE, but if I log in from my Mac, it DOES set it to UTF-8. So how the application behaves on the server depends on the SSH client I use to run it...
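For anyone debugging the same thing, here is a small sketch that prints the two properties involved. file.encoding controls byte-to-char conversion for file contents, while sun.jnu.encoding (a JDK-internal, undocumented property) controls how the JVM decodes file names from the OS; on Linux both are normally derived from LC_CTYPE/LANG, which is exactly why the SSH client's locale made the difference:

```java
import java.nio.charset.Charset;

public class EncodingCheck {

    // Collect the encoding-related settings that affect filename decoding.
    static String report() {
        return "file.encoding    = " + System.getProperty("file.encoding") + "\n"
             + "sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding") + "\n"
             + "default charset  = " + Charset.defaultCharset();
    }

    public static void main(String[] args) {
        System.out.println(report());
    }
}
```

Run it over both SSH clients; if the reported charsets differ, the fix belongs in the login environment (LC_CTYPE/LANG), not in the Java code.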
I have created a simple .jar file which takes a string with Greek characters as an argument and prints it to a file.
However, I have the following issue:
When I execute the jar file from my local Windows machine, the string is properly passed as an argument to the jar file, and the output file contains the Greek characters I entered.
When I try to execute the same jar file in a Windows VM, the Greek characters are not properly encoded and the output file contains unreadable characters.
I have even set the command prompt in the VM to chcp 1253 and set an environment variable JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8, with no luck...
Any suggestion?
Running chcp 1253 sets your console code page to Windows-1253, and yet you set Java to not use it...
If you are running your program via a batch script, save it as UTF-8 and add -Dfile.encoding=UTF-8 to parameters for the java command.
If you are running your program via the console, run chcp 65001 to switch the console to UTF-8. Also, since you set the variable correctly, you can leave it that way, but you can also run Java with this option set explicitly:
chcp 65001
java -Dfile.encoding=UTF-8 -jar binks.jar
EDIT: If Windows is still complaining and/or messing stuff up, try changing 65001 to 1253 and UTF-8 to Windows-1253. You'll lose support for most of Unicode, but there's a greater chance it will work.
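You can also sidestep file.encoding entirely when writing the output file by opening the writer with an explicit charset; then only the decoding of the console argument itself can still go wrong. A sketch, where the class name and out.txt are placeholders:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class GreekWriter {

    // Write text with a hard-coded UTF-8 charset, independent of file.encoding.
    static void writeTo(OutputStream out, String text) throws IOException {
        try (Writer w = new OutputStreamWriter(out, "UTF-8")) {
            w.write(text);
        }
    }

    public static void main(String[] args) throws IOException {
        writeTo(new FileOutputStream("out.txt"), args.length > 0 ? args[0] : "");
    }
}
```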
As described in the question: if I view a file on Unix, I see special characters like ^M at the end of every line, but if I view the same file in Eclipse, I do not see those special characters.
How can I remove those characters from the file? If I am using Eclipse for editing the file, do I have to make any specific changes in the Eclipse preferences?
Any guidance would be highly appreciated.
Update:
Yes, indeed it was a carriage-return issue, and the following command helped me sort it out:
dos2unix file1.sh > file2.sh — file2.sh will be the resulting file, and it will not have any carriage-return characters.
You may possibly get a warning like:
could not open /dev/kbd to get keyboard type, US keyboard assumed
The following command will suppress the warning:
dos2unix -437 file1.txt > file2.txt
You have saved your text file as a DOS/Windows text file. Some Unix text editors do not interpret correctly DOS/Windows newline convention by default. To convert from Windows to Unix, you can use dos2unix, a command-line utility that does exactly that. If you do not have that available in your system, you can try with tr, which is more standard, using the following invocation:
tr -d '\r' < input.file > output.file
They are probably Windows carriage return characters. In Windows, lines are terminated with a carriage-return character followed by an end-of-line character. On Unix, only end-of-line characters are normally used, therefore many programs display the carriage return as a ^M.
You can get rid of them by running dos2unix on the files. You should also change your Eclipse preferences to save files with Unix end of lines.
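If you'd rather do the conversion from Java (for example as part of a build step), the same CR stripping that tr -d '\r' performs is a one-liner. A sketch, with a placeholder class name and UTF-8 assumed for the file contents:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Dos2UnixLite {

    // Remove every carriage return, turning CRLF line endings into LF.
    static String stripCR(String text) {
        return text.replace("\r", "");
    }

    public static void main(String[] args) throws Exception {
        Path in = Paths.get(args[0]);
        Path out = Paths.get(args[1]);
        byte[] raw = Files.readAllBytes(in);
        String converted = stripCR(new String(raw, StandardCharsets.UTF_8));
        Files.write(out, converted.getBytes(StandardCharsets.UTF_8));
    }
}
```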
Perhaps this suppressed the Unix warning message and worked, creating the output file:
$ dos2unix -437 file.txt > file2.txt
You can remove those characters using the dos2unix utility on a Linux or Unix machine. The syntax is: dos2unix filename.
These are Windows newline characters. You can follow the steps shown in this post to correct this issue.