Compiling error with different JDK: unmappable character for encoding [duplicate]

I have a Java project and I'm using Apache Maven. Until now I was using the Maven Java compiler plugin with source=1.5 and target=1.5 defined in the pom.xml file. Since I changed them to source=1.6 and target=1.6, I've been getting the following error:
XXXXXXXX.java:[54,27] unmappable character for encoding UTF-8
I've been testing different configurations, and after setting showWarnings to true I could see that with source and target at 1.5 this is a warning, not an error.
I need to change the Java compiler configuration anyway. Does anybody know why this is so, and how I can solve the problem without editing all the Java source files (there are hundreds of files with this issue)?

My question is: why is this an error with source=1.6 and target=1.6, but only a warning with source=1.5 and target=1.5?
Short answer: because they said so:
-source 1.6 This is the default value. No language changes were introduced in Java SE 6. However, encoding errors in source files are now reported as errors, instead of warnings, as previously.
@DaveG's concerns are valid, and you should try to:
- change the file encoding of your source files, or
- find/replace those characters with your IDE (a sketch for locating the affected files follows below).
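If you first need to find which of the hundreds of files are affected, a scan along the following lines can list every file that doesn't decode cleanly. This is a minimal sketch, not an official tool; the class name, the src directory, and the UTF-8 charset are assumptions you'd adapt to your project:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FindBadEncoding { // hypothetical helper class
    public static void main(String[] args) throws IOException {
        Charset charset = StandardCharsets.UTF_8;
        try (Stream<Path> files = Files.walk(Paths.get("src"))) { // assumed source root
            files.filter(p -> p.toString().endsWith(".java")).forEach(p -> {
                // REPORT makes decode() throw instead of silently replacing bad bytes
                CharsetDecoder decoder = charset.newDecoder()
                        .onMalformedInput(CodingErrorAction.REPORT)
                        .onUnmappableCharacter(CodingErrorAction.REPORT);
                try {
                    decoder.decode(ByteBuffer.wrap(Files.readAllBytes(p)));
                } catch (CharacterCodingException e) {
                    System.out.println("Not valid " + charset + ": " + p);
                } catch (IOException e) {
                    System.err.println("Could not read " + p);
                }
            });
        }
    }
}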

Related

Prevent IntelliJ from marking errors for specific file extensions

IntelliJ has been indicating errors in a README.md file. How can I disable error checking for specific files, or even for specific file formats?
This is a bug in IntelliJ IDEA:
IDEA-220938 False positive class or interface for Java in Markdown
For the time being, you can disable error checking in code fences.

List all Java errors in JSP files

I used old Eclipse (Helios) and old Tomcat (5.5) for a large web project. After updating to Tomcat 6, this code stopped working:
short foo = 3;
Integer bar = foo;
Apparently, this is invalid code according to the Java Language Specification[1], and there was a bug in the old Eclipse compiler so that it didn't report it. New Eclipse (Kepler) reports it as an error.
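For reference, the assignment is rejected because it would need a widening conversion (short to int) and a boxing conversion in a single step, which the JLS does not allow in an assignment context; either conversion alone is fine. A minimal illustration of the legal alternatives:
short foo = 3;
// Integer bar = foo;                  // illegal: widening plus boxing in one assignment
Integer bar = (int) foo;               // legal: the cast widens first, then the int is boxed
Integer baz = Integer.valueOf(foo);    // legal: foo widens to int in the method invocation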
I'm not quite sure why it stopped working with new Tomcat since it is using the same Java compiler as the old Tomcat, but the code is invalid and I want to fix it throughout the project.
First I tried validating the entire project in new Eclipse so it would list all .jsp files with this error. However, this validation in Eclipse doesn't seem to work very well since sometimes it detects several (existing) errors in a file and sometimes reports no errors in the same file (without changes, 10 seconds later).
The next thing I tried was to import the project into NetBeans (7.4) and try to list those errors there. When I open a file with an error, it detects it: "incompatible types: short cannot be converted to Integer". However, when I list all errors in the "Action Items" list, I can't find those errors (although I set the filter to include compiler errors).
I thought that listing all Java errors in all JSP files in a project would be easy, but it turned out that it wasn't. How can I do it?
[1] Widening and boxing with Java
The solution to this problem is to compile all JSP files in the project (i.e. generate Java files for them) and then inspect the errors in the generated files.
NetBeans has an option to pre-compile all JSP files, but this didn't work for me because it stops after hitting the first Java file with errors (maybe there is a way to circumvent this?).
Another solution might be to configure Apache Maven to build the entire project, but I didn't try this, because a co-worker came up with a nice quick-and-dirty solution:
Generate a wget request for every JSP file in the project and run all of these requests. It doesn't matter that wget can't really access the pages (it isn't logged in); it just 'touches' them, which forces Tomcat to generate the Java files.
Something like this (Linux/Cygwin):
find jsp -name '*.jsp' -printf 'http://localhost:8080/App/%p\n' > tmp/urls
wget -q --proxy=off --spider -i tmp/urls

é becomes &#195;&#169; and then becomes Ã©. How do I fix this encoding issue?

I have a Java file in Eclipse that is in UTF-8 and has some strings containing accents.
In the Java file itself, the accent is written and saved as é.
In the XML that is generated using Velocity, the é becomes &#195;&#169;.
In the PDF that is generated using FOP and an XSL template, the output is displayed as Ã©.
So this is probably an encoding issue; everything should be in UTF-8. What's weird is that locally, in my Eclipse environment (Windows) where I run the application, the whole process works and the correct accented é is displayed in the PDF.
However, when the application is built with Maven and deployed to a Unix environment, I see the problem described above.
Perhaps Eclipse is compiling the file with a different javac command line than Maven.
When you compile Java, you have to tell the compiler the encoding of the source files (if they contain non-ASCII characters and the default doesn't work).
javac -encoding utf8 MyCode.java
I think the way to fix this in Maven is to add this to your pom.xml file:
<project>
  ...
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  ...
</project>
(I got that from a Maven FAQ about a slightly different issue.)
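To confirm what each environment actually defaults to, a quick check (run on both the Windows and the Unix machine) prints the encoding the JVM assumes when no -encoding is given; typically something like Cp1252 on Windows and a locale-dependent value on Unix:
// prints the platform default encoding the JVM picked up at startup
System.out.println(System.getProperty("file.encoding"));
System.out.println(java.nio.charset.Charset.defaultCharset());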
You could instead avoid the encoding issue entirely by using ugly Unicode escape sequences in your Java file. é would become \u00e9. Worse for humans, easier for the toasters. (As Perlis said, "In man-machine symbiosis, it is man who must adjust: The machines can't.")
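For example, the two literals below produce the same string at runtime, but the escaped form survives any source-encoding mixup because the file then contains only ASCII (the JDK's native2ascii tool used to do this conversion in bulk):
String accented = "café";        // compiles correctly only if javac knows the file's encoding
String escaped = "caf\u00e9";    // pure ASCII source, immune to -encoding settings
System.out.println(accented.equals(escaped));  // prints true when the file was read correctly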

java.lang.ClassFormatError: Extra bytes at end of class file

I'm getting an odd error when I try to run this program. The class compiles fine into multiple .class files, and it compiled without problems last week (before I edited it). But now I see this:
Exception in thread "main" java.lang.ClassFormatError: Extra bytes at the end of class file blah/hooplah/fubar/nonsense/IndexId$Transaction
From what I've looked up, Java 6 build 1.5 could fix it since it allows extra bytes at the end of class files (I think), but I would much rather use build 1.6.
I'm editing on Windows and then FTP-ing the .java files over to an OpenVMS machine, where I then compile them. After compiling, I move the .class files into a directory created by exploding the previous jar file, and then re-jar.
Any clear ideas on how this happened or how to fix it?
This is indeed disallowed as per VM Spec 4.9.1:
The class file must not be truncated or have extra bytes at the end.
This can occur if there's an incompatibility between the Java compiler and the Java runtime used. Verify both versions and make sure that you compile for the right runtime version: a compiled class can be used with the same or a newer runtime version, but not always with an older one. Check the versions using java -version and javac -version.
Another common cause is that the file gets corrupted during file transfer (FTP) between machines. The transfer should be done in binary mode rather than text mode.
Another possible cause is a hardware error, e.g. a corrupt hard disk, file, or memory. Try recompiling, or try another machine.
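To tell these causes apart, it can help to dump the class file's header and size on both machines and compare them. This is a minimal sketch (the class name is mine) that reads the fixed start of the class file format: the 0xCAFEBABE magic number followed by the minor and major version:
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassFileHeader { // hypothetical helper class
    public static void main(String[] args) throws IOException {
        File f = new File(args[0]);
        try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
            int magic = in.readInt();            // must be 0xCAFEBABE
            int minor = in.readUnsignedShort();
            int major = in.readUnsignedShort();  // e.g. 50 = Java 6, 49 = Java 5
            System.out.printf("magic=0x%08X version=%d.%d length=%d bytes%n",
                    magic, major, minor, f.length());
        }
    }
}
A length that differs between the two machines points at transfer corruption; a major version newer than the runtime supports points at a compiler/runtime mismatch.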
To clarify: this happens after you've cleaned out all old .class files and recompiled on the same machine?
Or are you compiling on one machine and then copying the files to another? If that's the case, then it's likely that your file transfer software is corrupting the files (Windows <-> Linux is a common culprit, most often by adding/removing a 0x0D byte, but occasionally by adding a 0x1A DOS EOF marker).
I suspect that if you check your process, you'll find that somewhere you're modifying the files outside of Java. There's no reason -- even version changes -- for a file produced by a valid Java compiler to have extra bytes at the end.
The problem was solved by removing all line feeds from the .java file and properly renaming it (OpenVMS defaults to all lower case unless told not to).
Sadly, a failure on my part for not testing between each step, but at least it works.
In short:
- line feeds are bad, and
- name files properly (Java standards, not OS standards).
I have encountered that exception during development only. It seems to me that Eclipse's ECJ compiler (Eclipse Luna) induces this behaviour. For me, a clean build solved the issue.
I had a similar problem. I wrote one class on my office PC and transferred it to our client's server to test something, because there was no JDK on that machine. I used the same version of Java on both machines, but after the transfer I got that exception.
I tried packing the file into an archive before transferring, and that helped.
Try:
IntelliJ IDEA -> Settings -> Build, Execution, Deployment -> Compiler -> Java Compiler -> the default is Eclipse; change it to Javac (Use compiler).
Then: Build Tools -> Maven -> Importing, and uncheck "Detect compiler automatically".
Good luck.
