I am new to the Stack Overflow forum. I have a question about remediating Fortify scan issues.
The HP Fortify scan is reporting a Resource Injection issue for the following code:
String testUrl = "http://google.com";
URL url = null;
try {
    url = new URL(testUrl);
} catch (MalformedURLException mue) {
    log.error("MalformedUrlException URL " + testUrl + " Exception : " + mue);
}
In the above code, Fortify flags Resource Injection on the line url = new URL(testUrl);
I made the following code changes to validate the URL with ESAPI and remediate this issue:
String testUrl = "http://google.com";
URL url = null;
String canonURL = null;
try {
    canonURL = ESAPI.encoder().canonicalize(testUrl, false, false);
    if (ESAPI.validator().isValidInput("URLContext", canonURL, "URL", canonURL.length(), false)) {
        url = new URL(canonURL);
    } else {
        log.error("Invalid URL passed: " + canonURL);
    }
} catch (MalformedURLException mue) {
    log.error("MalformedUrlException URL " + canonURL + " Exception : " + mue);
}
However, the Fortify scan still reports the error; the change does not remediate the issue. Am I doing anything wrong?
Any solution will help a lot.
Thanks,
Marimuthu.M
I think that the real issue here is not that the URL may be somehow malformed, but that the URL may not reference a valid site. More specifically, if I, the bad guy, am able to cause your URL to point to my web site, then you obtain untested data from my location, and I can return data that may be used to compromise your system. For example, I might return a record for "bob the bad guy" that makes Bob look like a good guy.
I suspect that in your real code you do not assign a hard-coded value to the string, since this issue is usually described in terms such as:
When an application permits a user input to define a resource, like a
file name or port number, this data can be manipulated to execute or
access different resources.
(see https://www.owasp.org/index.php/Resource_Injection)
I think that the proper response will be some combination of:
Do not take the value from the user directly; instead, use the input to choose from your own internal list (see the sketch below).
Argue that the value came from a trusted source, for example a strictly controlled database or configuration file.
You do not need to remove the warnings, you need to demonstrate that you understand the risk and indicate why it is OK to use the value in your case.
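For example, here is a minimal sketch of the first option: the user's input only selects an entry from an internal allow-list and never becomes the URL itself. The class name and map contents are hypothetical, and Map.of requires Java 9+.
import java.net.MalformedURLException;
import java.net.URL;
import java.util.Map;

public class UrlAllowList {

    // Hypothetical internal list: user input only picks a key, never supplies the URL.
    private static final Map<String, String> ALLOWED_URLS = Map.of(
            "search", "http://google.com",
            "docs",   "http://example.com/docs");

    // Returns the allow-listed URL for the given key, or null if the key is unknown.
    public static URL resolve(String userKey) throws MalformedURLException {
        String target = ALLOWED_URLS.get(userKey);
        return (target == null) ? null : new URL(target);
    }
}
Because the resulting URL can only ever be one of the hard-coded entries, the user-controlled value never flows into the resource identifier, which is what the Resource Injection finding is about.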
boolean isValidInput(java.lang.String context,
                     java.lang.String input,
                     java.lang.String type,
                     int maxLength,
                     boolean allowNull)
              throws IntrusionException
The type field in the isValidInput function names a regular expression (pattern) defined in validation.properties that your testUrl is matched against.
For example:
try {
    ESAPI.validator().getValidInput("URI_VALIDATION", requestUri, "URL", 80, false);
} catch (ValidationException e) {
    System.out.println("Validation exception");
    e.printStackTrace();
} catch (IntrusionException e) {
    System.out.println("Intrusion exception");
    e.printStackTrace();
}
The input passes if requestUri matches the pattern defined in validation.properties under Validator.URL and its length does not exceed 80 characters.
Validator.URL=^(ht|f)tp(s?)\:\/\/[0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*(:(0-9)*)*(\/?)([a-zA-Z0-9\-\.\?\,\:\'\/\\\+=&%\$#_]*)?$
This is piggybacking on Andrew's answer, but the problem Fortify is warning you of is user control of a URL. If your application later decides to make connections to that website, and it is untrusted, this is an issue.
If this is an application where you care more about sharing public URIs, then you'll have to accept the risk, make sure users are properly trained on the inherent risk, and make sure that if you redisplay those URLs, someone can't embed malicious data in them.
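If you do redisplay user-supplied URLs, one minimal precaution, sketched here on the assumption that you already use ESAPI as in the question (variable names are illustrative), is to HTML-encode the value before rendering it:
// Sketch: HTML-encode a user-supplied URL before echoing it back into a page,
// so embedded markup or script cannot execute in the browser.
String safeForDisplay = ESAPI.encoder().encodeForHTML(userSuppliedUrl);
response.getWriter().println("You entered: " + safeForDisplay);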
I'm using a Java program to get NSE share price data from NSE's website, like this for example:
url = new URL("https://archives.nseindia.com/archives/equities/bhavcopy/pr/PR071122.zip");
f = new File("NSEData.zip");
try {
    FileUtils.copyURLToFile(url, f);
} catch (Exception e) {
}
The above code works for dates where market data exists, like 07/11/22. However, where data does not exist, like on 08/11/22, the URL is broken and the copyURLToFile line gets stuck indefinitely at runtime (replacing 071122 with 081122 in the URL/code above will cause it to hang). Is there an easy way for the program to recognize that the URL for a certain date is broken (e.g. https://archives.nseindia.com/archives/equities/bhavcopy/pr/PR081122.zip) and therefore ignore it and continue past the try block without getting stuck?
My current workaround is to check whether a certain date is a market holiday using a DayOfWeek check as well as a HashSet containing a list of public holidays, but this is not perfect.
So basically, your URL returns a 500 error when you request an invalid date. You can simply use another overload of copyURLToFile available in FileUtils that accepts connection and read timeouts:
https://commons.apache.org/proper/commons-io/javadocs/api-2.5/org/apache/commons/io/FileUtils.html#copyURLToFile(java.net.URL,%20java.io.File,%20int,%20int)
Example code (adjust the timeouts as per your requirement):
var url = new URL("https://archives.nseindia.com/archives/equities/bhavcopy/pr/PR081122.zip");
var f = new File("NSEData.zip");
try {
    FileUtils.copyURLToFile(url, f, 5000, 5000);
} catch (Exception e) {
    // The connection or read timed out, or the file is missing for this date; skip it.
    e.printStackTrace();
}
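Alternatively, if you want the program to detect the missing file explicitly rather than rely on the timeout, here is a minimal sketch (the class and method names are just illustrative) that probes the server with a HEAD request before downloading:
import java.io.File;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.commons.io.FileUtils;

public class NseDownloader {

    // Probe the URL with a HEAD request first, so dates without data are
    // skipped instead of hanging or failing halfway through a download.
    static void downloadIfPresent(URL url, File target) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD");     // fetch headers only, not the body
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        try {
            if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
                FileUtils.copyURLToFile(url, target, 5000, 5000);
            } else {
                System.out.println("No data at " + url + " (HTTP " + conn.getResponseCode() + ")");
            }
        } finally {
            conn.disconnect();
        }
    }
}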
I am using PDFBox to determine whether a PDF file is password protected or not.
This is my code:
boolean isProtected = pdfDocument.isEncrypted();
My file's properties are shown in the screenshot.
Here I am getting isProtected = true even though I can open the file without a password.
Note: this file has a Document Open password: No, and a Permissions password: Yes.
Your PDF has an empty user password and a non-empty owner password. And yes, it is encrypted. This is done to prevent people from doing certain things, e.g. content copying.
It isn't real security; it is the responsibility of the viewer software to ensure that the "forbidden" operations aren't allowed.
You can find a longer (and a bit amusing) explanation here.
To see the document access permissions, use PDDocument.getCurrentAccessPermission().
In 2.0.*, a user will be able to view a file if this call succeeds:
PDDocument doc = PDDocument.load(file);
If an InvalidPasswordException is thrown, it means a non-empty password is required.
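Putting those two together, a minimal sketch (assuming PDFBox 2.0.*; the file name is illustrative):
import java.io.File;
import java.io.IOException;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.encryption.AccessPermission;
import org.apache.pdfbox.pdmodel.encryption.InvalidPasswordException;

// Sketch: a document with an empty user password loads without any password;
// the restrictions imposed by the owner password are still visible afterwards.
try (PDDocument doc = PDDocument.load(new File("file.pdf"))) {
    AccessPermission perms = doc.getCurrentAccessPermission();
    System.out.println("encrypted: " + doc.isEncrypted());
    System.out.println("can extract content: " + perms.canExtractContent());
    System.out.println("can print: " + perms.canPrint());
} catch (InvalidPasswordException e) {
    System.out.println("A non-empty document open password is required.");
} catch (IOException e) {
    e.printStackTrace();
}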
I am posting this answer because elsewhere on Stack Overflow and the web you might see the suggestion to check for a password-protected PDF in PDFBox with PDDocument#isEncrypted(). The problem we found with this is that certain PDFs which did not prompt for a password were still being flagged as encrypted. See the accepted answer for one explanation of why this happens; in any case, we used the following pattern as a workaround:
boolean isPDFReadable(byte[] fileContent) {
    PDDocument doc = null;
    try {
        doc = PDDocument.load(fileContent);
        doc.getPages(); // perhaps not necessary
        return true;
    }
    catch (InvalidPasswordException invalidPasswordException) {
        LOGGER.error("Unable to read password protected PDF.", invalidPasswordException);
    }
    catch (IOException io) {
        LOGGER.error("An error occurred while reading a PDF attachment during account submission.", io);
    }
    finally {
        if (!Objects.isNull(doc)) {
            try {
                doc.close();
                return true;
            }
            catch (IOException io) {
                LOGGER.error("An error occurred while closing a PDF attachment ", io);
            }
        }
    }
    return false;
}
If the call to PDDocument#getPages() succeeds, then it also should mean that opening the PDF via double click or browser, without a password, should be possible.
I know that better methods of URL validation exist and that worse methods than this example may be common. But can someone tell me what is probably wrong with the following URL validation code when url = "Some random english sentence"?
I can see that the validation doesn't work properly, but I don't know why.
/**
 * Checks if a URL is OK.
 * THIS METHOD DOESN'T SEEM TO WORK WELL
 *
 * @param url
 * @return True if the URL is OK, False otherwise
 */
static public boolean isUrlOk(String url) {
    try {
        URL urlObject = new URL(url);
        String host = urlObject.getHost();
        return true;
    } catch (Exception e) {
        return false;
    }
}
The problem: It sometimes returns true for random sentences.
Modify the catch block to add e.printStackTrace() so you get the details of why it fails.
If you try with "Some random english sentence", it will fail because no protocol is specified.
According to the java.net.URL API doc at http://docs.oracle.com/javase/7/docs/api/java/net/URL.html#URL(java.lang.String):
MalformedURLException - if no protocol is specified, or an unknown protocol is found, or spec is null.
Since no scheme was specified, the exception was thrown.
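As a quick check, a minimal sketch of what that failure looks like:
import java.net.MalformedURLException;
import java.net.URL;

// Sketch: constructing a URL from plain text with no scheme throws immediately.
try {
    new URL("Some random english sentence");
} catch (MalformedURLException e) {
    // Prints something like: java.net.MalformedURLException: no protocol: Some random english sentence
    System.out.println(e);
}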
Hi friends, I have created a J2ME app. It runs perfectly in the emulator, but on a mobile device it shows an error like java.lang.NoSuchFieldError: No such field HEADERS.[[Ljava/lang/String;.
Why is this happening on the mobile device when it runs fine in the emulator?
Please help me resolve this error.
public String connectPhoneName() throws Exception {
    String url = "http://122.170.122.186/Magic/getPhonetype.jsp";
    String phoneType;
    if ((conn = connectHttp.connect(url, HEADERS)) != null) {
        if ((in = connectHttp.getDataInputStream(conn)) != null) {
            byte[] data = connectHttp.readDATA(in, 100);
            phoneType = new String(data);
            System.out.println("DATA : " + phoneType);
        } else {
            throw new Exception("ERROR WHILE OPENING INPUTSTREAM");
        }
    } else {
        throw new Exception("COULD NOT ESTABLISH CONNECTION TO THE SERVER");
    }
    return phoneType;
}
In this code I have used HEADERS.
It looks like your app is using some (I guess) static final or final field of some library class that does not exist in the Java ME profile your mobile device implements.
But I can't figure out where that field comes from. Perhaps you should search your codebase for uses of "HEADERS" as an identifier.
If the HEADERS field is properly declared in your codebase (your MagiDEF interface) and the code you showed is using the HEADERS from that interface, then you must have something wrong with your build or deployment process. Specifically, you are not deploying the version of MagiDEF that your code (above) was compiled against. Maybe you've got an old version of something in some JAR file?
Basically, the error indicates that you have a binary incompatibility between some of the classes / interfaces that make up your app.
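For reference, a sketch of how such a constant might be declared; MagiDEF is the interface name mentioned above, and the header values here are purely hypothetical. The type descriptor in the error ([[Ljava/lang/String;) indicates the missing field is a String[][]:
// Hypothetical declaration matching the type in the error message.
public interface MagiDEF {
    String[][] HEADERS = {
        { "Content-Type", "application/x-www-form-urlencoded" },
        { "User-Agent", "Profile/MIDP-2.0 Configuration/CLDC-1.1" }
    };
}
If the class that references HEADERS was compiled against a newer MagiDEF than the one packaged in the deployed JAR, the device throws exactly this NoSuchFieldError at runtime.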
My application processes URLs entered manually by users. I have discovered that some malformed URLs (like 'http:/not-valid') result in a NullPointerException being thrown when the connection is opened. As I learned from this Java bug report, the issue is known and will not be fixed. The suggestion is to use java.net.URI, which is "more RFC 2396-conformant".
The question is: how do I use URI to work around the problem? The only thing I can do with URI is use it to parse the string and generate a URL. I have prepared the following program:
import java.net.*;

public class Test
{
    public static void main(String[] args)
    {
        try {
            URI uri = URI.create(args[0]);
            Object o = uri.toURL().getContent(); // try to get content
        }
        catch (Throwable e) {
            e.printStackTrace();
        }
    }
}
Here are the results of my tests (with Java 1.6.0_20); they are not much different from what I get with java.net.URL:
sh-3.2$ java Test url-not-valid
java.lang.IllegalArgumentException: URI is not absolute
at java.net.URI.toURL(URI.java:1080)
at Test.main(Test.java:9)
sh-3.2$ java Test http:/url-not-valid
java.lang.NullPointerException
at sun.net.www.ParseUtil.toURI(ParseUtil.java:261)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:795)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
at java.net.URLConnection.getContent(URLConnection.java:688)
at java.net.URL.getContent(URL.java:1024)
at Test.main(Test.java:9)
sh-3.2$ java Test http:///url-not-valid
java.lang.IllegalArgumentException: protocol = http host = null
at sun.net.spi.DefaultProxySelector.select(DefaultProxySelector.java:151)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:796)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
at java.net.URLConnection.getContent(URLConnection.java:688)
at java.net.URL.getContent(URL.java:1024)
at Test.main(Test.java:9)
sh-3.2$ java Test http:////url-not-valid
java.lang.NullPointerException
at sun.net.www.ParseUtil.toURI(ParseUtil.java:261)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:795)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
at java.net.URLConnection.getContent(URLConnection.java:688)
at java.net.URL.getContent(URL.java:1024)
at Test.main(Test.java:9)
You can use Apache Commons Validator:
UrlValidator urlValidator = new UrlValidator();
boolean isValid = urlValidator.isValid("http://google.com"); // true
http://commons.apache.org/validator/
http://commons.apache.org/validator/api-1.3.1/
If I run your code with the type of malformed URI in the bug report then it throws URISyntaxException. So the suggested fix fixes the reported error.
$ java -cp bin UriTest http:\\\\www.google.com\\
java.lang.IllegalArgumentException
at java.net.URI.create(URI.java:842)
at UriTest.main(UriTest.java:8)
Caused by: java.net.URISyntaxException: Illegal character in opaque part at index 5: http:\\www.google.com\
at java.net.URI$Parser.fail(URI.java:2809)
at java.net.URI$Parser.checkChars(URI.java:2982)
at java.net.URI$Parser.parse(URI.java:3019)
at java.net.URI.<init>(URI.java:578)
at java.net.URI.create(URI.java:840)
Your type of malformed URI is different, and does not appear to be a syntax error.
Instead, catch the null pointer exception and recover with a suitable message.
You could try and be friendly and check whether the URI starts with a single slash "http:/" and suggest that to the user, or you can check whether the hostname of the URL is non-empty:
import java.net.*;

public class UriTest
{
    public static void main(String[] args)
    {
        try {
            URI uri = URI.create(args[0]);
            // avoid null pointer exception
            if (uri.getHost() == null)
                throw new MalformedURLException("no hostname");
            URL url = uri.toURL();
            URLConnection s = url.openConnection();
            s.getInputStream();
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}
Note that even with the approaches proposed in the other answers, you wouldn't get validation right, since java.net.URI adheres to RFC 2396, which is notably outdated. By using java.net.URI, you'll get exceptions for URLs that today are valid for all web browsers.
In order to solve these issues, I wrote a library for URL parsing in Java: galimatias. It performs URL parsing the same way web browsers do (adhering to the WHATWG URL Specification).
In your case, you can write:
try {
    // urlString holds the user-supplied URL string to validate
    URL url = io.mola.galimatias.URL.parse(urlString).toJavaURL();
} catch (GalimatiasParseException e) {
    // If this exception is thrown, the given URL contains an unrecoverable error. That is, it's completely invalid.
}
As a nice side-effect, you get a lot of sanitization that you won't get with java.net.URI. For example, http:/example.com will be correctly parsed as http://example.com/.
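A quick sketch of that behaviour, based on the example in this answer (the exact printed form is an assumption about how galimatias serializes the parsed URL):
import io.mola.galimatias.GalimatiasParseException;

// Sketch: galimatias repairs the missing slash that java.net.URI/URL choke on.
try {
    io.mola.galimatias.URL parsed = io.mola.galimatias.URL.parse("http:/example.com");
    System.out.println(parsed); // expected, per the claim above: http://example.com/
} catch (GalimatiasParseException e) {
    e.printStackTrace();
}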