Defensively checking inputs in client. Is this a good practice? [closed] - java

Closed 9 years ago.
Is it good practice to defensively check for null/empty inputs in the client? The server already checks whether the input is null and throws an exception accordingly; should the client also check, in order to avoid a call to the web service?

Under the best circumstances, it is a performance improvement, and nothing else.
Under the worst circumstances, the client side checking can drift away from what the server accepts, actually introducing bugs due to inconsistent deployments.
In either case, you don't typically have control over the client environment, so you cannot assume the client-side check was performed. Malicious users can inject their own client-side code which will permit non-valid inputs to be sent to the server, so server-side checking is still strongly required.
I would recommend that you do client-side checks, but I would also recommend that you take care to keep your client-side checks synchronized with your server-side checks, so that your client doesn't start filtering inputs differently than your server would. If that becomes too problematic, err on the side of making the server-side checks correct. They are the only real defense point.
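As a minimal sketch of the idea (the function names are illustrative, and the rule must mirror whatever the server actually enforces), a client-side guard might look like this:

```javascript
// Hypothetical client-side guard mirroring a server-side null/empty check.
// If the check fails locally, the web service call is skipped entirely;
// the server still repeats the same check on its side.
function isBlank(value) {
  return value === null || value === undefined || String(value).trim() === "";
}

function callServiceIfValid(input, callService) {
  if (isBlank(input)) {
    return { sent: false, error: "Input must not be null or empty" };
  }
  return { sent: true, result: callService(input) };
}
```

The point is that `callService` only runs when the same rule the server enforces already passes locally, which saves a round trip for the common mistakes without ever being the last line of defense.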

It's good practice to do whatever you need to do to protect your server, whatever that may be.
Always do checking server side, you never know where data is going to come from.
Do client-side checking if you have some reason to notify the user of a mistake before sending data to the server. For example, client-side validation of an integer input can update a warning label as the user types, without a round trip to the server. Client-side checks are essentially a first line of action for displaying clear validation errors to the user, but really they are nothing more than UI responsiveness improvements. If you don't want to do that, you don't need to; if you only want to do it for certain values, do it only for those values.
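As a sketch of that kind of as-you-type check (the function name and message text are made up, and the DOM wiring to an actual warning label is omitted):

```javascript
// Returns a warning message for a would-be integer field, or null if the
// text is acceptable. A keyup handler could feed the field's current text
// through this and show the result in a warning label next to the field.
function integerWarning(text) {
  return /^-?\d+$/.test(text.trim()) ? null : "Please enter a whole number";
}
```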
Perhaps your server already generates reasonable information about validation errors, in which case you could display those to the client. It really depends on your situation and needs.
For example, let's say the client displays a series of dialogs asking for input before finally sending a request to the server. It's irritating for the user if they aren't notified of an invalid input until after they have gone through the entire series of dialogs. This is a good case for client-side validation at each step of the input.
Note that the cost of client-side validation is that you need to make sure to maintain it to match the actual server-side rules if they change.
It's also good practice to think a little about your specific requirements and choose an appropriate course of action to make sure those requirements are met, rather than asking vague questions about generic, situation-agnostic "good practice".
Personally, I try my best to have server-side validation report useful information, and I don't do any initial client-side validation. I then add client-side validation later, after higher priority work is complete and after determining that the UX would clearly benefit from it.

Yes, in order to keep the bandwidth and the server load as low as possible, you should always add client-side validation as well. Even a thin-client can do easy validations like null/empty-checks.
If you have some complex validation depending on many different inputs (cross-validation) or maybe complicated checksum calculations, you might skip the client-side validation and do it only on server side.
Server-side validation is always needed, though, because the client can never be trusted.


Is there any way to disable browser back button using java code [closed]

Closed 8 years ago.
Is there any way to disable browser back button using Java code? Javascript is not reliable in all browsers.
Both of your assumptions are wrong.
1) No one can disable the browser's back button (excluding, of course, the vendors :)).
2) Java plays on the server side. JavaScript plays on the client side.
You might want the browser's onunload event to prompt the user.
@Derek made a demo of what I mean: http://jsfiddle.net/DerekL/LZCj7/show/
Simple answer: Yes (for certain definitions of disable).
You are completely free to exercise whatever form of navigation control you like on your website. For instance, you can create a series of once-only URLs that must be accessed in a specific order, rendering the back button useless. (You could even make re-visiting those URLs return the user to a pre-defined home page.)
Common misconceptions:
You can use JavaScript to control a browser on a client. - You can't; there are no two ways about it. The JavaScript is out of your control and can be modified by a 7-year-old (a conservative estimate based on experience, not expectation).
Preventing backwards navigation is always hacky and/or bad. - Certain things genuinely should attempt it. Ever done an online quiz, or a memory game?
Solution:
please note this will not disable the button, and will instead invalidate requests made to a 'previous' url
Include a key in every request (one that changes with every subsequent request) and associate it with the HttpSession; it can then be included in form submissions. Bear in mind that someone who knows what they're doing can still extract the key and use it to travel backwards, so it is also worthwhile ensuring that a key is only valid for a specific subset of the pages on your site (those allowed). There are many easy ways to do this; personally, I am a fan of primes and hashes.
Also note that refreshing a page under this scheme can cause you grief if you have not considered your desired behaviour. Decide what should happen, and implement it.
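A minimal sketch of the one-time-key idea (shown in JavaScript for illustration; a real implementation would live on the server against the HttpSession, and the token format here is invented):

```javascript
// Each rendered page embeds a freshly issued token; a request is only
// honoured if it carries the current token, and the token is consumed
// on use. Replaying a page reached via "back" therefore presents a
// stale token and can be redirected to a pre-defined home page.
class NavigationTokens {
  constructor() {
    this.current = null;
    this.counter = 0;
  }
  issue() {
    this.current = "tok-" + (++this.counter);
    return this.current;
  }
  validate(token) {
    const ok = token !== null && token === this.current;
    this.current = null; // consume: each token works at most once
    return ok;
  }
}
```

In practice, `issue()` would run when a page is rendered (embedding the token in the form) and `validate()` when the form posts back.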
You shouldn't try to disable the back button, as your website shouldn't extend beyond the limits of the viewport; instead, try to change your approach. I'm pretty sure that what you want to achieve can be done another way. Why don't you tell us more about it?
JavaScript was designed to act on DOM elements, and unfortunately, the back button isn't one of them. Also, contrary to what you think, JavaScript is reliable, and although the result may vary slightly from one browser to another, there are libraries that can tackle this problem.
As for Java, you might be thinking of applets... But it's still not the right way to go, and in terms of cross-browser compatibility, the situation is much worse than it is with JavaScript.
So, in a nutshell: YES, there are some workarounds to prevent use of the back button, but NO, you shouldn't try to do it, because it's considered bad practice.
EDIT:
Here is a snippet of JavaScript (not Java) code that can prevent the previous page from loading:
<script>
function preventBackButton() {
    window.history.forward();
}
setTimeout(preventBackButton, 0);
window.onunload = function () {};
</script>
But remember, this is NOT a good practice. You should NOT use it. Why don't you tell us what you want to achieve so we can help you do it the right way?

Why do images.google.com GET requests have such an unreadable form?

In particular, what are all the dots and numbers at the end for?
Here is an example:
https://www.google.com/search?site=&tbm=isch&source=hp&biw=1366&bih=673&q=kale&oq=kale&gs_l=img.3..0l10.403.1120.0.1352.4.4.0.0.0.0.407.543.0j1j4-1.2.0....0...1ac.1.32.img..2.2.542.vC-f2Kfx-2E
It is a GET variable's value, but why such a strange, un-human-readable syntax?
I assume they are using PHP or Java on the back-end.
What you are seeing is internal computer data, not exactly intended for normal human consumption, but there for a good reason. Perhaps you are also thinking: why would anyone want these ugly internal details displayed on the average user's screen?
When HTTP was invented, the thought was that GET requests should be self-contained: if I copy a URI from my browser and email it to you, and you browse to it, you should see exactly what I saw. For this to work, the GET data needs to be in the URI rather than hidden from view, hence the dirty details you are seeing. Back in the day they were thinking of simple GET queries, for example: http://www.somedomain.com/Search?Find=FooBar
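The "unreadable" tail is just ordinary GET parameters. A quick sketch with the standard URL API (available in modern browsers and Node) shows this, using a shortened form of the Google URL above:

```javascript
// Split a query string into its named GET parameters. The cryptic
// gs_l-style values in the real URL are parameters too, just ones whose
// encoding is internal to Google rather than meant for humans.
const u = new URL("https://www.google.com/search?site=&tbm=isch&q=kale&oq=kale");
const params = Object.fromEntries(u.searchParams);
```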
However, as software has evolved more data needs to be passed with GET requests and unfortunately it is all visible in the URI. (Note that this also becomes a minor security hole because the average user can see some of the internals of web page production and easily tamper with it.)
What is needed is a hidden data passing method for GET type queries to clean up URIs when it is not necessary for these details to be present. A proposal for such an improvement to HTTP is in the process of being considered. It would involve adding a new method to HTTP similar to GET but with hidden data passing like POST.

Is it good practice to rely on domain constraints for validation

Is it good practice to validate user input using the domain constraints such as email(unique:true) then rely on a message.properties input such as className.email.unique=Email address already in use to create an error message. Or is it better practice to have some client side validation or some check being carried in a web service before trying to persist to the domain?
It is common practice to use both client-side and server-side validation.
Client side validation adds convenience to the user and can reduce bandwidth or improve the work flow but it isn't 100% reliable.
Client-side validation has significant aesthetic appeal: it can alert users to mistakes before the POST operation, which looks better and is nicer for users, but it won't stop bad inputs. It is purely a usability choice, improving how the user interacts with the page and hopefully reducing the bandwidth wasted sending multiple bad inputs before getting it right.
The source of a page can be edited locally in order to disable or bypass even the most well formed validation and to completely suppress it, so nothing you can do on the client side will be able to stop a determined user from making a mess of your system.
This means you also need good server-side validation: it is good practice to protect yourself against injections and the other sorts of nonsense users can intentionally or accidentally pull off, especially since you are out on the web. Reducing the points of failure by validating on both sides is the preferred approach, because both add value.
You should look into using CommandObjects on your controller action when accepting request payload.
http://grails.org/doc/latest/guide/single.html#commandObjects
Command objects allow you to put validation rules/constraints on the request payload. This is good because you can apply constraints specific to the web request payload before it hits your logic. A cool feature is that you can inherit domain constraints.
@grails.validation.Validateable
class LoginCommand {
    String username
    String password

    static constraints = {
        username(blank: false, minSize: 6)
        password(blank: false, minSize: 6)
    }
}

HATEOAS and dynamic discovery of API

The HATEOAS principle "Clients make state transitions only through actions that are dynamically identified within hypermedia by the server"
Now I have a problem with the word dynamically, though I guess it's the single most important word there.
If I change one of my parameters from, say, optional to mandatory in the API, I HAVE to fix my client, or the request will fail.
In short, all HATEOAS does is give the server side developer extreme liberty to change the API at will, at the cost of all clients using his/her API.
Am I right in saying this, or am I missing something like versioning or maybe some other media-type than JSON which the server has to adopt?
Any time you change a parameter from optional to mandatory in an API, you will break consumers of that API. That it is a REST API that follows HATEOAS principles does not change this in any way. Instead, if you wish to maintain compatibility you should avoid making such changes; ensure that any call made or message sent by a client written against the old API will continue to function as expected.
On the other hand, it is also a good idea to not write clients to expect the set of returned elements to always be identical. They should be able to ignore additional information given by a server if the server chooses to provide it. Again, this is just good API design.
HATEOAS isn't the problem. Excessively rigid API expectations are the problem. HATEOAS is just part of the solution to the problem (as it potentially relieves clients from having to know vast amounts about the state model of the service, even if it doesn't necessarily make it straight-forward).
Donal Fellows has a good answer, but there's another side to the same coin. The HATEOAS principle doesn't say anything about the format of your messages (other parts of REST do); rather, it means the client should not try to know out of band which URIs to act upon. Instead, the server should tell the client which URIs are of interest via hyperlinks (or forms/templates that construct hyperlinks). How it works:
The client starts at state 0.
The client requests a well-known resource.
The server's response moves the client to a new state N. There may be multiple states achievable at this point depending on the response code and payload.
The response includes links (or forms/templates) which tell the client, in band, the set of potential next states.
The client selects one of the potential next states by issuing a method on a URI.
Repeat steps 3 through 5 for states N+1 and beyond, until the client's application needs are met.
In this fashion, the server is free to change the URI that moves the client from state N to state N+1 without breaking the client.
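Steps 4 and 5 can be sketched from the client's point of view as follows (the JSON shape and the `rel` names are invented for illustration; real services use hypermedia formats such as HAL):

```javascript
// The client never hard-codes "/orders/42/cancel"; it looks the URI up
// by relation name in the representation the server returned.
function findLink(resource, rel) {
  const link = (resource._links || []).find(l => l.rel === rel);
  return link ? link.href : null;
}

// A hypothetical server response for an order resource.
const order = {
  id: 42,
  status: "open",
  _links: [
    { rel: "self", href: "/orders/42" },
    { rel: "cancel", href: "/orders/42/cancel" }
  ]
};
```

If the server later moves the cancel action to a different URI, or withholds the link for users who may not cancel, the client code above keeps working without modification.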
It seems to me that you misunderstood the quoted principle. Your question suggests that you are thinking about the resources and that they could be "dynamically" defined - like a mandatory property added to a certain resource type at application runtime. This is not what the principle says, and this was correctly pointed out in other answers. The quoted principle says that the actions within the hypermedia should be dynamically identified.
The actions available for a given resource may change in time (e.g. because someone added/removed a relationship in the meantime) and there may be different actions available for the same resource but for different users (e.g. because users have different authorization levels). The idea of HATEOAS is that clients should not have any assumptions about actions available for certain resource at any given time. The client should identify available actions each time it reads that resource.
Edit: The below paragraph may be wrong. See the comments for discussion about it.
On the other hand, clients may have expectations about the data available in the resource - for example, that a book resource must have a title, and that there may be links to the book's author or authors. There is no way of avoiding the coupling introduced by these assumptions, but both service providers and clients should use backward-compatibility and versioning techniques to deal with it.

Is there a way to limit the number of AJAX calls in the browser that remain open?

I have a software design question on the best way to handle a client JavaScript program that relies on multiple (but mostly consecutive, not simultaneous), short-lived AJAX calls to the server in response to user interaction [in my particular case, it will be a facebook-GAE/J app, but I believe the question is relevant to any client(browser)/server design].
First, I asked this question: What is the life span of an ajax call? . Based on BalusC's answer (I encourage you to read it there), the short answer is "that's up to the browser". So, right now I do not really have control over what happens after the server sends the response.
If the main use for an AJAX call is to retrieve data just once from the server, is it possible to manually destroy it? Would xhr1.abort() do that?
Or is the best choice to leave it as is? Would manually closing each connection (if even possible) add too much overhead to each call?
Is it possible to manually set the limit per domain?
And last (but not least!), should I really worry about this? What would be a number of calls large enough to start delaying the browser (especially some IE browsers with the leak bug that BalusC mentioned in the other question)? Please bear in mind that this is my first JavaScript/Java servlets project.
Thank you in advance
The usage paradigm for XHR is that you don't have to worry about what happens to the object -- the browser's engine takes care of that behind the scenes for you. So I don't see any point in attempting to "improve" things manually. Browser developers are certainly aware that 99.9999% of JS programmers do not do that, so they have not only taken it into account but probably optimized for that scenario as well.
You should not worry about it unless and until you have a concrete problem in your hands.
As for limiting the number of AJAX calls per domain (either concurrent outstanding calls, or total calls made, or any other metric you might be interested in), the solution would be the venerable CS classic: add another layer of abstraction.
In this case, the extra layer of abstraction would be a function through which all AJAX calls would be routed through; you can then implement logic that tracks the progress of each call (per domain if you want it to) and rejects or postpones incoming calls based on that state. It won't be easy to get it correctly, but it's certainly doable.
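A sketch of such a routing layer follows (the name `makeAjaxGate` and the limit are invented; real code would issue an XMLHttpRequest or fetch inside the queued function):

```javascript
// One gate through which all AJAX calls are routed: at most
// `maxConcurrent` run at once, and extra calls wait in a FIFO queue.
// A per-domain variant would simply keep one gate per domain.
function makeAjaxGate(maxConcurrent) {
  let active = 0;
  const queue = [];
  function next() {
    if (active >= maxConcurrent || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    Promise.resolve()
      .then(fn)                       // run the queued call
      .then(resolve, reject)          // report its outcome to the caller
      .finally(() => { active--; next(); }); // free a slot, start the next
  }
  return function run(fn) {
    return new Promise((resolve, reject) => {
      queue.push({ fn, resolve, reject });
      next();
    });
  };
}
```

Usage would look like `const gate = makeAjaxGate(4); gate(() => fetch(url))`, with every AJAX call in the application going through `gate` so the tracking logic sees all of them.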
However, I suggest also not worrying about this unless and until you have a concrete problem in your hands. :)
Update:
Browsers do enforce their own limits on concurrent AJAX calls; there's a very good question about that here: How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
Also, as T. J. Crowder mentions in the comments: make sure you are not keeping references to XHR objects when you are done with them, so that they can be garbage collected -- otherwise, you are creating a resource leak yourself.
Second update:
There is a good blog post about reusing XHR here -- it's actually the start of a chain of relevant posts. On the down side, it's dated and it doesn't come to any practical conclusion. But it covers the mechanics of reusing XHR well.
If the main use for an AJAX call is to retrieve data just once from the server, is it possible to manually destroy it? Would xhr1.abort() do that?
It only aborts the running request. It does not close the connection.
Or, the best choice is leave it like that? Would manually closing each connection (if even possible) add too much overhead to each call?
Not possible. It's the browser's responsibility.
Is it possible to manually set the limit per domain?
Not possible from the server side. This is a browser-specific setting. The best you could do is ask the end user, in some page dialog, to change the setting if they haven't already. But this makes little sense, certainly not when the end user does not understand the rationale behind it at all.
And last (but not least!), should I really worry about this? What would be a number of calls large enough to start delaying the browser (specially some IE browsers with the leak bug that BalusC mentioned in the other question? Please, bear in mind that this is my first javascript/java servlets project.
Yes, you should certainly worry about browser-specific bugs. You want your application to work without issues, don't you? Why wouldn't you just use an existing ajax library like jQuery? It has already handled all the nasty bugs and details under the covers for you (many more than just MSIE's memory leaking). Just call $.ajax(), $.get(), $.post() or $.getJSON() and that's it. I wouldn't attempt to reinvent the XHR-handling wheel when you're fairly new to the material. You can find some jQuery-Servlet communication examples in this answer.
