What do you use for client to server communication with GWT?

GWT RPC is proprietary but looks solid, is supported with patterns by Google, and is mentioned by every book and tutorial I've seen. Is it really the choice for GWT client/server communication? Do you use it, and if not, why not and what did you choose? I assume that I have generic server application code that can accommodate RPC, EJBs, web services/SOAP, REST, etc.
Bonus question: any security issues with GWT RPC I need to be aware of?

We primarily use three methods of communications:
GWT-RPC - This is our primary and preferred mechanism, and what we use whenever possible. It is the "GWT way" of doing things, and works very well.
XMLHttpRequest using RequestBuilder - This is typically for interaction with non-GWT back ends, and we use it mainly to pull in static web content that we need at runtime (something like server-side includes). It is especially useful when we need to integrate with a CMS. We wrap our RequestBuilder code in a custom "Panel" that takes a content URI as its constructor parameter and populates itself with the contents of that URI (a minimal sketch of this wrapper follows this list).
Form submission using FormPanel - This also requires interaction with a non-GWT back end (custom servlet), and is what we currently use to do cross-site communication. We don't really communicate "cross-site" per se, but we do sometimes need to send data over SSL from a non-SSL page, and this is the only way we've been able to do it so far (with some hacks).
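For reference, here is a minimal sketch of that kind of RequestBuilder-backed wrapper, assuming a plain GET for static content; the ContentPanel class name is illustrative, not our actual code:

    import com.google.gwt.http.client.Request;
    import com.google.gwt.http.client.RequestBuilder;
    import com.google.gwt.http.client.RequestCallback;
    import com.google.gwt.http.client.RequestException;
    import com.google.gwt.http.client.Response;
    import com.google.gwt.user.client.ui.HTML;
    import com.google.gwt.user.client.ui.SimplePanel;

    // Illustrative panel that fills itself with the content behind a URI.
    public class ContentPanel extends SimplePanel {

        public ContentPanel(String contentUri) {
            RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, contentUri);
            try {
                builder.sendRequest(null, new RequestCallback() {
                    @Override
                    public void onResponseReceived(Request request, Response response) {
                        if (response.getStatusCode() == Response.SC_OK) {
                            setWidget(new HTML(response.getText()));
                        } else {
                            setWidget(new HTML("Unexpected status: " + response.getStatusCode()));
                        }
                    }

                    @Override
                    public void onError(Request request, Throwable exception) {
                        setWidget(new HTML("Could not load content"));
                    }
                });
            } catch (RequestException e) {
                setWidget(new HTML("Could not send request"));
            }
        }
    }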

The problem is that you are in a web browser, so any non-HTTP protocol is pretty much not guaranteed to work (it might not get through a proxy).
What you can do is isolate the GWT-RPC stuff in a single replaceable class and strip it off as soon as possible.
Personally, I'd just transfer a collection of objects with the information I need encoded inside the collection. That way there is very little RPC code, because all your RPC code ever does is "Collection commands = getCollection()" -- but there are a million other possibilities.
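As a rough sketch of what such a single, replaceable entry point might look like (the CommandService and Command names and the "commands" path are purely illustrative):

    import java.io.Serializable;
    import java.util.ArrayList;

    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // Illustrative command object; in practice it carries whatever
    // information your application needs to encode.
    class Command implements Serializable {
        String name;
        String payload;
    }

    // The only place GWT-RPC appears in the client code base.
    @RemoteServiceRelativePath("commands")
    public interface CommandService extends RemoteService {
        // Everything travels as a single collection of serializable commands.
        ArrayList<Command> getCollection();
    }

    // GWT also requires a matching async interface on the client side:
    //
    // public interface CommandServiceAsync {
    //     void getCollection(AsyncCallback<ArrayList<Command>> callback);
    // }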
Or just use GWT-RPC as it was intended, I don't think it's going anywhere.

How to execute DELETE and PUT methods in Java Servlets

I'm looking for a way to run PUT and DELETE methods from Java Servlets. In general, I have a simple application with a web control (WAR) connected to an EJB project that controls a simple list of libraries.
For each of the CRUD actions I would like to use a different method:
Creating by POST,
Reading by GET,
Update by PUT,
Delete by DELETE
However, I currently create by a simple form with the POST method, and read (list) by simply entering the URL in the address bar (a GET, without a form), but I have no idea how I could use PUT and DELETE for their purposes.
Unfortunately, the online resources I found didn't answer that question, but I'm just curious how I could use them.
I assume removing would be done by e.g. /Servlet/:libraryId (without any body), and updating by the same URL schema (with the updated data in the body). doPut() and doDelete() should just run the proper actions in the EJB.
There's no way at the HTML level to create a DELETE request via a form submit (it's always GET or POST, unless you write your own browser that handles the kind of non-standard HTML you proposed). All you can do is submit a value (from a hidden field, radio button, whatever) that expresses which action should be taken in the doPost() or doGet() handler on the server.
But there's nothing special about DELETE at the HTTP protocol level. The whole request is just a sequence of bytes, where the first few define the method (PUT, GET, DELETE, PATCH...), followed by a space, and so on.
As far as your server is concerned, you could even use web sockets to accept connections and create responses to self-defined fantasy methods. If you're using servlets, you have doPut(), doGet() and doDelete() available anyway. To handle other HTTP methods you'd have to override the service() method.
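For example, here is a bare-bones sketch of a servlet handling PUT and DELETE; the /libraries/* mapping and the EJB delegation are assumptions based on your description:

    import java.io.IOException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/libraries/*")
    public class LibraryServlet extends HttpServlet {

        @Override
        protected void doPut(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String id = req.getPathInfo().substring(1); // e.g. PUT /libraries/42 -> "42"
            // Read the updated representation from req.getReader() and hand
            // both the id and the data to the EJB that manages the libraries.
            resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
        }

        @Override
        protected void doDelete(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String id = req.getPathInfo().substring(1); // e.g. DELETE /libraries/42 -> "42"
            // Delegate the removal of library "id" to the EJB.
            resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
        }
    }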
On the client side, Angular's HttpClient allows for all the methods you need, you could use jQuery's ajax, or there's again the web socket approach on the client too, as long as the browser plays along.
If it is a Java client, again there are web sockets (but you'd want to send some valid HTTP request instead of "Hello", as done in the example¹), HttpUrlConnection and the Apache HttpClient, at first glance.
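For instance, a quick sketch of issuing a DELETE from a plain Java client with HttpUrlConnection (the URL is made up):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DeleteExample {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/app/libraries/42");
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setRequestMethod("DELETE");
            // Reading the response code actually sends the request.
            System.out.println("Status: " + con.getResponseCode());
            con.disconnect();
        }
    }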
As for testing, there are browser extensions available that let you compose requests other than POST and GET, SoapUI is a very popular tool, and IntelliJ even has an HTTP client that lets you type the complete request as plain text.
¹ If you have a hand-crafted server that agrees to such a "Hello" protocol, that's fine too. It's just not HTTP.
Since there was quite some confusion in the question, as it turned out, I'd like to promote the idea some more of writing your own HTTP client/server with web sockets. You have ready-made examples for the socket stuff, so that part is easy.
But what's more interesting is that you're then at the raw TCP/IP level that HTTP is built on: all the framework indirection (servlets, whatever) drops away and you're left with basically a single text (well, a byte stream) that is the HTTP request. All the magic is gone; things are obvious and simple, just as specified.
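To illustrate, a toy sketch of that do-it-yourself approach with a plain java.net ServerSocket; it is not a real HTTP server, it just shows that the method name is ordinary text at the start of the byte stream:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class TinyHttpServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()));
                         PrintWriter out = new PrintWriter(client.getOutputStream())) {
                        String requestLine = in.readLine(); // e.g. "DELETE /libraries/42 HTTP/1.1"
                        if (requestLine == null || requestLine.isEmpty()) {
                            continue;
                        }
                        String method = requestLine.split(" ")[0]; // "DELETE"
                        out.print("HTTP/1.1 200 OK\r\n"
                                + "Content-Type: text/plain\r\n"
                                + "Connection: close\r\n\r\n"
                                + "You sent a " + method + " request\r\n");
                        out.flush();
                    }
                }
            }
        }
    }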

Cross JVM instrumentation

I'm spending some time with DynaTrace.
I'm impressed by its cross-JVM instrumentation feature.
In simple words, DynaTrace is able to instrument Java code, creating traces with some statistical information. This is nothing new.
But there is one really interesting feature: when a call to an external JVM is executed, DynaTrace is able to link the new trace to the caller's trace (i.e. remote session beans, web services, remote RMI and so on).
How is this possible?
I'm not able to imagine how to implement this feature. Any ideas?
Thank you
Dynatrace actually doesn't rely on information from beans. As you correctly said in your question, we use Byte Code Instrumentation, like other tools on the market. We instrument key methods of certain frameworks, e.g: Servlet, Axis, JMS, JDBC, ...
In the scenario where you make a call from one JVM to another using, e.g., HTTP-based communication, we instrument both the sending side of the HTTP request and the receiving side on the other JVM. On the sending side we attach an additional HTTP header with the ID of the current PurePath. PurePath is our patented technology. Every PurePath (= every single transaction) gets a unique ID. This ID "travels" with the request, e.g. we put it on the HTTP request as an HTTP header. On the receiving side - your second JVM - we inspect that HTTP header and therefore know that all the data we collect belongs to that PurePath. This allows us to do real end-to-end tracing without relying on things like beans and without correlating the data based on, e.g., timestamps.
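To make the general idea concrete, here is a sketch of the pattern (not Dynatrace's actual implementation; the X-Trace-Id header name and URL are made up):

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.UUID;

    public class TracePropagation {

        // Sending JVM: tag the outgoing request with the current transaction ID.
        static void callOtherJvm(String traceId) throws Exception {
            HttpURLConnection con = (HttpURLConnection)
                    new URL("http://other-jvm:8080/service").openConnection();
            con.setRequestProperty("X-Trace-Id", traceId);
            con.getResponseCode(); // fire the request
            con.disconnect();
        }

        public static void main(String[] args) throws Exception {
            callOtherJvm(UUID.randomUUID().toString());
        }
    }

    // Receiving JVM, e.g. inside an instrumented servlet:
    //
    //     String traceId = request.getHeader("X-Trace-Id");
    //     // associate every measurement taken in this JVM with traceId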
Makes sense?
If you have more questions let me know. I also recorded some videos and put them on YouTube to explain the technology and the product itself: http://bit.ly/dttutorials
This information is normally extracted using MXBeans. Such beans provide a standard API for accessing standard runtime information. Similarly, such applications often scan the class loaders for specific classes and extract the relevant information through hard-coded access. This is why less popular solutions are often not supported by monitoring tools.

php and java json-rpc implementations compatible and with ssl support

I'm working for a company and they require me to code an Android application that will handle sensitive data stored in a centralized database (MySQL), so I need to code a web service to let the Android app communicate with the DB, and since sensitive data will flow over that connection, I need to use a secure connection.
So I decided to use JSON-RPC for the communication. That's why I'm looking for two JSON-RPC implementations: the server side in PHP (the hosting they have is actually running PHP 4.3), and the client side in Java (compatible with Android), compatible with each other and with SSL support.
I'm sure someone might make some suggestions, but this probably isn't the best venue to get help with library selection.
If, on the other hand, you ask concrete questions about certain libraries/implementations, you might get intelligent answers. A common problem with that approach is that the asker doesn't know what questions to ask! (And that leads to reinvented wheels and, commonly, poor decisions around security.)
The JSON-RPC (1.0/1.1/2.0) group of protocols are extremely simple. There are hundreds of PHP implementations. I don't know much about evaluating Android libraries but this prior SO question gives a suggested client-side implementation for Android.
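To illustrate how simple the envelope is, here is a rough client-side sketch in plain Java with no library at all; the endpoint URL, method name and parameters are made up:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import javax.net.ssl.HttpsURLConnection;

    public class JsonRpcCall {
        public static void main(String[] args) throws Exception {
            // JSON-RPC 2.0 request with named params.
            String request = "{\"jsonrpc\":\"2.0\",\"method\":\"getEvents\","
                    + "\"params\":{\"num_events\":412},\"id\":1}";

            HttpsURLConnection con = (HttpsURLConnection)
                    new URL("https://example.com/rpc").openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            con.setRequestProperty("Content-Type", "application/json");

            try (OutputStream out = con.getOutputStream()) {
                out.write(request.getBytes(StandardCharsets.UTF_8));
            }

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
                // e.g. {"jsonrpc":"2.0","result":[...],"id":1}
                System.out.println(in.readLine());
            }
        }
    }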
I'll primarily deal with the server-side part of your question by giving you concrete questions you can ask about a particular implementation you might be evaluating.
Considerations for the server-side PHP JSON-RPC implementation:
How will you handle transport authentication?
How will you handle RPC-method authorization?
How will you handle transport security?
Will you need to handle positional/named/both style method arguments?
Will you need to handle/send notification messages?
Will you need to handle batch requests? Will you need to send batch responses?
How does the PHP implementation behave under a variety of exception/warning conditions?
How does the PHP implementation advise against or mitigate the PHP array set/list duality?
Will you need to implement SMD?
Will your web-service also need to perform other HTTP-level routing? (e.g: to static resources)
How will you handle transport authentication?
Assuming you use HTTP as the transport-layer protocol, how will you authenticate HTTP requests? Will you use password/HTTP-auth/certificate/SRP/other authentication? Will you continue to trust an open socket? Will you use cookies to extend trust after authentication?
These aren't really JSON-RPC related questions, but many server-side implementations make all sorts of assumptions about how you'll do authentication. Some just leave that entirely up to you. Your choice of library is likely to be greatly informed by your choices here.
How will you handle RPC-method authorization?
You might decide that once a user is authenticated that they're allowed to call all exposed methods. If not, then you need a mechanism to allow individual users (or groups of them) to execute some methods but not others.
While these concerns live inside the JSON-RPC envelope, they're not directly addressed by the JSON-RPC protocol specifications.
If you do need to implement a method-"authorization" scheme, you might find that some server implementations make this difficult or impossible. Look for a library that exposes hooks or callbacks prior to routing JSON-RPC requests to individual PHP functions so that you can inspect the requests and combine them with authentication-related information to make decisions about how/whether to handle those requests. If the library leaves you to your own devices here, you'll likely have to duplicate lots of user/method checking code across your JSON-RPC exposed PHP functions.
How will you handle transport security?
Assuming you use HTTP as your transport-level protocol, the obvious answer to this is TLS. All the usual SSL/TLS security concerns apply here and many SE/SO questions already deal with these.
Since you control the client-side implementation, you should at least consider:
Choosing a strong cipher & disallowing poor ciphers (a small client-side sketch follows this list)
Dealing (or not dealing!) with re-negotiation in a sane way
Avoiding known-problematic implementations/features (e.g: OpenSSL 1.0.1-1.0.1f, heartbeat/Heartbleed [CVE-2014-0160]).
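Since you control the Java client, here is a small sketch of checking and restricting what it will negotiate; the protocol version and cipher suite names are examples only and should be reviewed against current recommendations:

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class TlsCheck {
        public static void main(String[] args) throws Exception {
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
                // Refuse anything older than TLS 1.2 and pin one strong suite.
                socket.setEnabledProtocols(new String[] {"TLSv1.2"});
                socket.setEnabledCipherSuites(new String[] {"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"});
                socket.startHandshake();
                System.out.println("Negotiated: " + socket.getSession().getCipherSuite());
            }
        }
    }

On Android the available protocols and suites depend on the platform version, so test this on the oldest device you need to support.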
Will you need to handle positional/named/both style method arguments?
Some JSON-RPC server-side implementations prefer that the params argument of the JSON-RPC envelope looks like params: [412, "something"] (positional), while others prefer that the params argument of the JSON-RPC envelope looks like params: { "type": "something", "num_events": 412 } (named). Still others can handle either style without issue. Your choice of client & server libraries should be informed by this issue of 'compatibility'.
Will you need to handle/send notification messages?
Notification messages are sent by the client or the server to the opposite party in order to communicate some updated/completed state over time. The sending party (ostensibly) shouldn't expect a response to a "notification" message.
Most JSON-RPC implementations use HTTP(S) as their transport protocol; since HTTP doesn't implement bi-directional asynchronous communication, you can't properly implement "notification" messages.
As your client side will be Android, you could use plain JSON text over TCP sockets as the transport protocol (rather than HTTP); since TCP does allow bi-directional asynchronous communication (once the socket is established), you can properly implement "notification" messages.
Some libraries implement client-to-server notifications over HTTP transport by enqueueing notification messages and then piggy-backing the notifications on request-style messages (possibly using "batch" requests -- see the next question). Similarly, some libraries implement server-to-client notifications over HTTP transport by piggy-backing the notifications on response-style messages (possibly using "batch" responses).
Still other libraries make use of HTTP by using WebSockets as their transport protocol. As this does allow bi-directional (essentially) asynchronous communication, these libraries can properly implement "notification" messages.
If you don't need this you'll have significantly more choice in your selection of transport protocols (and therefore implementations).
Will you need to handle batch requests? Will you need to send batch responses?
Sometimes it is desirable to send/handle groups of requests/responses/notifications in a single request/response (so called "batch" messages). The id argument of the JSON-RPC envelope is used to distinguish requests/responses in a batch.
If you don't need this you'll have significantly more choice in your selection of implementations ("batch messages" is the most commonly omitted feature of JSON-RPC implementations).
How does the PHP implementation behave under a variety of exception / warning conditions?
PHP has many ways of propagating error conditions to the programmer, and it also has different types of error condition. Does the server-side implementation handle thrown exceptions and PHP-level errors in a consistent manner? Is the error_reporting configuration appropriate (and does the library change it locally)? Will the library interact poorly with debugging libraries (e.g: xdebug)?
Does the server-side implementation properly isolate JSON-RPC level errors from HTTP level errors? (i.e: does it prevent errors/exceptions inside a request lifecycle from becoming an HTTP-level error like 500: Internal Server Error).
Does the server-side implementation properly interact with your authentication & authorization measures? (i.e: it might be desirable to promote related errors to 401: Unauthorized and 403: Forbidden HTTP statuses respectively).
Does the server-side implementation 'leak' sensitive information about your implementation or data-sources when delivering errors, either via HTTP or JSON-RPC?
These are pretty important questions to ask of a JSON-RPC webservice that will be used in a security minded setting. It's pretty hard to evaluate this for a given implementation by reading its documentation. You'll likely need to:
delve into its source code to look at the error handling strategies employed, and
perform extensive testing to avoid leaks and undesired propagation of error conditions.
How does the PHP implementation advise against or mitigate the PHP array set/list duality?
PHP is a crappy language. There. I said it :-)
One of the common issues JSON-RPC implementors deal with is how to properly map JSON arrays and JSON objects to language-native datastructures. While PHP uses the same data structure for both indexed and associative arrays, most languages do not. That includes JavaScript, whose language features/limitations informed the JSON (and therefore JSON-RPC) specifications.
As in PHP there is no way to distinguish an empty indexed array from an empty associative array, and as various naughty things can be done to muddy existing arrays (e.g: setting an associative key on an existing indexed array), various solutions have been proposed to deal with this issue.
One common mitigation is that the JSON-RPC implementation might force the author to cast all intended-associative arrays to (object) before returning them, and then reject at run-time any (implicitly or intentionally) indexed arrays that have non-conforming keys or non-sequential indexes. Some other server-side libraries force authors to annotate their methods so that the intended array semantics are known at 'compile' time; some try to determine typing through automatic introspection (bad bad bad). Still other server-side libraries leave this to chance.
The mitigations present/missing in the server-side PHP JSON-RPC implementation will probably be quite indicative as to its quality :-)
Will you need to implement SMD?
SMD is a not-very-standardized extension to JSON-RPC "-like" protocols that allows publishing of WSDL-like information about the webservice endpoint and the functions it exposes (no classes, although conventions for namespacing can be found out there).
Sometimes 'typing' and 'authorization' concerns are layered with the class/function "annotations" used to implement SMD.
This 'standard' is not well supported in either client or server implementations :-)
Will your web-service also need to perform other HTTP-level routing? (e.g: to static resources)
Some server-side implementations make all sorts of assumptions about how soon during the HTTP request cycle they'll be asked to get involved. This can sometimes complicate matters if the JSON-RPC implementation either implements the HTTP server itself, or constrains the content of all received HTTP messages.
You can often work around these issues by proxying known-JSON-RPC requests to a separate web server, or by routing non-JSON-RPC requests to a separate web-server.
If you need to serve JSON-RPC and non-JSON-RPC resources from the same webserver, ask yourself whether the server-side JSON-RPC implementation makes this impossible, or whether it makes you jump through hoops to work around it.

Client or server side invocation to google API?

I'm writing a web application with GWT and I have to call the Google Calendar API to retrieve some information. I now have this dilemma: is it better to use a client-side invocation (using JavaScript or the gwt-gdata library), or to use the standard Google library for Java to call the service on the server side and then pass all the data to the client via an async call? I'm not able to understand the pros and cons of the two approaches... In particular, I need to call the Calendar API several times to retrieve events, let users add new ones, etc.
Can you help me?
I would call it from the server side. Why?
it means your client-side view code is dedicated to providing a view only. You're not confusing matters by calling multiple services, and you're enforcing separation of concerns.
you can make use of strategies such as caching on the server side (see the sketch below).
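As a rough sketch of that server-side approach, here is a GWT service that caches calendar results so repeated client calls don't hit the Google API every time; the service and method names are illustrative, and the actual Calendar call is left as a placeholder:

    // --- shared RPC interface (client side) ---
    import java.util.ArrayList;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    @RemoteServiceRelativePath("calendar")
    public interface CalendarService extends RemoteService {
        ArrayList<String> getEvents(String calendarId);
    }
    // (GWT also needs a CalendarServiceAsync interface on the client, omitted here.)

    // --- server implementation ---
    import java.util.ArrayList;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import com.google.gwt.user.server.rpc.RemoteServiceServlet;

    public class CalendarServiceImpl extends RemoteServiceServlet implements CalendarService {

        // Cache keyed by calendar ID; add expiry/invalidation in real code.
        private final Map<String, ArrayList<String>> cache = new ConcurrentHashMap<>();

        @Override
        public ArrayList<String> getEvents(String calendarId) {
            return cache.computeIfAbsent(calendarId, this::fetchEventsFromGoogle);
        }

        private ArrayList<String> fetchEventsFromGoogle(String calendarId) {
            ArrayList<String> events = new ArrayList<>();
            // Call the Google Calendar client library here and map the results
            // to simple serializable DTOs (plain strings in this sketch).
            return events;
        }
    }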
Check the performance of using the server-side library. I found with the search library that the round trip from the client to the server and back to the client was too slow.

Java HTTP Proxy

I am working on a project where we'd like to pull content from one of our legacy applications, but we'd like to avoid showing the "waiting for www.somehostname.com/someproduct/..." status message to the user.
We can easily add another domain that points to the same server, but we still have the problem of the someproduct context root in the URL. Simply changing the context root is not an option, since there are hundreds of hard-coded bits in the legacy app that refer to the existing context root.
What I'd like to do is send a request to a different context root (say /foo/bar.do) and have it actually go to /someproduct/bar.do, but without a redirect, so the browser still shows /foo/bar.do.
I've found a few URL-rewriting options that do something similar, but so far they all seem to be restricted to catching/forwarding requests within the same context root.
Is there any project out there that handles this type of thing? We are using WebLogic 10.3 (the legacy app is on WebLogic 8). Ideally we could host this as part of the new app, but if we had to, we could also add something to the old app.
Or is there some completely different solution that would work better that we haven't thought of?
Update: I should mention that we already suggested using Apache with mod_rewrite or something similar, but management/hosting are giving that solution the thumbs down. :/
Update 2: More information:
The places where the user can see the old URL / context root are pages/workflows that are loaded from the old app into an iframe in the new app.
So there is really nothing special about the communication between the two apps that the client could see; it's plain old HTTPS handled by the browser.
I think you should be able to do this using a fairly simple custom servlet.
At a high level, you'd:
Map the servlet to a mapping like /foo/*
In the servlet's implementation, simply take the request's pathInfo, and use that to make a request to the legacy site (using HttpUrlConnection or the Apache Commons equivalent).
Pipe the response back to the client (some processing may be necessary to handle the headers), roughly as sketched below.
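A bare-bones sketch of that servlet, with error handling, request headers, cookies and non-GET methods glossed over; the legacy URL reuses the placeholder host from the question:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LegacyProxyServlet extends HttpServlet {

        // Placeholder host from the question; point this at the legacy app.
        private static final String LEGACY_BASE = "https://www.somehostname.com/someproduct";

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // /foo/bar.do -> https://www.somehostname.com/someproduct/bar.do
            String path = req.getPathInfo() == null ? "" : req.getPathInfo();
            HttpURLConnection con = (HttpURLConnection) new URL(LEGACY_BASE + path).openConnection();
            resp.setStatus(con.getResponseCode());
            if (con.getContentType() != null) {
                resp.setContentType(con.getContentType());
            }
            try (InputStream in = con.getInputStream();
                 OutputStream out = resp.getOutputStream()) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            }
        }
    }

You'd map it (in web.xml or via @WebServlet) to /foo/*, so a request to /foo/bar.do is fetched from /someproduct/bar.do while the browser keeps showing /foo/bar.do.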
Why not front WebLogic with Apache?
This is a very standard setup and will bring lots of other advantages too. URL rewriting in Apache is extremely well supported and the documentation is excellent.
Don't be put off by the setup; it's fairly simple, and you can run Apache on the same box if necessary.
Using Restlet would allow you to do this. The Redirector object can be used. See here and here for example.
If you instead serve out a JSP page you can use the tag to do the request server side.
Then the user will not even know that the resource was external.
http://java.sun.com/products/jsp/syntax/1.2/syntaxref1214.html
A bit more context about the API the client is working against would help here in order to give a solution that could work. Are you trying to provide a completely new API, totally different from the legacy Java EE app? What artifact is serving the API (servlet, EJB, REST service)?
If the API is provided by a different enterprise application, then I suppose you could simply use a POJO class to act as a gateway to the legacy app, which of course can then be reachable via a different context root than the new service app. This solution assumes you know all the legacy API methods and can map them to the calls of the new API.
For a generic solution where you don't have to worry about which methods are called, I am curious whether the proxy approach could really work. Would the user credentials also be passed correctly to the legacy system by URL rewriting? Would you have to switch to a different user for the legacy calls instead of using the original caller? Is that even possible with URL rewriting? Not sure if that could work in a secure context.
Maybe you can provide a bit more information here.
