I'm working for a company that requires me to code an Android application that will handle sensitive data stored in a centralized database (MySQL), so I need to code a web service so the Android app can communicate with the DB, and since sensitive data will flow over that connection, it needs to be secure.
So I decided to use JSON-RPC for the communication. That's why I'm looking for two JSON-RPC implementations: a server-side one in PHP (the hosting they have is actually running PHP 4.3) and a client-side one in Java (compatible with Android), compatible with each other and with SSL support.
I'm sure someone might make some suggestions, but this probably isn't the best venue to get help with library selection.
If, on the other hand, you ask concrete questions about particular libraries/implementations, you might get intelligent answers. A common problem with that approach is that the asker doesn't know what questions to ask! (And that leads to reinvented wheels and, commonly, poor decisions around security.)
The JSON-RPC (1.0/1.1/2.0) group of protocols are extremely simple. There are hundreds of PHP implementations. I don't know much about evaluating Android libraries but this prior SO question gives a suggested client-side implementation for Android.
I'll primarily deal with the server-side part of your question by giving you concrete answers you can ask about a particular implementation you might be evaluating.
Considerations for the server-side PHP JSON-RPC implementation:
How will you handle transport authentication?
How will you handle RPC-method authorization?
How will you handle transport security?
Will you need to handle positional/named/both styles of method arguments?
Will you need to handle/send notification messages?
Will you need to handle batch requests? Will you need to send batch responses?
How does the PHP implementation behave under a variety of exception/warning conditions?
How does the PHP implementation advise against or mitigate the PHP array set/list duality?
Will you need to implement SMD?
Will your web-service also need to perform other HTTP-level routing? (e.g: to static resources)
How will you handle transport authentication?
Assuming you use HTTP as the transport-layer protocol, how will you authenticate HTTP requests? Will you use password/HTTP-auth/certificate/SRP/other authentication? Will you continue to trust an open socket? Will you use cookies to extend trust after authentication?
These aren't really JSON-RPC-related questions, but many server-side implementations make all sorts of assumptions about how you'll do authentication. Some just leave that entirely up to you. Your choice of library is likely to be greatly informed by your choices here.
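For the Android side (which you do control), a minimal sketch of an authenticated call over TLS might look like the following; the endpoint and credentials are placeholders, and on older Android releases you would swap java.util.Base64 for android.util.Base64:

```java
import java.io.OutputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.net.ssl.HttpsURLConnection;

public class AuthenticatedRpcCall {
    public static int postJson(String endpoint, String user, String password, String jsonBody) throws Exception {
        HttpsURLConnection conn = (HttpsURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");

        // HTTP Basic auth -- only acceptable because the transport is TLS.
        String credentials = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + credentials);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(jsonBody.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // the caller decides how to treat 401/403 etc.
    }
}
```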
How will you handle RPC-method authorization?
You might decide that once a user is authenticated that they're allowed to call all exposed methods. If not, then you need a mechanism to allow individual users (or groups of them) to execute some methods but not others.
While these concerns live inside the JSON-RPC envelope, they're not directly addressed by the JSON-RPC protocol specifications.
If you do need to implement a method-"authorization" scheme, you might find that some server implementations make this difficult or impossible. Look for a library that exposes hooks or callbacks prior to routing JSON-RPC requests to individual PHP functions so that you can inspect the requests and combine them with authentication-related information to make decisions about how/whether to handle those requests. If the library leaves you to your own devices here, you'll likely have to duplicate lots of user/method checking code across your JSON-RPC exposed PHP functions.
How will you handle transport security?
Assuming you use HTTP as your transport-level protocol, the obvious answer to this is TLS. All the usual SSL/TLS security concerns apply here and many SE/SO questions already deal with these.
Since you control the client-side implementation, you should at least consider the following (a client-side sketch follows the list):
Choosing a strong cipher suite & disallowing weak ciphers
Dealing (or not dealing!) with re-negotiation in a sane way
Avoiding known-problematic implementations/features (e.g.: OpenSSL 1.0.1-1.0.1f and the heartbeat extension bug, Heartbleed, CVE-2014-0160).
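By way of illustration, here is a client-side sketch (assuming a Java/Android client talking to the PHP service) that pins the protocol version and whitelists a couple of strong cipher suites; which suites are actually available depends on the platform and Android version:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class StrictTlsSocket {
    public static SSLSocket open(String host, int port) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null); // default trust store; consider certificate pinning here
        SSLSocketFactory factory = ctx.getSocketFactory();
        SSLSocket socket = (SSLSocket) factory.createSocket(host, port);

        // Refuse legacy protocols and weak cipher suites explicitly.
        socket.setEnabledProtocols(new String[] { "TLSv1.2" });
        socket.setEnabledCipherSuites(new String[] {
                "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
                "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
        });
        socket.startHandshake();
        return socket;
    }
}
```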
Will you need to handle positional/named/both styles of method arguments?
Some JSON-RPC server-side implementations prefer that the params argument of the JSON-RPC envelope looks like params: [412, "something"] (positional), while others prefer that the params argument of the JSON-RPC envelope looks like params: { "type": "something", "num_events": 412 } (named). Still others can handle either style without issue. Your choice of client & server libraries should be informed by this issue of 'compatibility'.
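To make the difference concrete, here is a small sketch (using org.json, which Android bundles; the method name is hypothetical) that builds both envelope styles:

```java
import org.json.JSONArray;
import org.json.JSONObject;

public class EnvelopeStyles {
    public static void main(String[] args) throws Exception {
        // Positional parameters: params is a JSON array.
        JSONObject positional = new JSONObject()
                .put("jsonrpc", "2.0")
                .put("method", "recordEvents") // hypothetical method name
                .put("params", new JSONArray().put(412).put("something"))
                .put("id", 1);

        // Named parameters: params is a JSON object.
        JSONObject named = new JSONObject()
                .put("jsonrpc", "2.0")
                .put("method", "recordEvents")
                .put("params", new JSONObject().put("type", "something").put("num_events", 412))
                .put("id", 2);

        System.out.println(positional);
        System.out.println(named);
    }
}
```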
Will you need to handle/send notification messages?
Notification messages are sent by the client or the server to the opposite party in order to communicate some updated/completed state over time. The sending party (ostensibly) shouldn't expect a response to a "notification" message.
Most JSON-RPC implementations use HTTP(S) for their transport protocol; as HTTP doesn't implement bi-directional asynchronous communication, you can't properly implement "notification" messages.
As your client side will be Android, you could use plain JSON text over TCP sockets as the transport protocol (rather than HTTP); since TCP does allow bi-directional asynchronous communication (once the socket is established), you can properly implement "notification" messages.
Some libraries implement client-to-server notifications over HTTP transport by enqueuing notification messages and then piggy-backing the notifications on request-style messages (possibly using "batch" requests -- see the next question). Similarly, some libraries implement server-to-client notifications over HTTP transport by piggy-backing the notifications on response-style messages (possibly using "batch" responses).
Still other libraries make use of HTTP by using WebSockets as their transport protocol. As this does allow bi-directional (essentially) asynchronous communication, these libraries can properly implement "notification" messages.
If you don't need this you'll have significantly more choice in your selection of transport protocols (and therefore implementations).
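If you do go the raw-TCP route mentioned above, a notification is simply a request envelope without an id; a rough Java sketch (the newline framing is an assumption you would have to agree on with the server):

```java
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import org.json.JSONObject;

public class TcpNotification {
    // Sends a JSON-RPC 2.0 notification: no "id" member, so no response is expected.
    public static void notify(Socket socket, String method, JSONObject params) throws Exception {
        JSONObject notification = new JSONObject()
                .put("jsonrpc", "2.0")
                .put("method", method)
                .put("params", params);
        Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8);
        out.write(notification.toString());
        out.write("\n"); // newline-delimited framing is an assumption, not part of JSON-RPC
        out.flush();
    }
}
```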
Will you need to handle batch requests? Will you need to send batch responses?
Sometimes it is desirable to send/handle groups of requests/responses/notifications in a single request/response (so called "batch" messages). The id argument of the JSON-RPC envelope is used to distinguish requests/responses in a batch.
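For illustration, a batch is just a JSON array of ordinary envelopes, with the id member correlating each response to its request; a small sketch (again with org.json and hypothetical method names):

```java
import org.json.JSONArray;
import org.json.JSONObject;

public class BatchRequest {
    public static JSONArray build() throws Exception {
        // Two calls plus one notification, sent as a single JSON array.
        return new JSONArray()
                .put(new JSONObject().put("jsonrpc", "2.0").put("method", "getUser")
                        .put("params", new JSONArray().put(7)).put("id", "a1"))
                .put(new JSONObject().put("jsonrpc", "2.0").put("method", "getEvents")
                        .put("params", new JSONArray().put(412)).put("id", "a2"))
                // No "id": a notification, so the server sends no response for it.
                .put(new JSONObject().put("jsonrpc", "2.0").put("method", "ping"));
    }
}
```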
If you don't need this you'll have significantly more choice in your selection of implementations ("batch messages" is the most commonly omitted feature of JSON-RPC implementations).
How does the PHP implementation behave under a variety of exception / warning conditions?
PHP has many ways of propagating error conditions to the programmer and also has different types of error condition. Does the server-side implementation handle 'thrown' exceptions and PHP-level errors in a consistent manner? Is the error_reporting configuration appropriate (and does the library change it locally) ? Will the library interact poorly with debugging libraries (e.g: xdebug)?
Does the server-side implementation properly isolate JSON-RPC level errors from HTTP level errors? (i.e: does it prevent errors/exceptions inside a request lifecycle from becoming an HTTP-level error like 500: Internal Server Error).
Does the server-side implementation properly interact with your authentication & authorization measures? (i.e: it might be desirable to promote related errors to 401: Unauthorized and 403: Forbidden HTTP statuses respectively).
Does the server-side implementation 'leak' sensitive information about your implementation or data-sources when delivering errors, either via HTTP or JSON-RPC?
These are pretty important questions to ask of a JSON-RPC webservice that will be used in a security-minded setting. It's pretty hard to evaluate this for a given implementation by reading its documentation. You'll likely need to:
delve into its source code to look at the error handling strategies employed, and
perform extensive testing to avoid leaks and undesired propagation of error conditions.
How does the PHP implementation advise against or mitigate the PHP array set/list duality?
PHP is a crappy language. There. I said it :-)
One of the common issues JSON-RPC implementors deal with is how to properly map JSON arrays and JSON objects to language-native datastructures. While PHP uses the same data structure for both indexed and associative arrays, most languages do not. That includes JavaScript, whose language features/limitations informed the JSON (and therefore JSON-RPC) specifications.
Since in PHP there is no way to distinguish an empty indexed array from an empty associative array, and since various naughty things can be done to muddy existing arrays (e.g: setting an associative key on an existing indexed array), various solutions have been proposed to deal with this issue.
One common mitigation is that the JSON-RPC implementation might force the author to cast all intended-associative arrays to (object) before returning them, and then reject at run-time any (implicit or intended) indexed arrays that have non-conforming keys or non-sequential indexes. Some other server-side libraries force authors to annotate their methods such that the intended array semantics are known at 'compile' time; some try to determine typing through automatic introspection (bad bad bad). Still other server-side libraries leave this to chance.
The mitigations present/missing in the server-side PHP JSON-RPC implementation will probably be quite indicative as to its quality :-)
Will you need to implement SMD?
SMD is a not-very-standardized extension to JSON-RPC "-like" protocols that allows publishing of WSDL-like information about the webservice endpoint and the functions it exposes (no classes, although conventions for namespacing can be found out there).
Sometimes 'typing' and 'authorization' concerns are layered with the class/function "annotations" used to implement SMD.
This 'standard' is not well supported in either client or server implementations :-)
Will your web-service also need to perform other HTTP-level routing? (e.g: to static resources)
Some server-side implementations make all sorts of assumptions about how soon during the HTTP request cycle they'll be asked to get involved. This can sometimes complicate matters if the JSON-RPC implementation either implements the HTTP server itself, or constrains the content of all received HTTP messages.
You can often work around these issues by proxying known-JSON-RPC requests to a separate web server, or by routing non-JSON-RPC requests to a separate web-server.
If you need to serve JSON-RPC and non-JSON-RPC resources from the same webserver, ask yourself if the server-side JSON-RPC implementation makes this impossible, or whether it makes you jump through hoops to work around it.
I have to write code to automatically create a JIRA based on some action performed in my workplace. The solution that my manager proposed is to create a JIRA creation agent. We are using REST architecture.
Previously I wrote a client; now I have to write an agent. What I don't understand is the key technical difference between the two. As someone with very little experience with REST, I find it hard to grasp how exactly these are different.
Do I have to code them in a different style? What are some good practices for writing these kinds of code?
I tried reading different blogs and related posts but couldn't find anything satisfactory to point out the differences.
This may be semantically different based on your company's internal linguistics, but typically it is as follows:
REST Server is the software that provides the exposed API
REST Client is the software that uses the REST Server's API to make requests and get the resulting information (usually JSON). This is more of an interface for making requests
REST Agent uses the REST Client to make the requests but actually uses the resulting JSON and processes it to perform some sort of action
However, colloquially, people use REST Client and REST Agent interchangeably. The main thing is the delineation between who provides information through an API and who requests information through one.
EDIT: To clarify, in your case the agent would be making a request through the API, most likely a PUT or POST request to create a JIRA issue.
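By way of illustration, here is a minimal Java sketch of such an agent creating an issue through JIRA's REST API v2; the base URL, project key, issue type and credentials are placeholders, and real code should build the JSON body with a proper library rather than string concatenation:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JiraAgent {
    // Creates an issue via JIRA's REST API (v2); arguments are placeholders.
    public static int createIssue(String baseUrl, String user, String token, String summary) throws Exception {
        // NOTE: summary is not JSON-escaped here; use a JSON library in real code.
        String body = "{\"fields\":{\"project\":{\"key\":\"PROJ\"},"
                + "\"summary\":\"" + summary + "\","
                + "\"issuetype\":{\"name\":\"Task\"}}}";

        HttpURLConnection conn = (HttpURLConnection) new URL(baseUrl + "/rest/api/2/issue").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString((user + ":" + token).getBytes(StandardCharsets.UTF_8)));

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // 201 means the issue was created
    }
}
```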
I'm writing a server-client architecture based game in Java.
For design reasons, I would like to use asynchronous calls for passing client actions to the server, and also asynchronous callbacks for passing the result(s) of said actions back to the client. Asynchronous calls allow buffering of client actions. Queued buffering allows simple, basically one threaded processing of client actions.
At the moment, my server and client code is pretty symmetric. They create a registry, then export and bind themselves.
Asynchronicity is achieved by buffering the incoming actions or results in a ConcurrentLinkedQueue. Actual processing is done by a thread running at regular intervals.
However, this current architecture does not work when clients are firewalled or behind a NAT. In this case the server simply can not reach clients to push results to them.
Furthermore, in this current architecture the server does not know which client sent a given action, unless a redundant layer of authentication or session handling is introduced. This allows forged actions and cheating.
I've been thinking about possible solutions but haven't found a proper one:
Client pull instead of server push. There could be a method on the server that the clients call periodically to fetch their results. However, this approach seems very ugly; it introduces additional delays, bandwidth use and timing issues. It does not solve action forgery either. Direct notifications are also very much preferred.
TCP connections by themselves allow bidirectional communication, and can definitely identify clients, so RMI or JRemoting might be hacked to support it, but I don't know how, and I'm not aware of any existing solution.
Message passing. I'm not sure whether message passing frameworks support authentication / sessions or client identification. I'd definitely lose remote methods though.
I believe the correct solution would be to find a remote method invocation framework that supports all of the above.
So in a nutshell, I'm searching for a way to:
call the server asynchronously or pass a message to it
call the client asynchronously or pass a message to it, even behind firewall or NAT
identify the client sending the action
preferably be able to call methods, not just pass messages
keep the ability to easily test it with JUnit and Mockito (multiple clients per machine)
Are there any remote method invocation frameworks with support for these? Which is the best?
I don't know why you would insist on using RMI or anything similar, as it is by definition unidirectional. But I had to learn a similar lesson... for one of my client-server systems, I implemented something similar to what you have now, using RMI and long polls. That turned out to be a horrible mess that just kept getting worse and worse.
Then I found out about the wonderful world of publish-subscribe frameworks. These are a natural way to build a client-server application without the need to implement a lot of your own plumbing. Moreover, these frameworks support things like auto keepalives, time syncing, session authentication and permissions, and tons of other stuff that you wouldn't want to implement yourself.
For my project, I ripped out all of my own work and replaced it with CometD, which supports both Java and browser (Javascript) clients, and couldn't be happier. It would certainly support all your needs - asynchronous communication initiated from either side, client identification (and many other features), and clients behind NAT would not be a problem once a connection is established. Easy to write tests too, and the whole framework has been scaled up to be able to handle 100k clients, which would be impossible for RMI.
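For a flavour of what this looks like, here is a rough sketch of a CometD Java client (assuming a CometD 3.x client with Jetty's HttpClient; the server URL and channel names are made up):

```java
import java.util.HashMap;
import java.util.Map;
import org.cometd.bayeux.Message;
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;

public class GameClient {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        BayeuxClient client = new BayeuxClient("https://example.com/cometd",
                new LongPollingTransport(new HashMap<>(), httpClient));
        client.handshake();
        client.waitFor(5000, BayeuxClient.State.CONNECTED);

        // The server pushes results here, even if the client is behind NAT,
        // because the client initiated (and keeps) the connection.
        client.getChannel("/game/results").subscribe(
                (ClientSessionChannel channel, Message message) ->
                        System.out.println("result: " + message.getData()));

        // Client actions become published events instead of remote method calls.
        Map<String, Object> action = new HashMap<>();
        action.put("type", "move");
        action.put("target", "B7");
        client.getChannel("/game/actions").publish(action);
    }
}
```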
I would strongly suggest that you consider dropping the requirement to be able to call methods remotely. Methods are inherently one-sided, but they still require a call and return. It's much better to design your system with event-driven programming.
Update: I've since moved to the world of web apps, specifically using Meteor.
Before we develop our custom solution, I'm looking for some kind of library, which provides:
Non-blocking queue of HTTP requests
with these attributes:
Persisting requests to avoid losing them in case of:
network connectivity interruption
application quit, forced GC on background app
etc..
Possibility of setting all of these fields:
Address
Headers
POST data
So please, is there anything usable right now that could save us a whole day of developing this?
Right now we don't need any callbacks on completed requests, nor saving of the result data, as there won't be any.
In my humble opinion, a good and straightforward solution would be to develop your own layer (which shouldn't be so complicated) using a sophisticated framework for connection handling, such as Netty https://netty.io/ , together with a sophisticated framework for asynchronous processing, such as Akka http://akka.io/
Let's first look inside Netty support for http at http://static.netty.io/3.5/guide/#architecture.8 :
4.3. HTTP Implementation
HTTP is definitely the most popular protocol in the Internet. There are already a number of HTTP implementations such as a Servlet container. Then why does Netty have HTTP on top of its core?
Netty's HTTP support is very different from the existing HTTP libraries. It gives you complete control over how HTTP messages are exchanged at a low level. Because it is basically the combination of an HTTP codec and HTTP message classes, there is no restriction such as an enforced thread model. That is, you can write your own HTTP client or server that works exactly the way you want. You have full control over everything that's in the HTTP specification, including the thread model, connection life cycle, and chunked encoding.
And now let's dig inside Akka. Akka is a framework which provides an excellent abstraction on the top of Java concurrent API, and it comes with API in Java or Scala.
It provides you a clear way to structure your application as a hierarchy of actors:
Actors communicate through message passing, using immutable messages, so you don't have to worry about thread-safety
Actor messages are stored in mailboxes, which can be durable
Actors are responsible for supervising their children
Actors can be run on one or more JVMs and can communicate using a wide range of protocols
It provides a lightweight abstraction for asynchronous processing, Future, which is easier to use than Java Futures.
It provides other fancy stuff such as Event Bus, ZeroMQ adapter, Remoting support, Dataflow concurrency, Scheduler
Once you become familiar with the two frameworks, it turns out that what you need can easily be coded through them.
In fact, what you need is an HTTP proxy coded in Netty that, upon receiving a request, immediately sends a message to an Akka actor of type FSM (http://doc.akka.io/docs/akka/2.0.2/java/fsm.html) which uses a durable mailbox (http://doc.akka.io/docs/akka/2.0.2/modules/durable-mailbox.html).
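As a very rough sketch of the actor half (written against a newer Akka classic Java API than the 2.0.2 docs linked above; the message type is hypothetical):

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class RequestForwarder extends AbstractActor {

    // Hypothetical immutable message describing one queued HTTP request.
    public static final class QueuedRequest {
        public final String address;
        public final String postData;
        public QueuedRequest(String address, String postData) {
            this.address = address;
            this.postData = postData;
        }
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(QueuedRequest.class, req -> {
                    // Replay the request here (e.g. via Netty); on failure, re-enqueue it or
                    // leave it in the (durable) mailbox so it survives restarts.
                    System.out.println("forwarding " + req.address);
                })
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("request-queue");
        ActorRef forwarder = system.actorOf(Props.create(RequestForwarder.class), "forwarder");
        forwarder.tell(new QueuedRequest("https://example.com/api", "{\"k\":1}"), ActorRef.noSender());
    }
}
```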
Here is a link to an open-source library that was the Master's thesis of a student at the Czech Technical University in Prague. It is a very large and powerful library and mainly focuses on location. The good thing about it, though, is that it leaves out the headers and other overhead that REST carries.
It is the latest fork, and hopefully it will at least give you inspiration for your "own" solution.
How about these concurrent collections:
http://mcg.cs.tau.ac.il/projects/concurrent-data-structures
I hope that the license is OK.
You'll want to have a look at these two posts (linked at the end of this answer).
Very basically, an approach that has worked well for me is to separate the requests from the queue and the executor.
Requests are executed as Runnables or Callables. Inherit from them to create different kinds of requests to your API or service. Set them up there, adding headers and/or a body, prior to executing them.
Enqueue those requests in a queue (choose whichever fits best for you; I'd say a LinkedBlockingQueue will do the job) linked to an executor from within a bound service, and call them from your activity or any other scope. If you don't need responses or callbacks, you can avoid using Guava for listening to futures or creating your own callbacks.
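A minimal sketch of the queue-plus-executor part (persistence to disk, which you also asked for, is not shown here):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class RequestQueue {

    // A request is just a Runnable; subclass or wrap it to set URL, headers and POST data.
    private final LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public void enqueue(Runnable request) {
        queue.offer(request);
    }

    public void start() {
        executor.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Runnable request = queue.take(); // blocks until a request is available
                    request.run();                   // on failure, push onto a retry queue instead
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }
}
```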
I'll stay tuned. If you need more depth I can post some specific pieces of code. There's the source of a basic example in the first link though.
http://ugiagonzalez.com/2012/08/03/using-runnables-queues-and-executors-to-perform-tasks-in-background-threads-in-android/
http://ugiagonzalez.com/2012/07/02/theres-life-after-asynctasks-in-android/
Update:
You can create another queue for those requests that were impossible to execute.
One approach that comes to mind would be to add all your failed requests to the retry queue. The retry queue would keep trying to re-run these tasks while the phone still thinks some kind of internet connection is available. In the request object you can set a maximum number of retries and compare it to a currentRetry counter, incrementing it on every retry.
Mmm this might be interesting. I'll definitely think about including that in my library.
I am a student building an HTTP proxy server. I want to cache requests that are frequently accessed. Can anyone give me some ideas about this, especially in Java?
To figure out what you need to implement, read and understand the HTTP specification. Focus particularly on the sections on how a proxy is supposed to behave.
You could possibly base part of the implementation on the Apache HttpClient library, but I have a feeling that the APIs will prove to be unsuitable for the proxy server use-case.
I'd also like to point out that a more practical way to implement an HTTP proxy server would be to simply deploy an existing server like Squid.
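As a starting point for the caching part only, here is a tiny in-memory LRU cache keyed by request URL; a real proxy must also honor Cache-Control/Expires, the Vary header, and only cache responses that are actually cacheable:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A tiny LRU cache keyed by request URL; values are the cached response bytes.
public class ResponseCache extends LinkedHashMap<String, byte[]> {
    private final int maxEntries;

    public ResponseCache(int maxEntries) {
        super(16, 0.75f, true); // access-order = true gives LRU eviction order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
        return size() > maxEntries; // evict the least recently used entry when full
    }
}
```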
GWT RPC is proprietary but looks solid, supported with patterns by Google, and is mentioned by every book and tutorial I've seen. Is it really the choice for GWT client/server communication? Do you use it, and if not, why, and what did you choose? I assume that I have generic server application code that can accommodate RPC, EJBs, web services/SOAP, REST, etc.
Bonus question: any security issues with GWT RPC I need to be aware of?
We primarily use three methods of communications:
GWT-RPC - This is our primary and preferred mechanism, and what we use whenever possible. It is the "GWT way" of doing things, and works very well.
XMLHttpRequest using RequestBuilder - This is typically for interaction with non-GWT back ends, and we use it mainly to pull in static web content that we need during runtime (something like server-side includes). It is especially useful when we need to integrate with a CMS. We wrap our RequestBuilder code in a custom "Panel" that takes a content URI as its constructor parameter and populates itself with the contents of the URI (a small sketch follows this list).
Form submission using FormPanel - This also requires interaction with a non-GWT back end (custom servlet), and is what we currently use to do cross site communications. We don't really communicate "cross site" per se, but we do sometimes need to send data over SSL on a non-SSL page, and this is the only way we've been able to do it so far (with some hacks).
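For reference, a small sketch of the RequestBuilder approach described above; the ContentHandler callback interface is hypothetical:

```java
import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;

public class ContentLoader {
    // Pulls static content from the same origin and hands it to the caller.
    public static void load(String contentUri, final ContentHandler handler) {
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, contentUri);
        try {
            builder.sendRequest(null, new RequestCallback() {
                @Override
                public void onResponseReceived(Request request, Response response) {
                    handler.onContent(response.getText());
                }
                @Override
                public void onError(Request request, Throwable exception) {
                    handler.onFailure(exception);
                }
            });
        } catch (RequestException e) {
            handler.onFailure(e);
        }
    }

    // Hypothetical callback interface used by the custom "Panel" described above.
    public interface ContentHandler {
        void onContent(String html);
        void onFailure(Throwable cause);
    }
}
```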
The problem is that you are in a web browser, so any non-HTTP protocol is pretty much not guaranteed to work (it might not get through a proxy).
What you can do is isolate the GWT-RPC stuff in a single replaceable class and strip it off as soon as possible.
Personally I'd just rely on transferring a collection of objects with the information I needed encoded inside the collection -- that way there is very little RPC code, because all your RPC code ever does is Collection commands = getCollection(), but there are a million other possibilities.
Or just use GWT-RPC as it was intended, I don't think it's going anywhere.