Just looking for confirmation that the GWT RPCs play well with app-engine's new multi-tenancy support based on name-spaces.
The two technologies are very compatible. I have been using them together for a long time. In fact, there is some syntactic sugar that makes them easy to use together, such as the RPC calls. Plus, your server and client code are now in the same language. Using Eclipse and the tooling Google developed for Eclipse and GWT/App Engine, I have all the code in one project with client, server, and shared packages. The client and shared code get compiled by GWT, and the server and shared code get compiled for App Engine.
I hope this helps.
https://developers.google.com/web-toolkit/doc/1.6/tutorial/appengine
Michael
You can try to use multi-tenancy with your GWT-RPC calls, but it will not be just configuration. You can implement per-user multi-tenancy by setting the namespace in a servlet filter, but do not forget that if you start asynchronous tasks (e.g., task queue tasks) from your GWT-RPC calls, those task requests will not have a valid user, so you will need to figure out how to choose the namespace in the task handlers.
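For illustration, a minimal sketch of such a filter, assuming one datastore namespace per logged-in user via the App Engine Users API (the class name and namespace scheme are just examples):

```java
import java.io.IOException;
import javax.servlet.*;

import com.google.appengine.api.NamespaceManager;
import com.google.appengine.api.users.UserService;
import com.google.appengine.api.users.UserServiceFactory;

// Sets the datastore namespace for each request based on the logged-in user,
// so the GWT-RPC servlets further down the chain operate in that tenant's namespace.
public class TenantNamespaceFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        UserService userService = UserServiceFactory.getUserService();
        if (NamespaceManager.get() == null && userService.isUserLoggedIn()) {
            // One namespace per user; pick whatever tenancy scheme fits your app.
            NamespaceManager.set(userService.getCurrentUser().getUserId());
        }
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }
}
```

Task queue requests would bypass this per-user logic, which is why you need a separate way to pick the namespace there (for example, passing the namespace along as a task parameter).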
As long as your server-side implementations correctly handle the multi-tenancy, you'll be fine.
GWT-RPC itself adds no extra overhead to multi-tenancy; multi-tenancy has its own overhead regardless, and using GWT-RPC won't let you avoid that.
We have a Java codebase with a good amount of business rules written in Drools. I have been tasked with designing and recommending an alternative cloud-based rules engine, so that other services and applications within the company can use it too. Here is the high-level plan:
Perform a "Lift and shift" by decoupling the rule execution from the java code base
Create a containerized rules service that takes in an input via HTTP or a message queue and returns output, or perform some actions (Send notifications, queue something, etc)
Host it on Azure or GCP
I'm trying to create a small POC and need some help with the implementation details. For example, would creating a .NET REST endpoint and then passing the data to the Drools Java container be feasible? Or would it be simpler to just create a simple Java REST endpoint that uses Drools behind the scenes?
Any tips or examples of this would be highly appreciated, as I don't want to re-invent the wheel!
Drools has a native built-in REST web service that can be embedded in Java containers (JBoss, Tomcat).
This framework is the KIE Server, and it can be activated to host your built Drools processes/rules.
https://docs.jboss.org/drools/release/7.69.0.Final/drools-docs/html_single/#_ch.kie.server
There are some Docker images that contain a default KIE Server which you can use and deploy your rules to.
Ex : https://hub.docker.com/r/jboss/kie-server/
Hope this helps,
Best,
Emmanuel
Or would it be simpler to just create a simple Java REST endpoint that uses Drools behind the scenes?
You might want to consider using Kogito for your DRL rules, instead of having to deploy a containerised Kie Server.
Then, to have a Docker image generated easily from your Kogito-on-Quarkus app, it's enough to add the Quarkus JIB extension to it.
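If you go with the other option from the question, a plain Java REST endpoint that calls Drools directly, a minimal sketch could look like this (it assumes your rules are on the classpath with a kmodule.xml defining a session named "rulesSession"; the resource path and fact class are hypothetical):

```java
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// A minimal JAX-RS resource that evaluates a request against Drools rules.
@Path("/rules")
public class RulesResource {

    // Built once from the rules on the classpath (kmodule.xml + DRL files).
    private static final KieContainer KIE_CONTAINER =
            KieServices.Factory.get().getKieClasspathContainer();

    @POST
    @Path("/evaluate")
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public RuleRequest evaluate(RuleRequest request) {
        KieSession session = KIE_CONTAINER.newKieSession("rulesSession");
        try {
            session.insert(request);   // the rules update the fact in place
            session.fireAllRules();
            return request;
        } finally {
            session.dispose();
        }
    }

    // Hypothetical fact/DTO; in practice this carries the fields your rules use.
    public static class RuleRequest {
        public String customerId;
        public double discount;
    }
}
```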
In a web application I am developing, I am using a third party Java library (JPL) that uses JNI to connect to an external application: a Prolog engine.
For the nature of my problem, I need one Prolog engine per HTTP session. But as far as I know, the library I am using only lets me work with one Prolog engine per Java VM.
To solve this issue, I came up with the idea of configuring JBoss to launch a new process (instead of just a new thread) per HTTP session, a bit like CGI, where normally one process is started per HTTP request.
That way, certain servlets could use the required JNI-based library without having to worry about synchronization issues on its side, since, as I expect (and hope I am not wrong about this), each of them would have an independent Prolog engine with different state (e.g., different asserted Prolog facts).
Is it possible to configure JBoss (or another servlet container) in this way? Any feedback or pointers will be highly appreciated!
To my knowledge this is not possible. However, looking at the documentation (http://www.swi-prolog.org/packages/jpl/java_api/high-level_interface.html#Multi-Threaded%20Queries), the only problem seems to be that you can have only one open query per VM.
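If the single-open-query limit is the only constraint, one rough workaround (a sketch assuming the org.jpl7 API; it does not give each session its own engine state, it only prevents concurrent queries) is to serialize all query execution behind one lock:

```java
import java.util.Map;

import org.jpl7.Query;
import org.jpl7.Term;

// Serializes access to the single Prolog engine in this JVM so that
// only one JPL query is ever open at a time. It does NOT give each
// HTTP session its own engine state.
public final class SharedPrologEngine {

    private static final Object LOCK = new Object();

    private SharedPrologEngine() { }

    public static Map<String, Term> oneSolution(String goal) {
        synchronized (LOCK) {
            Query query = new Query(goal);
            try {
                return query.oneSolution();   // null if the goal fails
            } finally {
                query.close();
            }
        }
    }
}
```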
I am working on a desktop Java application that is supposed to connect to an Oracle database via a proxy which can be a Servlet or an EJB or something else that you can suggest.
My question is: what architecture should be used?
Simple Servlets as proxy between client and database, that connects to the database and sends results back to the client.
An enterprise application with EJBs and remote interfaces to access the database
Any other options that I haven't thought of.
Thanks
Depending on how scalable you want the solution to be, you can make a choice.
EJB (3) can make a good choice but then you need a full blown app server.
You can connect directly using JDBC, but that exposes the URL of the DB (exposes in the sense that every client desktop app makes its own connection to the DB; you cannot pool connections and you lose a lot of flexibility). I would not recommend going this path unless your app is really a simple one.
You can create a servlet to act as a proxy, but it's tedious and not as scalable. You will have to write a lot of code at both ends.
What I would recommend is creating a REST-based service that performs the desired operations on the DB, and consuming it from your desktop app.
Start off simple. I would begin with a simple servlet/JDBC-based solution (a minimal sketch follows at the end of this answer) and get the system working end-to-end. From that point, consider:
do you want to make use of connection pooling (most likely)? Consider C3P0 / Apache DBCP
do you want to embrace a framework like Spring? You can migrate to this gradually, starting with the servlet MVC capabilities, IoC etc., and adopt more complex solutions as you require
do you want to use an ORM? Do you have complex object graphs that you're persisting/querying, and will an ORM simplify your development?
If you do decide to take this approach, make sure your architecture is well-layered, so you can swap out (say) raw JDBC in favour of an ORM, and that your development is test-driven, such that you have sufficient test cases to confirm that your solution works whilst you're performing the above migrations.
Note that you may never finalise on a solution. As your requirements change, and your application scales, you'll likely want to swap in/out the technology most suitable for your current requirements. Consequently the architecture of your app is more important than the particular toolset that you choose.
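For reference, a minimal sketch of the servlet/JDBC starting point mentioned above (the DataSource JNDI name, table, and query are hypothetical, and a real version would add pooling configuration and proper error handling):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

// Proxies a simple lookup: the desktop client calls this servlet over HTTP,
// and the servlet talks to the database through a container-managed DataSource.
public class CustomerLookupServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String customerId = req.getParameter("id");
        resp.setContentType("text/plain");
        try {
            DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/OracleDS");
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM customers WHERE id = ?")) {
                ps.setString(1, customerId);
                try (ResultSet rs = ps.executeQuery();
                     PrintWriter out = resp.getWriter()) {
                    out.println(rs.next() ? rs.getString("name") : "not found");
                }
            }
        } catch (NamingException | SQLException e) {
            throw new ServletException(e);
        }
    }
}
```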
Direct usage of JDBC through some ORM (Hibernate, for example)?
If you're developing a stand-alone application, better keep it simple. In order to use ORM or other frameworks you don't need a J2EE App Server (and all the complexity it takes with it).
If you need to exchange huge amounts of data between the DB and the application, just forget about EJBs, Servlets and Web Services, and just go with Hibernate (or directly with plain old JDBC).
A REST-based web services solution may be good, as long as you don't have complex data and high volumes (try to profile how long it actually takes to unmarshal SOAP messages back into Java objects).
I have had a great deal of success using Spring remoting and a servlet-based approach. This is a great setup for development as well, since you can easily test your code without deploying to a web container.
You start by defining a service interface to retrieve/store your data (POJOs).
Create the implementation, which can use ORM, straight JDBC or some pooling library (container provided or 3rd party). This is irrelevant to the remote deployment.
Develop your application which uses this service directly (no deployment to a server).
When you are satisfied with everything, wrap your implementation in a WAR and deploy it with the Spring DispatcherServlet. If you use Maven, this can be done via the war plugin.
Configure the desktop to use the service via Spring remoting.
I have found the ability to easily develop the code by running the service as part of the application to be a huge advantage over developing/debugging something running on a server. I have used this approach both with and without an EJB, although the EJB was still accessed via the servlet in our particular case. Only service to service calls used the EJB directly (also using Spring remoting).
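A rough sketch of the Spring remoting wiring described above, using the HTTP invoker support (the service interface, bean names, and URL are made up for illustration):

```java
// --- shared module: the remote service contract ---
public interface DataService {
    String findCustomerName(String id);
}

// --- server side (separate file): exposes the implementation under /dataService ---
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter;

@Configuration
public class RemotingServerConfig {

    // The bean name doubles as the URL path handled by the DispatcherServlet.
    @Bean(name = "/dataService")
    public HttpInvokerServiceExporter dataServiceExporter(DataService dataServiceImpl) {
        HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
        exporter.setService(dataServiceImpl);
        exporter.setServiceInterface(DataService.class);
        return exporter;
    }
}

// --- desktop client side (separate file): calls the service as if it were local ---
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

@Configuration
public class RemotingClientConfig {

    @Bean
    public HttpInvokerProxyFactoryBean dataService() {
        HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
        proxy.setServiceUrl("http://localhost:8080/app/dataService");
        proxy.setServiceInterface(DataService.class);
        return proxy;
    }
}
```

During development you can skip the remoting entirely and wire the desktop app straight to the DataService implementation, which is the testing advantage described above.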
I'm starting to study GWT now, and have a very general question, I could maybe teach myself with a little more experience, but I don't want to start it wrong, so I decided to ask you.
I always develop using JSF, with separate packages for beans, controllers, and managed beans.
However, since GWT uses RPC, I will not have managed beans, right?
So, does GWT handle the user session for me automatically, or do I have to do it myself?
What is the best package structure for the project?
Is it best to use RPC, or to create a web service and access it from GWT?
Is it hard to host the application on a Tomcat server?
Is there a test saying which server is faster for GWT?
Thank you.
However, since GWT uses RPC, I will not have managed beans, right?
True, GWT RPC uses POJOs.
So, does GWT handle the user session for me automatically, or do I have to do it myself?
GWT is a pure AJAX app: the client code (normally) runs in one browser window (similar to Gmail) and does not reload the web page. This means the application state is always there, so there is no need for sessions as a means of saving state. You might still need sessions for user authentication, but that is usually handled by the servlet container.
What is the best package structure for the project?
Three packages: client, server and shared. Client for GWT client code, server for server (also RPC) code and shared for POJOs that are used by both client and server.
Is it best to use RPC, or to create a web service and access it from GWT?
Go with GWT-RPC or (better, newer) with RequestFactory.
Is it hard to host the application on a Tomcat server?
It's straightforward: the GWT client code is compiled to JS/HTML and is hosted like any other static content. The RPC server code is just servlets, registered in web.xml as usual.
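For illustration, a minimal GWT-RPC service pair (the names are made up; GreetingServiceImpl is the servlet you register in web.xml):

```java
// --- shared package ---
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

@RemoteServiceRelativePath("greet")
public interface GreetingService extends RemoteService {
    String greet(String name);
}

// --- client package (separate file): the async counterpart used by client code ---
import com.google.gwt.user.client.rpc.AsyncCallback;

public interface GreetingServiceAsync {
    void greet(String name, AsyncCallback<String> callback);
}

// --- server package (separate file): a plain servlet, mapped in web.xml ---
import com.google.gwt.user.server.rpc.RemoteServiceServlet;

public class GreetingServiceImpl extends RemoteServiceServlet implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```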
Is there a test saying which server is faster for GWT?
No clue, but IMHO does not matter, because most of the latency will come from database and network.
Also have a look at http://code.google.com/p/gwt-platform/
This framework is really great and follows all the best practices suggested by Google (e.g. MVP), and it also gives you great support for GIN, the GWT dispatcher, website crawling, history with tokens, code splitting via GWT async, etc.
If you want to set up a good project structure, try the Maven GWT plugin (http://mojo.codehaus.org/gwt-maven-plugin/); it helps a lot with setting up an initial structure and managing your build process.
Our product is built on a client-server architecture, with the server implemented in Java (we are using POJOs with the Spring framework). We have two API levels on the server:
the external API, which uses REST web services - useful for external clients and integrations with other servers.
the internal API, which uses pure Java classes - useful for the actual code inside (as the business logic often invokes an API call) and for integration with plugins developed inside our company and deployed as parts of our product. The external REST API also uses the internal API.
We implemented permission checking (using Spring security) in the internal API because we wanted to control access at the lowest API level.
But here comes the problem: there are some operations defined at the API level that are forbidden for the currently logged-in user, but which should be performed smoothly by the server itself. For example, deleting some entity could be forbidden for the user, but the server might want to delete this entity as a side effect of some other operation performed by the user, and we want this to be allowed.
So what is the best approach for allowing the server to perform an operation (in some kind of super-user mode) that might be forbidden for the actual logged-in user?
As I see it, we have several options, each of which has its pros and cons:
Implement permission checking in external level API (REST) - bad because plugins will bypass permissions checks.
Turn off permission checking for the current thread after the request was granted - too dangerous, we might allow too many server actions that should be forbidden.
Explicitly ask the internal API level to perform the operation in the privileged mode (just like PrivilegedAction in java security framework) - too verbose.
As none of the above approaches is ideal, I wonder what is the best-practice approach for this problem?
Thanks.
Security is applied at the boundaries of a module. If I understand you correctly, your system applies security at two levels of abstraction of (roughly) the same API. That sounds complex, as you have to make a double security check across the two APIs.
Consider migrating the REST needed methods from the internal API to the external one, and deleting security stuff in the internal API.
external API will manage security for external clients (at the boundaries of your app)
internal API will be strictly reserved for internal app and plugin use (and you can happily hack on it, as no external clients are bound to it)
Do you really need to control the plugins' permissions on your application logic? Is there a good reason for it? Plugins are developed by your company, after all. Maybe a formal document explaining to plugin developers what should not be done, or a safety test suite validating the plugin (e.g. asserting that the plugin does not call "this" method), would do the job instead.
If you still need to consider these plugins as "untrusted", add the methods they need to your external API (on your app boundary) and create a specific security profile for each use: "restProfile", "clientProfile" & "pluginProfile". Each will have specific rights on your external API methods.
It sounds like you need two levels of internal API, one exposed to plugins and one not.
The best way of enabling that would be using OSGi (or Spring Modules). It allows you to explicitly state which packages and classes can be accessed by other modules (i.e. REST modules and plugin modules). Those would be the exposed level of your new internal API, and you would use Spring Security to further restrict access selectively. The internal packages and classes would contain the methods which do all the low-level stuff (like deleting entities), and you wouldn't be able to call them directly. Some of the exposed API would just duplicate the internal API with a security check, but that would be OK.
The problem with the best way is that Spring Modules strikes me as still a bit too immature even to put into a new webapp project. There's no way I'd want to shoehorn it into an old project.
You could probably achieve something similar using Spring Security and AspectJ, but it strikes me that the performance overhead would be prohibitive.
One solution that would be quite cool, if you could re-architect your system, would be to take tasks requiring security elevation offline, or rather make them asynchronous. Using Quartz and/or Apache Camel (or a proper ESB) you could make the "delete my account" method create an offline task that can at a future date be executed as an atomic unit of work with admin privileges. That means you can cleanly do your security checks for the user requesting account deletion in a completely separate thread from where the deletion actually takes place. This would have the advantage of making the web thread more responsive, although you'd still want to do some things immediately to preserve the illusion that the requested action has been completed.
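If you keep the checks in the internal API, a common in-process alternative (roughly the question's option 3, sketched here with Spring Security; the "system" principal and authority name are made up) is to temporarily swap a system-level Authentication into the security context while the privileged work runs:

```java
import java.util.Collections;
import java.util.concurrent.Callable;

import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;

// Runs a unit of work with a "system" authentication, then restores the caller's one.
// Sketch only: in a real system you would centralize, restrict, and audit its use.
public final class SystemPrivileges {

    private static final Authentication SYSTEM_AUTH =
            new UsernamePasswordAuthenticationToken(
                    "system", null,
                    Collections.singletonList(new SimpleGrantedAuthority("ROLE_SYSTEM")));

    private SystemPrivileges() { }

    public static <T> T runAsSystem(Callable<T> work) throws Exception {
        Authentication original = SecurityContextHolder.getContext().getAuthentication();
        SecurityContextHolder.getContext().setAuthentication(SYSTEM_AUTH);
        try {
            return work.call();
        } finally {
            SecurityContextHolder.getContext().setAuthentication(original);
        }
    }
}
```

The internal API would then call runAsSystem(...) only at the specific points where the server acts on its own behalf, keeping the elevation explicit.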
If you're using Spring, you may as well utilize it fully. Spring offers AOP that allows you to use interceptors and perform these cross-system checks, and in the event of an unauthorized action, prevent the action.
You can read more about this in Spring's online documentation here.
Hope this helps...
Yuval =8-)