NetBeans Remote In Project (EJB) - java

This is so confusing: when I create a new session bean in NetBeans, it offers the option of making a local interface and a remote one. If I choose remote, however, a list of existing Java SE and ME projects comes up (one of which, by the way, I intended to use later as a GUI client for the beans). If I pick any of them, NetBeans creates the session bean in my Enterprise Application but adds the remote interface class to the Java SE project. I don't get it. What is this about?

If you create a stateless EJB session bean, it can implement a local and/or a remote interface.
In your case I suggest using the remote interface, because one of your clients will be remote: a command-line stand-alone application typically does not run on the application server host (in contrast to a servlet, which normally resides in the same JVM).
Try to design the remote interface so that one user interaction results in one call to the EJB layer. If your stand-alone application runs fast enough, the servlet will probably be fast enough as well. If you do it the other way round, you may find that an interface design which works for a servlet is not suitable for remote access.
There are some semantic and technical differences between the local and the remote interface: the former is designed to be used only from localhost (so many fine-grained calls do not really hurt performance), and local beans only live in one JVM. This also affects the behaviour regarding call by value/reference (see Why do we need separate Remote and Local interfaces for EJB 3.0 session beans).
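To make the distinction concrete, here is a minimal sketch of a bean exposing both views (all names are made up): arguments to remote calls are serialized (passed by value), while local calls within the same JVM pass references.

// OrderServiceRemote.java
import javax.ejb.Remote;

@Remote
public interface OrderServiceRemote {
    void placeOrder(String item);
}

// OrderServiceLocal.java
import javax.ejb.Local;

@Local
public interface OrderServiceLocal {
    void placeOrder(String item);
}

// OrderService.java: one bean class can implement both views
import javax.ejb.Stateless;

@Stateless
public class OrderService implements OrderServiceLocal, OrderServiceRemote {
    public void placeOrder(String item) {
        // business logic goes here
    }
}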
Regarding the NetBeans screenshots:
The (remote) client has the remote interface in its scope. That's OK, as it does not need to see anything else.
On the server/EJB side, there is everything else: the EJB and the local interface. The only thing which seems to be missing is the remote interface, as the EJB implements it as well. There is possibly some NetBeans IDE magic behind it, but the server side needs the remote interface too.

EJBs are meant to be components that "execute business logic". They can be used locally (called by other components in the app server itself) or remotely by a client (such as an applet or a desktop application).
In the latter case, you access the EJBs through the remote interface (so it makes sense that it should be included with the client, doesn't it?), plus some configuration that tells the client how to connect to the server that actually executes those beans.
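As a rough illustration of that configuration, a stand-alone client typically obtains the remote proxy via JNDI. The sketch below reuses the OrderServiceRemote example from above and follows the EJB 3.1 portable naming scheme; the application, module, and package names are hypothetical, and the provider-specific connection properties usually come from a jndi.properties file:

import javax.naming.InitialContext;

InitialContext ctx = new InitialContext();
OrderServiceRemote service = (OrderServiceRemote) ctx.lookup(
        "java:global/myapp/myejbmodule/OrderService!com.example.OrderServiceRemote");
service.placeOrder("book-42");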

Here's the issue, and I have struggled mightily with this: when creating EJBs from entity classes, NetBeans gives you the ability to create remote interfaces. This is something I am doing because I will have a number of stand-alone clients accessing these EJBs. When you tell NetBeans you want to create the remote interfaces, it forces you to place them in another project. That's all well and good, until you define your implementation, which looks something like this:
@Stateless
public class DataFacade extends AbstractFacade<MyEntityClass> implements DataFacadeRemote {
    ...
}
Now, when you try to deploy your EJB jar to GlassFish, it claims there are no EJBs in your jar, and rightly so, because it doesn't know anything about the interface DataFacadeRemote. What I did to solve this round-robin of problems was to create a library jar, which I copied to the GlassFish server manually. Then, when I deploy the EJB jar, it happily finds the library jar and understands the facade classes (EJBs). This is also a monumental pain, because the library jar has to be copied again every time the interface or entity classes change, and that might require restarting the app server to reload the library, as it doesn't seem to happen automatically.
If I'm also doing things incorrectly, I'd appreciate a nudge in the right direction.
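For what it's worth, the piece that lives in such a shared library jar is typically just the remote interface, so that both the EJB jar and the clients compile against the same type. A minimal sketch (the method is made up):

import javax.ejb.Remote;

// Packaged in the shared library jar, visible to both GlassFish and the clients.
@Remote
public interface DataFacadeRemote {
    MyEntityClass find(Object id);
}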

Related

Communication between WARs within the same tomcat server - JNDI vs REST API

I have a requirement where a front-end application (written in Spring MVC) needs to communicate with another backend application. Both applications are going to be WARs running within the same Tomcat instance. For understanding purposes, let's name them frontend.war and backend.war.
I have gone through many posts across various forums and found several different strategies, some of which are as below:
1) Using EJB - ruled out. EJBs are a maintenance overhead, and we have no plan to create a dedicated EAR to accomplish this, because we plan to add more frontend WARs (application modules) which will communicate with the same backend.war.
2) Using JNDI - looks promising, but it requires one WAR to know about the interface exposed by the second WAR, including its signature. So it makes the two tightly coupled, and a future change in the service contract could become a nightmare.
3) Using a REST API - this looks like an ideal approach, with the one caveat that the communication is an HTTP call, hence it could be slow.
Other approaches, like a common parent context in Spring or context switching within the application, have their own issues.
I am getting inclined to use the REST API approach for this solution, as it is cleaner and easier to maintain. Furthermore, HTTP is a mature protocol and there is a lot of know-how available for future development.
My query:
A) Is it possible to make Tomcat aware that a particular web service call is actually a call to an application running in the same JVM/server (a kind of 'internal' call), rather than an 'external' web service call?
B) If I use a URL like 'http://localhost:8080/rest/...' (note that backend.war is not intended for the external world, so a domain name is not needed), will it do the trick?
I am looking for an approach which gives me the performance of JNDI (communication within the same JVM) and the flexibility of REST (you can change anything, anytime, as long as the public URLs stay intact).
If you have thousands of WARs, maybe try the Enterprise Service Bus approach; WSO2 would be a good candidate. You could always change your entry-point definition while keeping the backend intact.
Added benefit: your WARs can be deployed on multiple servers and/or moved, but you keep only one entry point; only one address to change.
Create a jar file of the common functions and package it as a dependency of both projects: a service layer!
Alternatively, use REST and deploy to different Tomcat instances/servers: microservices!
I would use a "remote invocation" approach like Java RMI or CORBA; the latter also applies outside the Java world. These have some benefits over the others: they use plain TCP rather than HTTP, and are therefore lighter, and they serialize objects instead of creating new representations (like JSON or others). Additionally, I think RMI is simple to understand and quick to use.
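A bare-bones RMI sketch of that approach (all names and the port are arbitrary; the shared interface would live in a common jar visible to both sides):

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Interface shared by both sides:
interface BackendService extends Remote {
    String process(String request) throws RemoteException;
}

class BackendServiceImpl implements BackendService {
    public String process(String request) {
        return "processed: " + request;
    }
}

class BackendBootstrap {
    public static void main(String[] args) throws Exception {
        // Export the implementation and bind it in a local registry.
        BackendService stub = (BackendService)
                UnicastRemoteObject.exportObject(new BackendServiceImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("BackendService", stub);
    }
}

class FrontendClient {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry("localhost", 1099);
        BackendService svc = (BackendService) registry.lookup("BackendService");
        System.out.println(svc.process("ping"));
    }
}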

Spring as standalone or on Tomcat?

I am seeking the advantages of having Spring deployed on Tomcat rather than having it outside of any application server container.
My project doesn't require any web support.
It does require technologies like transaction management, a DB pool, JMX, low latency, and other common Java EE technology.
So why would I use Tomcat anyway? If it's just for the sake of having a DB pool, I could implement that myself. I am looking for a low-latency solution.
Again, my project is entirely backend; there is no need for any web support.
So what do I miss here?
What do you actually mean by "more common Java EE technology"?
If it's "just a back end", what is the front end? How will applications talk to the back end?
If there's no need for a web interface, there's no advantage to using a web container.
If you have complex transaction-management needs, need message queues, etc., that may be easier to set up under an application server (as opposed to a web container) because there are existing admin/management interfaces. All of those may also be set up on their own, but that can be more of a pain; using Spring may mitigate it somewhat.
The need for "more common Java EE technology", however, makes me a little nervous about implementing a standalone app. App containers have all that "common Java EE technology" built in, tested, and functional. If you're bolting a variety of packages together to get it without using a common Java EE app container, it's likely easier to just use an app container, which also gives you the benefit of providing normalized access to your services from a variety of sources.
If your app is not a web app, you can use any of the non-web-specific application contexts listed under "All Known Implementing Classes" here. You can then init the context from a main method in a runnable jar.
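For instance, a minimal sketch using the XML-based context (the file name and the MyService bean are illustrative):

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");
        ctx.registerShutdownHook();   // close the context cleanly on JVM exit
        MyService service = ctx.getBean(MyService.class);   // hypothetical bean
        service.run();
    }
}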
If you don't need web support, you don't have to use Tomcat or any other app server. Spring will provide you with most of the features you need. For a connection pool, there are many options available, such as c3p0 and Apache DBCP; you can use one of them.
The only thing you have to worry about is a clean shutdown of your process. You can do that by implementing your own shutdown hook.
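A minimal sketch of such a hook, assuming c3p0 for the pool (ComboPooledDataSource does expose a close() method):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class App {
    public static void main(String[] args) {
        ComboPooledDataSource pool = new ComboPooledDataSource();
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            pool.close();   // release pooled connections on JVM shutdown
        }));
        // ... run the application ...
    }
}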
One of the reasons to deploy the application in Tomcat is that it takes the connection burden, thread management and so on off your shoulders. Nothing you could not implement yourself, but bear in mind that Tomcat is robust and already deals with all the trouble of implementing that logic.
Apart from that, there is little point in using an application container, unless you count not having to develop and maintain that amount of code yourself as a benefit.
You shouldn't use Tomcat or anything else; Spring is a container already. Init Spring in one simple thread, make sure it has a proper clean-up flow, and that's all. I used to work on several server-side integration applications which did a lot, communicating over different protocols with other servers, and everything was easily done without web containers or Java EE application servers. Spring has support for almost everything, sometimes with third-party libs (caching, transactions, pools, etc.). A simplified version could look like this:
public static void main(String[] args) {
    Server server = new MyServer(...);   // MyServer is a concrete subclass
    server.initSpring();
    server.keepAlive();
    server.cleanUpResources();
}

abstract class Server {
    private volatile boolean stopped = false;

    abstract void initSpring();
    abstract void cleanUpResources();

    void shutdown() {
        stopped = true;
    }

    void keepAlive() {
        while (!stopped) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
    }
}

How to easily maintain the RMI interfaces across multiple applications like server and client?

Anyone dealing with RMI will certainly have come across this dilemma: how to easily maintain the interfaces of objects providing a remote method invocation service to other client applications. Whenever we decide to make a minor change to a method declaration, or to add or delete methods declared in the interface, we have to manually replicate the change in all the clients that use that interface to access the RMI service on a remote server.
Think about having a downloadable (Serializable) agent that has a more stable interface used by the client, and that uses the remote interface to do its job. You can use the codebase feature to ensure its availability to all clients. The agent needs to contain the stub. You can bind the agent to the Registry, or return it from some other remote method.
Or use JWS (Java Web Start) to distribute new versions of the clients.
Or design your remote interfaces to be more stable, so they don't have to change. :-)
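A rough sketch of that agent idea (every name here is hypothetical): the client codes against the stable ClientFacade, while the volatile remote interface stays hidden inside the serialized agent that carries the stub.

import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// The volatile remote interface, seen only by the agent:
interface WorkerRemote extends Remote {
    String work(String input) throws RemoteException;
}

// The stable interface the client codes against:
interface ClientFacade extends Serializable {
    String doWork(String input);
}

// The downloadable agent: serialized to the client (carrying the stub
// inside it) and changeable on the server without touching client code.
class Agent implements ClientFacade {
    private final WorkerRemote stub;

    Agent(WorkerRemote stub) {
        this.stub = stub;
    }

    public String doWork(String input) {
        try {
            return stub.work(input);
        } catch (RemoteException e) {
            throw new RuntimeException(e);
        }
    }
}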
One good workaround I came up with is to put all the interfaces provided by the RMI server in a separate project which packs itself into a jar file when built. Then just add that jar file as a dependency, or to the classpath, of the server application that provides the RMI service, as well as of any client application that wants to use those interfaces for invoking remote methods.
This eases the task of maintaining the RMI interfaces by updating them in just one place. The extra effort of changing a method signature in some interface is then limited to changing the application code which calls that method.
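In other words, the shared jar contains nothing but the interfaces; a sketch with made-up names:

import java.rmi.Remote;
import java.rmi.RemoteException;

// Lives in the shared interfaces jar, on the classpath of server and clients alike.
// The server project adds the implementation; clients only ever see this type.
public interface StockService extends Remote {
    double getPrice(String symbol) throws RemoteException;
}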

Non-container based java remoting?

We're trying to design a new addition to our application. Basically we need to submit very basic queries to various remote databases accessed over the internet and not owned or controlled by us.
Our proposal is to install a small client app on each of the foreign systems, tiered in two basic layers: one tailored to the particular database it is talking to, which handles the actual query in SQL or whatever, and a communication tier that handles incoming requests and sends back responses. This communication interface would be the same across all the foreign systems, i.e. all requests and responses have the same structure.
In terms of Java remoting, I guess this small client app would be the 'server' and our webapp (normally referred to as the server) would be the 'client'.
I've looked at various Java remoting solutions (Hessian, Burlap, RMI, SOAP/REST web services). However, am I correct in thinking that with all of these the 'server' must run in a container, i.e. in a Tomcat/Jetty etc. instance?
I was really hoping to avoid having to battle all the IT departments controlling the foreign systems to get them to install very much. The whole idea is that it's thin/small/easy to install/pain-free. Are there any solutions that do not require running in a container/web server?
The communication really is the smallest part of this design: no more than ten string input params (that have no meaning other than to the db) and one true/false output. There are no complex object models required. The only complexity would come from security/encryption etc.
I warmly suggest something based on Jetty, the embedded HTTP server. You package a simple runnable JAR with dependency JARs into a ZIP file, add a startup script, and you have your product. See for example here.
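A minimal embedded-Jetty sketch (assuming the Jetty 9 API; the handler simply answers every request with the true/false result mentioned in the question):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class EmbeddedServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);   // port is arbitrary
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws java.io.IOException {
                // Run the query against the local database here (omitted),
                // then write the boolean result back to the caller.
                response.setContentType("text/plain");
                response.getWriter().println("true");
                baseRequest.setHandled(true);
            }
        });
        server.start();
        server.join();
    }
}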
I often use Spring Remoting in my projects, and here you can find a description of how to use it without a container. The author starts Jetty from within his application:
http://forum.springsource.org/showthread.php?12852-HttpInvoker-without-web-container
http://static.springsource.org/spring/docs/2.0.x/reference/remoting.html
Regards,
Boskop
Yes, most of them run in a standard servlet container. But containers like Jetty have a very low footprint, and you can configure and run Jetty completely from your own code while staying within the servlet standards.
Do not underestimate initial minimal requirements, as they tend to grow with project enhancements over time; having a standard container then makes things much easier.
As you have tagged this question with [rmi], RMI does not require any form of container. All you need is the appropriate TCP ports to be open.

What is the purpose of home/remote interfaces in EJB 3.1?

I have recently started reading about Java EE 6, and in the examples I follow I need to make a remote interface. What is the purpose of this? I also read about home interfaces, but I don't understand them. I have never done enterprise programming before, so I can't relate this to anything else either. Could someone explain these interfaces to me?
Basically, by declaring the @Local and @Remote interfaces you specify which methods should be available to remote clients and which to local beans in the same JVM.
The home interface is used to allow a remote client to create, find, and remove EJB objects.
You can easily find that information on the official documentation pages, for example for EJBHome, or a nice overview of local and remote interfaces here.
I highly recommend reading EJB book by Bill Burke, Richard Monson-Haefel for starters.
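To make the home interface idea concrete, here is a sketch of the classic EJB 2.x flow (CartHome, Cart, and the JNDI name are all hypothetical). In EJB 3.x, home interfaces are optional, and clients usually look up the @Remote business interface directly:

import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

InitialContext ctx = new InitialContext();
Object ref = ctx.lookup("ejb/CartBean");
CartHome home = (CartHome) PortableRemoteObject.narrow(ref, CartHome.class);
Cart cart = home.create();     // the home interface creates the EJB object
cart.addItem("book-42");       // business calls go through the remote interface
cart.remove();                 // the client removes the EJB object when done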
Every session bean has to implement an interface and a bean class. When a user makes a request, JNDI helps look up which interface is required for that request. When your EJBs are deployed in the same EAR, you should prefer the local interface.
If the EJBs are in different applications or JVMs, the remote interface should be used.
This interface calls the business logic which resides in the bean class. The home interface creates and finds EJB objects for the remote interface, so first you create the home interface with its create method, and then the remote interface.
