There are three types of classes:
Handler
Service
DAO
A handler handles an incoming command and calls a service. The service does caching and calls other services or a DAO. Services and DAOs are singletons.
Is there a way to create a custom warning if a DAO is used in one of the handlers?
You could try playing with some custom Checkstyle rules, but... you should rather do it properly ;)
Create 3 modules (projects in Eclipse). Let Handlers know only about the Services (add a proper dependency to the Handlers project) and let Services know about the DAOs (add the DAO project to the Services dependencies). This way you will never make a mistake :]
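To make the dependency direction concrete, here is a minimal sketch with made-up class names; each layer lives in its own module, so a handler cannot even compile against a DAO:

// dao module
class UserDao {                         // singleton in practice
    String findName(long id) {
        // JDBC/ORM access would go here
        return "user-" + id;
    }
}

// service module (its project depends only on the dao module)
class UserService {                     // singleton, may cache results
    private final UserDao dao;
    UserService(UserDao dao) { this.dao = dao; }
    String userName(long id) { return dao.findName(id); }
}

// handler module (its project depends only on the service module,
// so UserDao is simply not on its compile classpath)
class GetUserHandler {
    private final UserService service;
    GetUserHandler(UserService service) { this.service = service; }
    String handle(long id) { return service.userName(id); }
}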
I am new to Guice and my background is Jakarta EE. So I have a conceptual question about the binding of beans.
As far as I understand, in Guice I have to implement my AbstractModule where I bind my concrete implementations. Then I am able to inject those classes into other classes. Which means I need to programmatically bind everything in my Module.
In Jakarta EE we have the CDI container, which automatically scans all jar files for CDI beans and binds them automatically. This means I can add an external .jar file with CDI beans or observer beans that automatically react to specific CDI events of my main application, without the need to register an ActionHandler programmatically in my main application. I can just fire the CDI event and all listeners in any existing jar file will be triggered by the Jakarta CDI container.
It looks to me like Guice does not provide such a mechanism? Or is there a similar way to extend the main application without the need to compile it together with the specific extension?
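For reference, the explicit binding style described above looks roughly like this (a minimal sketch; the interface and class names are made up):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

interface PaymentService { void pay(); }

class PaymentServiceImpl implements PaymentService {
    @Override public void pay() { System.out.println("paid"); }
}

// Every binding has to be declared programmatically in a module.
class AppModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(PaymentService.class).to(PaymentServiceImpl.class);
    }
}

class Main {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AppModule());
        injector.getInstance(PaymentService.class).pay();
    }
}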
I use Activiti 6.x as my workflow engine, and each service task node has to be bound to a class that implements the JavaDelegate interface. Since there are more than 200 services in my project, if each class contains only one service I would have to create more than 200 classes, which is not sensible. Is there a way that I can put multiple services in one class? Thanks!
Look in the docs (activiti.org), but I think you can also use beans from Spring, which means you can point to different methods of the same bean.
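A rough sketch of that "one bean, many methods" approach, assuming a Spring-managed Activiti engine (all names below are made up):

import org.activiti.engine.delegate.DelegateExecution;
import org.springframework.stereotype.Component;

// Each BPMN service task would then point at a method by expression, e.g.
//   <serviceTask id="billing" activiti:expression="${orderTasks.bill(execution)}"/>
// instead of binding one JavaDelegate class per task.
@Component("orderTasks")
public class OrderTasks {

    public void bill(DelegateExecution execution) {
        // billing logic for one task
    }

    public void ship(DelegateExecution execution) {
        // shipping logic for another task
    }
}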
In the repository layer I define normal data operations such as inserting, finding and so on. I am using Spring and I have the @Repository annotation on the class. Is it bad to use this class directly in the @Controller class, for instance? Should all repositories always have a service layer that just delegates to the repository layer?
It totally depends on your choice. In Spring Roo, you don't just skip the Repository or Service layer, but use a rich domain model where the data access logic lives in the domain itself. Frameworks like Grails use a single Repository layer. So I think it's OK to use it directly in the Controller.
I asked a similar question about what use EJBs are. One answer said this:
your [service layer] typically orchestrates application of business logic but more often than not does not actually perform it, it is usually a very thin wrapper.
The service-layer methods can define the operations that are transactions, and which therefore have annotations to get AOP transaction support added automatically for you. See also an answer to a related question, which says:
Spring's idiom would recommend having a service interface that knows about units of work and a persistence interface that deals with relational databases. The methods in the service interface should map closely to your use cases. The service implementation knows about all the model and persistence packages and classes it needs to accomplish the goals of the use case.
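In that spirit, a thin service method might look roughly like this (a sketch; OrderRepository, Order and the method names are hypothetical stand-ins for your own types):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Stand-ins for your own repository and entity types.
interface OrderRepository { Order save(Order order); }
class Order { }

// The service mostly delegates, but it defines the transactional unit of work
// and is the natural place for use-case level orchestration.
@Service
public class OrderService {

    private final OrderRepository orderRepository;

    public OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    @Transactional
    public Order placeOrder(Order order) {
        // business rules / orchestration would go here
        return orderRepository.save(order);
    }
}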
I have a rather big set of services registered with registerService. For simplicity, let's assume they are looked up by some property called name, so the pair of invocations is straightforward (roughly):
Dictionary<String, Object> props = new Hashtable<>();
props.put("name", "a");
context.registerService(IMyService.class.getName(), myServiceInst, props);
After that, on the client side:
context.getServiceReferences(IMyService.class.getName(), "(name=a)");
For some reason I cannot register all possible combinations of name. Is it possible to intercept all OSGi queries so I could create services on the fly when they are queried?
I would like a basic solution that works on all layers of OSGi, meaning that the code above and code using (for example) Declarative Services will behave the same way.
Take a look at Service Hooks in the core specification. They allow you to find out who is waiting for what services. Notice that this might imply parsing the filter if you're interested in what properties they're waiting for.
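A minimal sketch of such a hook (the registerMissingServiceFor helper is hypothetical; it would parse the filter and register a matching IMyService on the fly):

import java.util.Collection;
import org.osgi.framework.BundleContext;
import org.osgi.framework.hooks.service.ListenerHook;

public class OnDemandServiceHook implements ListenerHook {

    private final BundleContext context;

    public OnDemandServiceHook(BundleContext context) {
        this.context = context;
    }

    @Override
    public void added(Collection<ListenerInfo> listeners) {
        for (ListenerInfo info : listeners) {
            // e.g. "(&(objectClass=IMyService)(name=a))"
            String filter = info.getFilter();
            if (filter != null && filter.contains("IMyService")) {
                registerMissingServiceFor(filter);
            }
        }
    }

    @Override
    public void removed(Collection<ListenerInfo> listeners) {
        // optionally unregister services nobody is waiting for anymore
    }

    private void registerMissingServiceFor(String filter) {
        // parse the filter, create the service instance and call
        // context.registerService(...) with matching properties
    }
}

The hook itself is registered as a regular OSGi service under ListenerHook.class; there is also a FindHook for observing explicit getServiceReferences calls.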
I think you have a couple of options:
Option 1:
If you need only one Service object per client bundle (where the client bundle identifies the key-value pairs), consider using http://www.osgi.org/javadoc/r4v43/core/org/osgi/framework/ServiceFactory.html. I think the Javadoc is pretty self-explanatory and you can easily find usage samples on Google. In this case you have to implement ServiceFactory, and you can use that one with Declarative Services as well (please correct me if I am wrong; I may have used it only with Blueprint).
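A sketch of option 1, assuming IMyService and MyServiceImpl are your own types:

import org.osgi.framework.Bundle;
import org.osgi.framework.ServiceFactory;
import org.osgi.framework.ServiceRegistration;

public class MyServiceFactory implements ServiceFactory<IMyService> {

    @Override
    public IMyService getService(Bundle bundle, ServiceRegistration<IMyService> registration) {
        // called lazily, the first time a given client bundle gets the service
        return new MyServiceImpl();
    }

    @Override
    public void ungetService(Bundle bundle, ServiceRegistration<IMyService> registration,
                             IMyService service) {
        // clean up per-bundle state here if needed
    }
}

The factory is registered in place of the service itself, e.g. context.registerService(IMyService.class.getName(), new MyServiceFactory(), props), and the framework hands each client bundle its own instance.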
Option 2:
Create your services with the help of ConfigAdmin. You create a configuration with your client bundle, and your service provider bundle will catch that and export the necessary service. After the service is provided, you can catch the new service registration in the client. You can find good documentation at http://felix.apache.org/site/apache-felix-config-admin.html. With this option you will be able to get more services per client bundle, but I do not think you can use this with Declarative Services (you must catch the configuration changes programmatically).
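On the provider side this is typically a ManagedServiceFactory: each factory configuration created through ConfigAdmin results in one registered service. A sketch, with a made-up factory PID and types:

import java.util.Dictionary;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedServiceFactory;

public class MyManagedServiceFactory implements ManagedServiceFactory {

    private final BundleContext context;
    private final Map<String, ServiceRegistration<?>> registrations = new ConcurrentHashMap<>();

    public MyManagedServiceFactory(BundleContext context) {
        this.context = context;
    }

    @Override
    public String getName() {
        return "my.service.factory"; // illustrative factory PID
    }

    @Override
    public void updated(String pid, Dictionary<String, ?> properties) throws ConfigurationException {
        deleted(pid); // re-create the service if the configuration changed
        registrations.put(pid,
                context.registerService(IMyService.class.getName(), new MyServiceImpl(), properties));
    }

    @Override
    public void deleted(String pid) {
        ServiceRegistration<?> reg = registrations.remove(pid);
        if (reg != null) {
            reg.unregister();
        }
    }
}

The factory is registered as a service with the service.pid property set to the factory PID; the client bundle then creates configurations via ConfigurationAdmin.createFactoryConfiguration("my.service.factory") followed by update(props).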
Option 3:
Instead of registering IMyService, register IMyServiceFactory as an OSGi service that has a createService(name) function. In this case the client bundles have to take care of the lifecycle of their IMyService objects (when no IMyService is used anymore, you can "unget" IMyServiceFactory).
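The factory interface for option 3 could be as simple as this (a sketch; the name is made up):

// Registered once as an OSGi service; clients create IMyService instances
// on demand and manage their lifecycle themselves.
public interface IMyServiceFactory {
    IMyService createService(String name);
}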
I am planning an application that must provide services that are very much like those of a Java EE container to third party extension code. Basically, what this app does is find a set of work items (currently, the plan is to use Hibernate) and dispatch them to work item consumers.
The work item consumers load the item details, invoke third party extension code, and then if the third party code did not fail, update some state on the work item and commit all work done.
I am explicitly not writing this as a Java EE application. Essentially, however, my application must provide many of the services of a container: it must provide transaction management, connection pooling and management, and a certain amount of deployment support. How do I either A) provide these directly, or B) choose a third-party library to provide them? Due to a requirement of the larger project, the extension writers will be using Hibernate, if that makes any difference.
It's worth noting that, of all of the features I've mentioned, the one I know least about is transaction management. How can I provide this service to extension code running in my container?
Hi, I recommend using the Spring Framework. It provides a nice way to bring together a lot of the various services you are talking about.
For instance to address your specific needs:
Transaction Management/Connection pooling
I built a Spring-based stand-alone application that used Apache Commons connection pooling. Also, I believe Spring has some transaction management built in.
Deployment support
I use Ant to deploy and run things as a front-loader. It works pretty well. I just fork a separate process using Ant to run my Spring stand-alone app.
Threading
Spring has support for Quartz, which deals well with threads and thread pools.
DAO
Spring integrates nicely with Hibernate and other similar projects
Configuration
Using its XML property definitions, Spring is pretty good for multiple-environment configuration.
Spring does have transaction management. You can define a DataSource in your application context using Apache DBCP (a org.apache.commons.dbcp.BasicDataSource) and then define a org.springframework.jdbc.datasource.DataSourceTransactionManager for that DataSource. After that, any object in your application can define its own transactions programmatically if you pass it the TransactionManager, or you can use AOP interceptors on the object's definition in your application context to define which methods need to be run inside a transaction.
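A sketch of the programmatic style using Spring's TransactionTemplate (the WorkItemProcessor and its DAO call are made up):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class WorkItemProcessor {

    private final TransactionTemplate txTemplate;

    // The transaction manager is passed in via the application context.
    public WorkItemProcessor(PlatformTransactionManager txManager) {
        this.txTemplate = new TransactionTemplate(txManager);
    }

    public void process(final long workItemId) {
        txTemplate.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // everything here runs in one transaction: it commits on normal
                // return and rolls back if an exception is thrown
                updateWorkItem(workItemId);
            }
        });
    }

    private void updateWorkItem(long workItemId) {
        // hypothetical DAO / Hibernate work
    }
}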
Or, the easier approach nowadays with Spring is to use the @Transactional annotation on any method that needs to be run inside a transaction, and to add something like this to your application context (assuming your transactionManager is named txManager):
<tx:annotation-driven transaction-manager="txManager"/>
This way your application will easily accept new components later on, which can have transaction management simply by using the @Transactional annotation or by directly creating transactions through a PlatformTransactionManager that they will receive through a setter (so you can pass it when you define the object in your app context).
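With that in place, a component only needs the annotation (a sketch; the class and method names are illustrative):

import org.springframework.transaction.annotation.Transactional;

public class WorkItemService {

    // Spring proxies this bean and begins/commits the transaction
    // around the annotated method; a runtime exception rolls it back.
    @Transactional
    public void completeWorkItem(long workItemId) {
        // load the item, invoke extension code, update state
    }
}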
You could try Atomikos TransactionsEssentials for Java transaction management and connection pooling (JDBC+JMS) in a J2SE environment. No need for any appservers, and it is much more fun to work with ;-)
HTH
Guy