Where to put business logic in Eclipse RCP program - java

I'm writing a small application in RCP to wrap around the business logic in another (non-RCP) simulation library. I can access and use the library fine from any of my plugins, but I don't know where I should put the instance of the Simulation library so that, say, one of the command handlers can make calls to it.
From reading the docs it sounds like I should be storing 'global' information like this in the workbench - but I still don't really understand how to do that.
Help?

First, the business layer (BL) can and should reside in its own plugin. That will provide decent decoupling between the layers.
Second, you should carefully decide what the interface should be and which classes are exposed. Ideally, you should mostly expose interfaces and data objects.
Finally, decide how the "hand shake" works. E.g., how to obtain the initial interface to the BL. Since it is a Plugin, it could have an Activator which loads it. You could add a method in the activator which returns the BL interface.
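For example, a minimal sketch of such an Activator (SimulationService and SimulationFactory are placeholders for whatever your library actually provides):

import org.eclipse.core.runtime.Plugin;
import org.osgi.framework.BundleContext;

// Hypothetical Activator of the BL plugin: it bootstraps the simulation library
// once and exposes it to the rest of the application.
public class Activator extends Plugin {

    private static Activator instance;
    private SimulationService simulation; // hypothetical BL interface

    @Override
    public void start(BundleContext context) throws Exception {
        super.start(context);
        instance = this;
        simulation = SimulationFactory.create(); // however your library is bootstrapped
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        instance = null;
        super.stop(context);
    }

    public static Activator getDefault() {
        return instance;
    }

    /** Command handlers in other plugins call Activator.getDefault().getSimulation(). */
    public SimulationService getSimulation() {
        return simulation;
    }
}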
If you are looking for something more decoupled, you could create an extension point or deploy the BL as an OSGi service, but that's a bit of overkill for your needs.

If I understand you correctly, I see two ways:
Store the instance in the model plug-in itself and obtain it via something like SimulationFactory.getInstance(String myAppId). The passed String is a constant in your app that is always used when obtaining the reference.
Define a new class, e.g. GlobalAccess, in your app that is initialized with an instance of your model and has some getter (whether you use a single instance again or only provide public static methods is a matter of taste).
The second way is similar to some classes in Eclipse like Platform or PlatformUI, where you can obtain initial references and navigate through the workbench.
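For instance, the second option could be as small as this (GlobalAccess and Simulation are placeholder names, not types from your library):

// Hypothetical holder class for the shared model instance.
public final class GlobalAccess {

    private static Simulation simulation; // your library's entry point (placeholder type)

    private GlobalAccess() {
    }

    /** Called once at startup, e.g. from the WorkbenchAdvisor or a plug-in Activator. */
    public static void initialize(Simulation instance) {
        simulation = instance;
    }

    /** Called from command handlers, views, etc. */
    public static Simulation getSimulation() {
        return simulation;
    }
}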
Edit:
I just found a tutorial that might help you:
Passing Data between Plug-ins

Using non-created classes in current class

Scenario
This may be a question about conventions, but Java might have a built-in way to do this. Let me explain my problem with a scenario:
We are three people working on a project, and we're all doing different parts, and working on different git branches, all of which will be needed in the end project.
My part of the program runs the TUI (let's call the class Startmenu), which needs to call functions on an instance of the Database class. In my switch cases, I know the future code from the other branch will allow me to simply run db.printElements(), as an example.
Problem
This is the problem: I cannot declare Database db; as a field of the class, nor can I write my Startmenu() constructor to take a Database as an input, such as Startmenu(Database db), because the Database class does not yet exist.
In practice, how do I solve this issue? Currently, I'm commenting out the parts that depend on the other code and replacing them with placeholder code instead. This doesn't seem like the best idea.
I know a solution is to create the Database class, with empty functions for those functions I will be needing right now, but this will mess with git instead.
tl;dr: How can I prepare my own files to use code that does not yet exist, but which will be added "magically" by other people over time?
All components in your project should have an interface, specified during the design phase, for exchanging information across layers and with other Java components.
You can commit and share these interfaces early, so your colleagues can provide their own testing implementations or mock behaviours.
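For example, you could commit a small contract now and code your TUI against a throwaway stub until the real implementation lands (Database, printElements() and StubDatabase are just placeholders for whatever your team agrees on):

// Agreed-upon contract, committed early so every branch can compile against it.
public interface Database {
    void printElements();
}

// Throwaway stub so the Startmenu branch compiles and can be demoed/tested
// before the real Database implementation exists.
public class StubDatabase implements Database {
    @Override
    public void printElements() {
        System.out.println("(stub) no elements yet");
    }
}

// Your class depends only on the interface; later you pass in the real thing.
public class Startmenu {

    private final Database db;

    public Startmenu(Database db) {
        this.db = db;
    }

    public void show() {
        db.printElements();
    }
}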

Domain Driven Design - testability and the "new" keyword

I have been trying to follow a domain driven design approach in my new project. I have always generally used Spring for dependency injection, which nicely separates my application code from the construction code; however, with DDD I always seem to have one domain object wanting to create another domain object, both of which have state and behaviour.
For example, given a media file, we want to encode it to a different format - the media asset calls on a transcode service and receives a callback:
class MediaAsset implements TranscodingResultListener {

    private NetworkLocation permanentStorage;
    private Transcoder transcoder;

    public void transcodeTo(Format format) {
        transcoder.transcode(this, format);
    }

    public void onSuccessfulTranscode(TranscodeResult result) {
        Rendition rendition = new Rendition(this, result.getPath(), result.getFormat());
        rendition.moveTo(permanentStorage);
    }
}
Which raises two problems:
If the rendition needs some dependencies (like the MediaAsset requires a "Transcoder") and I want to use something like Spring to inject them, then I have to use AOP in order for my program to run, which I don't like.
If I want a unit test for MediaAsset that tests that a new format is moved to temporary storage, then how do I do that? I cannot mock the rendition class to verify that it had its method called... the real Rendition class will be created.
Having a factory to create this class is something that I've considered, but it is a lot of code overhead just to contain the "new" keyword which causes the problems.
Is there an approach here that I am missing, or am I just doing it all wrong?
I think that the injection of a RenditionFactory is the right approach in this case. I know it requires extra work, but you also remove an SRP violation from your class. It is often tempting to construct objects inside business logic, but my experience is that injection of the object or an object factory pays off 99 out of 100 times. Especially if the mentioned object is complex, and/or if it interacts with system resources.
I assume your approach for unit testing is to test the MediaAsset in isolation. Doing this, I think a factory is the common solution.
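A rough sketch of what that could look like with a hypothetical RenditionFactory (the MediaAsset constructor shown in the test and the String path type are assumptions; JUnit 4 and Mockito are used purely for illustration):

// --- RenditionFactory.java (hypothetical): the parameters mirror the Rendition
// --- constructor from the question, with the path type assumed to be String.
public interface RenditionFactory {
    Rendition create(MediaAsset owner, String path, Format format);
}

// Inside MediaAsset, onSuccessfulTranscode would then become:
//   Rendition rendition = renditionFactory.create(this, result.getPath(), result.getFormat());
//   rendition.moveTo(permanentStorage);

// --- MediaAssetTest.java: unit test sketch, assuming MediaAsset gains a constructor
// --- taking its collaborators, including the factory.
import static org.mockito.Mockito.*;

import org.junit.Test;

public class MediaAssetTest {

    @Test
    public void movesNewRenditionToPermanentStorage() {
        NetworkLocation storage = mock(NetworkLocation.class);
        Transcoder transcoder = mock(Transcoder.class);
        RenditionFactory factory = mock(RenditionFactory.class);
        Rendition rendition = mock(Rendition.class);
        TranscodeResult result = mock(TranscodeResult.class);

        when(factory.create(any(), any(), any())).thenReturn(rendition);

        MediaAsset asset = new MediaAsset(storage, transcoder, factory);
        asset.onSuccessfulTranscode(result);

        verify(rendition).moveTo(storage);
    }
}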
Another approach is to test the whole system (or almost the whole system). Let your test access the outer interface[1] (user interface, web service interface, etc) and create test doubles for all external systems that the system accesses (database, file system, external services, etc). Then let the test inject these external dependencies.
Doing this, you can let the tests be all about behaviour. The tests become decoupled from implementation details. For instance, you can use dependency injection for Rendition, or not: the tests don't care. Also, you might discover that MediaAsset and Rendition are not the correct concepts[2], and you might need to split MediaAsset in two and merge half of it with Rendition. Again, you can do it without worrying about the tests.
(Disclaimer: Testing on the outer level does not always work. Sometimes you need to test common concepts, which requires you to write micro tests. And then you might run into this problem again.)
[1] The best level might actually be a "domain interface", a level below the user interface where you can use the domain language instead of strings and integers, and where you can talk domain actions instead of button clicks and focus events.
[2] Perhaps this is actually your problem: Are MediaAsset and Rendition the correct concepts? If you ask your domain expert, does he know what these are? If not, are you really doing DDD?

The missing "framework level" access modifier

Here's the scenario. As a creator of publicly licensed, open source APIs, my group has created a Java-based web user interface framework (so what else is new?). To keep things nice and organized, as one should in Java, we have used packages with the naming convention org.mygroup.myframework.x, with the x being things like components, validators, converters, utilities, and so on (again, what else is new?).
Now, somewhere in class org.mygroup.myframework.foo.Bar is a method void doStuff() that I need to perform logic specific to my framework, and I need to be able to call it from a few other places in my framework, for example org.mygroup.myframework.far.Boo. Given that Boo is neither a subclass of Bar nor in the exact same package, the method doStuff() must be declared public to be callable by Boo.
However, my framework exists as a tool to allow other developers to create simpler more elegant R.I.A.s for their clients. But if com.yourcompany.yourapplication.YourComponent calls doStuff(), it could have unexpected and undesirable consequences. I would
prefer that this never be allowed to happen. Note that Bar contains other methods that are genuinely public.
In an ivory tower world, we would re-write the Java language and insert a tokenized analogue to default access, that would allow any class in a package structure of our choice to access my method, maybe looking similar to:
[org.mygroup.myframework.*] void doStuff() { .... }
where the wildcard would mean any class whose package begins with org.mygroup.myframework can call, but no one else.
Given that this world does not exist, what other good options might we have?
Note that this is motivated by a real-life scenario; names have been changed to protect the guilty. There exists a real framework where peppered throughout its Javadoc one will find public methods commented as "THIS METHOD IS INTERNAL TO MYFRAMEWORK AND NOT
PART OF ITS PUBLIC API. DO NOT CALL!!!!!!" A little research shows these methods are called from elsewhere within the framework.
In truth, I am a developer using the framework in question. Although our application is deployed and is a success, my team experienced so many challenges that we want to convince our bosses to never use this framework again. We want to do this in a well thought out presentation of the poor design decisions made by the framework's developers, and not just as a rant. This issue would be one (of several) of our points, but we just can't put a finger on how we might have done it differently. There has already been some lively discussion here at my workplace, so I wondered what the rest of the world would think.
Update: No offense to the two answerers so far, but I think you've missed the mark, or I didn't express it well. Either way, allow me to try to illuminate things. Put as simply as I can: how should the framework's developers have refactored the following? Note this is a really rough example.
package org.mygroup.myframework.foo;

public class Bar {

    /** Adds a Bar component to application UI */
    public boolean addComponentHTML() {
        // Code that adds the HTML for a Bar component to a UI screen
        // returns true if successful
        // I need users of my framework to be able to call this method, so
        // they can actually add a Bar component to their application's UI
        return true;
    }

    /** Not really public, do not call */
    public void doStuff() {
        // Code that performs internal logic to my framework
        // If other users call it, Really Bad Things could happen!
        // But I need it to be public so org.mygroup.myframework.far.Boo can call
    }
}
Another update: So I just learned that C# has the "internal" access modifier. So perhaps a better way to have phrased this question might have been, "How to simulate/emulate internal access in Java?" Nevertheless, I am not in search of new answers. Our boss ultimately agreed with the concerns mentioned above.
You get closest to the answer when you mention the documentation problem. The real issue isn't that you can't "protect" your internal methods; rather, it is that the internal methods pollute your documentation and introduce the risk that a client module may call an internal method by mistake.
Of course, even if you did have fine-grained permissions, you still aren't going to be able to prevent a client module from calling internal methods; the JVM doesn't protect against reflection-based calls to private methods anyway.
The approach I use is to define an interface for each problematic class, and have the class implement it. The interface can be documented solely in terms of client modules, while the implementing class can provide what internal documentation you desire. You don't even have to include the implementation javadoc in your distribution bundle if you don't want to, but either way the boundary is clearly demarcated.
As long as you ensure that at runtime only one implementation is loaded per documentation-interface, a modern JVM will guarantee you don't suffer any performance penalty for using it; and, as an added bonus, you can load harness/stub versions during testing.
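A stripped-down illustration of that split, reusing the names from the question (Bar becomes the published interface; the Impl naming is just one option):

// Client-facing API: the only type (and javadoc) published to framework users.
public interface Bar {

    /** Adds a Bar component to the application UI; returns true if successful. */
    boolean addComponentHTML();
}

// Implementation class (a separate source file); its javadoc can freely describe
// internal details, and it can live in a non-exported/internal package.
public class BarImpl implements Bar {

    @Override
    public boolean addComponentHTML() {
        // framework code that adds the HTML for a Bar component
        return true;
    }

    /** Internal to the framework; org.mygroup.myframework.far.Boo calls this on the BarImpl it holds. */
    public void doStuff() {
        // internal logic
    }
}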
The only idea I can think of to supply this missing "framework level access modifier" is CDI and a better design.
If you have to use a method from very different classes and packages in various (but few) situations, there will almost certainly be a way to redesign those classes in order to make those methods "private" and inaccessible.
There is no support in the Java language for this kind of access level (you would like something like "internal" scoped to a namespace). You can only restrict access to the package level (or use the familiar public-protected-private model).
From my experience, you can use the Eclipse convention:
create a package called "internal"; the whole class hierarchy under this package (including sub-packages) is considered non-API code and may be changed at any time with no guarantees for your users. In that non-API code, use public methods whenever you like. Since it is only a convention and is not enforced by the JVM or the Java compiler, you cannot prevent users from using the code, but at least you let them know that these classes were not meant to be used by 3rd parties.
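In code layout terms, that could look roughly like this (two separate source files shown together; BarInternals is a made-up name):

// --- org/mygroup/myframework/foo/Bar.java: published API, safe for 3rd parties ---
package org.mygroup.myframework.foo;

public class Bar {
    public boolean addComponentHTML() { /* ... */ return true; }
}

// --- org/mygroup/myframework/internal/foo/BarInternals.java: non-API by convention;
// --- in an Eclipse/OSGi bundle you would also leave this package out of Export-Package ---
package org.mygroup.myframework.internal.foo;

public class BarInternals {
    public void doStuff() { /* framework-only logic */ }
}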
By the way, in the Eclipse platform source code there is a complex plugin model that keeps you from using the internal code of other plugins: each plugin has a custom class loader that prevents loading classes that should be "internal" to those plugins.
Interfaces and dynamic proxies are sometimes used to make sure you only expose methods that you do want to expose.
However, that comes at a fairly hefty performance cost if your methods are called very often.
Using the @Deprecated annotation might also be an option; although it won't stop external users from invoking your "framework private" methods, they can't say they hadn't been warned.
In general I don't think you should worry about your users deliberately shooting themselves in the foot too much, so long as you made it clear to them that they shouldn't use something.

Dependency Injection in every aspect of a spring app?

I am taking a look at Spring as a web framework; however, I need a bit of help getting my head around DI.
The concept of objects being constructed in the container at runtime is completely new to me.
I am just wondering how this plays out in a big application: would I have some modules doing work that are more tightly coupled, or should every object be created by the container at runtime?
It all seems a little intensive to me. Say, for example, I have a CSV data mining application that extracts the data row by row; each row's data is encapsulated in one of my own CSVRow objects for processing or whatever. These objects are instantiated whenever an Excel file is uploaded to the server. I don't know how many I will need to create.
I seem to be getting a bit lost, any clarity, an overview or some guidance would be much appreciated.
Thanks in advance!
I'll try to put it simply:
use dependency injection for stateless classes that have logic (business logic, persistence logic, front-end logic)
use new for value objects
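For instance, a rough sketch of those two rules (CsvImportService, RowRepository and CsvRow are made-up names, not Spring APIs):

import java.io.BufferedReader;
import java.io.IOException;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// Stateless logic: one instance, built and wired by Spring.
@Service
public class CsvImportService {

    private final RowRepository repository; // another Spring-managed bean (hypothetical)

    @Autowired
    public CsvImportService(RowRepository repository) {
        this.repository = repository;
    }

    public void importFile(BufferedReader reader) throws IOException {
        String line;
        while ((line = reader.readLine()) != null) {
            // Value object: created with plain "new", as many as the file needs;
            // the container never knows or cares about these.
            CsvRow row = new CsvRow(line.split(","));
            repository.save(row);
        }
    }
}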
Broadly speaking, an application is made up of a collection of classes that implement the business logic.
Normally, each object is responsible for obtaining references to the objects it needs (and to those objects' dependencies).
I think it is obvious that this leads to:
1) tightly coupled classes
2) code that is hard to test, since each object instantiates the specific classes it depends on, and if anything needs to change, the code must be modified.
With Dependency Injection, objects do not instantiate their dependencies themselves; an "external component" provides the dependencies at object creation time, i.e. injects the dependencies into the objects.
So in your example, the idea is that you could have a CsvRow object instantiated by Spring (along with all its dependencies) and obtain an instance whenever needed. It would also be possible to switch to, for example, a CsvRow2 object (another implementation) by just changing your configuration.
You don't need to use DI for your CSV row abstraction. Once you get the file, when you start parsing it, your code can create the CSVRow things as it goes. You don't need to wire them up.
You certainly could if you wanted to. You would grab your applicationContext and just get the beans by name. You would want to do this if the CsvRow had dependencies that you wanted Spring to manage for you.
I think of Spring as a way to create "singletons". When I want to guarantee there's only one instance of a class in the application, use Spring to create it. But, instead of being a traditional singleton with a static INSTANCE field or similar, it's a POJO with whatever constructors / setters you need. Spring creates the instance at runtime for you and makes sure that creation only happens once.
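A minimal sketch of that style (ReportGenerator and the output directory are arbitrary examples):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Plain POJO: no static INSTANCE field, no private-constructor tricks.
public class ReportGenerator {

    private final String outputDir;

    public ReportGenerator(String outputDir) {
        this.outputDir = outputDir;
    }
}

// Spring beans are singleton-scoped by default, so the container creates exactly
// one ReportGenerator for the whole application context (separate source file).
@Configuration
public class AppConfig {

    @Bean
    public ReportGenerator reportGenerator() {
        return new ReportGenerator("/var/reports");
    }
}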

need design/pattern/structure help on coding up a java 'world'

I've always wanted to write a simple 'world' in Java that I could run and then add new objects to (objects that didn't exist at the time the world started running) at a later date, to simulate/observe different behaviours between future objects.
The problem is that I don't want to ever stop or restart the world once it's started, I want it to run for a week without having to recompile it, but have the ability to drop in objects and redo/rewrite/delete/create/mutate them over time.
The world could be as simple as a 10 x 10 array of x/y 'locations' (think chessboard), but I guess it would need some kind of tick-timer process to monitor objects and give each one (if any) a chance to 'act' (if they want to).
Example: I code up World.java on Monday and leave it running. Then on Tuesday I write a new class called Rock.java (that doesn't move). I then drop it (somehow) into this already running world, which just places it someplace random in the 10x10 array, where it never moves.
Then on Wednesday I create a new class called Cat.java and drop that into the world, again placed randomly, but this new object can move around the world (over some unit of time). Then on Thursday I write a class called Dog.java, which also moves around but can 'act' on another object if it's in a neighbouring location, and vice versa.
Here's the thing. I don't know what kind of structure/design I would need for the actual world class so that it knows how to detect/load/track future objects.
So, any ideas on how you would do something like this?
I don't know if there is a pattern/strategy for a problem like this, but this is how I would approach it:
I would have all of these different classes that you are planning to make be objects of some common class (maybe a WorldObject class), and then put their differentiating features in separate configuration files.
Creation
When your program is running, it would routinely check that configuration folder for new items. If it sees that a new config file exists (say Cat.config), it would create a new WorldObject, give it the features it reads from the Cat.config file, and drop that new object into the world.
Mutation
If your program detects that one of these items' configuration files has changed, then it finds that object in the World, edits its features and then redisplays it.
Deletion
When the program looks in the folder and sees that the config file does not exist anymore, then it deletes the object from the World and checks how that affects all the other objects.
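A very rough sketch of that polling approach (World, WorldObject.fromConfig() and the *.config format are assumptions of this answer, not existing APIs):

import java.io.File;
import java.util.HashMap;
import java.util.Map;

// Hypothetical watcher that keeps the running World in sync with a folder of *.config files.
public class ConfigWatcher {

    private final File configDir;
    private final World world;                                   // the running world (hypothetical)
    private final Map<String, Long> lastSeen = new HashMap<>();  // file name -> last modified time

    public ConfigWatcher(File configDir, World world) {
        this.configDir = configDir;
        this.world = world;
    }

    /** Called periodically, e.g. from the world's tick timer. */
    public void poll() {
        File[] files = configDir.listFiles((dir, name) -> name.endsWith(".config"));
        if (files == null) {
            return;
        }
        Map<String, Long> current = new HashMap<>();
        for (File f : files) {
            current.put(f.getName(), f.lastModified());
            Long previous = lastSeen.get(f.getName());
            if (previous == null) {
                world.add(WorldObject.fromConfig(f));                  // creation
            } else if (previous != f.lastModified()) {
                world.update(f.getName(), WorldObject.fromConfig(f));  // mutation
            }
        }
        for (String name : lastSeen.keySet()) {
            if (!current.containsKey(name)) {
                world.remove(name);                                    // deletion
            }
        }
        lastSeen.clear();
        lastSeen.putAll(current);
    }
}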
I wouldn't bet too much on the JVM itself running forever. There are too many ways this could fail (computer trouble, unexpected out-of-memory errors, permgen problems due to repeated classloading).
Instead I'd design a system that can reliably persist the state of each object involved (simplest approach: make each object serializable, but that would not really solve versioning problems).
So as the first step, I'd simply implement some nice classloader magic to allow jars to be "dropped" into the world simulation and loaded dynamically. But once you reach a point where that no longer works (because you need to modify the World itself, or need to make incompatible changes to some object), you could persist the state, switch out the libraries for new versions and reload the state.
Being able to persist the state also allows you to easily produce test scenarios or replay scenarios with different parameters.
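The classloader part of that first step could start out as simple as this sketch (the class name is supplied from outside here; reading it from the jar's manifest or a side file would work just as well):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical loader for a jar dropped into the simulation's "drop" folder.
public final class DropInLoader {

    private DropInLoader() {
    }

    public static Object loadFromJar(File jar, String className) throws Exception {
        URL[] urls = { jar.toURI().toURL() };
        // Parent is the application's classloader, so the dropped class can see the
        // world's shared interfaces (e.g. some WorldEntity interface).
        URLClassLoader loader = new URLClassLoader(urls, DropInLoader.class.getClassLoader());
        Class<?> clazz = Class.forName(className, true, loader);
        return clazz.getDeclaredConstructor().newInstance();
    }
}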
Have a look at OSGi - this framework allows installing and removing packages at runtime.
The framework is a container for so-called bundles: Java libraries with some extra configuration data in the jar's manifest file.
You could install a "world" bundle and keep it running. Then, after a while, install a bundle that contributes rocks or sand to the world. If you don't like it anymore, disable it. If you need other rocks, install an updated version of the very same bundle and activate it.
And with OSGi, you can keep the world spinning and moving around the sun.
The reference implementation is Equinox.
BTW: "I don't know what kind of structure/design" - at the very least you need to define an interface for a "geolocatable object", otherwise you won't be able to place and display it. But for the "world", it may really be enough to know that "there is something at coordinates x/y/z", and for the world viewer, that this "something" has a method to "display itself".
If you only care about adding classes (and not modifying them), here is what I'd do:
there is an interface Entity with all business methods you need (insertIntoWorld(), isMovable(), getName(), getIcon() etc)
there is a specific package where entities reside
there is a scheduled job in your application which every 30 seconds lists the class files of the package
keep track of the classes, and for any new class attempt to load it and cast it to Entity
for any newly loaded Entity, create a new instance and call its insertIntoWorld().
You could also skip the scheduler and automatic discovery and have a UI control in the World from which you could specify the class name to be loaded.
Some problems:
you cannot easily update an Entity. You'll most probably need to do some classloader magic
you cannot extend the Entity interface to add new business methods, so you are bound to the contract you initially started your application with
Too long an explanation for too simple a problem.
In other words, you just want to perform dynamic class loading.
First, if you somehow know the class name you can load it using Class.forName(). This is the way to get the Class object itself. Then you can instantiate it using Class.newInstance(); if your class has a public default constructor, that is enough. For more details, read about the reflection API.
But how to pass the name of new class to program that is already running?
I'd suggest two ways.
The program may poll a predefined file. When you wish to deploy a new class, you have to register it, i.e. write its name into this file. Additionally, this class has to be available on the classpath of your application.
The application may poll (for example) a special directory that contains jar files. Once it detects a new jar file, it may read its contents (see JarInputStream), define the new class using ClassLoader.defineClass(), then call newInstance(), etc.
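A rough sketch of the first variant (the classes.txt file name and polling mechanics are arbitrary assumptions; the newly created object still has to be handed over to your running world somehow):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical poller: each line of classes.txt is a fully qualified class name
// that must already be on the application's classpath.
public class ClassRegistryPoller {

    private final Path registry = Paths.get("classes.txt");
    private final Set<String> loaded = new HashSet<>();

    /** Called periodically by the application. */
    public void poll() throws Exception {
        if (!Files.exists(registry)) {
            return;
        }
        List<String> names = Files.readAllLines(registry);
        for (String name : names) {
            name = name.trim();
            if (name.isEmpty() || loaded.contains(name)) {
                continue;
            }
            Class<?> clazz = Class.forName(name);                            // load the class
            Object instance = clazz.getDeclaredConstructor().newInstance();  // needs a public no-arg constructor
            // ... hand 'instance' over to the running program here ...
            loaded.add(name);
        }
    }
}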
What you're basically creating here is called an application container. Fortunately there's no need to reinvent the wheel; there are already great pieces of software out there that are designed to stay running for long periods of time, executing code that can change over time. My advice would be to pick your IDE first, and that will go some way toward deciding which app container you should use (some are better integrated than others).
You will need a persistence layer, the JVM is reliable but eventually someone will trip over the power cord and wipe your world out. Again with JPA et al. there's no need to reinvent the wheel here either. Hibernate is probably the 'standard', but with your requirements I'd try for something a little more fancy with one of the graph based NoSQL solutions.
What you probably want to have a look at is the "dynamic object model" pattern/approach. I implemented it some time ago. With it you can create/modify object types at runtime that act as a kind of template for objects. Here is a paper that describes the idea:
http://hillside.net/plop/plop2k/proceedings/Riehle/Riehle.pdf
There are more papers, but I was not able to post them because this is my first answer and I don't have enough reputation. But Google is your friend :-)
