Migration from WLI to Human Workflow - Java

While migrating from a WebLogic WLI workflow to a BPEL Human Workflow, what options do we have for the JCX files used to interact with the database?
Can anyone point me to a relevant document?
For example, in my existing application we select some values from the database; how do we achieve the same thing in BPEL?
I am a beginner in BPEL.
I have created a BPEL process with a database adapter inside it, and now I want to execute that database adapter from my custom Java code. Is there any way to do that? Please guide me.
thanks

What versions are you working with? It would help if you added more details about your setup.
Taking a wild guess, here is something that might help you:
Controls are exported as partner-links. The operations for this partner-link are derived from the methods in the control JCX file. Each method parameter is treated as a separate input message part; the name of the part is the same as the name of the parameter. The output message is determined from the return type of the control method. It has a single part called parameters, since a method has a single return type with no name.
http://download.oracle.com/docs/cd/E13214_01/wli/docs85/bpel/export.html#1061022
EDIT:
After a bit of research, I understand that you are on WLI 8.x. The link above should help you if you are facing problems exporting your JPD.
The alternative approach would be to import your 8.x project into a 10g3 project and export it from there. With this approach you can generate BPEL 2.0-compliant workflows. Warning: this is a one-time import, and the project will no longer be accessible from earlier WLI versions, so try it on a copy.
The second part of your question is not clear. Invoking controls from your Java code would be the same as invoking a web service. The WLI controls that are EJB calls/transformations get converted into web service portTypes. You can consume these web services from your Java application (e.g., using Axis; see the sketch at the end of this answer).
E.g., if I am trying to convert a JPD SomeWorkflow.jpd, and my JPD (WLI 8.x) had a control declared as
/**
 * @common:control
 */
private com.appmills.someapp.controls.TestDBCtrl dbctrl;
Or, alternatively with 10g3
@Control()
private com.appmills.someapp.controls.TestDBCtrl dbctrl;
The export creates three files: SomeWorkflow.bpel, SomeWorkflow.wsdl, and SomeWorkflow_ctrl.wsdl.
The generated code would be:
<plnk:partnerLinkType name="com.appmills.someapp.controls.TestDBCtrl">
    <plnk:role name="control">
        <plnk:portType name="ctrl:com.appmills.someapp.controls.TestDBCtrlPT"
                       xmlns:ctrl="http://www.bea.com/workshop/bpel/ctrl"/>
    </plnk:role>
</plnk:partnerLinkType>
EDIT 2:
The generated WSDL for controls (in the above example SomeWorkflow_ctrl.wsdl) does not contain <binding> or <service> tags. These are left out for you to define. The assumption is that you have these available somewhere, and have to simply wire them in.
As you might be aware, the JCX equivalents in Oracle SOA are JCAs. There is no direct export-import between WLI and Oracle SOA, which means the effort involved can vary depending on your current code complexity and your migration plan.
In my opinion, for JDBC Controls specifically, the simplest solution is to rewrite them as Database adapters.
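If you do end up calling one of the exported control web services from plain Java (the Axis route mentioned above), a minimal Axis 1.x client sketch might look like the following. The endpoint URL, namespace, operation name and parameter are placeholders, not values produced by the export; take the real ones from your completed WSDL once the <binding> and <service> sections are in place.

import java.net.URL;
import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

public class ControlServiceClient {
    public static void main(String[] args) throws Exception {
        // Placeholder values -- substitute the endpoint and names from your own WSDL.
        String endpoint = "http://localhost:7001/someapp/TestDBCtrlService";
        String namespace = "http://www.bea.com/workshop/bpel/ctrl";

        Service service = new Service();
        Call call = (Call) service.createCall();
        call.setTargetEndpointAddress(new URL(endpoint));
        call.setOperationName(new QName(namespace, "getSomeValues")); // hypothetical operation

        // Each control method parameter becomes a separate input message part,
        // so the arguments are passed positionally here.
        Object result = call.invoke(new Object[] { "someKey" });
        System.out.println("Result: " + result);
    }
}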


Best way to manage a Configuration File?

A colleague and I were discussing best practices for managing a Configuration file, and we wanted to get some further feedback from others.
Our goal is for the configuration file to specify what action should be taken when certain events occur.
The two options we are debating:
1. In the config file, specify the class path of the class that implements the action to be taken (e.g., "ActionToTake" : "com.company.publish.SendEveryoneAnEmailClass"). Inside the code, when this event is encountered, we can then do Class.forName(config.ActionToTake).newInstance().run() to invoke the specified action.
2. In the config file, specify the action that should be taken as a human-readable phrase (e.g., "ActionToTake" : "SendEveryoneAnEmail"). Inside the code, when this event is encountered, we can then parse config.ActionToTake and map it to the action implementation (e.g., new SendEveryoneAnEmailClass().run()).
We are currently a very small team, and the only people reading/using this config file at the moment are our team of software devs. But it's unclear if this will continue to be true in the future.
Reasoning behind option 1: Anyone reading the config file will explicitly and immediately know what class will get invoked and where it's implemented. This also allows the action class to be implemented/imported from a completely separate JAR file, without recompiling/changing our code.
Reasoning behind option 2: The config file should be a high level description of user-intent, and should not contain implementation details such as specific class names & package paths. Refactoring of class/package names can also be done without having to make config file changes.
Thoughts on which of these two design philosophies is preferable for configuration files?
The first option's advantage is, as jas noticed, the ability to 'link' code in the future. It's a real advantage only if you sell/distribute your software as a closed-source package, or if you plan to hot-swap behavior in production. You've already pointed out the con: refactoring.
The second option:
It won't help you with refactoring. If you change your action from SendEmail to BringBeer but leave the string as 'send email', then you have failed.
Readability: send-everyone-an-email is as good as SendEveryoneAnEmail. Every developer will know what will happen; it can't be confused with LaunchRockets. Your code can find the class based on some text, not necessarily the fully-qualified name, and it can assume that actions live in a specific package unless one is explicitly provided. That is a way to combine both options (see the sketch below).
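As a rough illustration of combining both options, the resolver below accepts either a readable short name (looked up in an assumed default package) or a fully-qualified class name. The Action interface, the ActionResolver class and the package prefix are all illustrative, not taken from the question.

public final class ActionResolver {

    /** Minimal contract every configured action is assumed to implement. */
    public interface Action {
        void run();
    }

    // Assumed location of action classes when only a short name is configured.
    private static final String DEFAULT_PACKAGE = "com.company.publish.actions.";

    /** Accepts either a readable short name or a fully-qualified class name. */
    public static Action resolve(String configuredName) throws Exception {
        String className = configuredName.contains(".")
                ? configuredName                      // option 1: explicit class path
                : DEFAULT_PACKAGE + configuredName;   // option 2: short name + assumed package
        return (Action) Class.forName(className).getDeclaredConstructor().newInstance();
    }
}

When the configured event occurs, the calling code would simply do ActionResolver.resolve(config.ActionToTake).run().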
Consider also another possibility: do the configuration in code. If you don't want to recompile the package, you can use a scripting language (e.g. Groovy). It lets you create a very readable DSL, and you keep refactoring support.

XPages getEffectiveUserName()

I'm using an XPage as an agent (XAgent) which makes an SSJS call into some Java classes stored as Java design elements. I want the processes instigated by the XPage to run in the context of the user I'm currently signed into the browser as. However, everything seems to be running as me, I guess based on the last signature on the XPage?
For example, in my custom classes the following returns my name when I need it to be returning the user's name:
DominoUtils.getCurrentSession().getEffectiveUserName()
When using old school Domino agents, the effective username is determined by the "Run as Web User" or "Run on behalf of" fields in the agent properties.
Is it possible to achieve the same functionality when using an XPage?
To investigate, you have a number of moving parts:
1. Add to the XAgent (not your Java code) a print statement with session.getUserName() and session.getEffectiveUserName()
2. Check your DominoUtils to see whether there is a sessionAsSigner hidden in it
3. If 1 shows the right user but your Java code does not (e.g. because of 2), consider dependency injection: instead of getting the session from DominoUtils, hand it over as a parameter from the XAgent to the Java class (see the sketch below)
Let us know how it goes
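A minimal sketch of the dependency-injection idea from point 3: the Java class no longer looks up its own session, it simply receives whatever session the XAgent hands it. The class and method names here are illustrative.

import lotus.domino.NotesException;
import lotus.domino.Session;

public class UserAwareProcessor {

    private final Session session;

    public UserAwareProcessor(Session session) {
        this.session = session;   // the session passed in by the XAgent
    }

    public String whoAmI() throws NotesException {
        // Returns the browser user's name when the XAgent passes the
        // per-request "session", and the signer's name only if it
        // explicitly passes "sessionAsSigner".
        return session.getEffectiveUserName();
    }
}

From the XAgent's SSJS this would be called with something like new com.example.UserAwareProcessor(session).whoAmI().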
In my scenarios I could solve most of the requirements with either:
session.getEffectiveUserName()
or:
context.getUser().getFullName()
There are situations where you need to wrap this in:
session.createName(string)
to get the NotesName object representation (createName returns a NotesName).
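As a small sketch of that wrapping step, assuming the session passed in is the current lotus.domino.Session (the helper class and method name are illustrative):

import lotus.domino.Name;
import lotus.domino.NotesException;
import lotus.domino.Session;

public class NameHelper {
    /** Returns the common-name part of the effective user, e.g. "Jane Doe". */
    public static String commonUserName(Session session) throws NotesException {
        Name name = session.createName(session.getEffectiveUserName()); // e.g. "CN=Jane Doe/O=Acme"
        return name.getCommon();
    }
}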

Executable pipe and filter graph in Java

I am currently writing my master's thesis on monitoring of distributed systems. For this purpose I have designed a framework that can record monitoring data and analyze it in a series of filters (pipes-and-filters style). It is based on the Kieker monitoring framework.
You can connect the different filters to each other by subscribing to an output port, like so:
DurationFilter durationFilter = new DurationFilter();
Timeline timeline = new Timeline(...);
durationFilter.getOutputPort().subscribe(timeline);
This mechanism is provided by the Kieker framework which I am using.
To run an analysis, the user currently has to connect the filters manually by writing the code by hand. What I want to do now is write a tool with a GUI that makes it easier to create a configuration (set of filters, input files, and the connections between them). Ideally the user could do it like in a UML editor: creating boxes (filters), connecting them with lines (connections), setting the parameters for input (input files), etc.
These configurations then need to be executed, meaning I need a mapping from the graph to the Java code. That was my idea so far. First off: do you think this approach is the right one for this task?
In my research I found the framework JHotDraw which has a lot of the features I just mentioned. With JHotDraw I can create the visual elements (Figures) on a drawing area (DrawingEditor) including a set of tools to create, edit and connect the elements. This I have done and it's pretty straightforward. A bonus is the undo/redo functionality of JHotDraw.
Now my problem: I am not sure how I am supposed to get from the graphical representation in the editor to the java code. What I have is the V-part of the MVC pattern which the framework supposedly uses. The Figures are the view. But where does the model go and how does it integrate into the framework? I am thinking that for every element that is displayed in the DrawingEditor I will have to have a corresponding model that stores the data for the element. A FilterModel would have attributes like input data types (which data can it process), output ports and their data types (what kind of data does it create) and the type of the filter (corresponds to the Java class). Those are necessary to check if one filter can connect to another and to execute the whole thing in the end.
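A rough sketch of the kind of model class described above; the class name, fields and methods are illustrative and not part of Kieker or JHotDraw.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FilterModel {

    /** Fully-qualified name of the filter class this box represents. */
    private final String filterClassName;

    /** Output port name -> data type the port produces. */
    private final Map<String, Class<?>> outputPorts = new LinkedHashMap<String, Class<?>>();

    /** Data types this filter accepts as input. */
    private final List<Class<?>> inputTypes = new ArrayList<Class<?>>();

    public FilterModel(String filterClassName) {
        this.filterClassName = filterClassName;
    }

    public void addOutputPort(String name, Class<?> producedType) {
        outputPorts.put(name, producedType);
    }

    public void addInputType(Class<?> acceptedType) {
        inputTypes.add(acceptedType);
    }

    /** True if the named output port of this filter can feed the target filter. */
    public boolean canConnect(String outputPort, FilterModel target) {
        Class<?> produced = outputPorts.get(outputPort);
        if (produced == null) {
            return false;
        }
        for (Class<?> accepted : target.inputTypes) {
            if (accepted.isAssignableFrom(produced)) {
                return true;
            }
        }
        return false;
    }

    public String getFilterClassName() {
        return filterClassName;
    }
}

Each Figure in the DrawingEditor would then hold a reference to one FilterModel instance, and executing the configuration would mean walking the connected models and instantiating the corresponding filter classes.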
Not sure if I am making myself clear. If anything is unclear please ask.
We are currently working on a web-based UI for Kieker. This will allow users to define and execute Kieker pipe-and-filter graphs. If you are still interested in this, feel free to contact us. You'll find our contact information at kieker.sf.net/support/. Also, I'd be interested in what you're doing in your thesis ;-).
Regards, André

Where to put business logic in Eclipse RCP program

I'm writing a small application in RCP to wrap around the business logic in another (non-RCP) simulation library. I can access and use the library fine from any of my plugins, but I don't know where I should put the instance of the Simulation library so that, say, one of the command handlers can make calls to it.
From reading the docs it sounds like I should be storing 'global' information like this in the workbench - but I still don't really understand how to do that.
Help?
First, the business layer (BL) can and should reside in its own plug-in. That will provide decent decoupling between the layers.
Second, you should carefully decide what the interface should be and which classes are exposed. Ideally, you should mostly expose interfaces and data objects.
Finally, decide how the "handshake" works, i.e., how to obtain the initial interface to the BL. Since it is a plug-in, it could have an Activator which loads it; you could add a method to the Activator that returns the BL interface.
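A minimal sketch of that Activator approach, assuming a hypothetical SimulationEngine wrapper class around your simulation library and an illustrative plug-in ID:

import org.eclipse.core.runtime.Plugin;
import org.osgi.framework.BundleContext;

public class SimulationActivator extends Plugin {

    public static final String PLUGIN_ID = "com.example.simulation"; // illustrative

    private static SimulationActivator instance;
    private SimulationEngine engine;   // hypothetical wrapper around the simulation library

    @Override
    public void start(BundleContext context) throws Exception {
        super.start(context);
        instance = this;
        engine = new SimulationEngine(); // create the single shared instance here
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        engine = null;
        instance = null;
        super.stop(context);
    }

    public static SimulationActivator getDefault() {
        return instance;
    }

    public SimulationEngine getSimulationEngine() {
        return engine;
    }
}

A command handler in another plug-in would then call SimulationActivator.getDefault().getSimulationEngine().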
If you are looking for something more decoupled, you could create an extension point or deploy the BL as an OSGi service, but that's a bit of overkill for your needs.
If I understand you correctly, I see two ways:
Store the instance in the model plug-in itself, using SimulationFactory.getInstance(String myAppId). The passed String is a constant in your app that is always used when obtaining the reference.
Define a new class, e.g. GlobalAccess, in your app that is initialized with an instance of your model and has some getters (whether you use a single instance again or only provide public static methods is a matter of taste; see the sketch below).
The second way is similar to some classes in Eclipse like Platform or PlatformUI, where you can obtain initial references and navigate through the workbench.
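A small sketch of such a GlobalAccess class; the class name and the SimulationModel type are illustrative, not taken from the question.

public final class GlobalAccess {

    private static SimulationModel model;   // the shared instance of your model

    private GlobalAccess() {
        // no instances; access is via the static methods only
    }

    /** Called once at startup, e.g. from the Activator or the WorkbenchAdvisor. */
    public static void initialize(SimulationModel theModel) {
        model = theModel;
    }

    public static SimulationModel getModel() {
        return model;
    }
}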
Edit: I just found a tutorial that might help you:
Passing Data between Plug-ins

Apache beehive and localizing default pager for DataGrid

I am trying to localize the strings created by the Apache Beehive and NetUI default pager.
I'd like to translate the language of this output:
Page 1 of 3 First / Previous Next / Last
My .jsp code looks something like this
<netui-data:dataGrid name="searchResultsGrid" dataSource="pageInput.someData">
    <netui-data:header>
        ...
    </netui-data:header>
    <netui-data:rows>
        ...
    </netui-data:rows>
    <netui-data:configurePager pagerFormat="firstPrevNextLast" pageAction="refresh"
                               pageSize="10" disableDefaultPager="false" />
</netui-data:dataGrid>
I have already found the keys I should use in the translation, but how do I configure the message properties file(s) that the pager should use?
As far as I can tell, IDataGridMessageKeys is merely an exposed internal interface, not really meant to be used by end-users. As a guess, I'd wager they had to expose it as public from within that package in order for other packages to be able to use it, and couldn't find another way (with their build scheme) to safely pass it to the rest of the code.
(Of course, I might be wrong. I'll try rooting through the source some more.)
