According to the whitepaper on the jBPM page [1], jBPM can easily be used in a standalone app. However, I could not find any information about how to actually do this. I want to create a simple Java app (maybe with SWT) which displays a process with jBPM. The user should then be able to modify the application's behavior by modifying the jBPM diagram. For this purpose I think I also have to integrate some Eclipse components. Any ideas how this works?
[1] http://www.jboss.com/pdf/jbpm_whitepaper.pdf
Before you start, you may want also to see if Roamflow meets your needs as it seems to be a standalone jBPM Eclipse/RCP based viewer/editor.
Otherwise you should know how to build Eclipse plug-ins, or get the book I found useful for most Eclipse plug-in/SWT development needs, "Eclipse: Building Commercial-Quality Plug-ins", part of the Eclipse Series published by Addison-Wesley. Also, I am not going to sit down and write you a test app; you need to understand the fundamentals anyhow.
By standalone they mean it runs in any old JVM with the right libraries. It does not need to be deployed in a J2EE container, viewed through the web, etc.
Look at the source code for the jBPM Eclipse plug-in, as it has the features you are looking for, right? An SWT/Eclipse-based tool that displays jBPM. This includes checking for extension points that jBPM may install, which you can use to build your Eclipse plug-in quickly. For example: the jBPM editor code, here. Or how to serialize, here, for re-use.
Here is the critical SWT drawing code; a critical line is converting the jBPM figure into an SWT graphics object: "g = new SWTGraphics(gc);". This seems to generate an image from a jBPM model.
protected void writeImage() {
    SWTGraphics g = null;
    GC gc = null;
    Image image = null;
    LayerManager lm = (LayerManager) getGraphicalViewer().getEditPartRegistry().get(LayerManager.ID);
    IFigure figure = lm.getLayer(LayerConstants.PRINTABLE_LAYERS);
    try {
        Rectangle r = figure.getBounds();
        image = new Image(Display.getDefault(), r.width, r.height);
        gc = new GC(image);
        g = new SWTGraphics(gc);
        g.translate(r.x * -1, r.y * -1);
        figure.paint(g);
        ImageLoader imageLoader = new ImageLoader();
        imageLoader.data = new ImageData[] { image.getImageData() };
        imageLoader.save(getImageSavePath(), SWT.IMAGE_JPEG);
        refreshProcessFolder();
    } finally {
        //SNIP
    }
}
Learn from the plug-in's plugin.xml (src located here in this case). For example, here is jBPM adding its view to Eclipse:
point="org.eclipse.ui.views" ... view class="org.jboss.tools.flow.jpdl4.view.DetailsView"...
This may be one extension you want to copy as it seems to stand up the "view".
This will help you understand how they construct the pieces of their Eclipse-based app. You can search for these classes in your workspace and view the source code if you installed the developer versions of the jBPM plug-ins.
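If you end up contributing your own view rather than reusing the jBPM DetailsView, the class named in such an extension is just an org.eclipse.ui.part.ViewPart subclass. A minimal, hypothetical sketch (the class name and label text are placeholders, not from the jBPM sources):

import org.eclipse.swt.SWT;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Label;
import org.eclipse.ui.part.ViewPart;

// Hypothetical view class, registered through the org.eclipse.ui.views extension point.
public class ProcessDetailsView extends ViewPart {

    @Override
    public void createPartControl(Composite parent) {
        // A real view would embed the jBPM diagram/figure here instead of a plain label.
        Label label = new Label(parent, SWT.NONE);
        label.setText("jBPM process details go here");
    }

    @Override
    public void setFocus() {
        // Give focus to the view's main control.
    }
}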
Decide if you need to hack apart the app parts built as GMF (Graphical Modeling Framework) stuff, like the model, the behavior of the view/diagram, and the different edit/drawing parts. Don't mess with this unless you have to. However, understanding GMF plug-ins or looking at the examples will help you understand which jBPM plug-in pieces you might need to use, especially if editing is needed.
Roll the pieces into a plug-in, remembering to reuse what pieces (plug-ins/pluglets) you can from the jBPM project. Make sure to build your Eclipse plug-in as an RCP, or Rich Client (note that jBPM does not currently have an RCP, per post), so that it can run as a standalone Eclipse application for easier deployment to people who do not have Eclipse tool knowledge.
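Since jBPM does not ship an RCP, the RCP entry point is something you write yourself. Here is a minimal sketch of the standard IApplication class (the class name and perspective id are placeholders; a real app would usually use its own WorkbenchAdvisor subclass):

import org.eclipse.equinox.app.IApplication;
import org.eclipse.equinox.app.IApplicationContext;
import org.eclipse.swt.widgets.Display;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.application.WorkbenchAdvisor;

// Hypothetical RCP entry point, referenced from the org.eclipse.core.runtime.applications extension point.
public class ProcessViewerApplication implements IApplication {

    @Override
    public Object start(IApplicationContext context) throws Exception {
        Display display = PlatformUI.createDisplay();
        try {
            WorkbenchAdvisor advisor = new WorkbenchAdvisor() {
                @Override
                public String getInitialWindowPerspectiveId() {
                    return "com.example.processviewer.perspective"; // placeholder perspective id
                }
            };
            int returnCode = PlatformUI.createAndRunWorkbench(display, advisor);
            return returnCode == PlatformUI.RETURN_RESTART ? IApplication.EXIT_RESTART : IApplication.EXIT_OK;
        } finally {
            display.dispose();
        }
    }

    @Override
    public void stop() {
        // Nothing to clean up in this sketch.
    }
}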
Let me know if this gets you headed down the correct path.
Yes, it's possible to run the jBPM process engine completely standalone, as a simple Java program.
No J2EE container is needed; at least this is the case with jBPM 4.4.
As far as the code requirements go:
set up your jBPM database schema
add the following jars from the jBPM distribution library to the application's classpath:
antlr-runtime.jar
antlr.jar
dom4j.jar
hibernate-core.jar
javassist.jar
jbpm.jar
slf4j-api.jar
slf4j-jdk14.jar
slf4j-log4j12.jar
commons-collections.jar
jta.jar
juel-api.jar
juel-engine.jar
juel-impl.jar
mail.jar
AND the necessary JDBC drivers for the DB that you are using.
and your standalone class looks like this:
package test.ayusman;

import java.util.HashMap;
import java.util.Map;

import org.jbpm.api.Configuration;
import org.jbpm.api.ExecutionService;
import org.jbpm.api.ProcessEngine;
import org.jbpm.api.ProcessInstance;
import org.jbpm.api.RepositoryService;

public class ProcessDeployer {
    // Create a process engine that can be used for all other work...
    // ProcessEngine is the starting point for everything else in the application.
    private static ProcessEngine jbpmProcessEngine = new Configuration()
            .setResource("jbpm.cfg.xml").buildProcessEngine();

    // private static Logger logger = Logger.getLogger(JBPMDao.class);
    private static RepositoryService repositoryService = null;
    private static ExecutionService executionService = null;
    private static ProcessInstance pi = null;
    private static String processInstanceState;
    private static String deploymentId;

    public static void main(String[] args) {
        try {
            ProcessDeployer ejm = new ProcessDeployer();
            // Start the process...
            ejm.startProcess();
            // Analyze the process... just a small fancy method
            ejm.analyzeProcess();
        } catch (Exception e) {
            e.printStackTrace();
        }
    } // End of main()

    void startProcess() throws Exception {
        repositoryService = jbpmProcessEngine.getRepositoryService();
        executionService = jbpmProcessEngine.getExecutionService();
        // NOTE: The example assumes that the process definition file name is: your_process.jpdl.xml
        // deploy() returns the id of the deployment, not of a process instance.
        deploymentId = repositoryService.createDeployment()
                .addResourceFromClasspath("your_process.jpdl.xml").deploy();
        // NOTE: The jpdl file has the key name "Java"
        pi = executionService.startProcessInstanceByKey("Java");
        System.out.println("Obtained deploymentId is: " + deploymentId);
    }

    void analyzeProcess() throws Exception {
        processInstanceState = pi.getState();
        System.out.println("processInstanceState is: " + processInstanceState);
        System.out.println("deploymentId is: " + deploymentId);
    }
} // End of class ProcessDeployer
Note that when you run the process engine from within the SWT application, the process engine resides in the same JVM as the SWT application, so make sure that you allocate enough heap space.
Hope this helps :-)
I'm writing a Dataflow pipeline that should do 3 things:
Reading .csv files from GCP Storage
Parsing the data into BigQuery-compatible TableRows
Writing the data to a BigQuery table
Up until now this all worked like a charm. And it still does, but when I change the source and destination variables nothing changes. The job that actually runs is an old one, not the recently changed (and committed) code. Somehow when I run the code from Eclipse using the BlockingDataflowPipelineRunner the code itself is not uploaded but an older version is used.
There should be nothing wrong with the code, but to be as complete as possible:
import java.util.ArrayList;
import java.util.List;

import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.BigQueryIO;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.values.PCollection;

public class BatchPipeline {

    public static void main(String[] args) {
        String source = "gs://sourcebucket/*.csv";
        String destination = "projectID:datasetID.testing1";

        // Creation of the pipeline with default arguments
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().create());

        PCollection<String> line = p.apply(TextIO.Read.named("ReadFromCloudStorage")
                .from(source));

        @SuppressWarnings("serial")
        PCollection<TableRow> tablerows = line.apply(ParDo.named("ParsingCSVLines").of(new DoFn<String, TableRow>() {
            @Override
            public void processElement(ProcessContext c) {
                // processing code goes here
            }
        }));

        // Defining the BigQuery table schema
        List<TableFieldSchema> fields = new ArrayList<>();
        fields.add(new TableFieldSchema().setName("datetime").setType("TIMESTAMP").setMode("REQUIRED"));
        fields.add(new TableFieldSchema().setName("consumption").setType("FLOAT").setMode("REQUIRED"));
        fields.add(new TableFieldSchema().setName("meterID").setType("STRING").setMode("REQUIRED"));
        TableSchema schema = new TableSchema().setFields(fields);
        String table = destination;

        tablerows.apply(BigQueryIO.Write
                .named("BigQueryWrite")
                .to(table)
                .withSchema(schema)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withoutValidation());

        // Runs the pipeline
        p.run();
    }
}
This problem arose because I've just changed laptops and had to reconfigure everything. I'm working on a clean Ubuntu 16.04 LTS OS with all the dependencies for GCP development installed (in principle). Everything seems to be configured quite well, since I'm able to start a job (which shouldn't be possible if my config were wrong, right?). I'm using Eclipse Neon, btw.
So where could the problem lie? It seems to me that there is a problem uploading the code, but I've made sure that my cloud git repo is up-to-date and the staging bucket has been cleaned up ...
**** UPDATE ****
I never found what was exactly going wrong but when I checked out the creation dates of the files in my deployed jar, I indeed saw that they were never really updated. The jar file itself had however a recent timestamp which made me overlook that problem completely (rookie mistake).
I eventually got it all working again by simply creating a new Dataflow project in Eclipse and copying my .java files from the broken project into the new one. Everything worked like a charm from then on.
Once you submit a Dataflow job, you can check which artifacts were part of the job specification by inspecting the staged files, which are available via DataflowPipelineWorkerPoolOptions#getFilesToStage. The code snippet below gives a small sample of how to get this information.
PipelineOptions myOptions = ...
myOptions.setRunner(DataflowPipelineRunner.class);
Pipeline p = Pipeline.create(myOptions);

// Build up your pipeline and run it.
p.apply(...)
p.run();

// At this point in time, the files which were staged by the
// DataflowPipelineRunner will have been populated into the
// DataflowPipelineWorkerPoolOptions#getFilesToStage
List<String> stagedFiles = myOptions.as(DataflowPipelineWorkerPoolOptions.class).getFilesToStage();
for (String stagedFile : stagedFiles) {
    System.out.println(stagedFile);
}
The above code should print out something like:
/my/path/to/file/dataflow.jar
/another/path/to/file/myapplication.jar
/a/path/to/file/alibrary.jar
It is likely that the resources that are part of the job you are uploading are out of date in some way and contain your old code. Look through all the directories and jars in the staging list, find all instances of BatchPipeline, and verify their age. Jar files can be extracted using the jar tool or any zip file reader. Alternatively, use javap or any other class file inspector to validate that the BatchPipeline class file lines up with the expected changes you have made.
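If you prefer to check this from Java rather than with the jar tool, here is a minimal sketch (the jar path is a placeholder) that lists the entries mentioning BatchPipeline together with their timestamps:

import java.io.IOException;
import java.util.Date;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class StagedJarInspector {
    public static void main(String[] args) throws IOException {
        // Placeholder path: point this at one of the jars reported by getFilesToStage().
        try (JarFile jar = new JarFile("/my/path/to/file/myapplication.jar")) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                // Print every entry mentioning BatchPipeline together with its modification time.
                if (entry.getName().contains("BatchPipeline")) {
                    System.out.println(entry.getName() + " -> " + new Date(entry.getTime()));
                }
            }
        }
    }
}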
I have the following program which adds a method to itself when run. But I have to refresh it every time using the F5 button or the refresh option.
Is there a way I could code the refresh into the program itself so that it refreshes after the modification? The project I am working on is a Java application and not an Eclipse plug-in, so as far as I know the refreshLocal() method can't be used.
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import org.eclipse.core.runtime.CoreException;
import org.jboss.forge.roaster.Roaster;
import org.jboss.forge.roaster.model.source.JavaClassSource;

public class Demo {
    public static void main(String[] args) throws IOException, CoreException {
        File file = new File("/home/kishan/workspace/Roast/src/Demo.java");
        if (file.exists()) {
            // Parse the existing source file and add a new static method to it.
            JavaClassSource javaClass = Roaster.parse(JavaClassSource.class, file);
            javaClass.addMethod().setPublic().setStatic(true)
                    .setName("newMethod").setReturnTypeVoid()
                    .setBody("System.out.println(\"newMethod created\");")
                    .addParameter("String[]", "stringArray");

            // Write the modified source back to the same file.
            FileWriter writer = new FileWriter(file);
            writer.write(javaClass.toString());
            writer.flush();
            writer.close();
        }
    }
}
I have tried using the refreshLocal() method defined in the Eclipse JDT, but since my project is a Java application the ResourcesPlugin.getWorkspace() method does not work, giving me a "workspace closed" error. Any suggestion is appreciated.
You see, Eclipse runs your Java class within its own dedicated JVM. Thus there is no direct programmatic way of enforcing a refresh within Eclipse.
You could check this older question; maybe that could lead to a reasonable workaround.
On the other hand, you might step back and ask yourself why exactly you want to achieve that. Your workflow doesn't make much sense at first glance: when generating code that way, shouldn't the generated code go into its own specific place?
If you intend to "generate" code frequently and then continue to use it in Eclipse, well, that somehow smells like a strange idea.
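For reference, if your code were running inside the Eclipse runtime (for example as a plug-in) rather than as a plain Java application, the refresh you are after would look roughly like this sketch:

import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IResource;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.CoreException;

public class WorkspaceRefresher {
    // This only works inside a running Eclipse/OSGi workbench; in a standalone Java
    // application ResourcesPlugin.getWorkspace() fails with "workspace closed", as described above.
    public static void refreshProject(String projectName) throws CoreException {
        IProject project = ResourcesPlugin.getWorkspace().getRoot().getProject(projectName);
        project.refreshLocal(IResource.DEPTH_INFINITE, null);
    }
}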
Eclipse has "Refresh using native hooks or polling" which might might help.
You can find it under Window > Prefrences > General > Workspace.
See On Eclipse, what does "Preferences -> General -> Workspace -> Refresh using native hooks or polling" do?
I currently know Java and Ruby, but have never used JRuby. I want to use some RAM- and computation-intensive Java code inside a Rack (sinatra) web application. In particular, this Java code loads about 200MB of data into RAM, and provides methods for doing various calculations that use this in-memory data.
I know it is possible to call Java code from Ruby in JRuby, but in my case there is an additional requirement: This Java code would need to be loaded once, kept in memory, and kept available as a shared resource for the sinatra code (which is being triggered by multiple web requests) to call out to.
Questions
Is a setup like this even possible?
What would I need to do to accomplish it? I am not even sure if this is a JRuby question per se, or something that would need to be configured in the web server. I have experience with Passenger and Unicorn/nginx, but not with Java servers, so if this does involve configuration of a Java server such as Tomcat, any info about that would help.
I am really not sure where to even start looking, or if there is a better way to be approaching this problem, so any and all recommendations or relevant links are appreciated.
Yes, a setup like this is possible (see below about Deployment), and to accomplish it I would suggest using a singleton.
Singletons in JRuby
With reference to the question "best/most elegant way to share objects between a stack of rack mounted apps/middlewares?", I agree with Colin Surprenant's answer, namely the singleton-as-module pattern, which I prefer over using the singleton mixin.
Example
I post here some test code you can use as a proof of concept:
JRuby sinatra side:
#file: sample_app.rb
require 'sinatra/base'
require 'java' # https://github.com/jruby/jruby/wiki/CallingJavaFromJRuby

java_import org.rondadev.samples.StatefulCalculator # import your Java class here

# singleton-as-module, loaded once and kept in memory
module App
  module Global
    extend self

    def calc
      @calc ||= StatefulCalculator.new
    end
  end
end

# you could call a method to load data in the stateful Java object
App::Global.calc.turn_on

class Sample < Sinatra::Base
  get '/' do
    "Welcome, calculator register: #{App::Global.calc.display}"
  end

  get '/add_one' do
    "added one to calculator register, new value: #{App::Global.calc.add(1)}"
  end
end
You can start it in Tomcat with trinidad, or simply with rackup config.ru, but you need:
#file: config.ru
root = File.dirname(__FILE__) # => "."
require File.join( root, 'sample_app' ) # => true
run Sample # ..in sample_app.rb ..class Sample < Sinatra::Base
something about the Java Side:
package org.rondadev.samples;

public class StatefulCalculator {
    private StatelessCalculator calculator;
    double register = 0;

    public double add(double a) {
        register = calculator.add(register, a);
        return register;
    }

    public double display() {
        return register;
    }

    public void clean() {
        register = 0;
    }

    public void turnOff() {
        calculator = null;
        System.out.println("[StatefulCalculator] Good bye ! ");
    }

    public void turnOn() {
        calculator = new StatelessCalculator();
        System.out.println("[StatefulCalculator] Welcome !");
    }
}
Please note that the register here is only a double, but in your real scenario you can have a big data structure instead.
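As a rough illustration of that point (a purely hypothetical class, not part of the code above), the stateful object could load your ~200MB data set once in its turn-on method and then serve calculations from memory:

package org.rondadev.samples;

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: holds a large in-memory data set instead of a single double register.
public class StatefulDataService {
    private Map<String, double[]> data;

    public void turnOn() {
        // A real implementation would load the ~200MB of data from files or a database here;
        // an empty map stands in as a placeholder.
        data = new HashMap<String, double[]>();
        System.out.println("[StatefulDataService] data loaded, entries: " + data.size());
    }

    public double sumFor(String key) {
        // Example calculation against the in-memory data.
        double sum = 0;
        double[] values = data.containsKey(key) ? data.get(key) : new double[0];
        for (double v : values) {
            sum += v;
        }
        return sum;
    }

    public void turnOff() {
        data = null;
    }
}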
Deployment
You can deploy using Mongrel, Thin (experimental), Webrick (but who would do that?), and even Java-centric application containers like Glassfish, Tomcat, or JBoss. source: jruby deployments
with TorqueBox that is built on the JBoss Application Server.
JBoss AS includes high-performance clustering, caching and messaging functionality.
trinidad is a RubyGem that allows you to run any Rack based application wrapped within an embedded Apache Tomcat container
Thread synchronization
Sinatra will use the Mutex#synchronize method to place a lock on every request to avoid race conditions among threads. If your Sinatra app is multithreaded and not thread safe, or any gems you use are not thread safe, you will want to do set :lock, true so that only one request is processed at a given time. Otherwise, by default lock is false, which means synchronize simply yields to the block directly.
source: https://github.com/zhengjia/sinatra-explained/blob/master/app/tutorial_2/tutorial_2.md
Here are some instructions for how to deploy a sinatra app to Tomcat.
The Java code can be loaded once and reused if you keep a reference to the Java instances you have loaded. You can keep a reference from a global variable in Ruby.
One thing to be aware of is that the Java library you are using may not be thread safe. If you are running your Ruby code in Tomcat, multiple requests can execute concurrently, and those requests may all access your shared Java library. If your library is not thread safe, you will have to use some sort of synchronization to prevent multiple threads from accessing it.
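One simple way to do that, sketched below as a hypothetical Java wrapper (not part of the answer's code), is to guard the shared instance behind synchronized methods so concurrent requests are serialized:

import org.rondadev.samples.StatefulCalculator;

// Hypothetical thread-safe wrapper around the shared, possibly non-thread-safe calculator.
public class SynchronizedCalculator {
    private final StatefulCalculator delegate = new StatefulCalculator();

    public synchronized void turnOn() {
        delegate.turnOn();
    }

    public synchronized double add(double a) {
        return delegate.add(a);
    }

    public synchronized double display() {
        return delegate.display();
    }
}

From JRuby you would then hold a single SynchronizedCalculator in the singleton module instead of the raw StatefulCalculator.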
I would like to read a pom.xml in Java code. I wonder if there is a library for that, so I can have an iterator for different sections, e.g., dependencies, plugins, etc. I want to avoid building a parser by hand.
You can try MavenXpp3Reader which is part of maven-model. Sample code:
MavenXpp3Reader reader = new MavenXpp3Reader();
Model model = reader.read(new FileReader(mypom));
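Once you have the Model, iterating over the sections you mentioned is straightforward. A small sketch (the POM path is a placeholder) assuming the maven-model classes are on the classpath:

import java.io.FileReader;

import org.apache.maven.model.Dependency;
import org.apache.maven.model.Model;
import org.apache.maven.model.Plugin;
import org.apache.maven.model.io.xpp3.MavenXpp3Reader;

public class PomReader {
    public static void main(String[] args) throws Exception {
        MavenXpp3Reader reader = new MavenXpp3Reader();
        // Placeholder path to the POM you want to inspect.
        Model model = reader.read(new FileReader("pom.xml"));

        // Iterate over the dependencies section.
        for (Dependency d : model.getDependencies()) {
            System.out.println(d.getGroupId() + ":" + d.getArtifactId() + ":" + d.getVersion());
        }

        // Iterate over the plugins section (the build section may be absent, hence the null check).
        if (model.getBuild() != null) {
            for (Plugin p : model.getBuild().getPlugins()) {
                System.out.println(p.getGroupId() + ":" + p.getArtifactId());
            }
        }
    }
}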
Firstly, I'm assuming you are not already running inside a Maven plugin, as there are easier ways to achieve that with the available APIs there.
The MavenXpp3Reader solution posted earlier will allow you to read the POM easily; however, it does not take into account inheritance from the parent or interpolation of expressions.
For that, you would need to use the ModelBuilder class.
Use of this is quite simple, for example from Archiva is this code fragment:
ModelBuildingRequest req = new DefaultModelBuildingRequest();
req.setProcessPlugins( false );
req.setPomFile( file );
req.setModelResolver( new RepositoryModelResolver( basedir, pathTranslator ) );
req.setValidationLevel( ModelBuildingRequest.VALIDATION_LEVEL_MINIMAL );

Model model;
try
{
    model = builder.build( req ).getEffectiveModel();
}
catch ( ModelBuildingException e )
{
    ...
}
You must do two things to run this though:
instantiate and wire an instance of ModelBuilder including its private fields
use one of Maven's resolvers for finding the parent POMs, or write your own (as is the case in the above snippet)
How best to do that depends on the DI framework you are already using, or whether you want to just embed Maven's default container.
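If you do not want a full DI container, one option (a sketch, assuming the maven-model-builder artifact is on the classpath) is to let Maven's own factory wire the ModelBuilder for you:

import org.apache.maven.model.building.DefaultModelBuilderFactory;
import org.apache.maven.model.building.ModelBuilder;

public class ModelBuilderBootstrap {
    public static ModelBuilder newModelBuilder() {
        // DefaultModelBuilderFactory wires the default ModelBuilder implementation
        // (processors, validators, interpolators) without a DI container.
        return new DefaultModelBuilderFactory().newInstance();
    }
}

You still need to supply a ModelResolver, as in the snippet above, so that parent POMs can be located.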
This depends on what you're trying to achieve. If you just want to treat the POM as a plain XML file, go with the suggestions already offered.
If you are looking to implement some form of Maven functionality into your app, you could try the new aether library. I haven't used it, but it looks simple enough to integrate and should offer Maven functionality with little effort on your part.
BTW, this library is a Maven 3 lib, not Maven 2 (as specified in your tag). Don't know if that makes much difference to you
I use Launch4j as a wrapper for my Java application under Windows 7, which, to my understanding, in essence forks an instance of javaw.exe that in turn interprets the Java code. As a result, when attempting to pin my application to the task bar, Windows instead pins javaw.exe. Without the required command line, my application will then not run.
As you can see, Windows also does not realize that Java is the host application: the application itself is described as "Java(TM) Platform SE binary".
I have tried altering the registry key HKEY_CLASSES_ROOT\Applications\javaw.exe to add the value IsHostApp. This alters the behavior by disabling pinning of my application altogether; clearly not what I want.
After reading about how Windows interprets instances of a single application (and a phenomenon discussed in this question), I became interested in embedding an Application User Model ID (AppUserModelID) into my Java application.
I believe that I can resolve this by passing a unique AppUserModelID to Windows. There is a shell32 method for this, SetCurrentProcessExplicitAppUserModelID. Following Gregory Pakosz's suggestion, I implemented it in an attempt to have my application recognized as a separate instance of javaw.exe:
NativeLibrary lib;
try {
    lib = NativeLibrary.getInstance("shell32");
} catch (Error e) {
    Logger.out.error("Could not load Shell32 library.");
    return;
}
Object[] args = { "Vendor.MyJavaApplication" };
String functionName = "SetCurrentProcessExplicitAppUserModelID";
try {
    Function function = lib.getFunction(functionName);
    int ret = function.invokeInt(args);
    if (ret != 0) {
        Logger.out.error(function.getName() + " returned error code "
                + ret + ".");
    }
} catch (UnsatisfiedLinkError e) {
    Logger.out.error(functionName + " was not found in "
            + lib.getFile().getName() + ".");
    // Function not supported
}
This appears to have no effect, but the function returns without error. Diagnosing why is something of a mystery to me. Any suggestions?
Working implementation
The final implementation that worked is the answer to my follow-up question concerning how to pass the AppID using JNA.
I had awarded the bounty to Gregory Pakosz' brilliant answer for JNI that set me on the right track.
For reference, I believe using this technique opens the possibility of using any of the APIs discussed in this article in a Java application.
I don't have Windows 7 but here is something that might get you started:
On the Java side:
package com.stackoverflow.homework;

public class MyApplication
{
    static native boolean setAppUserModelID();

    static
    {
        System.loadLibrary("MyApplicationJNI");
        setAppUserModelID();
    }
}
And on the native side, in the source code of the MyApplicationJNI.dll library:
JNIEXPORT jboolean JNICALL Java_com_stackoverflow_homework_MyApplication_setAppUserModelID(JNIEnv* env)
{
    LPCWSTR id = L"com.stackoverflow.homework.MyApplication";
    HRESULT hr = SetCurrentProcessExplicitAppUserModelID(id);
    return hr == S_OK;
}
Your question explicitly asked for a JNI solution. However, since your application doesn't need any other native method, JNA is another solution which will save you from writing native code just for the sake of forwarding to the Windows API. If you decide to go with JNA, pay attention to the fact that SetCurrentProcessExplicitAppUserModelID() is expecting a UTF-16 string.
When it works in your sandbox, the next step is to add operating system detection in your application as SetCurrentProcessExplicitAppUserModelID() is obviously only available in Windows 7:
you may do that from the Java side by checking that System.getProperty("os.name") returns "Windows 7".
if you build from the little JNI snippet I gave, you can enhance it by dynamically loading the shell32.dll library using LoadLibrary then getting back the SetCurrentProcessExplicitAppUserModelID function pointer using GetProcAddress. If GetProcAddress returns NULL, it means the symbol is not present in shell32 hence it's not Windows 7.
EDIT: JNA Solution.
References:
The JNI book for more JNI examples
Java Native Access (JNA)
There is a Java library providing the new Windows 7 features for Java. It's called J7Goodies by Strix Code. Applications using it can be properly pinned to the Windows 7 taskbar. You can also create your own jump lists, etc.
I have implemented access to the SetCurrentProcessExplicitAppUserModelID method using JNA and it works quite well when used as the MSDN documentation suggests. I've never used the JNA api in the way you have in your code snippet. My implementation follows the typical JNA usage instead.
First the Shell32 interface definition:
interface Shell32 extends StdCallLibrary {
    int SetCurrentProcessExplicitAppUserModelID( WString appID );
}
Then using JNA to load Shell32 and call the function:
final Map<String, Object> WIN32API_OPTIONS = new HashMap<String, Object>() {
    {
        put(Library.OPTION_FUNCTION_MAPPER, W32APIFunctionMapper.UNICODE);
        put(Library.OPTION_TYPE_MAPPER, W32APITypeMapper.UNICODE);
    }
};

Shell32 shell32 = (Shell32) Native.loadLibrary("shell32", Shell32.class,
        WIN32API_OPTIONS);
WString wAppId = new WString( "Vendor.MyJavaApplication" );
shell32.SetCurrentProcessExplicitAppUserModelID( wAppId );
Many of the APIs in the last article you mentioned make use of Windows COM, which is quite difficult to use directly with JNA. I have had some success creating a custom DLL to call these APIs (e.g. using SHGetPropertyStoreForWindow to set a different app ID for a submodule window), which I then access via JNA at runtime.
Try to use JSmooth. I always use this one. In JSmooth there is an option under Skeleton > Windowed Wrapper called
Lauch java app in exe process
(Screenshot of this option omitted; source: andrels.com)
Also command line arguments can be passed.
I think this can be a solution for you.
Martijn
SetCurrentProcessExplicitAppUserModelID (or SetAppID()) would in fact do what you're trying to do. However, it might be easier to modify your installer to set the AppUserModel.ID property on your shortcut - quoting from the Application User Model ID document mentioned above:
In the System.AppUserModel.ID property of the application's shortcut file. A shortcut (as an IShellLink, CLSID_ShellLink, or a .lnk file) supports properties through IPropertyStore and other property-setting mechanisms used throughout the Shell. This allows the taskbar to identify the proper shortcut to pin and ensures that windows belonging to the process are appropriately associated with that taskbar button.
Note: The System.AppUserModel.ID property should be applied to a shortcut when that shortcut is created. When using the Microsoft Windows Installer (MSI) to install the application, the MsiShortcutProperty table allows the AppUserModelID to be applied to the shortcut when it is created during installation.
The latest jna-platform library now includes JNA bindings for SetCurrentProcessExplicitAppUserModelID:
https://github.com/java-native-access/jna/pull/680
I fixed mine without any ID settings.
There is an option in Launch4j, which you say you are using.
You can change the header to JNI Gui and then wrap it around the jar with the JRE.
The good thing is that it runs .exe in the process now instead on running javaw.exe with your jar. It probably does it under the hood (not sure).
Also, I have noticed that it takes around 40-50% less CPU resources, which is even better!
And the pinning works fine and all the window features are enabled.
I hope this helps someone, as I spent nearly 2 days trying to solve that issue with my undecorated JavaFX app.