Camunda ProcessDefinition not available after deployment - java

I have a Camunda 7.3 standalone H2 in-memory configuration which I want to integrate into my own application. When I create a deployment based on a (valid) BPMN XML string, I get no process definition in return:
public String deployProcess(WorkflowDef workflowDef) throws IOException, SQLException {
    // prepare resources
    String name = workflowDef.getTitle() + ".bpmn";
    InputStream stream = new ByteArrayInputStream(workflowDef.getBpmnXml().getBytes(Charset.defaultCharset()));
    // prepare deployment
    RepositoryService repositoryService = processEngine.getRepositoryService();
    final DeploymentBuilder deploymentBuilder = repositoryService.createDeployment();
    deploymentBuilder.name(workflowDef.getId());
    DeploymentEntity deploymentResult;
    try {
        deploymentResult = (DeploymentEntity) repositoryService.createDeployment().addInputStream(name, stream).deploy();
    } finally {
        stream.close();
    }
    // read the result
    String deploymentId = deploymentResult.getId();
    ProcessDefinition processDef = repositoryService.createProcessDefinitionQuery().deploymentId(deploymentId).singleResult();
    // return the process definition id for later query
    String processDefinitionId = processDef.getId();
    return processDefinitionId;
}
Note: The object workflowDef is just a wrapper that contains definitions from the bpmn.io modeler, such as the name and the BPMN XML as a String. I also make sure that the engine and the database are initialized and running.
After deploying, I get a NullPointerException at the line
String processDefinitionId = processDef.getId();
Even when I try to get the definition via
repositoryService.createProcessDefinitionQuery().list();
I just receive an empty list. Is there a misconception in my code?

Related

MarkLogic: No stream to write

I'm having a problem where I have a method that gets parameters from the AngularJS front-end, creates an object with them, writes the object as an XML file in a folder, and then is supposed to write that XML file into the MarkLogic database.
However, the part where it's supposed to write to the database behaves as if the file doesn't exist, even though it does.
Here's the code:
@RequestMapping(value = "/add/korisnik", method = RequestMethod.POST)
public String addKorisnik(@RequestParam String ime, @RequestParam String prezime, @RequestParam String username,
        @RequestParam String password, @RequestParam String orcid, @RequestParam String role)
        throws JAXBException, FileNotFoundException {
    Korisnik.Roles roles = new Korisnik.Roles();
    roles.setRole(role);
    Korisnik k = new Korisnik();
    k.setIme(ime);
    k.setPrezime(prezime);
    k.setUsername(username);
    k.setPassword(password);
    k.setOrcid(orcid);
    k.setRoles(roles);
    System.out.println(k.toString());
    // create JAXB context and instantiate marshaller
    JAXBContext context = JAXBContext.newInstance(Korisnik.class);
    Marshaller m = context.createMarshaller();
    m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
    StringWriter sw = new StringWriter();
    m.marshal(k, sw);
    // Write to File
    File f = new File("src/main/resources/data/korisnici/" + k.getUsername() + ".xml");
    if (f.exists()) {
        return "Username already taken.";
    } else {
        m.marshal(k, new File("src/main/resources/data/korisnici/" + k.getUsername() + ".xml"));
    }
    // acquire the content
    InputStream docStream = ObjavaNaucnihRadovaApplication.class.getClassLoader().getResourceAsStream(
            "data/korisnici/" + k.getUsername() + ".xml");
    // create the client
    DatabaseClient client = DatabaseClientFactory.newClient(MarkLogicConfig.host,
            MarkLogicConfig.port, MarkLogicConfig.admin,
            MarkLogicConfig.password, MarkLogicConfig.authType);
    // create a manager for XML documents
    XMLDocumentManager docMgr = client.newXMLDocumentManager();
    // create a handle on the content
    InputStreamHandle handle = new InputStreamHandle(docStream);
    // write the document content
    docMgr.write("http://localhost:8011/korisnici/" + k.getUsername() + ".xml", handle);
    // release the client
    client.release();
    return "OK";
}
Several issues.
First, the file you write to is not the same as the one you read from. You are writing to "src/main/resources/data/korisnici/...", which is relative to the current working directory of the JVM (application server). You are reading from the classpath resource directory, which is not likely to be the same place. You could simply reuse the same File object; then they would be identical.
Second, you don't need to write an object this small to disk; just write it to an in-memory stream (such as a ByteArrayOutputStream).
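For the second point, here is a minimal sketch of the in-memory variant (assuming the same Korisnik object k, Marshaller m, and MarkLogicConfig settings as in the question; the document URI is just an illustrative choice, and it needs imports for java.io.ByteArrayInputStream, java.io.ByteArrayOutputStream, and com.marklogic.client.io.InputStreamHandle):
// marshal the object straight into memory instead of onto the filesystem
ByteArrayOutputStream baos = new ByteArrayOutputStream();
m.marshal(k, baos);
// hand the bytes to MarkLogic without ever touching the disk
DatabaseClient client = DatabaseClientFactory.newClient(MarkLogicConfig.host,
        MarkLogicConfig.port, MarkLogicConfig.admin,
        MarkLogicConfig.password, MarkLogicConfig.authType);
XMLDocumentManager docMgr = client.newXMLDocumentManager();
InputStreamHandle handle = new InputStreamHandle(new ByteArrayInputStream(baos.toByteArray()));
docMgr.write("/korisnici/" + k.getUsername() + ".xml", handle);  // any unique document URI works
client.release();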
It sounds as if the classloader isn't finding your file.
If you already have a File object that you have just written, why not construct an InputStream from it, rather than attempt to find it from a differently constructed relative path?
InputStream docStream = new FileInputStream(f);

AWS Creating new files from an s3 object using JAVA getting error

I have a shapefile and I need to read it from my Java code. I used the code below to read the shapefile.
public class App {
    public static void main(String[] args) {
        File file = new File("C:\\Test\\sample.shp");
        Map<String, Object> map = new HashMap<>();
        try {
            map.put("url", URLs.fileToUrl(file));
            DataStore dataStore = DataStoreFinder.getDataStore(map);
            String typeName = dataStore.getTypeNames()[0];
            SimpleFeatureSource source = dataStore.getFeatureSource(typeName);
            SimpleFeatureCollection collection = source.getFeatures();
            try (FeatureIterator<SimpleFeature> features = collection.features()) {
                while (features.hasNext()) {
                    SimpleFeature feature = features.next();
                    SimpleFeatureType schema = feature.getFeatureType();
                    Class<?> geomType = schema.getGeometryDescriptor().getType().getBinding();
                    String type = "";
                    if (Polygon.class.isAssignableFrom(geomType) || MultiPolygon.class.isAssignableFrom(geomType)) {
                        MultiPolygon geom = (MultiPolygon) feature.getDefaultGeometry();
                        type = "Polygon";
                        if (geom.getNumGeometries() > 1) {
                            type = "MultiPolygon";
                        }
                    } else if (LineString.class.isAssignableFrom(geomType)
                            || MultiLineString.class.isAssignableFrom(geomType)) {
                    } else {
                    }
                    System.out.println(feature.getDefaultGeometryProperty().getValue().toString());
                }
            }
        } catch (Exception e) {
            // TODO: handle exception
        }
    }
}
I got the desired output, but my requirement is to write an AWS Lambda function that reads the shapefile. For this:
1. I created a Lambda Java project for an S3 event and wrote the same code inside handleRequest. I uploaded the project as a Lambda function and added a trigger, so that when I upload a .shp file to the S3 bucket the Lambda function is invoked automatically. But I am getting an error like the one below:
java.lang.RuntimeException: java.io.FileNotFoundException: /sample.shp (No such file or directory)
I have the sample.shp file inside my S3 bucket. I went through the link below:
How to write an S3 object to a file?
I am getting the same error. I tried to change my code like this:
S3Object object = s3.getObject(new GetObjectRequest(bucket, key));
InputStream objectData = object.getObjectContent();
map.put("url", objectData );
instead of
File file = new File("C:\\Test\\sample.shp");
map.put("url", URLs.fileToUrl(file));
Now I am getting an error like the one below:
java.lang.NullPointerException
I also tried the code below
DataStore dataStore = DataStoreFinder.getDataStore(objectData);
instead of
DataStore dataStore = DataStoreFinder.getDataStore(map);
and the error was:
java.lang.ClassCastException:
com.amazonaws.services.s3.model.S3ObjectInputStream cannot be cast to
java.util.Map
I also tried adding the key directly to the map, and passing it as a DataStore object. Everything went wrong.
Is there anyone who can help me?
It would be very helpful if someone could show me how to do it.
The DataStoreFinder.getDataStore method in geotools requires you to provide a map containing a key/value pair with key "url". The value associated with that "url" key needs to be a file URL like "file://host/path/my.shp".
You're trying to insert a Java input stream into the map. That won't work, because it's not a file URL.
The geotools library does not accept http/https URLs (see the geotools code here and here), so you need a file:// URL. That means you will need to download the file from S3 to the local Lambda filesystem and then provide a file:// URL pointing to that local file. To do that, here's Java code that should work:
// get the shape file from S3 to local filesystem
File localshp = new File("/tmp/download.shp");
s3.getObject(new GetObjectRequest(bucket, key), localshp);
// now store file:// URL in the map
map.put("url", localshp.getURI().getURL().toString());
If the geotools library had accepted real URLs (not just file:// URLs) then you could have avoided the download and simply created a time-limited, pre-signed URL for the S3 object and put that URL into the map.
Here's an example of how to do that:
// get current time and add one hour
java.util.Date expiration = new java.util.Date();
long msec = expiration.getTime();
msec += 1000 * 60 * 60;
expiration.setTime(msec);
// request pre-signed URL that will allow bearer to GET the object
GeneratePresignedUrlRequest gpur = new GeneratePresignedUrlRequest(bucket, key);
gpur.setMethod(HttpMethod.GET);
gpur.setExpiration(expiration);
// get URL that will expire in one hour
URL url = s3.generatePresignedUrl(gpur);
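Putting the download approach together with the code from the question, a rough sketch of what the Lambda handler body could look like (assumptions: an AWS SDK v1 AmazonS3 client named s3, bucket and key taken from the S3 event, and the GeoTools shapefile jars on the Lambda classpath; variable names are illustrative):
// download the object from S3 into the Lambda's writable /tmp directory
File localshp = new File("/tmp/" + new File(key).getName());
s3.getObject(new GetObjectRequest(bucket, key), localshp);
// a shapefile usually travels with companion .shx/.dbf files; if they are in the
// bucket, download them next to the .shp the same way before opening the store

// open the local copy exactly like the original desktop code did
Map<String, Object> map = new HashMap<>();
map.put("url", localshp.toURI().toURL().toString());
DataStore dataStore = DataStoreFinder.getDataStore(map);
String typeName = dataStore.getTypeNames()[0];
SimpleFeatureSource source = dataStore.getFeatureSource(typeName);
try (FeatureIterator<SimpleFeature> features = source.getFeatures().features()) {
    while (features.hasNext()) {
        System.out.println(features.next().getDefaultGeometryProperty().getValue());
    }
} finally {
    dataStore.dispose();
}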

Read PDVInputStream dicomObject information on onCStoreRQ association request

I am trying to read (and then store to a 3rd-party local database) certain DICOM object tags "during" an incoming association request.
For accepting association requests and storing my DICOM files locally, I have used a modified version of the dcmrcv() tool. More specifically, I have overridden the onCStoreRQ method like this:
@Override
protected void onCStoreRQ(Association association, int pcid, DicomObject dcmReqObj,
        PDVInputStream dataStream, String transferSyntaxUID,
        DicomObject dcmRspObj)
        throws DicomServiceException, IOException {
    final String classUID = dcmReqObj.getString(Tag.AffectedSOPClassUID);
    final String instanceUID = dcmReqObj.getString(Tag.AffectedSOPInstanceUID);
    config = new GlobalConfig();
    final File associationDir = config.getAssocDirFile();
    final String prefixedFileName = instanceUID;
    final String dicomFileBaseName = prefixedFileName + DICOM_FILE_EXTENSION;
    File dicomFile = new File(associationDir, dicomFileBaseName);
    assert !dicomFile.exists();
    final BasicDicomObject fileMetaDcmObj = new BasicDicomObject();
    fileMetaDcmObj.initFileMetaInformation(classUID, instanceUID, transferSyntaxUID);
    final DicomOutputStream outStream = new DicomOutputStream(
            new BufferedOutputStream(new FileOutputStream(dicomFile), 600000));
    // I would like, somewhere around here, to extract some tags from the incoming DICOM object.
    // When I try to do it through dataStream, my DICOM files get corrupted!
    // System.out.println("StudyInstanceUID: " + dataStream.readDataset().getString(Tag.StudyInstanceUID));
    try {
        outStream.writeFileMetaInformation(fileMetaDcmObj);
        dataStream.copyTo(outStream);
    } finally {
        outStream.close();
    }
    dicomFile.renameTo(new File(associationDir, dicomFileBaseName));
    System.out.println("DICOM file name: " + dicomFile.getName());
}

@Override
public void associationAccepted(final AssociationAcceptEvent associationAcceptEvent) {
    ...
}

@Override
public void associationClosed(final AssociationCloseEvent associationCloseEvent) {
    ...
}
Somewhere in this code I would like to add a step which reads dataStream, parses specific tags, and stores them in a local database.
However, wherever I try to put a piece of code that manipulates (or, for a start, just reads) dataStream, my DICOM files get corrupted!
PDVInputStream implements java.io.InputStream.
Even if I just put a
System.out.println("StudyInstanceUID: " + dataStream.readDataset().getString(Tag.StudyInstanceUID));
before copying dataStream to outStream, my DICOM files get corrupted (1 KB in size).
How am I supposed to use dataStream in a C-STORE-RQ association request to extract some information?
I hope my question is clear.
The PDVInputStream is probably a PDUDecoder instance. You'll have to reset the position if you use the input stream more than once.
Maybe a better solution would be to store the DICOM object in memory and use that for both purposes. Something akin to:
DicomObject dcmobj = dataStream.readDataset();
String whatYouWant = dcmobj.getString(Tag.whatever);
dcmobj.initFileMetaInformation(transferSyntaxUID);
outStream.writeDicomFile(dcmobj);
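Wired into the onCStoreRQ method from the question, that could look roughly like this (a sketch against the dcm4che2 API, reusing the dicomFile and transferSyntaxUID variables already defined there):
// read the whole dataset into memory once; dataStream must not be consumed twice
DicomObject dcmobj = dataStream.readDataset();

// pull out whatever tags you need and store them in your local database here
String studyInstanceUID = dcmobj.getString(Tag.StudyInstanceUID);

// rebuild the file meta information from the dataset and write a complete DICOM file
dcmobj.initFileMetaInformation(transferSyntaxUID);
DicomOutputStream outStream = new DicomOutputStream(
        new BufferedOutputStream(new FileOutputStream(dicomFile), 600000));
try {
    outStream.writeDicomFile(dcmobj);
} finally {
    outStream.close();
}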

Update database table without uploading a file while using a MultiPart Form - JavaEE, Servlet

I have a servlet which is responsible for enabling a user to update a reports table and upload a report at the same time. I have written code that lets a user upload a document and also update the table with other details, e.g. the date submitted.
However, a user will not always have to upload a document. In that case it should still be possible for the user to edit a report's details and come back later to upload the file, i.e. the user can submit the form without selecting a file and it still updates the table.
This part is what is not working. If a user selects a file and makes some changes, the code works. If a user doesn't select a file and tries to submit the form, it redirects to my servlet but the response is blank: no stacktrace, and no error is thrown.
Below is part of the code I have in my servlet:
if(param.equals("updateschedule"))
{
String[] allowedextensions = {"pdf","xlsx","xls","doc","docx","jpeg","jpg","msg"};
final String path = request.getParameter("uploadlocation_hidden");
final Part filepart=request.getPart("uploadreport_file");
int repid = Integer.parseInt(request.getParameter("repid_hidden"));
int reptype = Integer.parseInt(request.getParameter("reporttype_select"));
String webdocpath = request.getParameter("doclocation_hidden");
String subperiod = request.getParameter("submitperiod_select");
String duedate = request.getParameter("reportduedate_textfield");
String repname = request.getParameter("reportname_textfield");
String repdesc = request.getParameter("reportdesc_textarea");
String repinstr = request.getParameter("reportinst_textarea");
int repsubmitted = Integer.parseInt(request.getParameter("repsubmitted_select"));
String datesubmitted = request.getParameter("reportsubmitdate_textfield");
final String filename = getFileName(filepart);
OutputStream out = null;
InputStream filecontent=null;
String extension = filename.substring(filename.lastIndexOf(".") + 1, filename.length());
if(Arrays.asList(allowedextensions).contains(extension))
{
try
{
out=new FileOutputStream(new File(path+File.separator+filename));
filecontent = filepart.getInputStream();
int read=0;
final byte[] bytes = new byte[1024];
while((read=filecontent.read(bytes))!=-1)
{
out.write(bytes,0,read);
}
String fulldocpath = webdocpath+"/"+filename;
boolean succ = icreditdao.updatereportschedule(repid, reptype, subperiod, repname, repsubmitted,datesubmitted, duedate,fulldocpath, repdesc, repinstr);
if(succ==true)
{
response.sendRedirect("/webapp/Pages/Secured/ReportingSchedule.jsp?msg=Report Schedule updated successfully");
}
}
catch(Exception ex)
{
throw new ServletException(ex);
}
}
I'm still teaching myself Java EE, so any help will be appreciated. I'm also open to other alternatives. I have thought of using jQuery to detect whether a file has been selected and then running a different branch of code, e.g.
if (param.equals("updatewithnofileselected"))
{ // update code here }
but I think there must be a better solution. I'm using JDK 6 and Servlet 3.0.
Try this one:
MultipartParser parser = new MultipartParser(request, 500000000, false, false, "UTF-8");
Part part;
while ((part = parser.readNextPart()) != null) {
    // isParam() and isFile() are mutually exclusive, so don't nest one inside the other
    if (part.isFile()) {
        if (part.getName().equals("updatewithnofileselected")) {
            // update code here.
        } else if (part.getName().equals("updateschedule")) {
            // updateschedule
        }
    }
}
I used this when working with a multipart form and it works fine.
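If you would rather stay with the Servlet 3.0 Part API you are already using, another option is to branch on whether a file was actually submitted and skip the copy otherwise. A rough sketch, reusing getFileName(..) and the other variables from your own code (not tested against your setup):
Part filepart = request.getPart("uploadreport_file");
boolean hasFile = filepart != null && filepart.getSize() > 0
        && getFileName(filepart) != null && !getFileName(filepart).isEmpty();

String fulldocpath = webdocpath;   // keep the previously stored document path by default
if (hasFile) {
    String filename = getFileName(filepart);
    InputStream filecontent = filepart.getInputStream();
    OutputStream out = new FileOutputStream(new File(path, filename));
    try {
        byte[] bytes = new byte[1024];
        int read;
        while ((read = filecontent.read(bytes)) != -1) {
            out.write(bytes, 0, read);
        }
    } finally {
        out.close();
        filecontent.close();
    }
    fulldocpath = webdocpath + "/" + filename;
}
// update the table in both cases; only the document path differs
boolean succ = icreditdao.updatereportschedule(repid, reptype, subperiod, repname,
        repsubmitted, datesubmitted, duedate, fulldocpath, repdesc, repinstr);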

how to get input file name in hadoop cascading

In MapReduce I would extract the input file name as follows:
public void map(WritableComparable<Text> key, Text value, OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {
    FileSplit fileSplit = (FileSplit) reporter.getInputSplit();
    String filename = fileSplit.getPath().getName();
    System.out.println("File name " + filename);
    System.out.println("Directory and File name " + fileSplit.getPath().toString());
    process(key, value);
}
How can I do something similar with Cascading?
Pipe assembly = new Pipe(SomeFlowFactory.class.getSimpleName());
Function<Object> parseFunc = new SomeParseFunction();
assembly = new Each(assembly, new Fields(LINE), parseFunc);
...
public class SomeParseFunction extends BaseOperation<Object> implements Function<Object> {
    ...
    @Override
    public void operate(FlowProcess flowProcess, FunctionCall<Object> functionCall) {
        // how can I get the input file name here ???
    }
Thanks,
I don't use Cascading, but I think it should be sufficient to access the context instance using functionCall.getContext(); to obtain the filename you can use:
String filename = ((FileSplit) context.getInputSplit()).getPath().getName();
However, it seems that Cascading uses the old API; if the above doesn't work, you can try:
Object name = flowProcess.getProperty( "map.input.file" );
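As a sketch, that lookup could sit inside the operate method from the question like this (whether the property is populated depends on the Hadoop version and how Cascading configures the job):
@Override
public void operate(FlowProcess flowProcess, FunctionCall<Object> functionCall) {
    // "map.input.file" is set by the old Hadoop mapred API for file-based splits
    Object name = flowProcess.getProperty("map.input.file");
    String inputFile = (name == null) ? "unknown" : name.toString();
    // ... parse the incoming tuple and emit results, tagging them with inputFile if needed ...
}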
Thanks to Engineiro for sharing the answer. However, when invoking the hfp.getReporter().getInputSplit() method, I got a MultiInputSplit, which can't be cast to FileSplit directly in Cascading 2.5.3. After diving into the related Cascading APIs, I found a way to retrieve the input file names successfully, so I would like to share it to supplement Engineiro's answer. Please see the following code:
HadoopFlowProcess hfp = (HadoopFlowProcess) flowProcess;
MultiInputSplit mis = (MultiInputSplit) hfp.getReporter().getInputSplit();
FileSplit fs = (FileSplit) mis.getWrappedInputSplit();
String fileName = fs.getPath().getName();
You would do this by getting the reporter within the buffer class, from the flowProcess argument provided in the buffer's operate call.
HadoopFlowProcess hfp = (HadoopFlowProcess) flowProcess;
FileSplit fileSplit = (FileSplit) hfp.getReporter().getInputSplit();
// ... the rest of your code
