MarkLogic: No stream to write - java

I'm having a problem: I have a method that takes parameters from the AngularJS front end, creates an object from them, writes the object as an XML file in a folder, and is then supposed to write that XML file into the MarkLogic database.
However, the part that is supposed to write to the database behaves as if the file doesn't exist, even though it does.
Here's the code:
@RequestMapping(value = "/add/korisnik", method = RequestMethod.POST)
public String addKorisnik(@RequestParam String ime, @RequestParam String prezime,
        @RequestParam String username, @RequestParam String password,
        @RequestParam String orcid, @RequestParam String role)
        throws JAXBException, FileNotFoundException {
    Korisnik.Roles roles = new Korisnik.Roles();
    roles.setRole(role);
    Korisnik k = new Korisnik();
    k.setIme(ime);
    k.setPrezime(prezime);
    k.setUsername(username);
    k.setPassword(password);
    k.setOrcid(orcid);
    k.setRoles(roles);
    System.out.println(k.toString());
    // create JAXB context and instantiate marshaller
    JAXBContext context = JAXBContext.newInstance(Korisnik.class);
    Marshaller m = context.createMarshaller();
    m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
    StringWriter sw = new StringWriter();
    m.marshal(k, sw);
    // Write to File
    File f = new File("src/main/resources/data/korisnici/" + k.getUsername() + ".xml");
    if (f.exists()) {
        return "Username already taken.";
    } else {
        m.marshal(k, new File("src/main/resources/data/korisnici/" + k.getUsername() + ".xml"));
    }
    // acquire the content
    InputStream docStream = ObjavaNaucnihRadovaApplication.class.getClassLoader().getResourceAsStream(
            "data/korisnici/" + k.getUsername() + ".xml");
    // create the client
    DatabaseClient client = DatabaseClientFactory.newClient(MarkLogicConfig.host,
            MarkLogicConfig.port, MarkLogicConfig.admin,
            MarkLogicConfig.password, MarkLogicConfig.authType);
    // create a manager for XML documents
    XMLDocumentManager docMgr = client.newXMLDocumentManager();
    // create a handle on the content
    InputStreamHandle handle = new InputStreamHandle(docStream);
    // write the document content
    docMgr.write("http://localhost:8011/korisnici/" + k.getUsername() + ".xml", handle);
    // release the client
    client.release();
    return "OK";
}

Several issues.
First, the file you write to is not the same one you read from. You are writing to "src/main/resources/data/korisnici/...", which is relative to the current working directory of the JVM (the application server), but you are reading from the classpath resource directory; those are unlikely to be the same location. You could simply reuse the same File object, and then they would be identical.
Second, for an object this small you don't need to write to disk at all; just marshal it to an in-memory buffer (e.g. a ByteArrayOutputStream, or the StringWriter you already have).
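For example, a minimal sketch of the in-memory route, assuming the StringHandle and Format classes from the MarkLogic Java Client API (com.marklogic.client.io) and reusing the marshaller and document manager from the question:
// Marshal straight to memory and hand the XML string to MarkLogic;
// no file on disk, no classpath lookup.
StringWriter sw = new StringWriter();
m.marshal(k, sw);
docMgr.write("/korisnici/" + k.getUsername() + ".xml",
        new StringHandle(sw.toString()).withFormat(Format.XML));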

It sounds as if the classloader isn't finding your file.
If you already have a File object that you have just written, why not construct an InputStream from it, rather than attempt to find it from a differently constructed relative path?
InputStream docStream = new FileInputStream(f);

Related

Transferring and saving MultipartFile instance

I have the following method, whose simple aim is to store the contents of a given MultipartFile instance under a specified directory:
private void saveOnDisk(final String clientProductId, final MultipartFile image,
        final String parentDirectoryPath, final String fileSeparator) throws IOException {
    final File imageFile = new File(parentDirectoryPath + fileSeparator
            + clientProductId + image.getOriginalFilename());
    image.transferTo(imageFile);
    OutputStream out = new FileOutputStream(imageFile);
    out. // ... ? How do we proceed? OutputStream::write() requires a byte array or int as parameter
}
For what it might be worth, the MultipartFile instance is going to contain an image file which I receive on a REST API I'm building.
I've checked some SO posts such as this and this, but this problem is not quite touched on: I'm effectively looking to create an entirely new image file and store it at a specified location on disk. The write() method of OutputStream, given that it requires byte[] or int params, doesn't seem to fit my use case. Any ideas?
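A minimal sketch of one way around this, assuming the parent directory already exists: MultipartFile exposes getInputStream(), so Files.copy from java.nio can stream the upload to disk without any manual byte[] handling (the transferTo(imageFile) call alone would also do the job):
private void saveOnDisk(final String clientProductId, final MultipartFile image,
        final String parentDirectoryPath) throws IOException {
    // Paths.get joins the segments, so no explicit file separator is needed.
    final Path target = Paths.get(parentDirectoryPath,
            clientProductId + image.getOriginalFilename());
    // Stream the uploaded bytes straight to disk.
    try (InputStream in = image.getInputStream()) {
        Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }
}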

How to set file name while downloading as zip?

I have a REST API which allows me to pass multiple IDs to a resource, to download records from a specific table and zip them. MSSQL is the backend mastering the messages.
So when an ID is passed as a param, it calls the database table to return the message data. Below is the code:
@GetMapping("message/{ids}")
public void downloadmessage(@PathVariable Long[] ids, HttpServletResponse response) throws Exception {
    List<MultiplemessageID> multiplemessageID = auditRepository.findbyId(ids);
    String xml = new ObjectMapper().writeValueAsString(multiplemessageID);
    String fileName = "message.zip";
    String xml_name = "message.xml";
    byte[] data = xml.getBytes();
    byte[] bytes;
    try (ByteOutputStream bout = new ByteOutputStream(); ZipOutputStream zout = new ZipOutputStream(bout)) {
        for (Long id : ids) {
            zout.setLevel(1);
            ZipEntry ze = new ZipEntry(xml_name);
            ze.setSize(data.length);
            ze.setTime(System.currentTimeMillis());
            zout.putNextEntry(ze);
            zout.write(data);
            zout.closeEntry();
        }
        bytes = bout.getBytes();
    }
    response.setContentType("application/zip");
    response.setContentLength(bytes.length);
    response.setHeader("Content-Disposition", "attachment; " + String.format("filename=" + fileName));
    ServletOutputStream outputStream = response.getOutputStream();
    FileCopyUtils.copy(bytes, outputStream);
    outputStream.close();
}
A message row in the database has the following columns (sample values shown):
MSG_ID: 0011d540
C_ID: EDW,WSO2,AS400
NAME: invoicetoedw
INSERT_TIMESTAMP: 2019-08-29 23:59:13
MSG: <invoice>100923084207</invoice>
CONF: [iden1:SMTP, iden2:SAP, service:invoicetoedw, clients:EDW,WSO2,AS400, file.path:/c:/nfs/store/invoicetoedw/output, rqst.message.format:XML,]
F_NAME: p3_pfi_1
POS: Pre
ID: 101
INB: MES_P3_IN
HEADERS: [clients:EDW,WSO2,AS400, UniqueName:Domain]
My file name should be: part of the header name + "_" + the input parameter id, i.e. Domain_1.
The file names for multiple parameters (1,2,3,4) would be:
Domain_1
Domain_2
Domain_3
Domain_4
The code below retrieves the relevant part of the file name as a string from the header:
private static String serviceNameHeadersToMap(String headers) {
    String sHeaders = headers.replace("[", "");
    sHeaders = sHeaders.replace("]", "");
    String res = Arrays.stream(sHeaders.split(", "))
            .filter(s -> s.contains("serviceNameIdentifier"))
            .findFirst()
            .map(name -> name.split(":")[1])
            .orElse("Not Present");
    return res;
}
I need to build the file name from the header and the input parameter. Once the file name is set, I would like the individual records downloaded with the correct file names and zipped.
The zip file name is message.zip. When unzipped, it should contain individual files like Domain_1.xml, Domain_2.xml, Domain_3.xml, Domain_4.xml, etc.
How do I achieve this? Please advise; I need some guidance given my limited knowledge of Java. Thank you.
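A sketch of one way to get there, moving the entry creation inside the per-id loop and naming each entry from the header. The per-id lookup and the record.getHeaders() accessor are assumptions about the entity, and java.util.zip's ZipOutputStream over a plain ByteArrayOutputStream is swapped in for the sun-internal ByteOutputStream:
try (ByteArrayOutputStream bout = new ByteArrayOutputStream();
        ZipOutputStream zout = new ZipOutputStream(bout)) {
    zout.setLevel(1);
    for (Long id : ids) {
        // Hypothetical per-id lookup and header accessor.
        MultiplemessageID record = auditRepository.findbyId(new Long[] { id }).get(0);
        String serviceName = serviceNameHeadersToMap(record.getHeaders()); // e.g. "Domain"
        byte[] data = new ObjectMapper().writeValueAsString(record).getBytes();
        // One entry per record: Domain_1.xml, Domain_2.xml, ...
        ZipEntry ze = new ZipEntry(serviceName + "_" + id + ".xml");
        ze.setTime(System.currentTimeMillis());
        zout.putNextEntry(ze);
        zout.write(data);
        zout.closeEntry();
    }
    zout.finish();
    bytes = bout.toByteArray();
}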

Camunda ProcessDefinition not available after deployment

I have a Camunda 7.3 standalone h2-in-mem configuration which I want to integrate into my own application. When I create a deployment based on a (valid) BPMN XML string, I get no process definition in return:
public String deployProcess(WorkflowDef workflowDef) throws IOException, SQLException {
    // prepare resources
    String name = workflowDef.getTitle() + ".bpmn";
    InputStream stream = new ByteArrayInputStream(workflowDef.getBpmnXml().getBytes(Charset.defaultCharset()));
    // prepare deployment
    RepositoryService repositoryService = processEngine.getRepositoryService();
    final DeploymentBuilder deploymentBuilder = repositoryService.createDeployment();
    deploymentBuilder.name(workflowDef.getId());
    DeploymentEntity deploymentResult;
    try {
        deploymentResult = (DeploymentEntity) repositoryService.createDeployment().addInputStream(name, stream).deploy();
    } finally {
        stream.close();
    }
    // read the result
    String deploymentId = deploymentResult.getId();
    ProcessDefinition processDef = repositoryService.createProcessDefinitionQuery().deploymentId(deploymentId).singleResult();
    // return the process definition id for later query
    String processDefinitionId = processDef.getId();
    return processDefinitionId;
}
Note: The object workflowDef is just a wrapper which contains definitions from the bpmn.io modeler, like the name and the BPMN XML as a String. I also make sure that the engine and the DB are initialized and running.
After deploying, I get a NullPointerException at the line
String processDefinitionId = processDef.getId();
Even when I try to get the definition via
repositoryService.createProcessDefinitionQuery().list();
I just receive an empty list. Is there a misconception in my code?
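Two things are worth checking here, as a debugging sketch rather than a confirmed diagnosis: the deploymentBuilder that carries the name is never used, since createDeployment() is called a second time inside the try; and Camunda only creates a ProcessDefinition for processes marked isExecutable="true", so a valid but non-executable model deploys without producing any definition. A minimal sketch to tell the two apart:
// Deploy through the builder that actually carries the name.
Deployment deployment = deploymentBuilder.addInputStream(name, stream).deploy();
// If the .bpmn resource is listed here but the definition query is still empty,
// the process element is most likely missing isExecutable="true".
for (String resource : repositoryService.getDeploymentResourceNames(deployment.getId())) {
    System.out.println("deployed resource: " + resource);
}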

Read PDVInputStream dicomObject information on onCStoreRQ association request

I am trying to read (and then store to a 3rd-party local DB) certain DICOM object tags "during" an incoming association request.
For accepting association requests and storing my DICOM files locally, I have used a modified version of the dcmrcv() tool. More specifically, I have overridden the onCStoreRQ method like:
@Override
protected void onCStoreRQ(Association association, int pcid, DicomObject dcmReqObj,
        PDVInputStream dataStream, String transferSyntaxUID,
        DicomObject dcmRspObj)
        throws DicomServiceException, IOException {
    final String classUID = dcmReqObj.getString(Tag.AffectedSOPClassUID);
    final String instanceUID = dcmReqObj.getString(Tag.AffectedSOPInstanceUID);
    config = new GlobalConfig();
    final File associationDir = config.getAssocDirFile();
    final String prefixedFileName = instanceUID;
    final String dicomFileBaseName = prefixedFileName + DICOM_FILE_EXTENSION;
    File dicomFile = new File(associationDir, dicomFileBaseName);
    assert !dicomFile.exists();
    final BasicDicomObject fileMetaDcmObj = new BasicDicomObject();
    fileMetaDcmObj.initFileMetaInformation(classUID, instanceUID, transferSyntaxUID);
    final DicomOutputStream outStream = new DicomOutputStream(new BufferedOutputStream(new FileOutputStream(dicomFile), 600000));
    // I would like somewhere here to extract some tags from the incoming DICOM object.
    // By trying to do it using dataStream my DICOM files are getting corrupted!
    // System.out.println("StudyInstanceUID: " + dataStream.readDataset().getString(Tag.StudyInstanceUID));
    try {
        outStream.writeFileMetaInformation(fileMetaDcmObj);
        dataStream.copyTo(outStream);
    } finally {
        outStream.close();
    }
    dicomFile.renameTo(new File(associationDir, dicomFileBaseName));
    System.out.println("DICOM file name: " + dicomFile.getName());
}

@Override
public void associationAccepted(final AssociationAcceptEvent associationAcceptEvent) {
    ....
}

@Override
public void associationClosed(final AssociationCloseEvent associationCloseEvent) {
    ...
}
Somewhere in this code I would like to intercept a method which will read dataStream, parse specific tags, and store them to a local database.
However, wherever I try to put a piece of code that manipulates (even just reads, for a start) dataStream, my DICOM files get corrupted!
PDVInputStream implements java.io.InputStream ....
Even if I just put a:
System.out.println("StudyInstanceUID: " + dataStream.readDataset().getString(Tag.StudyInstanceUID));
before copying dataStream to outStream ... then my DICOM files get corrupted (1KB in size) ...
How am I supposed to use dataStream in a C-STORE-RQ association request to extract some information?
I hope my question is clear ...
The PDVInputStream is probably a PDUDecoder class. You'll have to reset the position when using the input stream multiple times.
Maybe a better solution would be to read the DICOM object into memory and use that for both purposes. Something akin to:
DicomObject dcmobj = dataStream.readDataset();
String whatYouWant = dcmobj.getString(Tag.whatever);
dcmobj.initFileMetaInformation(transferSyntaxUID);
outStream.writeDicomFile(dcmobj);
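The point of this approach is that readDataset() consumes the PDV stream exactly once; the in-memory dcmobj then serves both the tag extraction and the file write, so there is no second read of the network stream, which is what was corrupting the files before.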

how to append data in docx file using docx4j

Please tell me how to append data to a docx file using Java and docx4j.
What I'm doing is: I am using a template in docx format in which some fields are filled by Java at run time.
My problem is that for every group of data it creates a new file, and I just want to append each new file into one file; simply concatenating the bytes with Java streams does not work.
String outputfilepath = "e:\\Practice/DOC/output/generatedLatterOUTPUT.docx";
String outputfilepath1 = "e:\\Practice/DOC/output/generatedLatterOUTPUT1.docx";
WordprocessingMLPackage wordMLPackage;

public void templetsubtitution(String name, String age, String gender, Document document)
        throws Exception {
    // input file name
    String inputfilepath = "e:\\Practice/DOC/profile.docx";
    // output file name
    // id of Xml file
    String itemId1 = "{A5D3A327-5613-4B97-98A9-FF42A2BA0F74}".toLowerCase();
    String itemId2 = "{A5D3A327-5613-4B97-98A9-FF42A2BA0F74}".toLowerCase();
    String itemId3 = "{A5D3A327-5613-4B97-98A9-FF42A2BA0F74}".toLowerCase();
    // Load the Package
    if (inputfilepath.endsWith(".xml")) {
        JAXBContext jc = Context.jcXmlPackage;
        Unmarshaller u = jc.createUnmarshaller();
        u.setEventHandler(new org.docx4j.jaxb.JaxbValidationEventHandler());
        org.docx4j.xmlPackage.Package wmlPackageEl = (org.docx4j.xmlPackage.Package) ((JAXBElement) u
                .unmarshal(new javax.xml.transform.stream.StreamSource(
                        new FileInputStream(inputfilepath)))).getValue();
        org.docx4j.convert.in.FlatOpcXmlImporter xmlPackage = new org.docx4j.convert.in.FlatOpcXmlImporter(
                wmlPackageEl);
        wordMLPackage = (WordprocessingMLPackage) xmlPackage.get();
    } else {
        wordMLPackage = WordprocessingMLPackage
                .load(new File(inputfilepath));
    }
    CustomXmlDataStoragePart customXmlDataStoragePart = wordMLPackage
            .getCustomXmlDataStorageParts().get(itemId1);
    // Get the contents
    CustomXmlDataStorage customXmlDataStorage = customXmlDataStoragePart
            .getData();
    // Change its contents
    ((CustomXmlDataStorageImpl) customXmlDataStorage).setNodeValueAtXPath(
            "/ns0:orderForm[1]/ns0:record[1]/ns0:name[1]", name,
            "xmlns:ns0='EasyForm'");
    customXmlDataStoragePart = wordMLPackage.getCustomXmlDataStorageParts()
            .get(itemId2);
    // Get the contents
    customXmlDataStorage = customXmlDataStoragePart.getData();
    // Change its contents
    ((CustomXmlDataStorageImpl) customXmlDataStorage).setNodeValueAtXPath(
            "/ns0:orderForm[1]/ns0:record[1]/ns0:age[1]", age,
            "xmlns:ns0='EasyForm'");
    customXmlDataStoragePart = wordMLPackage.getCustomXmlDataStorageParts()
            .get(itemId3);
    // Get the contents
    customXmlDataStorage = customXmlDataStoragePart.getData();
    // Change its contents
    ((CustomXmlDataStorageImpl) customXmlDataStorage).setNodeValueAtXPath(
            "/ns0:orderForm[1]/ns0:record[1]/ns0:gender[1]", gender,
            "xmlns:ns0='EasyForm'");
    // Apply the bindings
    BindingHandler.applyBindings(wordMLPackage.getMainDocumentPart());
    File f = new File(outputfilepath);
    wordMLPackage.save(f);
    FileInputStream fis = new FileInputStream(f);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    byte[] buf = new byte[1024];
    try {
        for (int readNum; (readNum = fis.read(buf)) != -1;) {
            bos.write(buf, 0, readNum);
        }
        // System.out.println( buf.length);
    } catch (IOException ex) {
    }
    byte[] bytes = bos.toByteArray();
    FileOutputStream file = new FileOutputStream(outputfilepath1, true);
    DataOutputStream out = new DataOutputStream(file);
    out.write(bytes);
    out.flush();
    out.close();
    System.out.println("..done");
}

public static void main(String[] args) {
    utility u = new utility();
    // age and gender passed as Strings to match the signature; no Document is available here
    u.templetsubtitution("aditya", "24", "mohan", null);
}
thanks in advance
If I understand you correctly, you're essentially talking about merging documents. There are two very simple approaches that you can use, and their effectiveness really depends on the structure and onward use of your data:

PhilippeAuriach describes one approach in his answer, which entails appending all components within a MainDocumentPart instance to another. In terms of the final docx file, this means the content that appears in document.xml -- it won't take into account headers and footers (for example), but that may be fine for you.

You can insert multiple documents into a single docx file by inserting them as AltChunk elements (see the docx4j documentation). This will bring everything from one Word file into another, headers and all. The downside is that your final document won't be a proper flowing Word file until you open it and save it in MS Word itself (the imported components remain as standalone files within the docx bundle). This will cause you issues if you want to generate 'merged' files and then do something with them, like render PDFs -- the merged content will simply be ignored.

The more complete (and complex) approach is to perform a "deep merge". This updates and maintains all references held within a document. Imported content becomes part of the main "flow" of the document (i.e. it is not stored as separate references), so the end result is a properly merged file which can be rendered to PDF or whatever.
The downside is that you need a good knowledge of docx structure and the API, and you will be writing a fair amount of code (I would recommend buying a license to Plutext's MergeDocx instead).
I had to deal with similar things, and here is what I did (probably not the most efficient, but working):
create a finalDoc loading the template, and emptying it (so you have the styles in this doc)
for each data row, create a new doc loading the template, then replace your fields with your values
use the function below to append the doc filled with the data to the finalDoc:
public static void append(WordprocessingMLPackage docDest, WordprocessingMLPackage docSource) {
    List<Object> objects = docSource.getMainDocumentPart().getContent();
    for (Object o : objects) {
        docDest.getMainDocumentPart().getContent().add(o);
    }
}
Hope this helps.
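One caveat, as a hedged variant rather than part of the answer above: append() adds the source objects by reference, so the two packages end up sharing JAXB nodes if the source document is reused afterwards. A sketch using docx4j's XmlUtils.deepCopy to detach them:
// Variant that deep-copies each element so the two packages share no objects
// (XmlUtils is org.docx4j.XmlUtils).
public static void appendCopy(WordprocessingMLPackage docDest, WordprocessingMLPackage docSource) {
    for (Object o : docSource.getMainDocumentPart().getContent()) {
        docDest.getMainDocumentPart().getContent().add(XmlUtils.deepCopy(o));
    }
}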
