I need to build a utility program that accepts a .wsdl file as input, validates it, and reports whether the file is valid (in terms of semantics). If possible, I would also like to show the errors in the file (if any).
I know there are utilities available online, but I cannot upload incoming .wsdl files over the internet (for security reasons), so I want to do this programmatically in Java.
Please suggest any APIs available for this in Java.
Credit to this blog, which uses wsdl4j:
www.vorburger.ch/files/WSDLValidationTask.java
package wsdlvalidation;
import java.io.File;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.StringTokenizer;
import java.util.Map.Entry;
import javax.wsdl.Definition;
import javax.wsdl.Fault;
import javax.wsdl.Message;
import javax.wsdl.Operation;
import javax.wsdl.Part;
import javax.wsdl.PortType;
import javax.wsdl.WSDLException;
import javax.wsdl.factory.WSDLFactory;
import javax.wsdl.xml.WSDLReader;
import javax.xml.namespace.QName;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.taskdefs.MatchingTask;
import org.apache.tools.ant.types.FileSet;
/**
 * Ant Task to validate a WSDL for an Axis1 bug.
 */
public class WSDLValidationTask extends MatchingTask
{
    private FileSet configuredWsdl;

    public void execute() throws BuildException
    {
        super.execute();
        try {
            WSDLFactory wsdlFactory = WSDLFactory.newInstance();
            WSDLReader reader = wsdlFactory.newWSDLReader();
            Iterator it = getWSDLFileNamesList().iterator();
            while (it.hasNext()) {
                String wsdl = (String) it.next();
                Definition theWSDL = reader.readWSDL(wsdl);

                // This is a Bag of all Messages in the WSDL that are used in some <wsdl:fault> of any Operation of any PortType
                Set faultMessages = new HashSet();
                Map allPortTypes = theWSDL.getPortTypes();
                Iterator portTypeIt = allPortTypes.entrySet().iterator();
                while (portTypeIt.hasNext()) {
                    Map.Entry entry = (Entry) portTypeIt.next();
                    PortType portType = (PortType) entry.getValue();
                    List allOperations = portType.getOperations();
                    Iterator listIt = allOperations.iterator();
                    while (listIt.hasNext()) {
                        Operation operation = (Operation) listIt.next();
                        Iterator faultIt = operation.getFaults().values().iterator();
                        while (faultIt.hasNext()) {
                            Fault fault = (Fault) faultIt.next();
                            faultMessages.add(fault.getMessage());
                        }
                    }
                }

                Map allMessages = theWSDL.getMessages();
                Iterator messageIt = allMessages.entrySet().iterator();
                while (messageIt.hasNext()) {
                    Map.Entry entry = (Entry) messageIt.next();
                    QName messageNameQName = (QName) entry.getKey();
                    String messageName = messageNameQName.getLocalPart();
                    Message message = (Message) entry.getValue();
                    Map parts = message.getParts();
                    validate(parts.size() == 1, wsdl,
                            "wsdl:message has more than one part: " + messageNameQName.toString());
                    Part messagePart = (Part) parts.values().iterator().next();
                    validate(messagePart.getTypeName() == null, wsdl,
                            "wsdl:part should not have a 'type' attribute: " + messagePart.getName());
                    // Only for Messages that are used in a Fault:
                    if (faultMessages.contains(message)) {
                        validate(!messagePart.getElementName().getLocalPart().equals(messageName), wsdl,
                                "Due to an Axis1 bug, please do NOT use the same name for <wsdl:message name=\"" + messageName
                                        + "\"> and <xsd:element name=\"" + messagePart.getElementName().getLocalPart() + "\">");
                    }
                }
            }
        } catch (WSDLException e) {
            throw new BuildException(e);
        }
    }

    private void validate(boolean condition, String wsdlFilename, String failureMessage) throws BuildException {
        if (!condition) {
            throw new BuildException(wsdlFilename + ": " + failureMessage);
        }
    }

    // TODO Doesn't code like this already exist in ant??
    private List getWSDLFileNamesList() {
        List/*<String>*/ wsdlList = new ArrayList/*<String>*/();
        File dir = configuredWsdl.getDir(configuredWsdl.getProject());
        StringTokenizer tokenizer = new StringTokenizer(configuredWsdl.toString(), ";");
        while (tokenizer.hasMoreTokens()) {
            String token = tokenizer.nextToken();
            wsdlList.add(new File(dir, token).toString());
        }
        return wsdlList;
    }

    public void addConfiguredWsdl(FileSet fileSet) {
        configuredWsdl = fileSet;
    }
}
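If you only need a programmatic validity check without the Ant wrapper, here is a minimal sketch using wsdl4j directly (the file path is a placeholder; readWSDL throws a WSDLException describing the problem when the document is not a structurally valid WSDL):

import javax.wsdl.WSDLException;
import javax.wsdl.factory.WSDLFactory;
import javax.wsdl.xml.WSDLReader;

public class QuickWsdlCheck {
    public static void main(String[] args) {
        try {
            WSDLReader reader = WSDLFactory.newInstance().newWSDLReader();
            reader.setFeature("javax.wsdl.verbose", false); // suppress console output
            reader.readWSDL("path/to/service.wsdl");        // throws WSDLException on invalid WSDL
            System.out.println("WSDL parsed successfully");
        } catch (WSDLException e) {
            System.out.println("Invalid WSDL: " + e.getMessage());
        }
    }
}

Note that wsdl4j mainly validates the WSDL structure itself; for deep validation of the embedded XML Schema you may still need a dedicated validator.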
Another way is to use the SoapUI API; refer to the link below.
Try to import the WSDL and catch any exception to find out whether the WSDL is correct or not.
WsdlInterface iface = WsdlInterfaceFactory.importWsdl( "WSDL_LOCATION", true )[0];
Integrating With SoapUI
I am trying to return a list of files from a directory. Here's my code:
package com.demo.web.api.file;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import com.demo.core.Logger;
import io.swagger.v3.oas.annotations.Operation;
@RestController
@RequestMapping(value = "/files")
public class FileService {

    private static final Logger logger = Logger.factory(FileService.class);

    @Value("${file-upload-path}")
    public String DIRECTORY;

    @Value("${file-upload-check-subfolders}")
    public boolean CHECK_SUBFOLDERS;

    @GetMapping(value = "/list")
    @Operation(summary = "Get list of Uploaded files")
    public ResponseEntity<List<File>> list() {
        List<File> files = new ArrayList<>();
        if (CHECK_SUBFOLDERS) {
            // Recursive check
            try (Stream<Path> walk = Files.walk(Paths.get(DIRECTORY))) {
                List<Path> result = walk.filter(Files::isRegularFile).collect(Collectors.toList());
                for (Path p : result) {
                    files.add(p.toFile().getAbsoluteFile());
                }
            } catch (Exception e) {
                logger.error(e.getMessage());
            }
        } else {
            // Checks the root directory only.
            try (Stream<Path> walk = Files.walk(Paths.get(DIRECTORY), 1)) {
                List<Path> result = walk.filter(Files::isRegularFile).collect(Collectors.toList());
                for (Path p : result) {
                    files.add(p.toFile().getAbsoluteFile());
                }
            } catch (Exception e) {
                logger.error(e.getMessage());
            }
        }
        return ResponseEntity.ok().body(files);
    }
}
As seen in the code, I am trying to return a list of files.
However, when I test in Postman, I get a list of strings instead.
How can I make it return the file objects instead of the file path strings? I need the file attributes (size, date, etc.) to display in my view.
I would recommend that you change your ResponseEntity<> to return not a List of File but instead a List of Resource, which you can then use to obtain the file metadata that you need.
public ResponseEntity<List<Resource>> list() {}
You can also try specifying a produces = MediaType... parameter in your @GetMapping annotation to tell the HTTP marshaller which kind of content to expect.
You'd have to create a separate payload with the details you want to respond with.
public class FilePayload {
    private String id;
    private String name;
    private String size;

    public static FilePayload fromFile(File file) {
        // create the FilePayload from the File object here, e.g.:
        FilePayload payload = new FilePayload();
        payload.name = file.getName();
        payload.size = String.valueOf(file.length());
        // the id is left to the implementer (e.g. a hash or database key)
        return payload;
    }
}
Then convert your internal DTO objects to payload ones using a mapper:
final List<FilePayload> payload = files.stream().map(FilePayload::fromFile).collect(Collectors.toList());
return new ResponseEntity<>(payload, HttpStatus.OK);
I think you should not return a body in this case, as you may be unaware of its size.
Better to have another endpoint: GET /files/{id}.
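A minimal sketch of what that endpoint could look like inside the same controller (the path variable handling and the resolveFileById helper are assumptions, not part of the original code; it also assumes the FilePayload class from the earlier answer and an import of org.springframework.web.bind.annotation.PathVariable):

@GetMapping(value = "/{id}")
public ResponseEntity<FilePayload> get(@PathVariable String id) {
    // Resolve the file from the id, e.g. by looking it up in the upload
    // directory or a database; resolveFileById is a hypothetical helper.
    File file = resolveFileById(id);
    return ResponseEntity.ok(FilePayload.fromFile(file));
}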
I did give this another thought. What I actually needed was just the filename, size, and date of each file. From there, I can get the file extension and make my list display look good.
Here's the refactored method:
@GetMapping(value = "/list")
@Operation(summary = "Get list of Uploaded files")
public ResponseEntity<String> list() {
    JSONObject responseObj = new JSONObject();
    List<JSONObject> files = new ArrayList<>();
    // If CHECK_SUBFOLDERS is true, pass MAX_VALUE to make it recursive on all
    // sub-folders. Otherwise, pass 1 to use the root directory only.
    try (Stream<Path> walk = Files.walk(Paths.get(DIRECTORY), CHECK_SUBFOLDERS ? MAX_VALUE : 1)) {
        List<Path> result = walk.filter(Files::isRegularFile).collect(Collectors.toList());
        for (Path p : result) {
            JSONObject file = new JSONObject();
            file.put("name", p.toFile().getName());
            file.put("size", p.toFile().length());
            file.put("lastModified", p.toFile().lastModified());
            files.add(file);
        }
        responseObj.put("data", files);
    } catch (Exception e) {
        String errMsg = CoreUtils.formatString("%s: Error reading files from the directory: \"%s\"",
                e.getClass().getName(), DIRECTORY);
        logger.error(e, errMsg);
        responseObj.put("errors", errMsg);
    }
    return ResponseEntity.ok().body(responseObj.toString());
}
The above was what I ended up doing. I created a JSONObject of the props I need, and then returned the error if it did not succeed. This made it a lot better for me.
I'm trying to create and save a generated model directly from Java. The documentation specifies how to do this in R and Python, but not in Java. A similar question was asked before, but no real answer was provided (beyond a link to the H2O documentation, which doesn't contain a code example).
It'd be sufficient for my present purpose to get some pointers for translating the following reference code to Java. I'm mainly looking for guidance on the relevant JAR(s) to import from the Maven repository.
import h2o
h2o.init()
path = h2o.system_file("prostate.csv")
h2o_df = h2o.import_file(path)
h2o_df['CAPSULE'] = h2o_df['CAPSULE'].asfactor()
model = h2o.glm(y = "CAPSULE",
                x = ["AGE", "RACE", "PSA", "GLEASON"],
                training_frame = h2o_df,
                family = "binomial")
h2o.download_pojo(model)
I think I've figured out an answer to my question. A self-contained code sample follows. However, I'll still appreciate an answer from the community, since I don't know whether this is the best/idiomatic way to do it.
package org.name.company;
import hex.glm.GLMModel;
import water.H2O;
import water.Key;
import water.api.StreamWriter;
import water.api.StreamingSchema;
import water.fvec.Frame;
import water.fvec.NFSFileVec;
import hex.glm.GLMModel.GLMParameters.Family;
import hex.glm.GLMModel.GLMParameters;
import hex.glm.GLM;
import water.util.JCodeGen;
import java.io.*;
import java.util.Map;
public class Launcher
{
    public static void initCloud() {
        String[] args = new String[] {"-name", "h2o_test_cloud"};
        H2O.main(args);
        H2O.waitForCloudSize(1, 10 * 1000);
    }

    public static void main(String[] args) throws Exception {
        // Initialize the cloud
        initCloud();

        // Create a Frame object from CSV
        File f = new File("/path/to/data.csv");
        NFSFileVec nfs = NFSFileVec.make(f);
        Key frameKey = Key.make("frameKey");
        Frame fr = water.parser.ParseDataset.parse(frameKey, nfs._key);

        // Create a GLM and output coefficients
        Key modelKey = Key.make("modelKey");
        try {
            GLMParameters params = new GLMParameters();
            params._train = frameKey;
            params._response_column = fr.names()[1];
            params._intercept = true;
            params._lambda = new double[]{0};
            params._family = Family.gaussian;

            GLMModel model = new GLM(params).trainModel().get();
            Map<String, Double> coefs = model.coefficients();
            for (Map.Entry<String, Double> entry : coefs.entrySet()) {
                System.out.format("%s: %f\n", entry.getKey(), entry.getValue());
            }

            // Write the model's generated POJO source out to a file
            String filename = JCodeGen.toJavaId(model._key.toString()) + ".java";
            StreamingSchema ss = new StreamingSchema(model.new JavaModelStreamWriter(false), filename);
            StreamWriter sw = ss.getStreamWriter();
            OutputStream os = new FileOutputStream("/base/path/" + filename);
            sw.writeTo(os);
            os.close();
        } finally {
            if (fr != null) {
                fr.remove();
            }
        }
    }
}
Would something like this do the trick?
public void saveModel(URI uri, Keyed<Frame> model)
{
    Persist p = H2O.getPM().getPersistForURI(uri);
    OutputStream os = p.create(uri.toString(), true);
    model.writeAll(new AutoBuffer(os, true)).close();
}
Make sure the URI has a proper form; otherwise H2O will break with an NPE. As for Maven, you should be able to get away with just h2o-core.
<dependency>
    <groupId>ai.h2o</groupId>
    <artifactId>h2o-core</artifactId>
    <version>3.14.0.2</version>
</dependency>
The split method returns spaces, and I need to return all elements of the string so I can pick the values I want. It works fine on Android but not on App Engine. Please help; I need the HTML as an array of strings with no spaces. No, this is not a duplicate of the other question; I am using the right regex, "\s+".
import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;
import com.google.api.server.spi.config.ApiNamespace;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;
import javax.inject.Named;
/**
* An endpoint class we are exposing
*/
@Api(name = "myApi", version = "v1", namespace = @ApiNamespace(ownerDomain = "backend.abokiforex.greatcallie.com", ownerName = "backend.abokiforex.greatcallie.com", packagePath = ""))
public class RateEndPoint {

    String[] html;
    private static final Logger LOG = Logger.getLogger(RateEndPoint.class.getName());

    /**
     * A simple endpoint method that takes a name and says Hi back
     */
    @ApiMethod(name = "getRates")
    public MyRates getRates() {
        MyRates response = new MyRates();
        try {
            Document site = Jsoup.connect("http://abokifx.com/").timeout(0).get();
            Elements tags = site.select("p");
            String txt = tags.text();
            html = txt.split("\\s+");
        } catch (Exception e) {
            e.printStackTrace();
        }
        for (int i = 0; i < html.length; i++) {
            LOG.info(html[i] + "\n");
        }
        response.setValue1(html[18]);
        response.setValue2(html[20]);
        response.setValue3(html[21]);
        response.setValue4(html[23]);
        response.setValue5(html[24]);
        response.setValue6(html[26]);
        response.setValue7(html[97]);
        response.setValue8(html[98]);
        response.setValue9(html[99]);
        response.setValue10(html[70]);
        response.setValue11(html[71]);
        response.setValue12(html[72]);
        return response;
    }
}
Okay, here is the answer: App Engine wasn't using UTF-8 (Unicode), so it wasn't getting the regex right. I defined the encoding in the appengine-web.xml file for it to work.
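For reference, a minimal sketch of the relevant appengine-web.xml entry (this assumes the standard App Engine system-properties mechanism; the rest of the file stays as it was):

<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    ...
    <system-properties>
        <property name="file.encoding" value="UTF-8"/>
    </system-properties>
</appengine-web-app>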
I am working on a requirement where I need to parse CSV record fields against multiple validations. I am using Super CSV, which has support for field-level processors to validate data.
My requirement is to validate each record/row field against multiple validations and save them to the database with a success/failure status. For failed records I have to display all the failed validations using some codes.
Super CSV is working fine, but it checks only the first validation for a field; if that fails, it ignores the second validation for the same field. Please look at the code below and help me with this.
package com.demo.supercsv;
import java.io.FileReader;
import java.io.IOException;
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.List;
import org.supercsv.cellprocessor.Optional;
import org.supercsv.cellprocessor.constraint.NotNull;
import org.supercsv.cellprocessor.constraint.StrMinMax;
import org.supercsv.cellprocessor.constraint.StrRegEx;
import org.supercsv.cellprocessor.constraint.UniqueHashCode;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.exception.SuperCsvCellProcessorException;
import org.supercsv.io.CsvBeanReader;
import org.supercsv.io.CsvBeanWriter;
import org.supercsv.io.ICsvBeanReader;
import org.supercsv.io.ICsvBeanWriter;
import org.supercsv.prefs.CsvPreference;
public class ParserDemo {

    public static void main(String[] args) throws IOException {
        List<Employee> emps = readCSVToBean();
        System.out.println(emps);
        System.out.println("******");
        writeCSVData(emps);
    }

    private static void writeCSVData(List<Employee> emps) throws IOException {
        ICsvBeanWriter beanWriter = null;
        StringWriter writer = new StringWriter();
        try {
            beanWriter = new CsvBeanWriter(writer, CsvPreference.STANDARD_PREFERENCE);
            final String[] header = new String[]{"id", "name", "role", "salary"};
            final CellProcessor[] processors = getProcessors();
            // write the header
            beanWriter.writeHeader(header);
            // write the beans data
            for (Employee emp : emps) {
                beanWriter.write(emp, header, processors);
            }
        } finally {
            if (beanWriter != null) {
                beanWriter.close();
            }
        }
        System.out.println("CSV Data\n" + writer.toString());
    }

    private static List<Employee> readCSVToBean() throws IOException {
        ICsvBeanReader beanReader = null;
        List<Employee> emps = new ArrayList<Employee>();
        try {
            beanReader = new CsvBeanReader(new FileReader("src/employees.csv"),
                    CsvPreference.STANDARD_PREFERENCE);
            // the name mapping provides the basis for the bean setters
            final String[] nameMapping = new String[]{"id", "name", "role", "salary"};
            // just read the header, so that it doesn't get mapped to an Employee object
            final String[] header = beanReader.getHeader(true);
            final CellProcessor[] processors = getProcessors();
            Employee emp;
            while ((emp = beanReader.read(Employee.class, nameMapping,
                    processors)) != null) {
                emps.add(emp);
                if (!CaptureExceptions.SUPPRESSED_EXCEPTIONS.isEmpty()) {
                    System.out.println("Suppressed exceptions for row "
                            + beanReader.getRowNumber() + ":");
                    for (SuperCsvCellProcessorException e :
                            CaptureExceptions.SUPPRESSED_EXCEPTIONS) {
                        System.out.println(e);
                    }
                    // clear the validation list before processing the next row
                    CaptureExceptions.SUPPRESSED_EXCEPTIONS.clear();
                }
            }
        } finally {
            if (beanReader != null) {
                beanReader.close();
            }
        }
        return emps;
    }

    private static CellProcessor[] getProcessors() {
        final CellProcessor[] processors = new CellProcessor[] {
                // id must be digits and should not be more than two characters
                new CaptureExceptions(new NotNull(new StrRegEx("\\d+", new StrMinMax(0, 2)))),
                new CaptureExceptions(new Optional()),
                new CaptureExceptions(new Optional()),
                new CaptureExceptions(new NotNull()), // salary
        };
        return processors;
    }
}
Exception Handler:
package com.demo.supercsv;
import java.util.ArrayList;
import java.util.List;
import org.supercsv.cellprocessor.CellProcessorAdaptor;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.exception.SuperCsvCellProcessorException;
import org.supercsv.util.CsvContext;
public class CaptureExceptions extends CellProcessorAdaptor {

    public static List<SuperCsvCellProcessorException> SUPPRESSED_EXCEPTIONS =
            new ArrayList<SuperCsvCellProcessorException>();

    public CaptureExceptions(CellProcessor next) {
        super(next);
    }

    public Object execute(Object value, CsvContext context) {
        try {
            return next.execute(value, context);
        } catch (SuperCsvCellProcessorException e) {
            // save the exception
            SUPPRESSED_EXCEPTIONS.add(e);
            if (value != null) {
                return value.toString();
            } else {
                return "";
            }
        }
    }
}
Sample CSV file:
ID,Name,Role,Salary
a123,kiran,CEO,"5000USD"
2,Kumar,Manager,2000USD
3,David,developer,1000USD
When I run my program, the Super CSV exception handler displays this message for the ID value in the first row:
Suppressed exceptions for row 2:
org.supercsv.exception.SuperCsvConstraintViolationException: 'a123' does not match the regular expression '\d+'
processor=org.supercsv.cellprocessor.constraint.StrRegEx
context={lineNo=2, rowNo=2, columnNo=1, rowSource=[a123, kiran, CEO, 5000USD]}
[com.demo.supercsv.Employee@23bf011e, com.demo.supercsv.Employee@50e26ae7, com.demo.supercsv.Employee@40d88d2d]
For the field Id, the length should not be more than two, the value should not be null, and it should be numeric. I have defined the field processor like this:
new CaptureExceptions(new NotNull(new StrRegEx("\\d+", new StrMinMax(0, 2))))
but Super CSV ignores the second validation (max length 2) if the given input is not numeric. If my input is 100, then it validates the max length, but how do I get both validations reported for a wrong input? Please help me with this.
Super CSV cell processors work in sequence: only if a value passes one constraint does the next one get checked.
To achieve your goal, you need to write a custom CellProcessor that checks both whether the input is a number (digits) and whether its length is between 0 and 2,
so that both of those checks are done in a single step.
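A minimal sketch of what such a processor could look like (the class name, message wording, and use of Optional as the next processor are my own choices, not from the original code):

package com.demo.supercsv;

import java.util.ArrayList;
import java.util.List;
import org.supercsv.cellprocessor.CellProcessorAdaptor;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.exception.SuperCsvConstraintViolationException;
import org.supercsv.util.CsvContext;

public class DigitsWithMaxLength extends CellProcessorAdaptor {

    private final int maxLength;

    public DigitsWithMaxLength(int maxLength, CellProcessor next) {
        super(next);
        this.maxLength = maxLength;
    }

    public Object execute(Object value, CsvContext context) {
        validateInputNotNull(value, context);
        String s = value.toString();
        // Collect every failed check instead of stopping at the first one,
        // so the exception message lists all problems with this value.
        List<String> problems = new ArrayList<String>();
        if (!s.matches("\\d+")) {
            problems.add("'" + s + "' is not numeric");
        }
        if (s.length() > maxLength) {
            problems.add("'" + s + "' is longer than " + maxLength + " characters");
        }
        if (!problems.isEmpty()) {
            throw new SuperCsvConstraintViolationException(problems.toString(), context, this);
        }
        return next.execute(value, context);
    }
}

In getProcessors(), the id column could then use new CaptureExceptions(new DigitsWithMaxLength(2, new Optional())), and CaptureExceptions would record one exception whose message names every failed check for that field.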
I'm trying to parse a Java file with the Java Compiler APIs.
The documentation is very poor. After hours of digging I still cannot get Trees#getElement to work for me. Here's my code:
import com.sun.source.tree.*;
import com.sun.source.util.*;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;
import java.io.IOException;
import java.nio.CharBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
class CodeAnalyzerTreeVisitor extends TreePathScanner<Object, Trees> {

    @Override
    public Object visitClass(ClassTree classTree, Trees trees) {
        System.out.println("className " + classTree.getSimpleName());
        // prints the name of the class

        TreePath path = getCurrentPath();
        printLocationAndSource(trees, path, classTree);
        // prints the original source code

        while (path != null) {
            System.out.println("treepath");
            System.out.println(trees.getElement(path));
            path = path.getParentPath();
        }
        // it prints several nulls here
        // why?

        return super.visitClass(classTree, trees);
    }

    public static void printLocationAndSource(Trees trees, TreePath path, Tree tree) {
        SourcePositions sourcePosition = trees.getSourcePositions();
        long startPosition = sourcePosition.getStartPosition(path.getCompilationUnit(), tree);
        long endPosition = sourcePosition.getEndPosition(path.getCompilationUnit(), tree);
        JavaFileObject file = path.getCompilationUnit().getSourceFile();
        CharBuffer sourceContent = null;
        try {
            sourceContent = CharBuffer.wrap(file.getCharContent(true).toString().toCharArray());
        } catch (IOException e) {
            e.printStackTrace();
        }
        CharBuffer relatedSource = null;
        if (sourceContent != null) {
            relatedSource = sourceContent.subSequence((int) startPosition, (int) endPosition);
        }
        System.out.println("start: " + startPosition + " end: " + endPosition);
        // System.out.println("source: " + relatedSource);
        System.out.println();
    }
}
public class JavaParser {

    private static final JavaCompiler javac = ToolProvider.getSystemJavaCompiler();

    private static final String filePath = "/home/pinyin/Source/hadoop-common/hadoop-yarn-project/hadoop-ya" +
            "rn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/ya" +
            "rn/server/resourcemanager/ResourceManager.java";

    public static void main(String[] args) throws IOException {
        StandardJavaFileManager jfm = javac.getStandardFileManager(null, null, null);
        Iterable<? extends javax.tools.JavaFileObject> javaFileObjects = jfm.getJavaFileObjects(filePath);
        String[] sourcePathParam = {
                "-sourcepath",
                "/home/pinyin/Source/hadoop-common/hadoop-yarn-project/hadoop-yarn/" +
                        "hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/"
        };
        List<String> params = new ArrayList<String>();
        params.addAll(Arrays.asList(sourcePathParam));

        JavacTask task = (JavacTask) javac.getTask(null, jfm, null, params, null, javaFileObjects);
        Iterable<? extends CompilationUnitTree> asts = task.parse();
        Trees trees = Trees.instance(task);
        for (CompilationUnitTree ast : asts) {
            new CodeAnalyzerTreeVisitor().scan(ast, trees);
        }
    }
}
The lines about params and -sourcepath were added because I thought the compiler was trying to find the source file in the wrong places. They didn't work.
I'm still trying to understand how Trees, javac, and the related JSRs work together. Are there any recommended documents for beginners?
Thanks for your help.
Edit:
The Java file I'm trying to analyze is:
https://github.com/apache/hadoop-common/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
The file can be compiled without errors in its Maven project, but its dependencies are not passed to javac in my situation. I'm not sure if this is the problem.
trees.getElement returns null in the middle part of the code above, while the other parts seem to work well.
According to this answer, it seems that the Elements' information is not usable until the analysis phase has completed.
So calling task.analyze() solved my problem, although javac complains about missing dependencies.
Please correct me if I'm wrong, thanks.
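For reference, a minimal sketch of where the analyze() call fits in the main method above (same variables as in the question):

JavacTask task = (JavacTask) javac.getTask(null, jfm, null, params, null, javaFileObjects);
Iterable<? extends CompilationUnitTree> asts = task.parse();
task.analyze(); // attributes the parsed trees so Trees#getElement can resolve Elements
Trees trees = Trees.instance(task);
for (CompilationUnitTree ast : asts) {
    new CodeAnalyzerTreeVisitor().scan(ast, trees);
}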