Formatting header field for bean generator - Java

I have written a program that parses a CSV file and creates a bean from the data to be put into a database. Everything works perfectly; however, now that this is moving out of the testing environment, the real header names from the CSVs have to be used. These headers contain spaces and slashes (/). I am looking for a way to let my parser read these headers. When I define the header names, I have to use camelCase and cannot insert spaces or other characters. Is there any way to change this?
Here is my constructor (integrationTeam needs to be "Integration Team" and hardwareSoftware needs to be "Hardware/Software", as they appear in the CSV header):
public class BeanGen {

    public BeanGen(
            final String name,
            final String manufacturer,
            final String model,
            final String owner,
            final String integrationTeam,
            final String shipping,
            final String hardwareSoftware,
            final String subsystem,
            final String plane,
            final String integrationStandalone,
            final String integrationInterface,
            final String function,
            final String helpLinks,
            final String installationInstructions,
            final String testSteps,
            final String leadEngineer) {
        this.name = name;
        this.manufacturer = manufacturer;
        this.model = model;
        this.owner = owner;
        this.integrationTeam = integrationTeam;
        this.shipping = shipping;
        this.hardwareSoftware = hardwareSoftware;
        this.subsystem = subsystem;
        this.plane = plane;
        this.integrationStandalone = integrationStandalone;
        this.integrationInterface = integrationInterface;
        this.function = function;
        this.helpLinks = helpLinks;
        this.installationInstructions = installationInstructions;
        this.testSteps = testSteps;
        this.leadEngineer = leadEngineer;
    }

    // field declarations and getters/setters omitted
}
Here is the parser that handles the constructor
public class ParseHandler {

    private static CellProcessor[] getProcessors() {
        final CellProcessor[] processors = new CellProcessor[] {
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
                new Optional(),
        };
        return processors;
    }

    public static BeanGen readWithCsvBeanReader(Path path) throws IOException {
        ICsvBeanReader beanReader = null;
        BeanGen projectBean = null;
        System.out.println("Processing File: " + path);
        try {
            beanReader = new CsvBeanReader(new FileReader(path.toString()), CsvPreference.STANDARD_PREFERENCE);
            // the header elements are used to map the values to the bean (names must match)
            final String[] header = beanReader.getHeader(true);
            final CellProcessor[] processors = getProcessors();
            if ((projectBean = beanReader.read(BeanGen.class, header, processors)) != null) {
                System.out.println(String.format("%s", projectBean.toString()));
            }
        } finally {
            if (beanReader != null) {
                beanReader.close();
            }
        }
        return projectBean;
    }
}

See the Super CSV documentation, section Reading with CsvBeanReader:
This relies on the fact that the column names in the header of the CSV file [...] match up exactly with the bean's field names, and the bean has the appropriate setters defined for each field.
If your header doesn't match (or there is no header), then you can simply define your own name mapping array.
You read the header and pass it to beanReader.read() as the second parameter. But according to the API reference, the second parameter is a String array containing the bean property names. So you should pass something like
new String[] { "name", "manufacturer", "model", "owner", "integrationTeam", ... }
as the second parameter. That way the first CSV column maps to the bean field name, the second column maps to the bean field manufacturer, and so on.
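As a minimal sketch (assuming the BeanGen fields above and that the CSV columns appear in this order), the read could look like this:
// Skip the real CSV header (the one containing spaces and slashes) and
// supply our own mapping from column position to BeanGen property name.
beanReader.getHeader(true);

final String[] nameMapping = new String[] {
        "name", "manufacturer", "model", "owner", "integrationTeam",
        "shipping", "hardwareSoftware", "subsystem", "plane",
        "integrationStandalone", "integrationInterface", "function",
        "helpLinks", "installationInstructions", "testSteps", "leadEngineer"
};

BeanGen projectBean = beanReader.read(BeanGen.class, nameMapping, getProcessors());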

Related

Corda LinearState: trying to transfer consumed state from one owner to the other

I am trying to transfer a state back and forth between two owners, but I always end up with a new state created with the same values. I intend to pass the state to another owner even if it is consumed, and I am trying to achieve this with a linear state. I have pasted the transfer flow below; it should use the same car state that was issued to transfer it across owners. It should be possible to transfer the same consumed car state back and forth, with the same state being consumed again and again. Is this possible in Corda? In theory, I am trying to transfer the car back and forth between two or more parties.
State
@ConstructorForDeserialization
public CarState(String carMake, String carModel, int carYear, double carMileAge, String carVIN, Party issuer, Party owner, UniqueIdentifier linearId) {
    this.carMake = carMake;
    this.carModel = carModel;
    this.carYear = carYear;
    this.carMileAge = carMileAge;
    this.carVIN = carVIN;
    this.issuer = issuer;
    this.owner = owner;
    this.linearId = linearId;
}
Contract
if (!(inputState.getLinearId().getExternalId().equals(outputState.getLinearId().getExternalId()))) {
    throw new IllegalArgumentException("UUID of input state and output state must be same");
}
Flow
@InitiatingFlow
@StartableByRPC
public class CarTransferFlowInitiator extends FlowLogic<String> {
// private final String carMake;
// private final String carModel;
// private final int carYear;
private final String carVin;
// private final double carMileage;
private final Party carOwner;
private final UniqueIdentifier linearId;
private int input;
public CarTransferFlowInitiator(String carVin,Party carOwner,UniqueIdentifier linearId){
this.carVin = carVin;
this.carOwner = carOwner;
this.linearId = linearId;
}
private final ProgressTracker.Step RETRIEVING_NOTARY = new ProgressTracker.Step("Retrieving Notary");
private final ProgressTracker.Step CREATE_TRANSACTION_INPUT= new ProgressTracker.Step("Creating Transaction Input");
private final ProgressTracker.Step CREATE_TRANSACTION_OUTPUT= new ProgressTracker.Step("Creating Transaction Output");
private final ProgressTracker.Step CREATE_TRANSACTION_BUILDER= new ProgressTracker.Step("Creating transaction Builder");
private final ProgressTracker.Step SIGN_TRANSACTION = new ProgressTracker.Step("Signing Transaction");
private final ProgressTracker.Step INITIATE_SESSION = new ProgressTracker.Step("Initiating session with counterparty");
private final ProgressTracker.Step FINALIZE_FLOW = new ProgressTracker.Step("Finalizing the flow");
private final ProgressTracker progressTracker = new ProgressTracker(
RETRIEVING_NOTARY,
CREATE_TRANSACTION_OUTPUT,
CREATE_TRANSACTION_BUILDER,
SIGN_TRANSACTION,
INITIATE_SESSION,
FINALIZE_FLOW
);
@Override
public ProgressTracker getProgressTracker() {
return progressTracker;
}
public StateAndRef<CarState> checkForCarStates() throws FlowException {
//QueryCriteria generalCriteria = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.UNCONSUMED);
QueryCriteria generalCriteria = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.ALL);
List<StateAndRef<CarState>> CarStates = getServiceHub().getVaultService().queryBy(CarState.class, generalCriteria).getStates();
boolean inputFound = false;
int t = CarStates.size();
input = 0;
for (int x = 0; x < t; x++) {
if (CarStates.get(x).getState().getData().getCarVIN().equals(carVin)) {
// if (CarStates.get(x).getState().getData().getLinearId().getExternalId().equals(linearId.getExternalId())) {
input = x;
inputFound = true;
}
}
if (inputFound) {
System.out.println("\n Input Found");
// System.out.println(CarStates.get(input).getState().getData().getCarMake());
// System.out.println(CarStates.get(input).getState().getData().getCarModel());
// System.out.println(CarStates.get(input).getState().getData().getCarYear());
// System.out.println(CarStates.get(input).getState().getData().getCarMileAge());
// System.out.println(CarStates.get(input).getState().getData().getCarVIN());
} else {
System.out.println("\n Input not found");
throw new FlowException();
}
return CarStates.get(input);
}
@Suspendable
public String call() throws FlowException {
//Retrieve the notary identity from the network map
progressTracker.setCurrentStep(RETRIEVING_NOTARY);
Party notary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);
//Create transaction components both input and output for this application
progressTracker.setCurrentStep(CREATE_TRANSACTION_OUTPUT);
StateAndRef<CarState> inputState = null;
inputState = checkForCarStates();
//Issuer is Toyota
//Owner is AutoSmart
Party issuer = inputState.getState().getData().getIssuer();
PublicKey issuerKey = issuer.getOwningKey();
//Create transaction components both input and output for this application
progressTracker.setCurrentStep(CREATE_TRANSACTION_OUTPUT);
//CarState outputState = new CarState(carMake,carModel,carYear,carMileage,carVin,issuer,carOwner);
String carMake = inputState.getState().getData().getCarMake();
String carModel = inputState.getState().getData().getCarModel();
int carYear = inputState.getState().getData().getCarYear();
double carMile = inputState.getState().getData().getCarMileAge();
String carVIN = inputState.getState().getData().getCarVIN();
Party carIssuer = inputState.getState().getData().getIssuer();
UniqueIdentifier carLinearId = inputState.getState().getData().getLinearId();
System.out.println(carLinearId);
CarState outputState = new CarState(carMake,carModel,carYear,carMile, carVin,carIssuer,carOwner,carLinearId);
List<PublicKey> requiresSigners = Arrays.asList(issuerKey,getOurIdentity().getOwningKey(),outputState.getOwner().getOwningKey());
// requiresSigners.add(outputState.getIssuer().getOwningKey());
// requiresSigners.add(outputState.getOwner().getOwningKey());
final Command<CarContract.Transfer> txCommand = new Command<>(
new CarContract.Transfer(),
requiresSigners
);
final TransactionBuilder txBuilder = new TransactionBuilder(notary)
.addInputState(inputState)
.addOutputState(outputState, CID)
.addCommand(txCommand);
// Create the transaction builder here and add compenents to it
progressTracker.setCurrentStep(CREATE_TRANSACTION_BUILDER);
//TransactionBuilder txB = new TransactionBuilder(notary);
// PublicKey issuerKey = getServiceHub().getMyInfo().getLegalIdentitiesAndCerts().get(0).getOwningKey();
// PublicKey ownerKey = carOwner.getOwningKey();
//List<PublicKey> requiredSigners = ImmutableList.of(issuerKey,ownerKey);
//ArrayList<PublicKey> requiredSigners = new ArrayList<PublicKey>();
//requiredSigners.add(issuerKey);
//requiredSigners.add(ownerKey);
// Command cmd = new Command(new CarContract.Register(), getOurIdentity().getOwningKey());
//txB.addOutputState(outputState, CID)
// .addCommand(cmd);
// Sign the transaction
progressTracker.setCurrentStep(SIGN_TRANSACTION);
final SignedTransaction signedTx = getServiceHub().signInitialTransaction(txBuilder);
// Create session with counterparty
progressTracker.setCurrentStep(INITIATE_SESSION);
FlowSession issuePartySession = initiateFlow((issuer));
FlowSession otherPartySession = initiateFlow(carOwner);
ArrayList<FlowSession> sessions = new ArrayList<>();
sessions.add(otherPartySession);
sessions.add(issuePartySession);
final SignedTransaction fullySignedTx = subFlow(
new CollectSignaturesFlow(signedTx, sessions, CollectSignaturesFlow.Companion.tracker()));
// SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(
// signedTx, Arrays.asList(otherPartySession), CollectSignaturesFlow.tracker()));
//Finalizing the transaction
progressTracker.setCurrentStep(FINALIZE_FLOW);
subFlow(new FinalityFlow(fullySignedTx,sessions));
return "Transfer Completed";
}
}
You need to create a second constructor for your linear state which accepts a linearId as an input parameter; you should mark that constructor with the annotation @ConstructorForDeserialization.
Otherwise, when your flow suspends (for any reason) and serializes your linear state, then later resumes and deserializes it, it will use your current constructor, which generates a random linearId; the flow will end up with a new/different state than yours (because it has a different linearId).
But when you create that second constructor and mark it with the annotation, the flow will use it to deserialize and recreate an identical state.
You can also use that constructor when you create the output state, instead of using the setter as you do now.
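A minimal sketch of that two-constructor pattern, assuming the CarState fields shown above (the first, issuing constructor is an assumption about how you currently create new cars):
// Constructor used when issuing a brand-new car: generates a fresh linearId.
public CarState(String carMake, String carModel, int carYear, double carMileAge,
                String carVIN, Party issuer, Party owner) {
    this(carMake, carModel, carYear, carMileAge, carVIN, issuer, owner, new UniqueIdentifier());
}

// Constructor used for deserialization and for transfers: keeps the existing linearId.
@ConstructorForDeserialization
public CarState(String carMake, String carModel, int carYear, double carMileAge,
                String carVIN, Party issuer, Party owner, UniqueIdentifier linearId) {
    this.carMake = carMake;
    this.carModel = carModel;
    // ... remaining assignments as in the state shown above
    this.linearId = linearId;
}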
Edit (after you added the flow code):
You cannot use a consumed state as an input; that's what's known as the double-spend problem. Imagine you had a US Dollar state and you tried to use the same dollar twice to buy things; that cannot happen. The notary will throw an exception if you try to use a consumed state as an input.
That's why when you query, you should query for UNCONSUMED instead of ALL.
Your query approach is neither correct nor efficient: imagine you had 100,000 cars; are you going to fetch all 100,000 and then loop through them until you find the car with the VIN that you want? First of all, you need pagination (Corda will throw an error if your result set returns more than 200 records and you are not using pagination); second, this will probably drain your Java heap by creating those 100,000 objects and crash your CorDapp.
Instead, you should create a custom schema for your car state, then use custom query criteria to query by the VIN number. Have a look at how the IOU example creates a custom schema for the IOU state (see here and here). Then you can use the custom schema in a VaultCustomQueryCriteria (read here).
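A rough sketch of such a query inside the flow's call() method, assuming a hypothetical mapped schema CarSchemaV1.PersistentCar with a carVIN column (the schema and field names are assumptions, not part of your code):
// Query only UNCONSUMED car states, filtered by VIN through the custom schema.
QueryCriteria unconsumed = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.UNCONSUMED);

FieldInfo vinField;
try {
    vinField = QueryCriteriaUtils.getField("carVIN", CarSchemaV1.PersistentCar.class);
} catch (NoSuchFieldException e) {
    throw new FlowException("carVIN attribute not found in CarSchemaV1.PersistentCar", e);
}
QueryCriteria byVin = new QueryCriteria.VaultCustomQueryCriteria(Builder.equal(vinField, carVin));

// Paginate so a large vault doesn't exceed Corda's default page size.
PageSpecification pageSpec = new PageSpecification(1, 200);
Vault.Page<CarState> page = getServiceHub().getVaultService()
        .queryBy(CarState.class, unconsumed.and(byVin), pageSpec);

if (page.getStates().isEmpty()) {
    throw new FlowException("No unconsumed CarState found for VIN " + carVin);
}
StateAndRef<CarState> inputState = page.getStates().get(0);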

How to validate CSV headers using Univocity routines?

I'm using the Univocity CSV parser with routines to iterate over Java beans. Is there a way to validate the CSV header? When I edit the CSV and add an invalid header, it just inserts null into the given bean without any error.
Model class:
public class Customer {

    @Format(formats = "yyyy-MM-dd")
    @Parsed(field = "C_DAY")
    private Date day;

    @Parsed(field = "C_ID")
    private Long id;

    @Parsed(field = "C_TYPE")
    private String type;

    @Format(formats = "yyyy-MM-dd")
    @Parsed(field = "C_ORIGIN_DATE")
    private Date originDate;

    @Format(formats = "yyyy-MM-dd")
    @Parsed(field = "C_REL_DATE")
    private Date relDate;

    @Parsed(field = "C_LEGAL_ID")
    private String legalId;

    @Parsed(field = "C_NAME")
    private String name;
}
Parser:
@Autowired
private CustomerDAO dao;

public void parse(File file) throws IOException, SQLException, CustomerValidationException, ParseException {
    CsvParserSettings parserSettings = new CsvParserSettings();
    parserSettings.getFormat().setLineSeparator("\n");
    parserSettings.setHeaderExtractionEnabled(false);
    CsvRoutines routines = new CsvRoutines(parserSettings);
    List<Customer> customers = new ArrayList<>();
    java.util.Date stamp = getTimestamp(file);
    dao.checkTimestampDate(stamp);
    for (Customer customer : routines.iterate(Customer.class, file, "UTF-8")) {
        validateFileDateWithFileName(stamp, customer.getDay());
        validateCustomer(customer);
        customers.add(customer);
    }
    dao.save(customers);
}
Author of the library here. The BeanListProcessor has a strictHeaderValidationEnabled property you can set to true to ensure all headers in your class exist in the input.
You just can't use the CsvRoutines in that case as that class implements convenience methods that use their own internal row processors, so yours will be ignored. Try this code:
CsvParserSettings parserSettings = new CsvParserSettings();
parserSettings.getFormat().setLineSeparator("\n");

final List<Customer> customers = new ArrayList<>();
final java.util.Date stamp = getTimestamp(file);
dao.checkTimestampDate(stamp);

parserSettings.setProcessor(new BeanProcessor<Customer>(Customer.class) {
    @Override
    public void beanProcessed(Customer customer, ParsingContext context) {
        validateFileDateWithFileName(stamp, customer.getDay());
        validateCustomer(customer);
        customers.add(customer);
    }
});

new CsvParser(parserSettings).parse(file, "UTF-8");
dao.save(customers);
Hope this helps.
Based on the answer by Jeronimo Backes.
In case you have the @Headers annotation on the bean, or know the exact headers, but still need setHeaderExtractionEnabled(true):
public <T> List<T> parse(File file, Class<T> beanType, char delimiter, Charset charset) {
    String[] headers = beanType.getDeclaredAnnotation(Headers.class).sequence(); // or other source
    CsvParserSettings parserSettings = Csv.parseRfc4180(); // or some other
    parserSettings.detectFormatAutomatically(delimiter);
    parserSettings.setHeaderExtractionEnabled(true);

    // initialize new processor (stateful, should not be reused! See implementation of parseAll)
    BeanListProcessor<T> processor = new BeanListProcessor<>(beanType);
    processor.setStrictHeaderValidationEnabled(true);
    parserSettings.setProcessor(processor);

    CsvParser csvParser = new CsvParser(parserSettings);
    csvParser.parse(file, charset);

    // header validation
    String[] headersParsed = processor.getHeaders();
    if (!Arrays.equals(headers, headersParsed)) {
        String message = String.format("Header validation failed. Expected: %s, but was: %s",
                Arrays.toString(headers), Arrays.toString(headersParsed));
        throw new DataProcessingException(message);
    }
    return processor.getBeans();
}
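Hypothetical usage of that helper, assuming Customer also carries a matching @Headers(sequence = {...}) annotation (file name and delimiter are just illustrations):
// java.nio.charset.StandardCharsets provides the Charset constant
List<Customer> customers = parse(new File("customers.csv"), Customer.class, ';', StandardCharsets.UTF_8);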

How to see a created order in Square POS

I am able to create an order using the Square (v2/locations/location_id/orders) API and I get an order ID back, but I am not able to get the order details. Also, how can I see this created order on the Square dashboard? Please help me.
I am using the method below to do it:
public CreateOrderResponse createOrder(String locationId, CreateOrderRequest body) throws ApiException {
    Object localVarPostBody = body;

    // verify the required parameter 'locationId' is set
    if (locationId == null) {
        throw new ApiException(400, "Missing the required parameter 'locationId' when calling createOrder");
    }
    // verify the required parameter 'body' is set
    if (body == null) {
        throw new ApiException(400, "Missing the required parameter 'body' when calling createOrder");
    }

    // create path and map variables
    String localVarPath = "/v2/locations/{location_id}/orders".replaceAll("\\{" + "location_id" + "\\}",
            apiClient.escapeString(locationId.toString()));

    // query params
    List<Pair> localVarQueryParams = new ArrayList<Pair>();
    Map<String, String> localVarHeaderParams = new HashMap<String, String>();
    Map<String, Object> localVarFormParams = new HashMap<String, Object>();

    final String[] localVarAccepts = { "application/json" };
    final String localVarAccept = apiClient.selectHeaderAccept(localVarAccepts);

    final String[] localVarContentTypes = { "application/json" };
    final String localVarContentType = apiClient.selectHeaderContentType(localVarContentTypes);

    String[] localVarAuthNames = new String[] { "oauth2" };

    GenericType<CreateOrderResponse> localVarReturnType = new GenericType<CreateOrderResponse>() {
    };
    CompleteResponse<CreateOrderResponse> completeResponse = (CompleteResponse<CreateOrderResponse>) apiClient
            .invokeAPI(localVarPath, "POST", localVarQueryParams, localVarPostBody, localVarHeaderParams,
                    localVarFormParams, localVarAccept, localVarContentType, localVarAuthNames,
                    localVarReturnType);
    return completeResponse.getData();
}
Thanks
The orders endpoint is only for creating itemized orders for e-commerce transactions. You won't see them anywhere until you charge them, and then you'll see the itemizations for the order in your dashboard with the transaction.

Using the AWS Java SDK, how can I terminate the CloudFormation stack of the current instance?

Using online documentation I came up with the following code to terminate the current EC2 instance:
public class Ec2Utility {

    static private final String LOCAL_META_DATA_ENDPOINT = "http://169.254.169.254/latest/meta-data/";
    static private final String LOCAL_INSTANCE_ID_SERVICE = "instance-id";

    static public void terminateMe() throws Exception {
        TerminateInstancesRequest terminateRequest = new TerminateInstancesRequest().withInstanceIds(getInstanceId());
        AmazonEC2 ec2 = new AmazonEC2Client();
        ec2.terminateInstances(terminateRequest);
    }

    static public String getInstanceId() throws Exception {
        // SimpleRestClient is an internal wrapper on top of an HTTP client.
        SimpleRestClient client = new SimpleRestClient(LOCAL_META_DATA_ENDPOINT);
        HttpResponse response = client.makeRequest(METHOD.GET, LOCAL_INSTANCE_ID_SERVICE);
        return IOUtils.toString(response.getEntity().getContent(), "UTF-8");
    }
}
My issue is that my EC2 instance is under an AutoScalingGroup, which is under a CloudFormation stack; that is because of my organisation's deployment standards, even though this single EC2 instance is all there is for this feature.
So, I want to terminate the entire CloudFormation stack from the Java SDK. Keep in mind that I don't have the CloudFormation stack name in advance, just as I didn't have the EC2 instance ID, so I will have to get it in code using API calls.
How can I do that, if it can be done at all?
You should be able to use the deleteStack method from the CloudFormation SDK:
DeleteStackRequest request = new DeleteStackRequest();
request.setStackName(<stack_name_to_be_deleted>);
AmazonCloudFormationClient client = new AmazonCloudFormationClient(<credentials>);
client.deleteStack(request);
If you don't have the stack name, you should be able to retrieve it from the tags of your instance:
DescribeInstancesRequest request = new DescribeInstancesRequest();
request.setInstanceIds(instancesList);
DescribeInstancesResult disresult = ec2.describeInstances(request);
List<Reservation> list = disresult.getReservations();
for (Reservation res : list) {
    List<Instance> instancelist = res.getInstances();
    for (Instance instance : instancelist) {
        List<Tag> tags = instance.getTags();
        for (Tag tag : tags) {
            if (tag.getKey().equals("aws:cloudformation:stack-name")) {
                tag.getValue(); // name of the stack
            }
        }
    }
}
In the end I achieved the desired behaviour using the following utility functions I wrote:
/**
 * Delete the CloudFormationStack with the given name.
 *
 * @param stackName
 * @throws Exception
 */
static public void deleteCloudFormationStack(String stackName) throws Exception {
    AmazonCloudFormationClient client = new AmazonCloudFormationClient();
    DeleteStackRequest deleteStackRequest = new DeleteStackRequest().withStackName(stackName);
    client.deleteStack(deleteStackRequest);
}

static public String getCloudFormationStackName() throws Exception {
    AmazonEC2 ec2 = new AmazonEC2Client();
    String instanceId = getInstanceId();
    List<Tag> tags = getEc2Tags(ec2, instanceId);
    for (Tag t : tags) {
        // TAG_KEY_STACK_NAME is the "aws:cloudformation:stack-name" tag key
        if (t.getKey().equalsIgnoreCase(TAG_KEY_STACK_NAME)) {
            return t.getValue();
        }
    }
    throw new Exception("Couldn't find stack name for instanceId:" + instanceId);
}

static private List<Tag> getEc2Tags(AmazonEC2 ec2, String instanceId) throws Exception {
    DescribeInstancesRequest describeInstancesRequest = new DescribeInstancesRequest().withInstanceIds(instanceId);
    DescribeInstancesResult describeInstances = ec2.describeInstances(describeInstancesRequest);
    List<Reservation> reservations = describeInstances.getReservations();
    if (reservations.isEmpty()) {
        throw new Exception("DescribeInstances didn't return a reservation for instanceId:" + instanceId);
    }
    List<Instance> instances = reservations.get(0).getInstances();
    if (instances.isEmpty()) {
        throw new Exception("DescribeInstances didn't return an instance for instanceId:" + instanceId);
    }
    return instances.get(0).getTags();
}
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
// Example of usage from the code:
deleteCloudFormationStack(getCloudFormationStackName());
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Tapestry generates input tags in pairs

I want to generate an HTML5-valid document, but I have a problem with forms in my Tapestry app. I am using Tapestry text fields like below:
<t:textfield t:id="specId" value="val" />
Tapestry generates html input element:
<input id="specId" name="specId" type="text"></input>
But the input element is not valid as a paired tag (with the end tag </input>), and the HTML validator complains: "Error: Stray end tag input."
Is there any way to generate input tags in self-closing form, like
<input .../> ?
You can override the MarkupWriterFactory service with your own MarkupModel that abbreviates HTML5 void elements instead of rendering an end tag.
public class Html5MarkupModel extends AbstractMarkupModel {

    private static final Set<String> VOID_ELEMENTS = new HashSet<String>(Arrays.asList(
            "area", "base", "br", "col", "command", "embed", "hr", "img", "input",
            "keygen", "link", "meta", "param", "source", "track", "wbr"
    ));

    public Html5MarkupModel(boolean useApostropheForAttributes) {
        super(useApostropheForAttributes);
    }

    public EndTagStyle getEndTagStyle(String element) {
        return VOID_ELEMENTS.contains(element) ? EndTagStyle.ABBREVIATE : EndTagStyle.REQUIRE;
    }

    public boolean isXML() {
        return false;
    }
}
public class Html5MarkupWriterFactory implements MarkupWriterFactory {

    private final PageContentTypeAnalyzer analyzer;
    private final RequestPageCache cache;

    private final MarkupModel htmlModel = new Html5MarkupModel(false);
    private final MarkupModel htmlPartialModel = new Html5MarkupModel(true);
    private final MarkupModel xmlModel = new XMLMarkupModel();
    private final MarkupModel xmlPartialModel = new XMLMarkupModel(true);

    public Html5MarkupWriterFactory(PageContentTypeAnalyzer analyzer, RequestPageCache cache) {
        this.analyzer = analyzer;
        this.cache = cache;
    }

    public MarkupWriter newMarkupWriter(ContentType contentType) {
        return newMarkupWriter(contentType, false);
    }

    public MarkupWriter newPartialMarkupWriter(ContentType contentType) {
        return newMarkupWriter(contentType, true);
    }

    public MarkupWriter newMarkupWriter(String pageName) {
        return newMarkupWriter(analyzer.findContentType(cache.get(pageName)));
    }

    private MarkupWriter newMarkupWriter(ContentType contentType, boolean partial) {
        boolean isHTML = contentType.getMimeType().equalsIgnoreCase("text/html");
        MarkupModel model = partial
                ? (isHTML ? htmlPartialModel : xmlPartialModel)
                : (isHTML ? htmlModel : xmlModel);
        // The charset parameter sets the encoding attribute of the XML declaration, if
        // not null and if using the XML model.
        return new MarkupWriterImpl(model, contentType.getCharset());
    }
}
And the service override contribution:
@Contribute(ServiceOverride.class)
public void contributeServiceOverrides(MappedConfiguration<Class, Object> configuration,
                                       ObjectLocator objectLocator) {
    // use proxy instead of real service instance
    // to prevent recursion on initialization cycle
    configuration.add(MarkupWriterFactory.class,
            objectLocator.proxy(MarkupWriterFactory.class, Html5MarkupWriterFactory.class));
}
