Tapestry - how to get another component's field - java

I have a Tapestry class which loads a variable:
public class Component1 {
    Object onActionFromEdit(MyClass object) {
        String param = object.getMyParam();
        // ...
    }
}
I would like to access the param value from another component.
What is the best way to get it?
Is there some common context where I could store variables?
I tried to search the documentation but had no luck so far.

Difficult to guess, but judging by the name, onActionFromEdit may be an event handler for an EventLink, where the parameter is passed in the HTTP GET request as a query param.
If that's the case there are several ways of sharing event context with other components/parts of the app:
Component parameters - they are actually bi-directional bindings: assigning the parameter field from the component will write the value to wherever the parameter was bound. Of course the value will only be available in the scope of the current request unless either the fields or the associated bindings were annotated with @Persist.
class Component1 {
    @Parameter
    private MyClass param1;

    void onActionFromEdit(MyClass object) {
        param1 = object; // this will update the value of the binding
                         // where this parameter is set
    }
}
If you want to share state with a parent component, you could also @InjectComponent Component1 component1 in the parent and access its state directly as with regular Java objects, e.g. component1.getState1(); of course this has to be done after the event handler has assigned the value from the request.
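For illustration, here is a minimal sketch of such a parent, assuming Tapestry 5; ParentComponent and the onSomeEvent handler name are made up, and getState1() follows the example above:
import org.apache.tapestry5.annotations.InjectComponent;

public class ParentComponent {

    @InjectComponent
    private Component1 component1;

    // Runs after Component1's event handler has assigned the value in this request
    Object onSomeEvent() {
        MyClass shared = component1.getState1();
        // ... use the shared state
        return null;
    }
}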
Environmental services. This is a per-request service; it's often used by parent components to push state to child/nested components during rendering/event handling, e.g. t:Form and t:TextField.
It's always possible to create a dedicated per-thread service for a concrete use case and @Inject it where needed. The service may hold shared state in its own fields and will be discarded at the end of the request. The service may also register a cleanup listener in case you need custom logic at the end of the request.
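Here is a minimal sketch of such a per-thread service, assuming Tapestry 5 IoC; the EditContext interface, its implementation and the module method are made up for illustration (imports: org.apache.tapestry5.ioc.ServiceBinder, org.apache.tapestry5.ioc.ScopeConstants):
// Shared-state holder, one instance per request thread
public interface EditContext {
    MyClass getSelected();
    void setSelected(MyClass selected);
}

public class EditContextImpl implements EditContext {
    private MyClass selected;
    public MyClass getSelected() { return selected; }
    public void setSelected(MyClass selected) { this.selected = selected; }
}

// In your IoC module class (e.g. AppModule):
public static void bind(ServiceBinder binder) {
    // PERTHREAD scope: the instance lives only for the current request thread and is then discarded
    binder.bind(EditContext.class, EditContextImpl.class).scope(ScopeConstants.PERTHREAD);
}
Both components can then simply @Inject EditContext and read/write the shared state within the same request.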
Again, it's difficult to tell why you need this or what your use case is, but Tapestry pages support activation contexts. Each event link will have this context transparently added, so each time the event link is called, the containing page will be activated with the stored context, and it's then possible to publish this state to components, e.g. using regular component parameters or one of the methods mentioned above. See the JumpStart examples for onActivate/onPassivate.
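For example, a minimal page using an activation context, assuming Tapestry 5; EditPage is a made-up page name:
public class EditPage {

    private Long selectedId; // carried in the page URL, e.g. /editpage/42

    // Called when the page is activated with a context value
    void onActivate(Long id) {
        this.selectedId = id;
    }

    // Called when Tapestry generates links to this page, so the context survives across requests
    Long onPassivate() {
        return selectedId;
    }
}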
It's also worth mentioning @ActivationRequestParameter; you can find more usage examples in JumpStart.
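A tiny sketch of that annotation, assuming Tapestry 5.2+; the page, field and query-parameter names are illustrative:
import org.apache.tapestry5.annotations.ActivationRequestParameter;

public class SearchPage {

    // Bound both ways to the "term" query parameter of URLs generated for this page
    @ActivationRequestParameter("term")
    private String term;
}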
There's also Session Storage if you need to capture data and share it with other parts of the app across multiple requests, but that sounds like too much for your question.
There are other, non-Tapestry ways of sharing data that also work in Tapestry apps, e.g. request attributes, query parameters, etc., but you normally don't need them.

I think you are looking for the ComponentResources class, which will allow you to do the following:
public class Component1 {
    @Inject
    private ComponentResources resources;

    Object onActionFromEdit() {
        MyComponent comp = (MyComponent) resources.getEmbeddedComponent("ComponentId");
        String param = comp.getMyParam();
        // ...
        return null;
    }
}
If the component is not embedded but is part of a sibling, you can also use resources.getContainerResources().getEmbeddedComponent("ComponentId"), or even fetch it from the page like so: resources.getPage().getComponentResources().getEmbeddedComponent("ComponentId")

Related

Dependency injection of IHttpContextAccessor vs passing parameter up the method chain

Our application calls many external APIs which take a session token of the current user as input. So what we currently do is, in a controller, get the session token for the user and pass it into a service, which in turn might call another service or some API client. To give an idea, we end up with something like this (the example is .NET but something similar is, I think, possible in Java):
public IActionResult DoSomething(string something)
{
    this.someService.DoSomethingForUser(this.HttpContext.SessionToken, something);
    return View();
}
And then we have
public class SomeService
{
    private readonly IApiClient apiClient;

    public SomeService(IApiClient apiClient)
    {
        this.apiClient = apiClient;
    }

    public void DoSomethingForUser(string sessionToken, string something)
    {
        this.apiClient.DoSomethingForUser(sessionToken, something);
    }
}
It can also happen that in SomeService another service is injected which in turn calls the IApiClient instead of SomeService calling IApiClient directly, basically adding another "layer".
We had a discussion with the team about whether it wouldn't be better, instead of passing the session token around, to inject it using DI, so you get something like this:
public IActionResult DoSomething(string something)
{
    this.someService.DoSomethingForUser(something);
    return View();
}
And then we have
public class SomeService
{
    private readonly IUserService userService;
    private readonly IApiClient apiClient;

    public SomeService(IUserService userService, IApiClient apiClient)
    {
        this.userService = userService;
        this.apiClient = apiClient;
    }

    public void DoSomethingForUser(string something)
    {
        this.apiClient.DoSomethingForUser(userService.SessionToken, something);
    }
}
The IUserService would have an IHttpContextAccessor injected:
public class UserService : IUserService
{
    private readonly IHttpContextAccessor httpContextAccessor;

    public UserService(IHttpContextAccessor httpContextAccessor)
    {
        this.httpContextAccessor = httpContextAccessor;
    }

    public string SessionToken => httpContextAccessor.HttpContext.SessionToken;
}
The benefits of this pattern are, I think, pretty clear. Especially with many services, it keeps the code "cleaner" and you end up with less boilerplate code for passing a token around.
Still, I don't like it. To me the downsides of this pattern outweigh its benefits:
I like that passing the token in the methods is explicit. It is clear that the service needs some sort of authentication token for it to function. I'm not sure if you can call it a side effect, but the fact that a session token is magically injected three layers deep is impossible to tell just by reading the code.
Unit testing is a bit more tedious if you have to mock the IUserService.
You run into problems when calling this from another thread, e.g. when SomeService is invoked on a background thread. Although these problems can be mitigated by injecting another concrete type of IUserService which gets the token from somewhere else, it feels like a chore.
To me it strongly feels like an anti-pattern, but apart from the arguments above it is mostly a feeling. There was a lot of discussion and not everybody was convinced that it was a bad idea. Therefore, my question is: is it an anti-pattern, or is it perfectly valid? What are some strong arguments for and against it, hopefully so that there can be little debate that this pattern is either perfectly valid or something to avoid.
I would say the main point is to enable your desired separation of concerns. I think it is a good question if expressed in those terms. As Kit says, different people may prefer different solutions.
REQUEST SCOPED OBJECTS
These occur quite naturally in APIs. Consider the following example, where a UI calls an Orders API, then the Orders API forwards the JWT to an upstream Billing API. A unique Request ID is also sent, in case the flow experiences a temporary problem. If the flow is retried, the Request ID can be used by APIs to prevent data duplication. Yet business logic should not need to know about either the Request ID or the JWT.
BUSINESS LOGIC CLASS DESIGN
I would start by designing my logic classes with my desired inputs, then work out the DI later. In my example the OrderService class might use claims to get the user identity and also for authorization. But I would not want it to know about HTTP level concerns:
public class OrderService
{
    private readonly IBillingApiClient billingClient;
    private readonly ClaimsPrincipal user;

    public OrderService(IBillingApiClient billingClient, ClaimsPrincipal user)
    {
        this.billingClient = billingClient;
        this.user = user;
    }

    public async Task CreateOrder(OrderInput data)
    {
        this.Authorize();

        var order = this.BuildOrder(data); // map the input to a domain order (helper not shown)
        await this.billingClient.CreateInvoice(order);
    }
}
DI SETUP
To enable my preferred business logic, I would write a little DI plumbing, so that I could inject request scoped dependencies in my preferred way. First, when the app starts, I would create a small middleware class. This will run early in the HTTP request pipeline:
private void ConfigureApiMiddleware(IApplicationBuilder api)
{
    api.UseMiddleware<ClientContextMiddleware>();
}
In the middleware class I would then create a ClientContext object from runtime data. The OrderService class will run later, after next() is called:
public class ClientContextMiddleware
{
    private readonly RequestDelegate next;

    public ClientContextMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var jwt = ReadJwt(context.Request);             // helper, not shown
        var requestId = ReadRequestId(context.Request); // helper, not shown

        var holder = context.RequestServices.GetService<ClientContextHolder>();
        holder.ClientContext = new ClientContext(jwt, requestId);

        await this.next(context);
    }
}
In my DI composition at application startup I would express that the API client should be created when it is first referenced. In the HTTP request pipeline, the OrderService request scoped object will be constructed after the middleware has run. The below lambda will then be invoked:
private void RegisterDependencies(IServiceCollection services)
{
    services.AddScoped<IApiClient>(ctx =>
    {
        var holder = ctx.GetService<ClientContextHolder>();
        return new ApiClient(holder.ClientContext);
    });

    services.AddScoped<ClientContextHolder>();
}
The holder object is just due to a technology limitation. The MS stack does not allow you to create new request scoped injectable objects at runtime, so you have to update an existing one. In a previous .NET tech stack, the concept of child container per request was made available to developers, so the holder object was not needed.
ASYNC AWAIT
Request scoped objects are stored against the HTTP request object, which is the correct behaviour when using async await. The current thread ID may switch, e.g. from 4 to 6, after the call to the Billing API.
If the OrderService class has a transient scope, it could get recreated when the flow resumes on thread 6. If this is the case, then resolution will continue to work.
SUMMARY
Designing inputs first, then writing some support code if needed is a good approach I think, and it is also useful to know the DI techniques. Personally I think natural request scoped objects that need to be created at runtime should be usable in DI. Some people may prefer a different approach though.
In .NET, the area where I have the most expertise, this is not an anti-pattern; on the contrary, it is the model many adopt. But it is not a model I would follow, for the following reasons:
It is not clear to those who read and use the code where the token comes from, which goes against clean code.
You load important information into a place that is frequently accessed by the framework (in the case of .NET Core).
Your classes end up referencing a large object that carries a lot of unnecessary information, when you could have created a leaner model that costs less memory and allocation time; I say this because the IHttpContextAccessor carries all the information relevant to your request.
Here is how I would take care of readability (clean code) and improve performance:
I would create a middleware or filter in my MVC flow where I would do the authentication part and create a class like:
public class TokenAuthenticationValues
{
    public string TokenClient { get; set; }
    public string TokenValue { get; set; }
}
Of course my class is just an example, but in my middleware I would load its token values after calling the necessary APIs (this model needs an interface, and it needs to be registered as .AddScoped() in the case of .NET).
That way I would use it in my methods only by injecting ITokenAuthenticationValues in the constructor, and I would have clear, clean information loaded in memory for the entire request.
If the token needs to change in the middle of the request, any class can access it and change its value.
I would have less unused memory allocated in my classes, since unlike the IHttpContextAccessor contract, ITokenAuthenticationValues only carries the relevant information.
Hope this helps

Is the model or a service responsible for defining the default TTL to a new instance of the model?

Here is the thing: I want to add a TTL to my Dynamo tables. It has a default Duration, but I want it to be configurable via properties, so that each Spring profile has its own configuration.
public Long getTtl() {
    return ttl;
}

public void setTtl(Long ttl) {
    this.ttl = ttl;
}
And to populate it correctly I will need to do something like this:
entity.setTtl(Instant.now().plus(Duration.from(defaultTTL)).getEpochSecond());
This defaultTTL I would load from an @Value or something similar.
But my question is where to put this logic. My first instinct was to set it in the default constructor, but I don't like loading Spring values there.
Following "code that changes together belongs together".
Am I wrong?
I defined the DEFAULT_TTL as a constant in the service responsible for the CRUD operations and set the TTL in the model's constructor.
It looks good.
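For reference, a minimal sketch of that arrangement, assuming Spring; the property name myapp.ttl-days and the MyEntity/MyEntityService names are illustrative, and whether you call setTtl from the service or from the model's constructor is exactly the design choice discussed above:
import java.time.Duration;
import java.time.Instant;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

@Service
public class MyEntityService {

    private final Duration defaultTtl;

    // Each Spring profile can override myapp.ttl-days in its own properties file
    public MyEntityService(@Value("${myapp.ttl-days:30}") long ttlDays) {
        this.defaultTtl = Duration.ofDays(ttlDays);
    }

    public MyEntity create(MyEntity entity) {
        // DynamoDB TTL expects epoch seconds
        entity.setTtl(Instant.now().plus(defaultTtl).getEpochSecond());
        // ... save via the Dynamo mapper/repository
        return entity;
    }
}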

Creating JsonLd + Hydra based Generic Client API in java. Is there any projects exist for reference?

I am creating a Client API in Java using: the Apache Jena framework + Hydra (for the hypermedia-driven part) + my private vocab, similar to Markus Lanthaler's Event API vocab, instead of schema.org (for the ontology/vocabulary part).
Section 1 :
After looking at the Markus Lanthaler EventDemo repo and hydra-java, I found that they create classes for each hydra:Class, which can break the client in the future. For example:
A Person class (Person.java)
public class Person
{
    String name;
}
But a future requirement may turn name into a class of its own, e.g.:
public class Name
{
    String firstName;
    String lastName;
}
So to fulfill this requirement I would have to update the Person class like this:
public class Person
{
    Name name;
}
Question 1:
Is my understanding of this section correct? If yes, what is the way to deal with this part?
Section 2:
To avoid the above problem I created a GenericResource class (GenericResource.java):
public class GenericResource
{
    private Model model;

    public void addProperty(String propertyName, Object propertyValue)
    {
        // The caller passes only the plain name, e.g. "name", and I map it to "myvocab:name"
        propertyName = "myvocab:" + propertyName;
        // Some logic to add propertyName and propertyValue to the model
    }

    public GenericResource retrieveProperty(String propertyName)
    {
        propertyName = "myvocab:" + propertyName;
        // Some logic to query propertyName from this object's model, add it to a new
        // GenericResource object and return it
        return null; // placeholder
    }

    public GenericResource performAction(String actionName, String postData)
    {
        // Some logic to make the HTTP call and return the response
        return null; // placeholder
    }
}
But again I am stuck on a couple of problems:
Problem 1: It is not necessary that every propertyName is mapped to myvocab:propertyName. Some may be mapped to some other vocab eg: hydra:propertyName, schema:propertyName, rdfs:propertyName, newVocab:propertyName, etc.
Problem 2: How to validate whether this propertyName belongs to this class ?
Suggestion: Put a type field/variable in the GenericResource class, and then check supportedProperty in the vocab corresponding to that class. For more clarity, assume the above Person class is also defined in the vocab with supportedProperty: [name, age, etc.]. So my GenericResource has type "Person", and at the time of addProperty or any other operation I will check in the vocab whether that property is in the supportedProperty list, or in the supportedOperation list in the case of performAction().
Is this the correct way? Any other suggestions are most welcome.
Question 1: Is my understanding of this section correct? If yes, what is the way to deal with this part?
Yes, that seems to be correct. Just because hydra-java decided to create classes doesn't mean you have to do the same in your implementation, though. I would rather write a mapper and annotate an internal class that can then stay stable (you need to update the mapping instead). Your GenericResource approach also looks good, by the way.
Problem 1: It is not necessary that every propertyName is mapped to
myvocab:propertyName. Some may be mapped to some other vocab eg:
hydra:propertyName, schema:propertyName, rdfs:propertyName,
newVocab:propertyName, etc.
Why don't you store and access the properties with full URLs, i.e., including the vocab? You can of course implement some convenience methods to simplify working with your own vocab.
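A minimal sketch of working with full property IRIs in Jena (assuming Jena 3 package names; the namespace and resource URIs below are illustrative):
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class FullIriDemo {

    private static final String MYVOCAB = "http://example.org/myvocab#";

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("myvocab", MYVOCAB); // prefixes only affect serialization

        Resource person = model.createResource("http://example.org/people/1");
        Property name = model.createProperty(MYVOCAB + "name"); // full IRI, no prefix ambiguity
        person.addProperty(name, "Alice");

        // Reading back uses the same full IRI, regardless of declared prefixes
        System.out.println(person.getProperty(name).getString());
    }
}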
Problem 2: How to validate whether this propertyName belongs to this
class
Suggestion: Put type field/variable in GenericResource class
JSON-LD's @type in node objects (not in @value objects) corresponds to rdf:type. So simply add it as you would every other property.
And then check supportedProperty in vocab corresponding to that class.
Please keep in mind that supportedProperty only tells you which properties are known to be supported. It doesn't tell you which aren't. In other words, it is valid to have properties other than the ones listed as supportedProperty on an object/resource.
Ad Q1:
For the flexibility you want, the client has to be prepared for semantic and structural changes.
In HTML that is possible. The server can change the structure of an HTML form in the way you outlined, by having firstName and lastName fields rather than just a name field. The client does not break; rather, it adjusts its UI, following the new semantics. The trick is that the UI is generated, not fixed.
A client which tries to unmarshal the incoming message into a fixed representation, such as a Java bean, is out of luck, and I do not think there is any solution how you could deserialize into a Java bean and survive a change like yours.
If you do not try to deserialize, but stick to reading and processing the incoming message into a more flexible representation, then you can achieve the kind of evolvability you're after. The client must be able to handle the flexible representation accordingly. It could generate UIs rather than binding data to fixed markup, which means, it makes no assumptions about the semantics and structure of the data. If the client absolutely has to know what a data element means, then the server cannot change the related semantics, it can only add new items with the new semantics while keeping the old ones around.
If there were a way how a server could hand out a new structure with a code-on-demand adapter for existing clients, then the server would gain a lot of evolvability. But I am not aware of any such solutions yet.
Ad Q2:
If your goal is to read an incoming JSON-LD response into a Jena Model on the client side, please see https://jena.apache.org/documentation/io/rdf-input.html
Model model = ModelFactory.createDefaultModel();
String base = null;
model.read(inputStream, base, "JSON-LD");
Thus your client will not break in the sense that it cannot read the incoming response. I think that is what your GenericResource achieves, too. But you could use Jena directly on the client side. Basically, you would avoid unmarshalling into a fixed type.

Accessing Magnolia TemplatingFunctions from inside a controller

Is it possible to serve different experiences depending on whether the user is in edit mode? I've noticed that the following method exists; however it is not static:
info.magnolia.templating.functions.TemplatingFunctions.isEditMode()
Is there a way to access the isEditMode() method from inside a controller? Is an instance of it defined somewhere which can be accessed? I imagine that creating a new instance of the TemplatingFunctions class won't help ...
I've looked at using @Inject; however, I keep getting issues with the injection of all parameters.
@Inject
public ModelAndView renderView(Model model, Node node, TemplatingFunctions templatingFunctions) throws RepositoryException {
    if (templatingFunctions.isEditMode()) {
        // ...
    }
    return null;
}
When I checked what that method does, I found out that it's a combination of two calls:
Components.getComponent(ServerConfiguration.class).isAdmin()
and
aggregationStateProvider.get().isPreviewMode()
It seems that you should be injecting a provider to find out whether the user is in preview mode:
Provider<AggregationState> aggregationStateProvider
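A minimal sketch of that injection, assuming Magnolia 5 and javax.inject; the controller class name and return values are made up:
import javax.inject.Inject;
import javax.inject.Provider;

import info.magnolia.cms.core.AggregationState;

public class MyComponentController {

    private final Provider<AggregationState> aggregationStateProvider;

    @Inject
    public MyComponentController(Provider<AggregationState> aggregationStateProvider) {
        this.aggregationStateProvider = aggregationStateProvider;
    }

    public String render() {
        // Edit mode additionally requires the author-instance check via ServerConfiguration (see above)
        if (!aggregationStateProvider.get().isPreviewMode()) {
            return "editModeView";
        }
        return "previewView";
    }
}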
As a general remark, you can tell whether a component is injectable from the related module's configuration, which lives under myModule/src/main/resources/META-INF/mymodule.xml. If a component is listed there, then it is injectable in other classes. For instance, you should have no problem injecting TemplatingFunctions because it is indeed defined as:
<component>
  <type>info.magnolia.templating.functions.TemplatingFunctions</type>
  <implementation>info.magnolia.templating.functions.TemplatingFunctions</implementation>
  <scope>singleton</scope>
</component>
Further reading can be found at https://documentation.magnolia-cms.com/display/DOCS/Dependency+injection+and+inversion+of+control
Hope this helps,

JSF: navigation

I have to warn you: the question may be rather silly, but I can't seem to wrap my head around it right now.
I have two managed beans, let's say A and B:
class A
{
    private Date d8; // ...getters & setters

    public String search()
    {
        // search by d8
    }
}

class B
{
    private Date d9; // ...getters & setters

    public String insert()
    {
        // insert a new item for date d9
    }
}
and then I have two JSP pages, pageA.jsp (the search page) and pageB.jsp (the input page).
What I would like to do is place a commandButton in pageB that opens the search page pageA, passing the parameter d9 somehow, or navigate to pageA directly after b.insert(). In short, I would like to show the search result right after the insertion.
Maybe it's just that I can't see the clear, simple solution, but I'd like to know what the best practice might be here, also...
I thought of these possible solutions:
including A in B and linking the command button with b.a.search
passing d9 as a hiddenInput and adding a new method searchFromB in A (ugly!)
collapsing the two beans into one
JSF 1.1/1.2 raw doesn't provide an easy way to do this. Seam/Spring both have ways around this and there are a couple of things you can do. JSF 2 should also have solutions to this once it is released.
Probably the easiest and most expedient would be to collapse the two beans into one and make it session scoped. The worry, of course, is that this bean will not get removed and stay in session until the session times out. Yay Memory leaks!
The other solution would be to pass the date on as a GET parameter. For instance, your action method could call
FacesContext.getCurrentInstance().getExternalContext().redirect("pageB?d9=" + convertDateToLong(d9));
and then get the parameter on the other side.
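For instance, a minimal sketch of the receiving side, assuming JSF 1.x; it reverses the hypothetical convertDateToLong helper above:
// e.g. in the target bean's action method or constructor
FacesContext fc = FacesContext.getCurrentInstance();
String raw = fc.getExternalContext().getRequestParameterMap().get("d9");
Date d9 = (raw != null) ? new Date(Long.parseLong(raw)) : null;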
You should configure the navigation flow in faces-config.xml. In the ideal scenario you would return a "status" string which decides the flow. Read more at the following links:
http://www.horstmann.com/corejsf/faces-config.html
http://publib.boulder.ibm.com/infocenter/rtnlhelp/v6r0m0/index.jsp?topic=/com.businessobjects.integration.eclipse.doc.devtools/developer/JSF_Walkthrough8.html
As far as passing values from one page to another is concerned, you can use backing beans. More about backing beans here:
http://www.netbeans.org/kb/articles/jAstrologer-intro.html
http://www.coderanch.com/t/214065/JSF/java/backing-beans-vs-managed-beans
Hope I have understood and answered your question correctly.
A way to share values between beans:
// Helper for resolving an EL expression programmatically (the method name is arbitrary)
public Object resolveExpression(String expression) {
    FacesContext facesContext = FacesContext.getCurrentInstance();
    Application app = facesContext.getApplication();
    ExpressionFactory elFactory = app.getExpressionFactory();
    ELContext elContext = facesContext.getELContext();
    ValueExpression valueExp = elFactory.createValueExpression(elContext, expression, Object.class);
    return valueExp.getValue(elContext);
}
In the above code, "expression" would be something like #{xyzBean.beanProperty}.
Since JSF uses singleton instances, you should be able to access the values from other beans. If you find more details on this technique, I am sure you'll get what you are looking for.
Add a command link/button whose action attribute references B's insert method:
<h:commandLink action="#{b.insert}" value="insert"/>
In B's insert method, put d9 into the request map, then return an arbitrary string from the insert method:
FacesContext fc = FacesContext.getCurrentInstance();
fc.getExternalContext().getRequestMap().put("d9", d9);
Then go to faces-config.xml and add a navigation rule from B to A with "from-outcome" set to the arbitrary string you returned from the insert method. Don't add a redirect tag to the navigation case, as it would discard the request coming from B and the parameter you added (d9) would be lost.
<from-outcome>return string of insert method</from-outcome>
<to-view-id>address of A</to-view-id>
Then you can get "d9" in class A by fetching it from the request map, in its constructor or wherever it is more appropriate (getters). You might put it into session scope or into a hidden field if you want to keep track of it later.
In class A, when the page is navigated to, A will be initialized as it is referenced:
FacesContext fc = FacesContext.getCurrentInstance();
Date d9 = (Date) fc.getExternalContext().getRequestMap().get("d9");
Sorry, I can't give full code as I have no IDE here, only the internet machine at work, so I could not give more details.
In my opinion, the simplest way is the third option: have both query and insert methods in the same class. Then you can do something like this:
public String query() {
    // ...
}

public String insert() {
    // insert
    return query();
}
If your classes are managed beans, you can load class A from class B and call A.query() at the end of your insert method. Class A can also have the
<managed-bean-scope>session</managed-bean-scope>
parameter in faces-config.xml and it wouldn't be instantiated again when loaded.
