How do I log with a request ID in Quarkus Vert.x reactive?
I want to see the processing steps from request to response with the same ID in the log, even though each component runs on a different thread.
I'm afraid there's no built-in concept of a request ID, so you'll have to generate the IDs yourself. One solution is to use an AtomicLong instance to generate a request ID for each request.
Then, to store and access the ID, you basically have two options.
First option: you can store that ID in the request's context by having
@Inject
CurrentVertxRequest request;
(...)
request.getCurrent().put("requestId", id);
And then various components that produce logs can access the ID by
request.getCurrent().get("requestId");
and add that to the log message.
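Tying the pieces together, here is a minimal sketch of this first option using a JAX-RS request filter. The filter class is hypothetical, and the javax imports match older Quarkus versions (newer ones use jakarta):
import java.util.concurrent.atomic.AtomicLong;
import javax.inject.Inject;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;
import io.quarkus.vertx.http.runtime.CurrentVertxRequest;

// Hypothetical filter: generates an ID per request and stores it in the Vert.x context
@Provider
public class RequestIdFilter implements ContainerRequestFilter {

    private static final AtomicLong COUNTER = new AtomicLong();

    @Inject
    CurrentVertxRequest request;

    @Override
    public void filter(ContainerRequestContext ctx) {
        request.getCurrent().put("requestId", COUNTER.incrementAndGet());
    }
}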
Second option: if you want to avoid the mess of having to append the ID in each log message manually, you can add it to the Mapped Diagnostic Context (MDC). The problem with this is that the MDC context is not propagated by default, so to make sure that each thread sees the ID, you'll need a custom ThreadContextProvider like this:
import java.util.Map;
import org.eclipse.microprofile.context.spi.ThreadContextProvider;
import org.eclipse.microprofile.context.spi.ThreadContextSnapshot;
import org.slf4j.MDC;

public class MdcContextProvider implements ThreadContextProvider {

    @Override
    public ThreadContextSnapshot currentContext(Map<String, String> props) {
        // Capture the MDC of the thread that takes the snapshot...
        Map<String, String> propagate = MDC.getCopyOfContextMap();
        return () -> {
            // ...and install it on the thread that runs the task,
            // remembering the previous state so it can be restored
            Map<String, String> old = MDC.getCopyOfContextMap();
            applyContextMap(propagate);
            return () -> applyContextMap(old);
        };
    }

    @Override
    public ThreadContextSnapshot clearedContext(Map<String, String> props) {
        return () -> {
            Map<String, String> old = MDC.getCopyOfContextMap();
            MDC.clear();
            return () -> applyContextMap(old);
        };
    }

    @Override
    public String getThreadContextType() {
        return "MDC";
    }

    // MDC.getCopyOfContextMap() may return null, so guard before restoring
    private static void applyContextMap(Map<String, String> map) {
        if (map == null) {
            MDC.clear();
        } else {
            MDC.setContextMap(map);
        }
    }
}
and add a META-INF/services/org.eclipse.microprofile.context.spi.ThreadContextProvider file containing the fully qualified name of that class.
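For example, assuming the class lives in a com.example.logging package (the package is yours to choose), the file would contain the single line:
com.example.logging.MdcContextProvider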
Then, store the request ID in the MDC using
MDC.put("rid", requestId);
And change the format string of your logs (for example, the quarkus.log.console.format property) to contain a reference to it, which would be %X{rid}, to make sure that this value is added to each log message.
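For example, a sketch based on the default Quarkus console format (adjust to taste):
quarkus.log.console.format=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) rid=%X{rid} %s%e%n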
With this option you should probably also make sure that the MDC entry gets cleared when the request processing is done.
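For example, in whatever piece of code marks the start and end of request processing (the variable names are illustrative):
MDC.put("rid", String.valueOf(requestId));
try {
    // ... process the request ...
} finally {
    // prevent the ID from leaking onto pooled threads
    MDC.remove("rid");
}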
So this option is unfortunately much more complicated, but will potentially help keep your code cleaner, because you won't have to append the ID to each log.
After a couple of years of normal operation, the Activiti 5.17.0 API for retrieving tasks is not returning the latest tasks anymore.
The API invoked is a GET to /runtime/tasks?includeProcessVariables=true&size=600000&order=desc with basic authentication.
Nobody changed it, but it is just stuck at returning tasks from 10 days ago.
I checked the Activiti tables and they contain the records for the tasks I need to retrieve.
I also tried to clean up some old data from act_hi_taskinst and act_ru_task, supposing it was a matter of cardinality (maybe too many tasks), but nothing changed.
I also tried to increase the size parameter in the request, but nothing changed (I'm not reaching that limit).
What is going on?
--- Edit
It seems to be a matter of IDs. If I try to get the last 10 tasks ordered by create_time_ desc, only tasks up to ID 999907 are returned. The next ID is over 1,000,000 and I can see it in the database, but the API is not returning it.
I changed the ordering: ordering by id_ (which is a character varying column in the database) is counterintuitive, because the comparison is lexicographic rather than numeric. In fact, when ordering by id_ desc, tasks with IDs over 1,000,000 come AFTER tasks around 900,000.
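The string comparison can be reproduced in plain Java:
// Lexicographic comparison: '9' > '1', so "999907" sorts after "1000000"
System.out.println("999907".compareTo("1000000") > 0);  // prints true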
Setting the size to Integer.MAX_VALUE did not fix the problem, for some reason I don't understand (maybe the reason is inside Activiti's query-building code).
BTW I changed the ordering and used createTime desc. This way, the most recent tasks are returned regardless of their ID.
Here's my custom-tailored controller (to be improved, but working for my specific use case).
@RestController
@RequestMapping("/api/")
public class CustomRest extends TaskBaseResource {

    @Autowired
    TaskService taskService;

    /*
    @RequestMapping(method = RequestMethod.GET, path = "custom")
    public List<Task> retrieveAllTasks() {
        return taskService.createTaskQuery().includeProcessVariables().active().list();
    }
    */

    @RequestMapping(method = RequestMethod.GET, path = "custom")
    public DataResponse getTasks(@RequestParam Map<String, String> requestParams, HttpServletRequest httpRequest) {
        // Create a task query request
        TaskQueryRequest request = new TaskQueryRequest();

        // Populate filter parameters
        if (requestParams.containsKey("name")) {
            request.setName(requestParams.get("name"));
        }
        request.setIncludeProcessVariables(true);
        request.setActive(true);

        return getTasksFromQueryRequest(request, requestParams);
    }

    protected DataResponse getTasksFromQueryRequest(TaskQueryRequest request,
            Map<String, String> requestParams) {
        TaskQuery taskQuery = taskService.createTaskQuery();
        taskQuery.active();
        taskQuery.includeProcessVariables();

        // Map the sortable property names to the corresponding query properties
        HashMap<String, QueryProperty> properties = new HashMap<String, QueryProperty>();
        properties.put("id", TaskQueryProperty.TASK_ID);
        properties.put("name", TaskQueryProperty.NAME);
        properties.put("description", TaskQueryProperty.DESCRIPTION);
        properties.put("dueDate", TaskQueryProperty.DUE_DATE);
        properties.put("createTime", TaskQueryProperty.CREATE_TIME);
        properties.put("priority", TaskQueryProperty.PRIORITY);
        properties.put("executionId", TaskQueryProperty.EXECUTION_ID);
        properties.put("processInstanceId", TaskQueryProperty.PROCESS_INSTANCE_ID);
        properties.put("tenantId", TaskQueryProperty.TENANT_ID);

        request.setSize(Integer.MAX_VALUE);
        //request.setSize(10);
        // Note: the second setOrder call overrides the first; "createTime" is
        // effectively applied through the default sort passed to paginateList below
        request.setOrder("createTime");
        request.setOrder("desc");

        DataResponse paginatedList = new TaskPaginateList(restResponseFactory).paginateList(
                requestParams, request, taskQuery, "createTime", properties);
        return paginatedList;
    }
}
I have a facade interface where users can ask for information about, let's say, Engineers. That information should be transferred as JSON, for which we made a DTO. Now keep in mind that I have multiple datasources that can each provide an item to this list of DTOs.
So I believe right now that I can use the Decorator pattern by adding a handler for each datasource to myEngineerListDTO of type List<EngineerDTO>. By that I mean all the datasources share the same DTO.
The picture below shows that VerticalScrollbar and HorizontalScrollBar have different behaviours added, which means they add behaviour to the WindowDecorator interface.
My question: does my situation fit the Decorator pattern? Do I specifically need to add behaviour to use this pattern? And is there another pattern that fits my situation better? I have already considered the Chain-of-Responsibility pattern, but because I don't need to terminate my chain at any given moment, I thought the Decorator pattern might be better.
Edit:
My end result should be: a List<EngineersDTO> from all datasources. The reason I want to apply this pattern is so that I can easily add another datasource behind the rest of the "pipeline". This datasource, just like the others, will have an addEngineersDTOToList method.
To further illustrate how you can use the Chain-of-Responsibility pattern, I put together a small example. I believe you should be able to adapt this solution to suit the needs of your real-world problem.
Problem Space
We have an unknown set of user requests which contain the names of properties to be retrieved. There are multiple datasources which each have varying amounts of properties. We want to search through all possible data sources until all of the properties from the request have been discovered. Some data types and data sources might look like below (note I am using Lombok for brevity):
@lombok.Data
class FooBarData {
    private final String foo;
    private final String bar;
}

@lombok.Data
class FizzBuzzData {
    private final String fizz;
    private final String buzz;
}

class FooBarService {
    public FooBarData invoke() {
        System.out.println("This is an expensive FooBar call");
        return new FooBarData("FOO", "BAR");
    }
}

class FizzBuzzService {
    public FizzBuzzData invoke() {
        System.out.println("This is an expensive FizzBuzz call");
        return new FizzBuzzData("FIZZ", "BUZZ");
    }
}
Our end user might require multiple ways to resolve the data. The following could be a valid user input and expected response:
// Input
"foobar", "foo", "fizz"
// Output
{
    "foobar" : {
        "foo" : "FOO",
        "bar" : "BAR"
    },
    "foo" : "FOO",
    "fizz" : "FIZZ"
}
A basic interface and a simple concrete implementation for our property resolver might look like below:
interface PropertyResolver {
    Map<String, Object> resolve(List<String> properties);
}

class UnknownResolver implements PropertyResolver {
    @Override
    public Map<String, Object> resolve(List<String> properties) {
        Map<String, Object> result = new HashMap<>();
        for (String property : properties) {
            result.put(property, "Unknown");
        }
        return result;
    }
}
Solution Space
Rather than a typical "Decorator pattern", a better solution here may be the "Chain-of-Responsibility pattern". This pattern is similar to the decorator pattern; however, each link in the chain is allowed to either work on the item, ignore the item, or end the execution. This is helpful for deciding whether a call needs to be made, or for terminating the chain once the work for the request is complete. Another difference from the decorator pattern is that resolve will not be overridden by each of the concrete classes; our abstract class can call out to the subclass when required using abstract methods.
Back to the problem at hand... For each resolver we need two components: a way to fetch data from our remote service, and a way to extract all the required properties from the retrieved data. For fetching the data we can provide an abstract method. For extracting a property from the fetched data we can make a small interface and maintain a map of these extractors, seeing as multiple properties can be pulled from a single piece of data:
interface PropertyExtractor<Data> {
    Object extract(Data data);
}

abstract class PropertyResolverChain<Data> implements PropertyResolver {

    private final Map<String, PropertyExtractor<Data>> extractors = new HashMap<>();
    private final PropertyResolver successor;

    protected PropertyResolverChain(PropertyResolver successor) {
        this.successor = successor;
    }

    protected abstract Data getData();

    protected final void setBinding(String property, PropertyExtractor<Data> extractor) {
        extractors.put(property, extractor);
    }

    @Override
    public Map<String, Object> resolve(List<String> properties) {
        ...
    }
}
The basic idea of the resolve method is to first evaluate which properties can be fulfilled by this PropertyResolver instance. If there are eligible properties, we fetch the data using getData. For each eligible property we extract the property value and add it to a result map. Each property that cannot be resolved is passed to the successor to resolve. Once all properties are resolved, the chain of execution ends.
@Override
public Map<String, Object> resolve(List<String> properties) {
    Map<String, Object> result = new HashMap<>();

    // Properties this resolver knows how to extract
    List<String> eligibleProperties = new ArrayList<>(properties);
    eligibleProperties.retainAll(extractors.keySet());

    // Only make the (expensive) getData call if something here is eligible
    if (!eligibleProperties.isEmpty()) {
        Data data = getData();
        for (String property : eligibleProperties) {
            result.put(property, extractors.get(property).extract(data));
        }
    }

    // Delegate everything that could not be resolved to the successor
    List<String> remainingProperties = new ArrayList<>(properties);
    remainingProperties.removeAll(eligibleProperties);
    if (!remainingProperties.isEmpty()) {
        result.putAll(successor.resolve(remainingProperties));
    }
    return result;
}
Implementing Resolvers
When we go to implement a concrete class for PropertyResolverChain, we will need to implement the getData method and also bind PropertyExtractor instances. These bindings can act as an adapter for the data returned by each service. The data can follow the same structure as the data returned by the service, or have a custom schema. Using the FooBarService from earlier as an example, our class could be implemented like below (note that we can have many bindings which result in the same data being returned).
class FooBarResolver extends PropertyResolverChain<FooBarData> {

    private final FooBarService remoteService;

    FooBarResolver(PropertyResolver successor, FooBarService remoteService) {
        super(successor);
        this.remoteService = remoteService;

        // return the whole object
        setBinding("foobar", data -> data);
        // accept different spellings
        setBinding("foo", data -> data.getFoo());
        setBinding("bar", data -> data.getBar());
        setBinding("FOO", data -> data.getFoo());
        setBinding("__bar", data -> data.getBar());
        // create new properties altogether!!
        setBinding("barfoo", data -> data.getBar() + data.getFoo());
    }

    @Override
    protected FooBarData getData() {
        return remoteService.invoke();
    }
}
Example Usage
Putting it all together, we can invoke the resolver chain as shown below. We can observe that the expensive getData call is performed at most once per resolver, and only if one of its properties is requested, and that the user gets exactly the fields they require:
PropertyResolver resolver =
        new FizzBuzzResolver(
                new FooBarResolver(
                        new UnknownResolver(),
                        new FooBarService()),
                new FizzBuzzService());

Map<String, Object> result = resolver.resolve(Arrays.asList(
        "foobar", "foo", "__bar", "barfoo", "invalid", "fizz"));

ObjectMapper mapper = new ObjectMapper();
mapper.enable(SerializationFeature.INDENT_OUTPUT);
System.out.println(mapper
        .writerWithDefaultPrettyPrinter()
        .writeValueAsString(result));
Output
This is an expensive FizzBuzz call
This is an expensive FooBar call
{
    "foobar" : {
        "foo" : "FOO",
        "bar" : "BAR"
    },
    "__bar" : "BAR",
    "barfoo" : "BARFOO",
    "foo" : "FOO",
    "invalid" : "Unknown",
    "fizz" : "FIZZ"
}
I'm trying to implement the following logic with the help of Kafka Streams:
Listen to reference data from a topic, e.g. ref-data-topic, and create a global StateStore from it.
Listen to messages from another topic, data-topic, which must be validated against the ref data and then sent to either a success or an errors topic.
Here is example pseudocode:
class SomeProcessor implements Processor<String, String> {

    private KeyValueStore<String, String> refDataStore;

    @Override
    public void init(final ProcessorContext context) {
        refDataStore = (KeyValueStore) context.getStateStore("ref-data-store");
    }

    @Override
    public void process(String key, String value) {
        Object refData = refDataStore.get("some_key");
        // business logic here
        if (ok) {
            sendValueToTopic("success");
        } else {
            sendValueToTopic("errors");
        }
    }
}
Or what would be the canonical way to achieve such a desired behavior?
An alternative that I have in mind is to enrich the data within the Processor with validation info and then send everything to a single topic, making the client deal with e.g. a validationStatus field in the received message.
Although, I really would like a solution with two topics, because e.g. in that case I could use Kafka Connect to link the success topic directly with some datastore and deal with the error topic differently. With only one topic, again, I have no idea how to achieve this "store only successfully validated entities" use case.
Any ideas and suggestions?
If you use the Processor API, you can forward data to different processors by name:
class SomeProcessor implements Processor<String, String> {

    private KeyValueStore<String, String> refDataStore;
    private ProcessorContext processorContext;

    @Override
    public void init(final ProcessorContext context) {
        refDataStore = (KeyValueStore) context.getStateStore("ref-data-store");
        processorContext = context;
    }

    @Override
    public void process(String key, String value) {
        Object refData = refDataStore.get("some_key");
        // business logic here
        if (ok) {
            processorContext.forward(key, value, To.child("success"));
        } else {
            processorContext.forward(key, value, To.child("error"));
        }
    }
}
When you plug in your topology, you add two sink nodes, named "success" and "error", that write to the success and error topics respectively.
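For example, a minimal wiring sketch (topic and node names are assumptions):
Topology topology = new Topology();
topology.addSource("source", "data-topic");
// "validator" is the processor shown above
topology.addProcessor("validator", SomeProcessor::new, "source");
// the child names used in To.child(...) must match these sink node names
topology.addSink("success", "success-topic", "validator");
topology.addSink("error", "error-topic", "validator");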
Or you forward data to a single sink node and add the sink with a TopicNameExtractor instead of a hard coded topic name. (Requires version 2.0.)
If you use the DSL, you can use KStream#branch() to split the stream and pipe different data to different topics via KStream#to(...) (or you can use dynamic routing via KStream#to(TopicNameExtractor) -- requires version 2.0).
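A rough sketch of the branching approach (isValid is a placeholder for your validation logic; topic names are assumptions):
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream = builder.stream("data-topic");

KStream<String, String>[] branches = stream.branch(
        (key, value) -> isValid(value),   // records that pass validation
        (key, value) -> true);            // everything else
branches[0].to("success");
branches[1].to("error");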
I have a SessionListener on a CometD server. I want to pass data from a client to the server when the listener's sessionAdded() method is called.
The sessionAdded() method receives a ServerSession and a ServerMessage object. ServerSession has an attribute map that always seems to be empty.
I would like to get some unique client data to the server. This data should be accessed by the server when the sessionAdded() method is invoked.
The documentation talks about basic use of a SessionListener, but says nothing about attributes. All the javadocs for client and server say about them is that setAttribute() sets an attribute and getAttribute() gets it.
Is there a way to do this? Can the ServerSession's attribute map be used to transfer attributes from the client to the server, and if so, how?
Someone please advise...
The ServerSession attributes map is a map that lives on the server.
It is an opaque (from the CometD point of view) map that applications can populate with whatever they need.
If you want to send data from a client to the server, you can just put this additional data into the handshake message, and then retrieve it from the message when BayeuxServer.SessionListener.sessionAdded() is called.
The client looks like this:
BayeuxClient client = ...;
Map<String, Object> extraFields = new HashMap<>();
Map<String, Object> ext = new HashMap<>();
extraFields.put(Message.EXT_FIELD, ext);
Map<String, Object> extraData = new HashMap<>();
ext.put("com.acme", extraData);
// Populate the extra data before handshaking, so it is sent with the handshake message
extraData.put("token", "foobar");
client.handshake(extraFields);
This creates an extra data structure that in JSON looks like this:
{
    "ext": {
        "com.acme": {
            "token": "foobar"
        }
    }
}
It is always a very good practice to put your data under a namespace such as com.acme, so that you don't clash with CometD fields, nor with other extensions that you may use.
Put your fields inside extraData, like for example field token in the example above.
Then, on the server:
public class MySessionListener implements BayeuxServer.SessionListener {

    @Override
    public void sessionAdded(ServerSession session, ServerMessage message) {
        Map<String, Object> ext = message.getExt();
        if (ext != null) {
            Map<String, Object> extra = (Map<String, Object>)ext.get("com.acme");
            if (extra != null) {
                String token = (String)extra.get("token");
                session.setAttribute("token", token);
            }
        }
    }

    @Override
    public void sessionRemoved(ServerSession session, boolean timedout) {
    }
}
This listener puts into the session attributes data that has been sent by the client, in the example above the token field.
Then, elsewhere in the application, you can access the session attributes and use that data.
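For example (a sketch, assuming you have the session's id at hand):
// Look up the session by its id and read back what the listener stored
ServerSession session = bayeuxServer.getSession(sessionId);
String token = (String)session.getAttribute("token");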
I'm rather new to Play Framework so I hope this is intelligible.
How can I tell play to map a form element to an Object field in the Form's class?
I have a form with a select dropdown of names of objects from my ORM. The values of the dropdown items are the ID field of the ORM objects.
The form object on the Java side has a field with the type of the ORM object, and a setter taking a String and translating it to the object, but on form submission I only get the form error "Invalid Value", indicating the translation is not taking place at all.
My template has a form component:
@helper.select(
    createAccountForm("industry"),
    helper.options(industries)
)
Where industries is defined in the template constructor by industries: Map[String, String]
and consists of ID strings mapped to user-readable names.
My controller defines the class:
public static class CreateAccountForm {
    public String name;
    public Industry industry;

    public void setIndustry(String industryId) {
        this.industry = Industry.getIndustry(Integer.parseInt(industryId));
    }
}
EDIT: I was doing the setter in the class because this answer indicated to do so, but that didn't work.
EDIT2:
Turns out the setter method was totally not the way to go for this. After banging my head a bit trying to get an annotation working, I noticed Formatters.SimpleFormatter and tried that out. It worked, though at first I didn't understand why the extra block around it is necessary.
Global.java:
public class Global extends GlobalSettings {

    // Yes, this block is necessary: it is a Java instance initializer, which runs
    // when Play instantiates Global at startup (bare statements are not allowed
    // directly in a class body), registering the formatter before forms are bound.
    {
        Formatters.register(Industry.class, new Formatters.SimpleFormatter<Industry>() {
            @Override
            public Industry parse(String industryId, Locale locale) throws ParseException {
                return Industry.getIndustry(Integer.parseInt(industryId));
            }

            @Override
            public String print(Industry industry, Locale locale) {
                return industry.name;
            }
        });
    }
}
Play binds the form to an object for you when you use it as described in the documentation: https://github.com/playframework/Play20/wiki/JavaForms
So your controller should look like:
Form<models.Task> taskForm = form(models.Task.class).bindFromRequest();
if (taskForm.hasErrors()) {
    return badRequest(views.html.tasks.create.render(taskForm));
}
Task task = taskForm.get();
The Task object can have a Priority options list, and you use it in the form (view) like:
@select(editForm("priority.id"), options(Task.priorities), 'class -> "input-xlarge", '_label -> Messages("priority"), '_default -> Messages("make.choice"), 'showConstraints -> false, '_help -> "")
Notice that I am using priority.id to tell Play that a chosen value should be bound by a priority ID. And of course getting the priorities of the tasks:
public static Map<String, String> priorities() {
    LinkedHashMap<String, String> prioritiesList = new LinkedHashMap<String, String>();
    List<Priority> priorities = Priority.getPrioritiesForTask("task");
    for (Priority orderPrio : priorities) {
        prioritiesList.put(orderPrio.getId().toString(), orderPrio.getDescription());
    }
    return prioritiesList;
}