Spring Cloud Dataflow file starter modification - java

I'm in the process of modifying this starter to suit my requirements:
https://github.com/spring-cloud-stream-app-starters/file/blob/master/spring-cloud-starter-stream-source-file/src/main/java/org/springframework/cloud/stream/app/file/source/FileSourceConfiguration.java
I'm trying to tap into the actual file that appears in the folder the app is polling, so that I can persist metadata about the file (and make certain decisions based on it) before it is passed on to the output channel. For example, looking at the test ContentPayloadTests.testSimpleFile(), I want to be able to access the test.txt file before a Message is generated and posted on the source.output() channel.
Any help is appreciated! Thanks!

The solution was to implement the preSend method of the ChannelInterceptor interface:
https://docs.spring.io/spring-integration/archive/1.0.0.M6/reference/html/ch02s05.html
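A minimal sketch of such an interceptor, assuming the source runs with file.consumer.mode=ref so the payload is a java.io.File (the class name and the persistMetadata hook are illustrative, not part of the starter):

```java
import java.io.File;

import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;

public class FileMetadataInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        Object payload = message.getPayload();
        if (payload instanceof File) {
            File file = (File) payload;
            // Persist whatever metadata is needed before the message goes out.
            persistMetadata(file.getName(), file.length(), file.lastModified());
            // Returning null drops the message, so decisions can be made here,
            // e.g. skipping empty files.
            if (file.length() == 0) {
                return null;
            }
        }
        return message;
    }

    private void persistMetadata(String name, long size, long lastModified) {
        // Hypothetical persistence hook, e.g. a JDBC insert or repository call.
    }
}
```

The interceptor still has to be attached to the source's output channel, for example via channel.addInterceptor(...) or a @GlobalChannelInterceptor bean matching the channel name.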

Related

Configuring OpenTelemetry for tracing service-to-service calls ONLY

I am experimenting with different instrumentation libraries; spring-cloud-sleuth and OpenTelemetry (OTel) are the ones I liked the most. Spring Cloud Sleuth is simple, but it will not work for a non-Spring (JAX-RS) project, so I diverted my attention to OpenTelemetry.
I am able to export telemetry using OTel, but there is just too much data that I do not need. Spring Sleuth gave the perfect solution: it just traces the calls across microservices and links all the spans with one traceId.
My question is: how do I configure OTel to get output similar to Spring Sleuth's? I tried various configurations and a few worked, but the amount of information is still huge.
My configuration
-Dotel.traces.exporter=zipkin -Dotel.instrumentation.[jdbc].enabled=false -Dotel.instrumentation.[methods].enabled=false -Dotel.instrumentation.[jdbc-datasource].enabled=false
However, this still gives me method calls and other data. Also, one big pain is that I am not able to SHUT DOWN the metrics data; I get an error like the one below:
ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Failed to connect to localhost/0:0:0:0:0:0:0:1:4317
Any help will be appreciated.
There are two ways to configure the OpenTelemetry (OTel) agent:
1. Environment variable
2. Java system property
You can either set
export OTEL_METRICS_EXPORTER=none
or
java -Dotel.metrics.exporter=none app.jar
Reference
https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md
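To get Sleuth-like output (only the service-to-service spans), the agent also supports disabling all instrumentation by default and re-enabling only the HTTP modules you care about, alongside turning the metrics exporter off. A sketch of a combined invocation; the specific module names (spring-webmvc, http-url-connection) are examples to verify against the agent's suppression documentation for your stack:

```shell
java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.traces.exporter=zipkin \
     -Dotel.metrics.exporter=none \
     -Dotel.instrumentation.common.default-enabled=false \
     -Dotel.instrumentation.spring-webmvc.enabled=true \
     -Dotel.instrumentation.http-url-connection.enabled=true \
     -jar app.jar
```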

Deploy Flowable workflow programmatically

I am trying to dynamically generate the workflow file for Flowable and deploy it on the go.
There are two challenges:
1. Create BAR file to package the XML that is generated
2. Deploying it dynamically.
Has anyone ever tried this? If yes, could you please help or suggest an alternative?
Accomplished this finally. The only thing I needed to understand was that a BAR file is nothing but a normal ZIP file; it simply needs to be named with a .bar extension.
To deploy it dynamically, we need to use the RepositoryService in the Flowable engine library. The code snippet below deploys the workflow dynamically. Once deployed, you can freely delete the workflow file, as the workflow is recorded in the database.
String barFileName = "path/to/process-one.bar";
try (ZipInputStream inputStream = new ZipInputStream(new FileInputStream(barFileName))) {
    repositoryService.createDeployment()
            .name("process-one.bar")
            .addZipInputStream(inputStream)
            .deploy();
}
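For the first challenge (packaging the generated XML), since a BAR is just a ZIP, plain java.util.zip is enough. A small sketch; the entry name and XML content are placeholders, with the entry ending in .bpmn20.xml as Flowable expects for process definitions:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class BarPackager {

    // Packages a generated BPMN XML string into a .bar file (a plain ZIP).
    public static void writeBar(String bpmnXml, File barFile) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(barFile))) {
            zos.putNextEntry(new ZipEntry("process-one.bpmn20.xml"));
            zos.write(bpmnXml.getBytes(StandardCharsets.UTF_8));
            zos.closeEntry();
        }
    }
}
```

The resulting file can be fed straight into the addZipInputStream(...) call above.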

Calling a REST service using business central and JBPM

We're trying to do a POC showing we can call an external REST service using JBPM in business-central.
We've created a new BPMN process, then added a REST service task. We notice at this point that a WID file is created containing the REST definition. Inside the WID file, it defines things like URL, Method, and authentication.
We've sifted through all the 7.2 docs, but for the life of us, we cannot figure out how to actually set those parameters and do something useful. Does anyone have a simple "Hello World" using business central 7.2 calling out to an external process?
We see there's a predefined REST handler: https://github.com/kiegroup/jbpm/blob/master/jbpm-workitems/jbpm-workitems-rest/src/main/java/org/jbpm/process/workitem/rest/RESTWorkItemHandler.java
We're unsure how to assemble all of this; we can't find documentation or examples for something that seems so simple.
Thank you!
If you're using Business Central, you can edit the process model and check the data assignments for the specific REST node. There you can set the values of the variables or use a process variable to map dynamic values. Hope it helps.
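For completeness, when running outside Business Central (e.g. in a test or an embedded engine), the predefined handler from the question can be registered against the session manually. A sketch, assuming the service task's work item is named "Rest" and its inputs come from the node's data assignments:

```java
import org.jbpm.process.workitem.rest.RESTWorkItemHandler;
import org.kie.api.runtime.KieSession;

public class RestHandlerSetup {

    // Registers the predefined REST handler so "Rest" service tasks can execute.
    public static void register(KieSession ksession) {
        RESTWorkItemHandler handler = new RESTWorkItemHandler();
        ksession.getWorkItemManager().registerWorkItemHandler("Rest", handler);
        // The node's data assignments then supply inputs such as Url and
        // Method (GET/POST/...), and receive outputs such as Result and Status.
    }
}
```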

Spring integration FileTailingMessageProducer: Remember current line when restarting

We are using the Spring integration FileTailingMessageProducer (Apache Commons) for remotely tailing files and sending messages to rabbitmq.
Obviously, when the Java process that contains the file tailer is restarted, the information about which lines have already been processed is lost. We would like to be able to restart the process and continue tailing at the last line we had previously processed.
I guess we will have to keep this state either in a file on the host or a small database. The information stored in this file or db will probably be a simple map mapping file ids (file names will not suffice, since files may be rotated) to line numbers:
file ids -> line number
I am thinking about subclassing the ApacheCommonsFileTailingMessageProducer.
The java process will need to continually update this file or db. Is there a method for updating this file when the JVM exits?
Has anyone done this before? Are there any recommendations on how to proceed?
Spring Integration has an abstraction, MetadataStore - it's a simple key/value abstraction, so it would be perfect for this use case.
There are several implementations. The PropertiesPersistingMetadataStore persists to a properties file and, by default, only persists on an ApplicationContext close() (destroy()).
It implements Flushable so it can be flush()ed more often.
The other implementations (Redis, MongoDB, Gemfire) don't need flushing because the data is written immediately.
A subclass would work, the file tailer is a simple bean and can be declared as a <bean/> - there's no other "magic" done by the XML parser.
But, if you'd be interested in contributing it to the framework, consider adding the code to the adapter directly. Ideally, it would go in the superclass (FileTailingMessageProducerSupport) but I don't think we will have the ability to look at the file creation timestamp in the OSDelegatingFileTailingMessageProducer because we just get the line data streamed to us.
In any case, please open a JIRA Issue for this feature.
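A minimal sketch of wiring a PropertiesPersistingMetadataStore and flushing it more often than context shutdown; the base directory, key format, and flush interval are illustrative, and the tailer subclass would call put(...) per processed line:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.metadata.PropertiesPersistingMetadataStore;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@Configuration
@EnableScheduling
public class TailerMetadataConfig {

    @Bean
    public PropertiesPersistingMetadataStore metadataStore() {
        PropertiesPersistingMetadataStore store = new PropertiesPersistingMetadataStore();
        store.setBaseDirectory("/var/lib/mytailer"); // where the .properties file lives
        return store;
    }

    // Record progress under a stable file id, e.g. from the tailer subclass:
    //     metadataStore.put(fileId, Long.toString(lineNumber));

    // Flush the in-memory state to disk periodically instead of only on close().
    @Scheduled(fixedDelay = 5000)
    public void flushStore() {
        metadataStore().flush();
    }
}
```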

Akka BalancingDispatcher Config

I have created a file application.conf in src/main/resources that looks like this:
balancing-dispatcher {
type = BalancingDispatcher
executor = "thread-pool-executor"
}
There is nothing else in the file.
Upon creating a new Actor (through my test suite using Akka TestKit) that tries to use the dispatcher, I receive this error message:
[WARN] [04/13/2013 21:55:28.007] [default-akka.actor.default-dispatcher-2] [Dispatchers] Dispatcher [balancing-dispatcher] not configured, using default-dispatcher
My program then runs correctly, albeit using only a single thread.
Furthermore, I intend to package my program into a library. The akka docs state this:
If you are writing an Akka application, keep your configuration in application.conf at
the root of the class path. If you are writing an Akka-based library, keep its
configuration in reference.conf at the root of the JAR file.
I have tried both of these methods so far, but neither has worked.
Any ideas?
Since your application.conf is not found I can only assume that src/main/resources is not part of your build path (cannot comment further without knowing which tool you use for building).
One small thing: why do you use "thread-pool-executor" in there? We found the default "fork-join-executor" to scale better.
Your comment about the one thread suggests that you are creating just one actor; using a BalancingDispatcher does not automagically create more actors, you will have to tell Akka to do that somehow (e.g. by creating multiple instances of that same actor manually or via a Router).
The question of reference.conf vs. application.conf is more one of the nature of the settings. If your library wants to get its own settings from the config, then default values should go into reference.conf; that is the design concept and the reason why this file is always implicitly merged in. Defaults should only be in that file, never in the code.
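A sketch of pointing several actors at that dispatcher so the BalancingDispatcher actually has something to balance (classic Akka; the worker class and count are illustrative, and Props creation differs slightly between older Akka versions):

```java
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

public class BalancingExample {

    // A trivial worker; replace with your real actor.
    public static class Worker extends UntypedActor {
        @Override
        public void onReceive(Object message) {
            // handle work here
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("example");
        // Several actors sharing balancing-dispatcher from application.conf;
        // messages sent to any of them are pulled from the shared mailbox
        // by whichever worker is free.
        for (int i = 0; i < 4; i++) {
            ActorRef worker = system.actorOf(
                    Props.create(Worker.class).withDispatcher("balancing-dispatcher"),
                    "worker-" + i);
        }
    }
}
```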
