I am trying to run the Camel Salesforce Kafka Source Connector version 1.0.x (LTS), following the documentation at https://camel.apache.org/camel-kafka-connector/1.0.x/reference/connectors/camel-salesforce-source-kafka-source-connector.html. According to the docs, all I need to do is configure a set of camel.kamelet.salesforce-source.xxx properties, which is exactly what I did.
Let's just assume that camel.kamelet.salesforce-source.clientId=xyz
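For context, here is a minimal sketch of the connector configuration I am running (the connector class name and the topicName property are written from memory, so treat them as assumptions; all values are placeholders):

# sketch of my connector .properties -- values are placeholders
connector.class=org.apache.camel.kafkaconnector.salesforcesource.CamelSalesforcesourceSourceConnector
topics=case-events
camel.kamelet.salesforce-source.clientId=xyz
camel.kamelet.salesforce-source.clientSecret=<secret>
camel.kamelet.salesforce-source.userName=<user>
camel.kamelet.salesforce-source.password=<password>
camel.kamelet.salesforce-source.topicName=event/Case__e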
When trying to run the connector, it fails to start, complaining that clientId is an unknown parameter:
Failed to resolve endpoint: salesforce://event/Case__e?clientId=xyz due to:
There are 1 parameters that couldn't be set on the endpoint.
Check the uri if the parameters are spelt correctly and that they are properties of the endpoint.
Unknown parameters=[{clientId=xyz}]
Running out of ideas, I tried to configure a Camel route myself and specified the clientId as part of the salesforce endpoint. The issue was exactly the same. I then asked this question, Unable to create camel salesforce endpoint, and got a valid explanation for that behaviour: this type of setting should be done at component level, not at endpoint level.
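To illustrate what "component level" means here, this is a minimal sketch in plain Camel Java DSL (the class and endpoint names mirror my setup; credential values are placeholders):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.salesforce.SalesforceComponent;

public class CaseEventsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Component-level configuration: credentials live on the component,
        // not on the endpoint URI. All values are placeholders.
        SalesforceComponent salesforce = new SalesforceComponent();
        salesforce.setClientId("xyz");
        salesforce.setClientSecret("<secret>");
        salesforce.setUserName("<user>");
        salesforce.setPassword("<password>");
        getContext().addComponent("salesforce", salesforce);

        // The endpoint URI then only names the event -- no credential parameters.
        from("salesforce:event/Case__e").to("log:case-events");
    }
}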
Digging further, I noticed that version 0.11.x (LTS) lets us configure camel.component.salesforce.xxx properties, whereas 1.0.x (LTS) only offers camel.kamelet.salesforce-source.xxx. In fact, I was able to start the 0.11.x (LTS) connector.
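For example, the 0.11.x connector starts fine with component-level properties along these lines (values are placeholders):

# 0.11.x style: component-level configuration -- values are placeholders
camel.component.salesforce.clientId=xyz
camel.component.salesforce.clientSecret=<secret>
camel.component.salesforce.userName=<user>
camel.component.salesforce.password=<password>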
Now, it is hard to believe that the migration from 0.11.x to 1.0.x was not fully tested, so I am tempted to say I am missing some basic setup.
Can anyone bring some light in here?
Thank you in advance for your inputs.
Versions
Spring Parent: 2.7.4, Spring Cloud Version: 2021.0.4, Java Version: 11
Issue
My Spring service has been using Eureka to connect to the config server for a long time, but I want to upgrade to Spring Boot 2.7.4. I understand that as of Spring Boot 2.4, the bootstrap context has been deprecated (source), so I need to make some adjustments to the old bootstrap properties and move them over to application.properties.
The documentation for Spring Cloud specifies that in order for me to continue to use discovery-first config lookup, I need to define a spring.config.import property with an optional configserver entry (source). Since I'm also using Vault, I define the property as follows:
spring.config.import = optional:configserver:placeholder,vault://<my-generic-backend>/dev
Next, I need to define the following properties (source). These properties were already defined in my old bootstrap.properties, so all I need to do is copy and paste them.
spring.cloud.config.discovery.enabled = true
spring.cloud.config.discovery.serviceId = config-server
eureka.client.serviceUrl.defaultZone = <my-eureka-url>
Unless I'm missing something, these are all the steps I need to take in order to upgrade to 2.7.4. However, when I run the Spring service, it complains that it can't find the config server (via Eureka, or via URL), then it registers successfully with Eureka, and then continues trying and failing to find the config server.
Here is some of the output of the program:
> Running with Spring Boot v2.7.4, Spring v5.3.23
> Could not locate configserver via discovery: No instances found of configserver (config-server)
> Could not locate PropertySource ([ConfigServerConfigDataResource#2aa6311a uris = array<String>['placeholder'], optional = true, profiles = list['local']]): Invalid URL: placeholder
...
> DiscoveryClient_<my-project-name>/local - registration status: 204
I understand why it's failing to find a config server at URL: placeholder, since that's not a valid URL. What I don't understand is how the service can successfully register with Eureka yet be unable to find the config server. I know the service is registered because the output of the program says it registered correctly (and I can see it in the registry), and I know that the config server has the correct service ID (config-server) because it was copied and pasted from the old bootstrap (and I can see config-server in the registry).
Workaround with Hardcoded URL
When I hardcode the config server URL like this (and set spring.cloud.config.discovery.enabled to false), the config is loaded properly from the server:
spring.config.import=configserver:https://<my-hardcoded-config-url>.com,vault://<my-generic-backend>/dev
Workaround with Bootstrap
It's possible to return to using the bootstrap context and still use Spring Boot 2.7.4 with discovery-first config lookup by adding the spring-cloud-starter-bootstrap dependency. So I added the dependency to my POM (shown after the properties below) and moved these properties back to bootstrap.properties from application.properties.
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=config-server
I moved the Vault and Eureka properties back into bootstrap.properties as well. The new application.properties now contains no values relating to Eureka, Vault, or Cloud Config.
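For reference, this is the dependency I added to the POM (standard coordinates; the version is managed by the Spring Cloud BOM in my build):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>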
When I run the service, it does indeed find the address for the config server through Eureka, as expected (although it fails to connect because it's the internal address and I'm running locally).
Conclusion
While these are valid workarounds, it's frustrating not to be able to have a dynamic URL for the config server (which is the entire point of using Eureka). Right now, it looks like my choices are either to use a hard-coded URL and risk having to change every property file, or to use a deprecated behavior that the Spring documentation specifically disfavors (source).
I would appreciate any guidance you have on the issue, and I thank you in advance.
I am experimenting with different instrumentation libraries, but spring-cloud-sleuth and OpenTelemetry (OTel) are the ones I liked the most. Spring-cloud-sleuth is simple, but it will not work for a non-Spring (JAX-RS) project, so I diverted my attention to OpenTelemetry.
I am able to export the metrics using OTel, but there is just too much data that I do not need. Spring Sleuth gave the perfect solution, wherein it just traces the call across microservices and links all the spans with one traceId.
My question is: how do I configure OTel to get output similar to spring-sleuth's? I tried various configurations and a few worked, but the amount of information is still huge.
My configuration
-Dotel.traces.exporter=zipkin -Dotel.instrumentation.[jdbc].enabled=false -Dotel.instrumentation.[methods].enabled=false -Dotel.instrumentation.[jdbc-datasource].enabled=false
However, this still gives me method calls and other data. Also, one big pain point is that I am not able to shut down the metrics data.
I get an error like the one below:
ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Failed to connect to localhost/0:0:0:0:0:0:0:1:4317
Any help will be appreciated.
There are two ways to configure the OpenTelemetry (OTel) agent:
Environment variable
Java system property
You can either set
export OTEL_METRICS_EXPORTER=none
or
java -Dotel.metrics.exporter=none -jar app.jar
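Combined with the flags from the question, a complete invocation might look like this (a sketch; the agent jar path is a placeholder):

java -javaagent:/path/to/opentelemetry-javaagent.jar \
     -Dotel.traces.exporter=zipkin \
     -Dotel.metrics.exporter=none \
     -jar app.jar

With otel.metrics.exporter=none, the agent no longer tries to push metrics to the default OTLP endpoint on localhost:4317, which is what produced the OkHttpGrpcExporter error quoted above.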
Reference
https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md
I'm currently working with Apache Camel and hawt.io for monitoring and debugging my Camel routes. This works wonderfully, even if some important information is somewhat hidden in the documentation. For example, it took me a while to figure out how to turn on debugging.
However, when I set a breakpoint and message processing stops at that point in the route, I can't see the "body" or "headers" of my Camel exchange there. I've tried all sorts of settings:
tracing / backlog tracing enabled on CamelContext
tracing / backlog tracing enabled on route
Adjusted settings on MBean "BacklogDebugger" and "BacklogTracer".
Tracing on the "Trace" tab works very well: If I activate tracing in the "Trace" tab, I can see the flow of my message through all nodes of the route.
Only when stopping at the breakpoint are the body and headers not displayed.
Edited: After some changes concerning other aspects (like assigning an ID to most of the route nodes), debugging works, including the display of body and headers. I have no clue what changed to make it work.
At the same time, my application property camel.main.debugging=true failed on startup:
Error binding property (camel.main.debugging=true) with name: debugging on bean: org.apache.camel.main.MainConfigurationProperties
I had to enable debugging at the context level like this:
getContext().setDebugging(true);
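For completeness, here is a sketch of where I make these calls; this mirrors my own setup (inside my RouteBuilder's configure(), before the route definitions) rather than a required pattern:

@Override
public void configure() throws Exception {
    // Enable the debugger and backlog tracer programmatically, since the
    // camel.main.debugging property failed to bind on startup.
    getContext().setDebugging(true);
    getContext().setBacklogTracing(true);

    // ... route definitions follow here
}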
Here is some information:
I don't use any special framework: Plain old Java with a Main method in which I start Camel-Main.
Apache Camel: 3.14.1
Jolokia Agent: 1.7.1
hawt.io: 2.14.5
Exchange body type: DOMSource
One of my routes:
getCamelContext().setBacklogTracing(true);
from(rabbitMqFactory.queueConnect("tso11", "tso11-to-nms", "username"))
.routeGroup("Workflow")
.routeId("Workflow-to-NMS|Map-TSO11-to-NMS42")
.routeDescription("Mapping of TSO11 Message to NMS42")
.convertBodyTo(DOMSource.class)
.log("Message for '$simple{header:tenant}' received")
.process(tso11ToNmsMappingProcessor)
.to("xslt:xslt/tso11-to-nms42.xslt")
.to("direct:send");
And here are my current properties:
camel.main.name=TSO11
camel.main.jmxEnabled=true
camel.main.debugging=true
camel.main.backlogTracing=true
camel.main.lightweight=false
camel.main.tracing=false
camel.main.useBreadcrumb=true
Any ideas? Any hints for good documentation?
I have some more, less important questions, but I will open another issue for those.
With kind regards
Bert
Finally, here are screenshots of the debugging tab (with empty body) and the trace tab (with body content):
I found the reason for my problem:
I used Camel 3.15.0, which is currently not supported by the Camel plugin of hawt.io.
When using the latest 3.14.x, it works like a charm :-)
Hopefully there is still a maintainer of the Camel plugin who will improve it in the near future. I am willing to contribute, but the hawt.io developer information is not accessible, and I cannot work out how to run hawt.io from source locally, especially how to include the Camel plugin, which lives in a separate GitHub project.
I'm working on a project in an OSGi environment. I have discovered that Camel offers an integration for Swagger, so I have used it. It works well, until I launch a request with the Swagger UI.
I mean that when I put the URI I have defined with camel-swagger-java into the Swagger UI, it works: Swagger discovers my API!
But when I want to launch a request with the Swagger UI, I have issues with cross-domain requests.
I have found several solutions:
- The first one is with Camel REST:
restConfiguration().component("jetty").bindingMode(RestBindingMode.json)
    .dataFormatProperty("prettyPrint", "true")
    .contextPath("/").port(8080)
    .apiContextPath("/api-doc/login")
    .apiProperty("api.title", "Login API")
    .apiProperty("api.version", "1.0.0-SNAPSHOT")
    .apiProperty("cors", "true")
    .apiProperty("apiContextIdListing", "true");
I have set the cors property to true, but it didn't solve my issue. After some searching, I found that it might be Jetty that forbids cross-domain requests. At this point, I have not found out how to configure Jetty in an OSGi environment (Karaf/Felix) to accept this kind of request.
Thanks for your help
I found a solution. With Camel, I would have had to create an OPTIONS REST interface per service, which is very dirty (http://camel.465427.n5.nabble.com/Workaround-with-REST-DSL-to-avoid-HTTP-method-not-allowed-405-td5771508.html). So I used this solution instead: github.com/swagger-api/swagger-ui/issues/1888
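For reference, here is a sketch of what that "dirty" per-service OPTIONS workaround would look like in the Camel 2.x REST DSL (the path and the allowed headers are placeholders; this is the approach I avoided):

// One hand-written OPTIONS endpoint per service, answering CORS
// preflight requests manually -- path and header values are placeholders.
rest("/login")
    .options("/")
    .route()
        .setHeader("Access-Control-Allow-Origin", constant("*"))
        .setHeader("Access-Control-Allow-Methods", constant("GET, POST, PUT, DELETE, OPTIONS"))
        .setHeader("Access-Control-Allow-Headers", constant("Origin, Accept, Content-Type"))
        .setBody(constant(""))
    .endRest();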
I have a Spring Boot application deployed in PCF, where the app tries to connect to an Oracle 12c database through a PCF User Provided Service, but it fails with this error: Failed to determine a suitable driver class
build.gradle code:
And here is the configuration that I used in the CUP service:
The service binding is happening properly; I can see the same details under VCAP_SERVICES in the environment variables.
Error:
Short Answer: I think you want the uri to be oracle://... Strip off the jdbc: part. The Spring Auto-reconfiguration code that gets injected by the Java buildpack looks at the prefix of the URI, so it needs oracle:// to know it's an Oracle connection.
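For example, in the user-provided service the change would look roughly like this (host, service name, and credentials are placeholders; the exact shape of your original jdbc: URL may differ):

Before: jdbc:oracle:thin:@//db.example.com:1521/ORCL
After:  oracle://dbuser:dbpass@db.example.com:1521/ORCL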
Long Answer: You probably don't want to depend on the injected Spring Auto-reconfiguration. When it just works, it's great, but it can be difficult to understand what it's doing when it doesn't.
It is better to use Spring Cloud Connectors or, even better (since all signs point to it replacing Spring Cloud Connectors), java-cfenv. For details on java-cfenv, see this blog post.
Spring Cloud Connectors has the same issue I mentioned above with Spring Auto-reconfiguration, except that it will pretty clearly tell you when it doesn't recognize a bound service. Anyway, if you decide to use SCC, make sure you prefix the URI with oracle://.
If you use java-cfenv, it's more flexible so it's really up to you what properties and values you inject through the service.
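As a quick sketch of the java-cfenv route (the service name "my-oracle-db" is a placeholder; this uses the java-cfenv-jdbc helper module):

import io.pivotal.cfenv.jdbc.CfJdbcEnv;
import io.pivotal.cfenv.jdbc.CfJdbcService;

public class OracleCupsLookup {
    public String resolveJdbcUrl() {
        // Look up the bound service by the name given to the CUPS instance
        // and let java-cfenv assemble a JDBC URL from its credentials.
        CfJdbcEnv cfJdbcEnv = new CfJdbcEnv();
        CfJdbcService service = cfJdbcEnv.findJdbcServiceByName("my-oracle-db");
        return service.getJdbcUrl();
    }
}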
Hope that helps!