I have written a simple program to test Java to Salesforce integration. I followed the steps in the links below:
Salesforce Api Partner Examples
Sample Query Calls
But when I execute these, the program hangs at the step
QueryResult qr = partnerConnection.query(soqlQuery);
I'm not sure what is happening here - any advice would be welcome.
If you are using an outdated version of the SDK and running against a new endpoint, your program will hang.
To fix this, use the latest version of the SDK and point to the latest endpoint.
For example, I used:
/services/Soap/u/34.0
as the endpoint and the following project versions in Maven; a connection sketch using them follows the dependencies:
<dependency>
<groupId>com.force.api</groupId>
<artifactId>force-wsc</artifactId>
<version>34.0</version>
</dependency>
<dependency>
<groupId>com.force.api</groupId>
<artifactId>force-partner-api</artifactId>
<version>34.0</version>
</dependency>
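For reference, here is a minimal sketch of what that wiring can look like with the partner API classes generated for the 34.0 WSDL; the credentials and the SOQL query are placeholders:
import com.sforce.soap.partner.Connector;
import com.sforce.soap.partner.PartnerConnection;
import com.sforce.soap.partner.QueryResult;
import com.sforce.ws.ConnectorConfig;

public class PartnerQueryExample {
    public static void main(String[] args) throws Exception {
        ConnectorConfig config = new ConnectorConfig();
        config.setUsername("user@example.com");            // placeholder credentials
        config.setPassword("password" + "securityToken");  // password + security token
        // Point the login call at the endpoint version matching the SDK (34.0 here)
        config.setAuthEndpoint("https://login.salesforce.com/services/Soap/u/34.0");

        PartnerConnection partnerConnection = Connector.newConnection(config);
        QueryResult qr = partnerConnection.query("SELECT Id, Name FROM Account LIMIT 10");
        System.out.println("Records returned: " + qr.getSize());
    }
}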
My project team and I are looking to add Zipkin logging and tracing to our current project. We are working in a microservice environment using Spring Boot (Java 17) and Cloud Foundry. For the communication between microservices we are using HttpClient. From what I've gathered from the documentation, Zipkin requires a RestTemplate to function; however, we don't have time to change this.
We were able to implement Zipkin in every individual project. However, every call generates its own trace ID. I think we need to configure the HttpClient to work in tandem with Zipkin, but the documentation is not very clear and I have been unable to find anything that explains how to do this.
What can I try here? I've included the config, the dependencies, and a sketch of the HttpClient wiring I think we're missing below.
spring:
  application:
    name: Application_1
  zipkin:
    baseUrl: http://localhost:9411
  sleuth:
    sampler:
      probability: 1.0
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-sleuth-zipkin</artifactId>
<version>3.1.3</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-sleuth</artifactId>
<version>3.1.3</version>
</dependency>
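For context, this is the kind of wiring I suspect we are missing. A rough sketch, assuming Brave's Apache HttpClient instrumentation (io.zipkin.brave:brave-instrumentation-httpclient) would be added next to the Sleuth starters; I have not verified this in our environment:
import brave.Tracing;
import brave.httpclient.TracingHttpClientBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TracedHttpClientConfiguration {

    // Sleuth auto-configures a brave.Tracing bean; building the HttpClient
    // through Brave's TracingHttpClientBuilder should add the B3 trace headers
    // to outgoing requests so the downstream service joins the same trace.
    @Bean
    public CloseableHttpClient tracedHttpClient(Tracing tracing) {
        return TracingHttpClientBuilder.create(tracing).build();
    }
}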
I am trying to update an application which already pulls in the kitchen sink (or perhaps a few, they're joined at the hip) and I am sorting through version conflicts.
I want to update to Spring Boot 2.5+ and also use Spring Cloud Consul - I am attempting to pull in:
spring-cloud-starter-consul-discovery:3.0.3
spring-boot:2.5.4
For bonus points, within spring-cloud-starter-consul-discovery, I am seeing that it pulls in reactor-core:3.4.6 and at the same time reactor-extra:3.4.3 (which pulls in reactor-core:3.4.5). The list goes on and on ...
https://search.maven.org/artifact/org.springframework.cloud/spring-cloud-starter-consul-discovery/3.0.3/jar - the original point of contention is that it pulls in Spring Boot 2.4.6 ... if it is advertised as supporting 2.5+, shouldn't the version reference 2.5+?
https://search.maven.org/artifact/org.springframework.cloud/spring-cloud-loadbalancer/3.0.3/jar - this to me is just plain laziness: right below reactor-core is reactor-extra, so why wouldn't the Spring developers make extra pull in the same version of core? See: https://search.maven.org/artifact/io.projectreactor.addons/reactor-extra/3.4.3/jar
While this is a trivial problem to solve, it shouldn't be my problem. Am I missing something, or is this just the way it is and I shouldn't expect more?
First of all, you need to look at the compatibility matrix between Spring Cloud and Spring Boot dependencies. Then you need (for example) to generate your own BOM, where you import:
the correct Spring Cloud dependencies BOM
the Spring Boot dependencies BOM
These BOMs, in turn, import other BOMs, for example the Consul one you are interested in, which is at version 2.2.8.RELEASE. Look at the properties tag in that file and you will see this:
<spring-cloud-consul.version>2.2.8.RELEASE</spring-cloud-consul.version>
specifically:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-consul-dependencies</artifactId>
<version>${spring-cloud-consul.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
You can then look at the specific Consul BOM and see that the version of consul-discovery is:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-consul-discovery</artifactId>
<version>${project.version}</version>
</dependency>
The same pattern can be used to find out which version of the reactor dependencies ends up where.
From my 10 minutes of investigating this, I don't see a version of spring-cloud-starter-consul-discovery:3.0.3 that would be included in a spring-cloud-dependencies BOM.
You could still try to force a certain version of a dependency; we recently had such a problem internally in spring-cloud-kubernetes.
This may or may not work, though.
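Purely as an illustration (the artifact and version are taken from the question, not verified against your build), forcing a version in Maven usually means pinning it in your own dependencyManagement, which wins over the versions coming from imported BOMs:
<dependencyManagement>
    <dependencies>
        <!-- Illustrative pin: align reactor-core so reactor-extra cannot drag in 3.4.5 -->
        <dependency>
            <groupId>io.projectreactor</groupId>
            <artifactId>reactor-core</artifactId>
            <version>3.4.6</version>
        </dependency>
    </dependencies>
</dependencyManagement>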
I'm new to a project where we have a Spring Boot application running on GKE that receives events via Kafka and publishes them via Pub/Sub. Consumers of these events might want to have them replayed, and we want them to request this via the REST API of our application. Since the application stores the events in GCS before publishing, we thought Apache Beam pipelines run with Dataflow should do the trick.
One "replay request" might result in multiple pipelines, since the events in GCS are stored in folder structures containing the date (e.g. gs://<entity>/2020/12/13/event.json) and depending on how much history the consumer needs, we create a pipeline per day of events.
I'm fairly confident that the logic of defining and submitting pipelines is correct, since the application is able to perform this on a local Kubernetes cluster with the DirectRunner.
On Dataflow I run into the issue summarized here. Spawning a worker (org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness) fails due to a classpath issue:
Caused by: java.lang.NoClassDefFoundError: org/apache/beam/sdk/options/PipelineOptions
I can see that my jar, which should have the correct dependencies, is on the classpath when Dataflow spawns the worker (most parameters omitted):
java
-cp
/opt/google/dataflow/batch/libshuffle_v1.jar:/opt/google/dataflow/batch/dataflow-worker.jar:/opt/google/dataflow/slf4j/jcl_over_slf4j.jar:/opt/google/dataflow/slf4j/log4j_over_slf4j.jar:/opt/google/dataflow/slf4j/log4j_to_slf4j.jar:/var/opt/google/dataflow/app-6BkavP-0nx4wHMC__85sdbCjJQa7QcQcOxGSQL5huMU.jar
...
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness
After playing around with different scopes for the Beam dependencies, because I suspected a clash with the google-dataflow.jar, I haven't seen any change. I'm a bit clueless about where to continue looking. I'm using Beam library version 2.27.0, and these are the artifacts referenced in my pom.xml:
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
<version>${beam.version}</version>
</dependency>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-direct-java</artifactId>
<version>${beam.version}</version>
</dependency>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
<version>${beam.version}</version>
</dependency>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-extensions-google-cloud-platform-core</artifactId>
<version>${beam.version}</version>
</dependency>
Any advice is much appreciated.
The class org/apache/beam/sdk/options/PipelineOptions is found in the core Java SDK; the artifact is beam-sdks-java-core. This is not baked into the Dataflow worker, but is part of the expected staged files.
The DataflowRunner by default will attempt to stage every file that it finds on the classpath. If there is anything about your environment or application that affects its ability to do this, you will need to add the SDK dependency yourself.
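In that case, a minimal fix is to declare the core SDK explicitly next to the other Beam artifacts (reusing the ${beam.version} property from the question) so it ends up among the staged files:
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-core</artifactId>
<version>${beam.version}</version>
</dependency>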
I am using the official Java client for Kubernetes to request cluster resources such as Pods, Deployments and so on.
But unfortunately it seems there is no API to request cluster metrics such as the total CPU and memory used by all pods.
Do you know how to get the cluster metrics using the Java client?
Thanks!
Deploy the metrics-server to the cluster.
You will then be able to query the metrics.k8s.io/v1beta1 API for pods and nodes as normal Kubernetes resources. You should be able to run kubectl top pod to test.
I don't see any reason why the official Java client wouldn't be able to discover these resources, but in any case they are available at the following URLs (a sketch of querying them from the client follows below):
https://api-server/apis/metrics.k8s.io/v1beta1/pods
https://api-server/apis/metrics.k8s.io/v1beta1/nodes
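For what it's worth, a sketch of querying those endpoints through the Java client, assuming your client-java version ships the io.kubernetes.client.Metrics helper (I have not checked every release):
import io.kubernetes.client.Metrics;
import io.kubernetes.client.custom.NodeMetrics;
import io.kubernetes.client.custom.NodeMetricsList;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.util.Config;

public class NodeMetricsExample {
    public static void main(String[] args) throws Exception {
        ApiClient client = Config.defaultClient();
        Metrics metrics = new Metrics(client);

        // Queries /apis/metrics.k8s.io/v1beta1/nodes through the API server
        NodeMetricsList nodeMetrics = metrics.getNodeMetrics();
        for (NodeMetrics item : nodeMetrics.getItems()) {
            System.out.println(item.getMetadata().getName() + " -> " + item.getUsage());
        }
    }
}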
I think you won't be able to do this with the official Java client.
Instead, you can set up Prometheus, configure it to scrape the metrics, and use the Prometheus HTTP API to get the response in JSON format.
I get the resource consumption for each node using the kubectl top API and add them up (the summing step is sketched after the dependencies).
import static io.kubernetes.client.extended.kubectl.Kubectl.top;
import io.kubernetes.client.custom.NodeMetrics;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.models.V1Node;
import io.kubernetes.client.util.Config;
import org.apache.commons.lang3.tuple.Pair;
// ApiClient built from the default kubeconfig / in-cluster config
ApiClient client = Config.defaultClient();
// "kubectl top nodes" equivalent: one (node, metrics) pair per node, here for the cpu metric
List<Pair<V1Node, NodeMetrics>> nodesMetrics = top(V1Node.class, NodeMetrics.class).apiClient(client).metric("cpu").execute();
Maven dependencies:
<dependency>
<groupId>io.kubernetes</groupId>
<artifactId>client-java</artifactId>
<version>10.0.1</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>io.kubernetes</groupId>
<artifactId>client-java-extended</artifactId>
<version>10.0.1</version>
</dependency>
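The "add them up" step is then just a fold over the returned pairs; a sketch, assuming each node reports a cpu quantity in its usage map:
import java.math.BigDecimal;

// Sum the per-node CPU usage reported by the metrics API
BigDecimal totalCpu = nodesMetrics.stream()
        .map(pair -> pair.getRight().getUsage().get("cpu").getNumber())
        .reduce(BigDecimal.ZERO, BigDecimal::add);
System.out.println("Cluster CPU usage (cores): " + totalCpu);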
I have developed an API in Spring Boot, where I use Swagger 2 for convenience and for the technical documentation.
Now I am running into an issue that affects our whole API.
It is constantly printing logs: around 2-5 MB of logs are created per minute, which is absolutely not acceptable. This is due to the error mentioned below.
I strongly believe it is appearing because of the Swagger UI configuration.
org.springframework.web.servlet.NoHandlerFoundException: No handler found for GET /null/swagger-resources/configuration/security
org.springframework.web.servlet.NoHandlerFoundException: No handler found for GET /null/swagger-resources
org.springframework.web.servlet.NoHandlerFoundException: No handler found for GET /null/swagger-resources/configuration/ui
I have already configured the following endpoints to be bypassed by my authentication/authorization validation checks:
1. /swagger-ui.html
2. /v2/api-docs
3. /swagger-resources/configuration/ui
4. /swagger-resources
5. /swagger-resources/configuration/security
The question is: why does it internally call endpoints starting with the /null prefix (see the three error statements above that are printed in my logs)?
The surprising and interesting thing for me is that it happens in only one of my environments (DEV, TEST, PROD), whereas in the other environments it works fine without throwing any such errors.
NOTE - I have enabled Swagger only in the DEV and LOCAL environments. Maybe that is why it is not giving any errors in TEST and PROD. Again, I am not sure what is going wrong.
Even in my LOCAL environment it is not giving any errors!
I am using the following Maven dependencies to enable Swagger:
<!-- https://mvnrepository.com/artifact/io.springfox/springfox-swagger2 -->
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger2</artifactId>
<version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/io.springfox/springfox-swagger-ui -->
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger-ui</artifactId>
<version>2.8.0</version>
</dependency>
Any help would be appreciated!
Upgrading to Swagger 3 solved the problem for me. This link might be useful.
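For reference, moving to Springfox 3 typically means replacing the two 2.x artifacts above with the single boot starter (adjust to your own setup):
<!-- Replaces springfox-swagger2 and springfox-swagger-ui -->
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-boot-starter</artifactId>
<version>3.0.0</version>
</dependency>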