How to exchange signals between applications? - java

I have two unconnected applications. One is the main app that performs the business logic and CRUD operations on the database.
A second app periodically rebuilds a database cache (a long-running task). I want to send a signal to the main app when the rebuild starts and when it finishes, as the main app should take specific actions while the rebuild is in progress.
How could I best achieve this using Spring Boot?

Using Spring Boot you can use JMS simply by adding the ActiveMQ dependencies:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jms</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-broker</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-pool</artifactId>
</dependency>
In the YAML config you would start the ActiveMQ JMS broker in one application by not specifying broker-url at all, because the spring.activemq.in-memory property defaults to true (http://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html), or by configuring it like this:
spring:
  activemq:
    broker-url: failover:(vm://localhost:61616?connectionTimeout=3000)
and connect to it from the other application like this:
spring:
  activemq:
    broker-url: failover:(tcp://machineoftheotherapplication:61616?connectionTimeout=3000)
You might also want to consider whether your messages need persistent, reliable delivery, meaning that if you send a message while the other application is not running, it will still receive the message after it starts up again.
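To make the signalling concrete, here is a minimal sketch (the destination name, payloads and class names are illustrative assumptions, not from the answer). The rebuild app sends a message before and after the rebuild, and the main app reacts in a @JmsListener:
// --- In the cache-rebuild application ---
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;

@Service
public class RebuildNotifier {

    private final JmsTemplate jmsTemplate;

    public RebuildNotifier(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void rebuildCache() {
        // Tell the main app the rebuild has started
        jmsTemplate.convertAndSend("cache.rebuild.events", "REBUILD_STARTED");
        try {
            // ... long-running rebuild work ...
        } finally {
            // Always signal completion, even if the rebuild fails
            jmsTemplate.convertAndSend("cache.rebuild.events", "REBUILD_FINISHED");
        }
    }
}
// --- In the main application (needs @EnableJms, or Spring Boot's JMS auto-configuration) ---
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class RebuildListener {

    @JmsListener(destination = "cache.rebuild.events")
    public void onRebuildEvent(String event) {
        if ("REBUILD_STARTED".equals(event)) {
            // e.g. switch into the restricted "rebuilding" mode
        } else if ("REBUILD_FINISHED".equals(event)) {
            // e.g. resume normal operation
        }
    }
}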

If your application handles HTTP requests, you could just add a dedicated controller that processes the requests from the second app.
Another option would be a JMX request.
In either case, security should be considered.
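For the HTTP option, a minimal sketch of such a controller might look like this (the endpoint paths and class name are illustrative assumptions):
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative sketch: endpoints in the main app that the rebuild app
// calls when the cache rebuild starts and finishes.
@RestController
public class RebuildSignalController {

    @PostMapping("/internal/cache-rebuild/started")
    public void rebuildStarted() {
        // switch into the restricted "rebuilding" mode
    }

    @PostMapping("/internal/cache-rebuild/finished")
    public void rebuildFinished() {
        // resume normal operation
    }
}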

Related

Access configuration file mentioned in spring boot application.properties after creating the .jar for App engine deployment

I am having a lot of trouble deploying a Spring Boot application to GCP App Engine with Cloud SQL Postgres as the database. Earlier I was using a TCP JDBC connection with IP whitelisting to access the database, which worked fine during testing, but after deploying to App Engine the instance didn't warm up because the SSL socket timed out. After a bit of digging I found that for the App Engine standard runtime to connect to Cloud SQL Postgres, I have to use the Postgres socket factory:
<dependency>
    <groupId>com.google.cloud.sql</groupId>
    <artifactId>postgres-socket-factory</artifactId>
    <version>1.2.2</version>
</dependency>
together with
spring.datasource.url=jdbc:postgresql:///{REDACT}?cloudSqlInstance={REDACT}&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user={REDACT}&password={REDACT}
in application.properties.
But GCP needs to verify the connection using the key generated from a service account with specific privileges, which has been added to application.properties:
spring.cloud.gcp.credentials.location=classpath:gcloud.json
But the built jar file can't access the JSON, and neither can the deployed App Engine instance.
Here's the error:
[Screenshot of the error in App Engine][1]
app.yaml
runtime: java11
pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
    <version>1.2.7.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.19</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>com.google.cloud.sql</groupId>
    <artifactId>postgres-socket-factory</artifactId>
    <version>1.2.2</version>
</dependency>
Any reply would be of great help. Thanks
[1]: https://i.stack.imgur.com/gEV2r.png
With App Engine you don't need to use a service account key file. You can rely on the metadata server, which automatically provides the App Engine service account credentials. The service account is not customizable for now, but that should change in 2021! Its name is the following: <projectID>@appspot.gserviceaccount.com
Be sure that this service account has enough Cloud SQL permissions (by default it has the Editor role on the project, which is enough, and even too much!).
So, use the Cloud SQL App Engine connector in the app.yaml file to set up the connection with Cloud SQL, and open a socket (connecting with the socket factory) in your code.
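Put together, a minimal sketch of the configuration (the instance, database and user names are hypothetical) would drop the credentials file entirely and let the socket factory pick up the default service account credentials from the metadata server:
# application.properties - illustrative sketch; note there is no
# spring.cloud.gcp.credentials.location entry: on App Engine the socket
# factory falls back to the credentials supplied by the metadata server.
spring.datasource.url=jdbc:postgresql:///mydb?cloudSqlInstance=my-project:us-central1:my-instance&socketFactory=com.google.cloud.sql.postgres.SocketFactory
spring.datasource.username=mydbuser
spring.datasource.password=changeme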

Is there a way to expose the ActiveMQ port when it is in-memory?

I'm currently developing a Spring Boot application that uses ActiveMQ "Classic" to communicate with MQTT-enabled devices. The main problem is that I need the in-memory ActiveMQ, but when the Spring Boot application isn't running, messages sent to the topics are lost, because the @JmsListener subscriptions for the topics only exist at runtime (only then is the application a subscriber of the topics). I could use docker-compose to create a stack and lock everything when the ActiveMQ container is down, but I can't use it in the project I'm actually working on.
So, is there a way to expose the ports of the in-memory ActiveMQ, or a way to start an ActiveMQ daemon when the Spring Boot project starts and stop it when Spring Boot stops?
Here is the application.properties file:
# Embedded ActiveMQ Configuration Example
spring.activemq.broker-url=vm://embedded?broker.persistent=false,useShutdownHook=false
spring.activemq.close-timeout=15000
spring.activemq.in-memory=true
spring.activemq.non-blocking-redelivery=false
spring.activemq.password=admin
spring.activemq.user=admin
spring.activemq.send-timeout=0
spring.activemq.packages.trust-all=false
spring.activemq.packages.trusted=com.memorynotfound
spring.activemq.pool.block-if-full=true
spring.activemq.pool.block-if-full-timeout=-1
spring.activemq.pool.create-connection-on-startup=true
spring.activemq.pool.enabled=false
spring.activemq.pool.expiry-timeout=0
spring.activemq.pool.idle-timeout=30000
spring.activemq.pool.max-connections=1
spring.activemq.pool.max-sessions-per-connection=500
spring.activemq.pool.reconnect-on-exception=true
spring.activemq.pool.time-between-expiration-check=-1
spring.activemq.pool.use-anonymous-producers=true
Yes, there are a couple of ways to fire up the broker in-memory. I've found it helps to understand that ActiveMQ is essentially just a library, so you've got lots of options for how to bootstrap it.
ActiveMQ is generally wired together with Spring, so you can add it to a Spring beans file and get all the config you want (transport connectors, etc.).
Sample loading an activemq.xml file from the classpath (i.e. src/main/resources, or src/test/resources if in a unit test):
import java.net.URI;

import org.apache.activemq.broker.BrokerFactory;
import org.apache.activemq.broker.BrokerService;
...
BrokerService broker = BrokerFactory.createBroker(new URI("xbean:activemq.xml"));
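For reference, a minimal activemq.xml along these lines might look like the following sketch (the broker name and port are illustrative; the transportConnector element is what exposes a TCP port for the otherwise in-memory broker):
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://activemq.apache.org/schema/core
           http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Non-persistent (in-memory) broker that still listens on a TCP port -->
    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="embedded" persistent="false">
        <transportConnectors>
            <transportConnector name="tcp" uri="tcp://0.0.0.0:61616"/>
        </transportConnectors>
    </broker>
</beans>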
Maven
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-all</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-spring</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.xbean</groupId>
    <artifactId>xbean-spring</artifactId>
</dependency>
Alternate Java code:
EmbeddedActiveMQBroker customizedBroker = new EmbeddedActiveMQBroker("bean:customize-activemq.xml");
See: ActiveMQ testing
Additional example: Embedded ActiveMQ unit test
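Alternatively, if you'd rather skip the XML, here is a minimal programmatic sketch (the broker name and port are illustrative) that starts a non-persistent broker and still exposes a TCP connector:
import org.apache.activemq.broker.BrokerService;

// Illustrative sketch: an embedded, non-persistent broker that
// external clients can still reach over TCP.
public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");
        broker.setPersistent(false);                 // keep everything in-memory
        broker.addConnector("tcp://0.0.0.0:61616");  // expose a TCP port
        broker.start();
        broker.waitUntilStopped();                   // block until stop() is called
    }
}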

Classpath issue preventing pipeline from running on Google Dataflow

I'm new to a project where we have a spring-boot application running on GKE receiving (via Kafka) and publishing events via Pub/Sub. Consumers of these events might want to have these events replayed and we want them to request this via the REST API of our application. Since the application stores the events in GCS before publishing, we thought Apache Beam pipelines run with DataFlow should do the trick.
One "replay request" might result in multiple pipelines, since the events in GCS are stored in folder structures containing the date (e.g. gs://<entity>/2020/12/13/event.json) and depending on how much history the consumer needs, we create a pipeline per day of events.
I'm fairly confident that the logic of defining and submitting pipelines is correct, since the application is able to perform this on a local Kubernetes cluster with the DirectRunner.
On Dataflow I run into the issue summarized here: spawning a worker (org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness) fails due to a classpath issue:
Caused by: java.lang.NoClassDefFoundError: org/apache/beam/sdk/options/PipelineOptions
I can see the jar that should contain the correct dependencies on the classpath when Dataflow spawns the worker (most parameters omitted):
java
-cp
/opt/google/dataflow/batch/libshuffle_v1.jar:/opt/google/dataflow/batch/dataflow-worker.jar:/opt/google/dataflow/slf4j/jcl_over_slf4j.jar:/opt/google/dataflow/slf4j/log4j_over_slf4j.jar:/opt/google/dataflow/slf4j/log4j_to_slf4j.jar:/var/opt/google/dataflow/app-6BkavP-0nx4wHMC__85sdbCjJQa7QcQcOxGSQL5huMU.jar
...
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness
After playing around with different scopes for the Beam dependencies, because I suspected a clash with the google-dataflow.jar, I haven't seen any change. I'm a bit clueless about where to continue looking. I'm using Beam library version 2.27.0, and these are the Beam dependencies referenced in my pom.xml:
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
    <version>${beam.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-runners-direct-java</artifactId>
    <version>${beam.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
    <version>${beam.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-extensions-google-cloud-platform-core</artifactId>
    <version>${beam.version}</version>
</dependency>
Any advice is much appreciated.
The class org/apache/beam/sdk/options/PipelineOptions is found in the core Java SDK. The artifact is beam-sdks-java-core. It is not baked into the Dataflow worker, but is part of the expected staged files.
The DataflowRunner will by default attempt to stage every file that it finds on the classpath. If anything about your environment or application affects its ability to do this, you will need to add the SDK dependency yourself.
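In Maven terms, declaring the core SDK explicitly would look like this (reusing the same ${beam.version} property as the other Beam dependencies):
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-core</artifactId>
    <version>${beam.version}</version>
</dependency>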

How does shareSecurityContext work in Spring Cloud with Hystrix?

I'm learning how Spring Cloud works, using one of the most popular technical stacks for it: Eureka, Zuul, Hystrix, Ribbon, Feign. Apart from the registry, config server and gateway, my services have the following dependencies, with Spring Cloud version 2.2.1.RELEASE:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
    <version>${spring-cloud.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    <version>${spring-cloud.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
    <version>${spring-cloud.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
    <version>${spring-cloud.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
    <version>${spring-cloud.version}</version>
</dependency>
I do authorization with JWT on the gateway and want to use the same Authorization object on the other services. The obvious way to do this is to transfer my JWT in a header, but I've read in the docs that Hystrix can propagate the whole security context with just one property: hystrix.shareSecurityContext=true. I've tried to do it with the Feign client and Zuul, but the SecurityContext on the requested service contains just an anonymousUser.
I spent two days trying to understand how it works, but I didn't. In the Feign logs I don't see any headers with something like a Principal.
So here is my question: is it possible to transfer the security context with Zuul and Feign if the second service runs in another Docker container or on another server? If not, what is the best practice for transferring data about the authorized user?
Thanks!
It has been 8 months since you posted the question, but I will answer it anyway.
As you know, services are distributed in nature, so they may not share a JVM, or they may not even be written in Java at all. The purpose of a JWT is to secure such distributed services, so whatever security-related communication happens between them goes through the Authorization header only. In the Authorization header one service passes the JWT (bearer token) to the other service, and that service validates the token, reads information from it, and so on.
The hystrix.shareSecurityContext property has another purpose, however. In Spring, the security context is by default not passed on to the Hystrix thread. Setting this property to true makes it available to Hystrix by changing Hystrix's concurrency strategy. So it passes the security context to Hystrix's thread, which is part of the same service, not of another service.
Hope this solves your query.
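To actually forward the JWT between services, a common approach is a Feign RequestInterceptor that copies the token onto outgoing requests. A minimal sketch, assuming the token is available as the credentials of the current Authentication (the class and bean names are illustrative):
import feign.RequestInterceptor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

// Illustrative sketch: propagate the caller's bearer token on every outgoing
// Feign request so the downstream service can build its own security context.
@Configuration
public class FeignAuthForwardingConfig {

    @Bean
    public RequestInterceptor bearerTokenForwarder() {
        return template -> {
            Authentication auth = SecurityContextHolder.getContext().getAuthentication();
            if (auth != null && auth.getCredentials() instanceof String) {
                template.header("Authorization", "Bearer " + auth.getCredentials());
            }
        };
    }
}
Note that when the Feign call is wrapped in Hystrix, this interceptor runs on the Hystrix thread, which is exactly where hystrix.shareSecurityContext=true makes the security context visible.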

Camel AMQP - AMQConnectionFactory ClassNotFound

I'm using Camel 2.13.3 and trying to establish a connection via AMQP to a remote ActiveMQ instance.
According to the Camel AMQP docs it should be sufficient to add the following dependency:
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-amqp</artifactId>
    <version>2.13.1</version>
</dependency>
It then indicates that you should configure the JMS component to use a connection factory supplied by the Qpid project. The docs page uses org.apache.qpid.amqp_1_0.jms.impl.ConnectionFactoryImpl, and the results of other Google searches indicate that org.apache.qpid.client.AMQConnectionFactory could be used.
However, the org.apache.qpid dependencies do not appear to have been added to the project and, unsurprisingly, I get a ClassNotFoundException when I run it.
I considered downloading the Qpid dependency separately, but their website seems to indicate that the Qpid client project has been deprecated and replaced by something else (the Qpid Messaging API, if I remember correctly).
Can anyone point me in the right direction?
"should be sufficient"
The Camel docs you linked to do not state that. They just say this dependency is needed; they don't say anything about additional dependencies. I just looked inside the jar you're using, and it does not contain the qpid-client classes. You should add that dependency to your pom as well. For AMQP 0.x, there is a good chance you'll need the JMS spec dependency too:
<dependency>
    <groupId>org.apache.qpid</groupId>
    <artifactId>qpid-client</artifactId>
    <version>0.32</version> <!-- replace with appropriate version -->
</dependency>
<dependency>
    <groupId>org.apache.geronimo.specs</groupId>
    <artifactId>geronimo-jms_1.1_spec</artifactId>
    <version>1.0</version>
</dependency>
If you're using AMQP 1.0:
<dependency>
    <groupId>org.apache.qpid</groupId>
    <artifactId>qpid-jms-client</artifactId>
    <version>0.3.0</version>
</dependency>
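To then wire the connection factory into Camel, here is a minimal sketch for the AMQP 1.0 case (the host, port and queue name are illustrative assumptions, and it assumes the qpid-jms-client dependency above):
import org.apache.camel.CamelContext;
import org.apache.camel.component.amqp.AMQPComponent;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.qpid.jms.JmsConnectionFactory;

// Illustrative sketch: plug the Qpid JMS (AMQP 1.0) connection factory
// into Camel's AMQP component so routes can reach the remote broker.
public class AmqpSetup {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        JmsConnectionFactory connectionFactory =
                new JmsConnectionFactory("amqp://remote-activemq-host:5672");

        AMQPComponent amqp = new AMQPComponent();
        amqp.setConnectionFactory(connectionFactory);
        context.addComponent("amqp", amqp);

        context.start();
        // Routes can now use endpoints such as amqp:queue:my.queue
    }
}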
