I have a requirement where I want to trigger an event based on some action, and this functionality is to be implemented as a jar file.
Let me explain this with an example.
There is a web application, WAR_FILE.
There is a REST client named REST_CLIENT.
There is a jar file, named MY_JAR, that provides the API methods for the client REST_CLIENT.
Now WAR_FILE will use MY_JAR to post data to REST_CLIENT.
But WAR_FILE does not want to wait for the response; it just posts the data and does not care about the response.
MY_JAR will take all inputs from WAR_FILE and store them in a queue used as a cache. I am using Redis to maintain this queue (roughly as in the sketch below).
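A minimal sketch of what MY_JAR's enqueue side could look like with Jedis, assuming a local Redis instance and an illustrative list key named pending-requests (both are assumptions, not part of the original setup):

import redis.clients.jedis.Jedis;

public class RequestQueue {
    // hypothetical list key used as the queue; any name works
    private static final String QUEUE_KEY = "pending-requests";

    public void enqueue(String requestPayload) {
        // try-with-resources so the connection is closed after the push
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // LPUSH adds the request at the head of the list; a worker
            // can later consume from the tail with RPOP/BRPOP
            jedis.lpush(QUEUE_KEY, requestPayload);
        }
    }
}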
The main problem is that MY_JAR has to keep checking whether there is any new request in that queue to act upon.
Solution 1: use Quartz in MY_JAR to check every n seconds whether there is any new request to act upon.
Problem 1: WAR_FILE may itself already be using Quartz.
Problem 2: one thread may be executing a list of tasks from the queue while another thread comes along and starts executing the same requests.
Solution 2: use a cron job.
Problem: same as problem 2 of solution 1.
Solution 3: RabbitMQ / ActiveMQ (I have only just heard of these).
Problem: I do not know how to use them or how they could help me.
Please help me.
I found various solutions to this problem. Actually this is JMS-style messaging (previously I was not aware of this technology).
(1) Using Redis pub/sub event publication
http://redis.io/topics/pubsub
Plain Java example: http://www.basrikahveci.com/a-simple-jedis-publish-subscribe-example/
Spring example: http://java.dzone.com/articles/redis-pubsub-using-spring
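For illustration, a minimal Jedis pub/sub sketch along the lines of the example linked above, assuming a local Redis instance and a hypothetical channel name "requests" (not from the original post):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class RequestSubscriber {
    public static void main(String[] args) {
        Jedis subscriberConnection = new Jedis("localhost", 6379);

        // subscribe() blocks and invokes onMessage for every published message,
        // so MY_JAR no longer needs to poll the queue on a timer
        subscriberConnection.subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String channel, String message) {
                System.out.println("Received on " + channel + ": " + message);
                // act on the request here
            }
        }, "requests");
    }
}

// Publisher side (e.g. inside MY_JAR when WAR_FILE posts data):
//     try (Jedis jedis = new Jedis("localhost", 6379)) {
//         jedis.publish("requests", payload);
//     }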
(2) Using RabbitMQ
RabbitMQ installation: https://www.rabbitmq.com/install-debian.html
Java example: http://www.rabbitmq.com/tutorials/tutorial-one-java.html
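And a minimal RabbitMQ publisher sketch in the spirit of the Java tutorial linked above, assuming a local broker and a hypothetical queue name "task_queue":

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class TaskPublisher {
    private static final String QUEUE_NAME = "task_queue"; // assumed name

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // declare the queue (idempotent) and publish one message
            channel.queueDeclare(QUEUE_NAME, false, false, false, null);
            channel.basicPublish("", QUEUE_NAME, null,
                    "request payload".getBytes("UTF-8"));
        }
    }
}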
Related
I have two scheduled component classes, each uploading files.
I created an email-sending method for each of them, so that a reminder email is sent to me if any upload exception happens.
The flow is like this:
Scheduler One --- if exception during upload ---> send an email after the exception
Scheduler Two --- if exception during upload ---> send an email after the exception
Now I want to change it to:
Scheduler One + Scheduler Two --- if exception ---> send one email after both schedulers have run
How can I do that?
Your use case sounds really odd. Schedulers run independently, so if you want to share information (that an exception was thrown) between the two of them, you have to store that information somewhere: an entry in a database, or a global variable held during runtime.
I would, however, suggest that you merge both of your schedulers into one. If they are not independent, why divide the code? It saves you from building these hacks where the schedulers need to be connected.
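For illustration, a rough sketch of the merged-scheduler idea using Spring's @Scheduled; the upload and mail methods and the cron expression are placeholders for your own code:

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class CombinedUploadScheduler {

    // one schedule drives both uploads, so a single place decides
    // whether a reminder mail is needed
    @Scheduled(cron = "0 0 * * * *") // placeholder schedule
    public void uploadBoth() {
        boolean anyFailure = false;

        try {
            uploadFirst();          // previously scheduler one
        } catch (Exception e) {
            anyFailure = true;
        }
        try {
            uploadSecond();         // previously scheduler two
        } catch (Exception e) {
            anyFailure = true;
        }

        if (anyFailure) {
            sendReminderMail();     // one mail, after both uploads have run
        }
    }

    private void uploadFirst() { /* your existing upload logic */ }
    private void uploadSecond() { /* your existing upload logic */ }
    private void sendReminderMail() { /* your existing mail logic */ }
}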
My first approach was to use the com.microsoft.azure:azure-eventhubs:3.2.0 dependency, but I had the following problem: Client hangs when calling Azure Event Hub and facing connection error.
So my second approach could be:
either use another dependency (I need to use OAuth2 too) -> unfortunately I haven't found any,
or use no dependency and code the message sending myself -> that might be a big task.
Could you please recommend a library that supports OAuth2?
Or sample code that sends messages to Event Hub over AMQP with OAuth2?
Or a 3rd approach ...?
Thanks,
V.
Regarding your hang issue, I posted a new answer in the original thread Client hangs when calling Azure Event Hub and facing connection error. That should fix your issue.
Now, coming back to this question: primarily you should follow the latest tutorial (it uses the Event Hubs v5 SDK with the producer/consumer pattern).
import com.azure.messaging.eventhubs.*;

public class Sender {
    public static void main(String[] args) {
        final String connectionString = "EVENT HUBS NAMESPACE CONNECTION STRING";
        final String eventHubName = "EVENT HUB NAME";

        // create a producer using the namespace connection string and event hub name
        EventHubProducerClient producer = new EventHubClientBuilder()
                .connectionString(connectionString, eventHubName)
                .buildProducerClient();

        // prepare a batch of events to send to the event hub
        EventDataBatch batch = producer.createBatch();
        batch.tryAdd(new EventData("First event"));
        batch.tryAdd(new EventData("Second event"));
        batch.tryAdd(new EventData("Third event"));
        batch.tryAdd(new EventData("Fourth event"));
        batch.tryAdd(new EventData("Fifth event"));

        // send the batch of events to the event hub
        producer.send(batch);

        // close the producer
        producer.close();
    }
}
Regarding the question on OAuth2, it is very much possible to use the AAD authentication approach instead of a Shared Access Signature (the default connection-string approach). You can follow this tutorial for the Managed Identity scenario, or this tutorial for custom AAD application-based auth.
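As an illustration of the AAD/token-based route, here is a minimal producer sketch using DefaultAzureCredential from the azure-identity library (my choice of credential here; the namespace and hub names are placeholders, not values from the question):

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventDataBatch;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

public class AadSender {
    public static void main(String[] args) {
        // DefaultAzureCredential picks up managed identity, environment
        // variables, Azure CLI login, etc., and obtains AAD tokens for you
        EventHubProducerClient producer = new EventHubClientBuilder()
                .credential("<NAMESPACE>.servicebus.windows.net",   // placeholder
                            "<EVENT HUB NAME>",                     // placeholder
                            new DefaultAzureCredentialBuilder().build())
                .buildProducerClient();

        EventDataBatch batch = producer.createBatch();
        batch.tryAdd(new EventData("First event"));
        producer.send(batch);
        producer.close();
    }
}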
Regarding the recommendation of an OAuth2 library, the first-party choice in the Azure world would be MSAL, for seamless integration and support.
Regarding the question of writing the whole thing as custom code: while it's technically possible, it is honestly impractical to invest in such a mammoth task, for the following reasons:
Why reinvent the wheel?
It increases your maintenance burden with a large amount of custom code that only does technical wiring and adds hardly any business value.
You would be pretty much on your own to figure out any issues or future compatibility.
Adding all the low-level AMQP and OAuth handling to your code puts an unnecessary burden on your team to master those details.
...and most importantly, it would cost money and time :)
Using standard libraries not only saves you a lot of effort and money, but also ensures reusability and support from both the creator and the community.
I need to implement an ETL-like function to migrate MySQL data to another system via HTTP calls. The process needs the data to be close to real time.
I tried to combine spring-cloud-starter-stream-source-jdbc and spring-cloud-starter-stream-processor-httpclient, but I got a 'no main class' error from spring-cloud-starter-stream-source-jdbc.
jdbc --spring.datasource.driver-class-name=org.mariadb.jdbc.Driver --spring.datasource.username='******' --spring.dataso… | http …
At first, I rebuilt it with reference to https://github.com/spring-cloud/spring-cloud-stream-samples/tree/master/source-samples/jdbc-source: I added a main class and the Rabbit binder, and the stream then ran as scheduled. I didn't think it was supposed to be this complicated, so I switched to jdbc-source-rabbit, which is exactly what I expected.
Suppose you have Spark with the Standalone cluster manager. You open a Spark session with some configs and want to launch SomeSparkJob 40 times in parallel with different arguments.
Questions
How do I set the number of retries on job failure?
How do I restart jobs programmatically on failure? This could be useful if jobs fail due to a lack of resources; then I could launch, one by one, all the jobs that require extra resources.
How do I restart the Spark application on job failure? This could be useful if a job still lacks resources; then, to change cores, CPU, etc. configs, I need to relaunch the application in the Standalone cluster manager.
My workarounds
1) I'm pretty sure the first point is possible, since it's possible in Spark local mode. I just don't know how to do it in standalone mode.
2-3) It's possible to attach a listener to the Spark context, like spark.sparkContext().addSparkListener(new SparkListener() { ... }), but it seems SparkListener lacks failure callbacks.
There is also a bunch of methods with very poor documentation. I've never used them, but perhaps they could help to solve my problem:
spark.sparkContext().dagScheduler().runJob();
spark.sparkContext().runJob()
spark.sparkContext().submitJob()
spark.sparkContext().taskScheduler().submitTasks();
spark.sparkContext().dagScheduler().handleJobCancellation();
spark.sparkContext().statusTracker()
You can use SparkLauncher and control the flow.
import org.apache.spark.launcher.SparkLauncher;

public class MyLauncher {
    public static void main(String[] args) throws Exception {
        Process spark = new SparkLauncher()
                .setAppResource("/my/app.jar")
                .setMainClass("my.spark.app.Main")
                .setMaster("local")
                .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
                .launch();
        spark.waitFor();
    }
}
See API for more details.
Since it creates a process, you can check the Process status and retry, e.g. using the following:
public boolean isAlive()
If the Process is not alive, start it again; see the API for more details.
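For example, a rough retry sketch around SparkLauncher, assuming the same app jar and main class as above, a placeholder standalone master URL, and an arbitrary retry limit:

import org.apache.spark.launcher.SparkLauncher;

public class RetryingLauncher {
    public static void main(String[] args) throws Exception {
        final int maxAttempts = 3; // arbitrary retry limit

        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            Process spark = new SparkLauncher()
                    .setAppResource("/my/app.jar")
                    .setMainClass("my.spark.app.Main")
                    .setMaster("spark://master:7077") // placeholder standalone master URL
                    .launch();

            // waitFor() blocks until the launched process exits and
            // returns its exit code; 0 means the job succeeded
            int exitCode = spark.waitFor();
            if (exitCode == 0) {
                break; // success, stop retrying
            }
            System.err.println("Attempt " + attempt + " failed with exit code " + exitCode);
        }
    }
}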
Hopefully this gives a high-level idea of how to achieve what you mentioned in your question. There could be more ways to do the same thing, but I thought I'd share this approach.
Cheers!
Check your spark.sql.broadcastTimeout and spark.broadcast.blockSize properties and try increasing them.
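For example, a minimal sketch of raising those values when building the session (the concrete numbers are just placeholders to experiment with):

import org.apache.spark.sql.SparkSession;

public class BroadcastConfigExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("broadcast-config-example")
                // timeout for broadcast joins, in seconds (default is 300)
                .config("spark.sql.broadcastTimeout", "600")
                // block size used when transferring broadcast variables
                .config("spark.broadcast.blockSize", "8m")
                .getOrCreate();

        // ... run your queries with the increased limits ...
        spark.stop();
    }
}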
Yesterday, I created a topic and a queue on my OpenJMS server graphically (using admin.sh). I was able to start it with openjms/bin/admin.sh and then clicking through the menus ("Start OpenJMS server", "Start connections", etc.), and even by executing only openjms/bin/startup.sh (instead of admin.sh).
Today, I deleted the topic and the queue (graphically, by right-clicking "Delete" on each Topic and Queue node).
And now, when I run openjms/bin/startup.sh, it displays this exception: http://pastebin.com/PY2wpBkv
Do you know why, and how to solve this problem?
NB: the graphical tool (i.e. admin.sh) still works fine.
In fact, one must use the classes from the org.exolab.jms.administration package (and not javax.jms.Session's createQueue/createTopic methods). Read this page: http://openjms.sourceforge.net/usersguide/admin.html
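Roughly following that users guide, creating a destination programmatically looks something like the sketch below; the server URL and destination name are placeholders, and the exact method names should be checked against the linked page:

import org.exolab.jms.administration.AdminConnectionFactory;
import org.exolab.jms.administration.JmsAdminServerIfc;

public class CreateQueueExample {
    public static void main(String[] args) throws Exception {
        // placeholder URL of the running OpenJMS server
        String url = "tcp://localhost:3035/";
        JmsAdminServerIfc admin = AdminConnectionFactory.create(url);

        // second argument: Boolean.TRUE for a queue, Boolean.FALSE for a topic
        if (!admin.addDestination("myQueue", Boolean.TRUE)) {
            System.err.println("Failed to create queue: myQueue");
        }
        admin.close();
    }
}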