Can anyone give me an idea of how data can be pulled from a Redis cache into Excel?
Can we pull the data using SoapUI, or do we need to write Java code?
Thanks
Ekta
I'm working with Kafka for the first time. I've set up Confluent Cloud locally, created topics, and created some JDBC connectors to populate the topics with data. Now I want to trigger a connector from my Java application. The gist is that we have a large data feed that we want to run only once a day, triggered by an existing Java application. I already have the topic, the database tables, and a JDBC connector with a custom query. This all works fine and produces the data I want, and I can see it coming in via the CLI, but now I need to trigger the pull from Java. Is this scenario possible with Kafka?
The JDBC Kafka source connector is meant to be run continuously, but if you want to "trigger" it, you would use an HTTP client to POST the connector configuration, with mode=bulk (or mode=incrementing / mode=timestamp to get only the data you added) and a large poll.interval.ms if using bulk, to prevent the table from being read multiple times. You would add your query to that configuration as well.
You would then somehow need to know when the connector's tasks have finished reading the data, at which point you would issue an HTTP DELETE to stop sourcing the database.
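To make that concrete, here is a minimal sketch using Java 11's built-in HTTP client against the Kafka Connect REST API. The connector name, connection URL, topic prefix, and query below are placeholder assumptions; adjust them to your setup.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectorTrigger {

    // Assumed Kafka Connect REST endpoint; adjust host/port for your cluster.
    private static final String CONNECT_URL = "http://localhost:8083/connectors";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Connector config: mode=bulk plus a very large poll.interval.ms so the
        // query effectively runs once. All names and URLs here are placeholders.
        String body = "{"
                + "\"name\": \"daily-feed-connector\","
                + "\"config\": {"
                + "  \"connector.class\": \"io.confluent.connect.jdbc.JdbcSourceConnector\","
                + "  \"connection.url\": \"jdbc:postgresql://localhost:5432/mydb\","
                + "  \"mode\": \"bulk\","
                + "  \"query\": \"SELECT * FROM daily_feed\","
                + "  \"topic.prefix\": \"daily-feed\","
                + "  \"poll.interval.ms\": \"86400000\""
                + "}}";

        // Create (and thereby start) the connector.
        HttpRequest create = HttpRequest.newBuilder()
                .uri(URI.create(CONNECT_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(client.send(create, HttpResponse.BodyHandlers.ofString()).body());

        // Once you know the tasks have finished reading (e.g. by checking the
        // connector status endpoint or consumer lag), delete the connector:
        HttpRequest delete = HttpRequest.newBuilder()
                .uri(URI.create(CONNECT_URL + "/daily-feed-connector"))
                .DELETE()
                .build();
        client.send(delete, HttpResponse.BodyHandlers.ofString());
    }
}
```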
Or, rather than deleting the connector, you can set the poll interval to a day, leave it alone, and just have your database client insert the data as needed. You will still want to monitor whether the connector actually succeeds each day.
If I understand your question correctly, you are trying to do the following: on a trigger, you want to pull data from a database using JDBC and push that data to Kafka. If this is your requirement, the following is one solution I can propose.

A Kafka producer is something you can easily create on your own. In fact, the same Java code that pulls data from the database can also act as a Kafka producer.

You are connecting to the database and pulling data using a library. Similarly, there are libraries that let you connect to Kafka directly from Java code. You can use such a library alongside your JDBC code to push data to Kafka topics, and once your Java code can push data to a Kafka topic, that in itself makes it a Kafka producer.
The following is one example you can refer to for creating a Kafka producer: https://dzone.com/articles/kafka-producer-and-consumer-example
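As a minimal sketch with the official Kafka client library (the broker address, topic name, and record contents are placeholders), the producer side looks like this:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DatabaseToKafkaProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // In your case, the value would be a row your JDBC code pulled from the database.
            producer.send(new ProducerRecord<>("my-topic", "row-1", "value pulled from database"));
        }
    }
}
```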
My requirement is to get Polarion data and store it in our SQL Server database.
I went through the Polarion SDK documentation, and I feel the web service is the way to do that...
What is the best way to read specific data from Polarion and store it in SQL Server?
The web service is very slow, and depending on your data volume it will not be practical to export your data with it.

However, the data in Polarion is stored in an SVN repository in the form of small .xml files, so you can read these XML files directly from the repository.

As Polarion data is not stored in a database-compatible format, you need to set up your own DB schema; the transformation from the XML files should be straightforward.

You can either check out a complete Polarion project, or retrieve the files on demand via http(s); the second approach will be slightly slower.
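A hedged sketch of the on-demand approach is below. The repository URL and file path are hypothetical; check your own Polarion SVN layout before relying on them, and note that you would still add JDBC code to insert the parsed fields into SQL Server using your own schema.

```java
import java.io.InputStream;
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class PolarionWorkItemReader {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL of one work item's .xml file in the SVN repository;
        // the actual path depends on your Polarion installation and project.
        String url = "https://polarion.example.com/repo/MyProject/.polarion/tracker/workitems/MYPROJ-123/workitem.xml";

        try (InputStream in = new URL(url).openStream()) {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(in);
            // From here, pull out the elements you need and INSERT them
            // into SQL Server via JDBC with your own DB schema.
            System.out.println(doc.getDocumentElement().getNodeName());
        }
    }
}
```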
I'm working on an application that generates a dataset, and I would like to save the results into an InfluxDB database, but I didn't find anything about it. Does anyone have an idea of where I can start?
Thank you :)
There are many ways to push data from your application to InfluxDB.
You can use an InfluxDB client library to integrate with InfluxDB from your application. https://github.com/influxdata/influxdb-java is one such library for Java.
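With that library, a minimal sketch looks like this (the connection details, database, measurement, and field names are placeholders for your own dataset):

```java
import java.util.concurrent.TimeUnit;
import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;

public class DatasetWriter {
    public static void main(String[] args) {
        // Placeholder URL and credentials; point these at your InfluxDB instance.
        InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "user", "password");
        influxDB.setDatabase("mydb");

        // One data point: a measurement with a tag, a field, and a timestamp.
        Point point = Point.measurement("dataset_stats")
                .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
                .tag("app", "generator")
                .addField("rows", 1024L)
                .build();

        influxDB.write(point);
        influxDB.close();
    }
}
```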
Alternatively, you can write the data to a log file and use the Telegraf agent to push the data to InfluxDB.
You can write the data to the log file in JSON, CSV, or the InfluxDB line protocol. Please see the docs for the input data formats supported by Telegraf.
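For the line-protocol route, a minimal sketch is below; the file path, measurement, tag, and field names are assumptions, and you would configure a Telegraf file/tail input to pick the file up.

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class LineProtocolLogger {
    public static void main(String[] args) throws IOException {
        // InfluxDB line protocol: measurement,tag_key=tag_value field_key=value timestamp(ns)
        long nanos = System.currentTimeMillis() * 1_000_000L;
        try (PrintWriter out = new PrintWriter(new FileWriter("metrics.log", true))) {
            // "i" suffix marks an integer field in line protocol.
            out.printf("dataset_stats,app=generator rows=1024i %d%n", nanos);
        }
    }
}
```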
Hope this answer helps
We are using the JHipster generator for our new project. To store data we selected Postgres as well as Elasticsearch; all search operations will be performed using Elasticsearch.

When we start the application, it uses Liquibase to load CSV files and dump the data into tables. We added a number of CSV files and made some changes to the Liquibase configuration files as well, but the problem we found is that the CSV data is only being dumped into Postgres; we cannot find the data in Elasticsearch.

I did some research and found this, but I am still struggling with the implementation; any advice would be really helpful.
The JHipster Elasticsearch integration indexes on every change made through the REST resources. See here. This means that the data you insert via Liquibase does not get indexed. You can use the generator-jhipster-elasticsearch-reindexer to reindex data that is already in the db.
I have created a Java SOAP web service that inserts data into MySQL using JDBC. I want to modify it so that it inserts data into an Excel table instead of MySQL. Can someone please give an in-depth description of how I can do that? I do not know how to use Excel, so I would appreciate a clear explanation.
An Excel table isn't a data structure; Excel is software.

You could build a file in a format that Excel understands, such as XML or CSV, and then open the file in MS Excel.
https://docs.oracle.com/javase/tutorial/jaxp/xslt/writingDom.html
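As a minimal sketch of the CSV route (the file name and columns are placeholders for whatever your service inserts):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class CsvExport {
    public static void main(String[] args) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter("export.csv"))) {
            out.println("id,name,amount");   // header row becomes the column titles in Excel
            out.println("1,Alice,10.50");
            out.println("2,Bob,7.25");
        }
    }
}
```

If you need a native .xlsx file rather than CSV, a library such as Apache POI can write real Excel workbooks from Java.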