My requirement is to create multiple threads, execute a query on each, and combine the results into a final Map<String, List<Object>>.
The map key is the table name, and the value is that query's output, i.e. the list of the table's records.
The requirement:
I have one table that contains fields like TableName and Query.
E.g.
employ | select * from employ; (this query returns more than 100,000 records)
employ_detail | select * from employ_detail; (more than 300,000 records)
employ_salary | select * from employ_salary; (more than 600,000 records)
The table above may contain some 10,000 queries.
I want to create one API for the above using Spring Boot + Hibernate.
My problem:
I want to build a multithreaded solution for this using Java 8.
@RestController
public class ApiQueries {

    @RequestMapping(value = "/getAllQueries", method = RequestMethod.GET)
    public CommonDTO getAllQuery() {
        List<ApiQueries> list = apiQueryService.findAll();
        Map<String, List<Object>> objectMap = null;
        if (null != list) {
            // apiQueryService has the executeQueryData() method shown below
            objectMap = apiQueryService.executeQueryData(list);
        }
        // ... build and return the CommonDTO from objectMap
    }
}
I wrote the following logic in that method:
@Override
public Map<String, List<Object>> executeQueryData(List<ApiQueries> apiQuerylist, String fromDate, String toDate) {
    addExecutor = new ThreadPoolExecutor(3, 5, 10, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    Map<String, List<Object>> returnMap = new HashMap<String, List<Object>>();
    try {
        if (session == null) {
            session = sessionFactory.openSession();
        }
        apiQuerylist.forEach(list -> addExecutor.execute(new Runnable() {
            @Override
            public void run() {
                apiQueryObject = session.createSQLQuery(list.getQuery()).list();
                returnMap.put(list.getTableName(), apiQueryObject);
            }
        }));
    } catch (Exception ex) {
        System.out.println("Inside [B] Exception " + ex);
        ex.printStackTrace();
    } finally {
        if (session != null) {
            session.close();
        }
    }
    return returnMap;
}
The issue is that when I call the API, the code below runs in the background and the method returns a null (empty) map; in the background, though, I can see the queries executing one by one.
apiQuerylist.forEach(list -> addExecutor.execute(new Runnable() {
    @Override
    public void run() {
        apiQueryObject = session.createSQLQuery(list.getQuery()).list();
        returnMap.put(list.getTableName(), apiQueryObject);
    }
}));
You need to wait for the thread pool to complete. Something like the following, placed after the apiQuerylist.forEach call, should work:
addExecutor.shutdown();
// wait for the executor's workers to finish their jobs
while (!addExecutor.awaitTermination(50, TimeUnit.MILLISECONDS));
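Putting it together, the whole method could look roughly like the sketch below. Beyond the original code it makes two assumptions worth flagging: the result map becomes a ConcurrentHashMap because several worker threads write to it concurrently, and each task opens its own Session because a Hibernate Session is not thread-safe.

@Override
public Map<String, List<Object>> executeQueryData(List<ApiQueries> apiQuerylist, String fromDate, String toDate) {
    ExecutorService addExecutor = new ThreadPoolExecutor(3, 5, 10, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    // Thread-safe map: every worker thread put()s its result here.
    Map<String, List<Object>> returnMap = new ConcurrentHashMap<>();
    apiQuerylist.forEach(item -> addExecutor.execute(() -> {
        // One Session per task: Hibernate Sessions must not be shared across threads.
        Session taskSession = sessionFactory.openSession();
        try {
            List<Object> rows = taskSession.createSQLQuery(item.getQuery()).list();
            returnMap.put(item.getTableName(), rows);
        } finally {
            taskSession.close();
        }
    }));
    addExecutor.shutdown();
    try {
        // Block until every submitted query has finished before returning the map.
        while (!addExecutor.awaitTermination(50, TimeUnit.MILLISECONDS));
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    return returnMap;
}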
I am developing a program that, based on a configuration file, allows different types of databases (e.g., YAML, MySQL, SQLite, with others to be added in the future) to be used to store data.
Currently everything runs on the main thread, but I would like to start delegating work to secondary threads so as not to block the execution of the program.
For the supported databases that use a connection, I use HikariCP so that the process is not slowed down too much by opening a new connection every time.
The main problem is the multitude of available databases. For some of them it might be sufficient to store the query string in a queue and have an executor check it every X seconds, executing all pending queries if the queue is not empty. For others this is not enough, because they require different operations (e.g., YAML files, which use a key-value system backed by a map).
What I can't come up with is something "universal" that doesn't cause problems with the ordering of queries (I cannot just spawn a thread per operation, because a fetch thread might then execute before an earlier insertion thread and the data would not be up to date) and that can return data on completion (in the case of the get functions).
I currently have an abstract Database class that contains all the get() and set(...) methods for the various data to be stored. Some methods need to be executed synchronously (they must block), while others can and should be executed asynchronously.
Example:
public abstract class Database {
    public abstract boolean hasPlayedBefore(@Nonnull final UUID uuid);
}

public final class YAMLDatabase extends Database {

    @Override
    public boolean hasPlayedBefore(@Nonnull final UUID uuid) { return getFile(uuid).exists(); }
}
public final class MySQLDatabase extends Database {

    @Override
    public boolean hasPlayedBefore(@Nonnull final UUID uuid) {
        try (
            final Connection conn = getConnection(); // get a connection from the pool
            final PreparedStatement statement = conn.prepareStatement("SELECT * FROM " + TABLE_NAME + " WHERE UUID = '" + uuid + "';");
            final ResultSet result = statement.executeQuery()
        ) {
            return result.isBeforeFirst();
        } catch (final SQLException e) {
            // Notify the error
            Util.sendMessage("Database error: " + e.getMessage() + ".");
            writeLog(e, uuid, "attempt to check whether the user is new or has played before");
        }
        return true;
    }
}
// Simple example class that uses the database
public final class Usage {

    private final Database db;

    public Usage(@Nonnull final Database db) { this.db = db; }

    public User getUser(@Nonnull final UUID uuid) {
        if (db.hasPlayedBefore(uuid))
            return db.getUser(uuid); // sync query
        else {
            // Set the default starting balance
            final User user = new User(uuid, startingBalance);
            db.setBalance(uuid, startingBalance); // example of a sync query that I would like to be async
            return user;
        }
    }
}
Any advice? I am already somewhat familiar with Future, CompletableFuture, and callbacks.
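One pattern that fits both constraints described above is to funnel every database operation through a single-threaded executor: operations then run strictly in submission order (so a read can never overtake an earlier write), while CompletableFuture still gives callers a handle on the result. Below is a minimal sketch against the Database class from the question; the AsyncDatabase wrapper name and the double balance type are illustrative assumptions, not part of the original code.

import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class AsyncDatabase {

    private final Database db;
    // A single thread executes all submitted operations in submission order,
    // so a get can never run before an insert that was queued earlier.
    private final ExecutorService dbExecutor = Executors.newSingleThreadExecutor();

    public AsyncDatabase(final Database db) { this.db = db; }

    // Asynchronous write: fire-and-forget, but still ordered after earlier submissions.
    public CompletableFuture<Void> setBalanceAsync(final UUID uuid, final double balance) {
        return CompletableFuture.runAsync(() -> db.setBalance(uuid, balance), dbExecutor);
    }

    // Asynchronous read: completes once all earlier operations have finished.
    public CompletableFuture<Boolean> hasPlayedBeforeAsync(final UUID uuid) {
        return CompletableFuture.supplyAsync(() -> db.hasPlayedBefore(uuid), dbExecutor);
    }

    // Blocking read, for the methods that must stay synchronous.
    public boolean hasPlayedBefore(final UUID uuid) {
        return hasPlayedBeforeAsync(uuid).join();
    }
}

Because the ordering guarantee lives in the executor rather than in any one Database implementation, the same wrapper works unchanged for YAML, MySQL, SQLite, or any future backend.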
My Flink program should do a Cassandra lookup for each input record and, based on the results, do some further processing.
But I'm currently stuck at reading data from Cassandra. This is the code snippet I've come up with so far.
ClusterBuilder secureCassandraSinkClusterBuilder = new ClusterBuilder() {
    @Override
    protected Cluster buildCluster(Cluster.Builder builder) {
        return builder.addContactPoints(props.getCassandraClusterUrlAll().split(","))
                .withPort(props.getCassandraPort())
                .withAuthProvider(new DseGSSAPIAuthProvider("HTTP"))
                .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM))
                .build();
    }
};
for (int i = 1; i < 5; i++) {
    CassandraInputFormat<Tuple2<String, String>> cassandraInputFormat =
            new CassandraInputFormat<>("select * from test where id=hello" + i, secureCassandraSinkClusterBuilder);
    cassandraInputFormat.configure(null);
    cassandraInputFormat.open(null);
    Tuple2<String, String> out = new Tuple2<>();
    cassandraInputFormat.nextRecord(out);
    System.out.println(out);
}
The issue with this is that it takes nearly 10 seconds for each lookup; in other words, this for loop takes about 50 seconds to execute.
How do I speed up this operation? Alternatively, is there any other way of looking up Cassandra from Flink?
I came up with a solution that is fairly fast at querying Cassandra with streaming data. It may be of use to someone with the same issue.
Firstly, Cassandra can be queried with as little code as:
Session session = secureCassandraSinkClusterBuilder.getCluster().connect();
ResultSet resultSet = session.execute("SELECT * FROM TABLE");
But the problem with this is that creating a Session is a very time-expensive operation and something that should be done once per keyspace. You create the Session once and reuse it for all read queries.
Now, since Session is not Java-serializable, it cannot be passed as an argument to Flink operators like Map or ProcessFunction. There are a few ways of solving this: you can use a RichFunction and initialize the session in its open() method, or use a singleton. I will use the second solution.
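For reference, the first option would look roughly like the sketch below: keep the serializable ClusterBuilder as a field and build the non-serializable Session in open(), which runs once per parallel task instance on the worker. The class name is illustrative; the solution actually used follows after it.

public class CassandraLookupFunction extends RichMapFunction<String, ResultSet> {

    private final ClusterBuilder clusterBuilder; // Serializable, so it can be a plain field
    private transient Session session;           // not Serializable, so it is built in open()

    public CassandraLookupFunction(ClusterBuilder clusterBuilder) {
        this.clusterBuilder = clusterBuilder;
    }

    @Override
    public void open(Configuration parameters) {
        // Runs on the worker after deserialization, once per task instance.
        session = clusterBuilder.getCluster().connect();
    }

    @Override
    public ResultSet map(String query) {
        return session.execute(query);
    }

    @Override
    public void close() {
        if (session != null) {
            session.close();
        }
    }
}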
Make a singleton class as follows, in which we create the Session.
public class CassandraSessionSingleton {

    private static CassandraSessionSingleton cassandraSessionSingleton = null;

    public Session session;

    private CassandraSessionSingleton(ClusterBuilder clusterBuilder) {
        Cluster cluster = clusterBuilder.getCluster();
        session = cluster.connect();
    }

    // synchronized, so that two tasks initializing at the same time cannot create two sessions
    public static synchronized CassandraSessionSingleton getInstance(ClusterBuilder clusterBuilder) {
        if (cassandraSessionSingleton == null)
            cassandraSessionSingleton = new CassandraSessionSingleton(clusterBuilder);
        return cassandraSessionSingleton;
    }
}
You can then make use of this session for all future queries. Here I'm using a ProcessFunction to make the queries, as an example.
public class SomeProcessFunction extends ProcessFunction<Object, ResultSet> {

    private final ClusterBuilder secureCassandraSinkClusterBuilder;

    // Constructor
    public SomeProcessFunction(ClusterBuilder secureCassandraSinkClusterBuilder) {
        this.secureCassandraSinkClusterBuilder = secureCassandraSinkClusterBuilder;
    }

    @Override
    public void processElement(Object obj, Context ctx, Collector<ResultSet> out) throws Exception {
        ResultSet resultSet = CassandraLookUp.cassandraLookUp("SELECT * FROM TEST", secureCassandraSinkClusterBuilder);
        out.collect(resultSet);
    }
}
Note that you can pass the ClusterBuilder to the ProcessFunction because it is Serializable. Now for the cassandraLookUp method, where we execute the query.
public class CassandraLookUp {

    public static ResultSet cassandraLookUp(String query, ClusterBuilder clusterBuilder) {
        CassandraSessionSingleton cassandraSessionSingleton = CassandraSessionSingleton.getInstance(clusterBuilder);
        Session session = cassandraSessionSingleton.session;
        ResultSet resultSet = session.execute(query);
        return resultSet;
    }
}
The singleton object is created only the first time a query is run; after that, the same object is reused, so there is no delay in the lookup.
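Wiring this into a job is then just a matter of passing the builder to the function. A minimal usage sketch; the stand-in input elements and the job name are illustrative:

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.fromElements((Object) "record-1", (Object) "record-2") // stand-in input records
        .process(new SomeProcessFunction(secureCassandraSinkClusterBuilder))
        .print();
env.execute("cassandra-lookup-job");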
I am trying to persist multiple entities. Sample code below:
public List<String> save(SalesInvoice salesInvoice, List<ClosingStock> closingStockList, Company company,
        Receipt receipt) {
    log.info("Saving Sales Invoice...");
    if (salesInvoice.getSalesChallanId() == null) {
        for (ClosingStock closingStock : closingStockList) {
            if (existingClosingStock(closingStock.getProduct().getId().toString()) == null) {
                em.persist(closingStock);
            }
        }
    }
    em.persist(salesInvoice);
    receipt.setSalesInvoiceId(salesInvoice.getId());
    em.persist(receipt);
    return null;
}
// Edit: adding the existingClosingStock method mentioned in the comments
public ClosingStock existingClosingStock(String productId) {
    try {
        return (ClosingStock) em.createQuery("SELECT cv FROM ClosingStock cv WHERE cv.product.id = :productId")
                .setParameter("productId", productId).getSingleResult();
    } catch (NoResultException e) {
        return null;
    }
}
When I execute this, the data is not persisted to the database: the newly inserted data shows up for a short time, but it is never actually saved. I get no errors in the console. Putting em.getTransaction().commit(); before the return does not work either. Yet when I persist a single entity and call em.getTransaction().commit();, it works perfectly, like this:
public void save(Location location) {
    log.info("Saving Location.");
    em.persist(location);
    em.getTransaction().commit();
}
What did I miss here?
As explained in this article, persist() just schedules an entity state transition. The INSERT is executed during the flush, and if you don't commit the transaction, the flush will not be triggered automatically.
In any case, you should always start a transaction, even if you only plan to read data.
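Applied to the save method from the question, the transaction boundaries would look roughly like this (a sketch assuming an application-managed EntityManager, as in the question):

public List<String> save(SalesInvoice salesInvoice, List<ClosingStock> closingStockList, Company company,
        Receipt receipt) {
    log.info("Saving Sales Invoice...");
    EntityTransaction tx = em.getTransaction();
    tx.begin();
    try {
        if (salesInvoice.getSalesChallanId() == null) {
            for (ClosingStock closingStock : closingStockList) {
                if (existingClosingStock(closingStock.getProduct().getId().toString()) == null) {
                    em.persist(closingStock);
                }
            }
        }
        em.persist(salesInvoice);
        receipt.setSalesInvoiceId(salesInvoice.getId());
        em.persist(receipt);
        tx.commit(); // triggers the flush that actually executes the INSERT statements
    } catch (RuntimeException e) {
        if (tx.isActive()) {
            tx.rollback();
        }
        throw e;
    }
    return null;
}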
I'm building a real-time app and trying to use entity listeners to keep my state up to date. The basic idea is that whenever an area of business logic changes, I reload the affected entities and reconcile the changes. Here's an MWE:
@PrePersist
public void prePersist() {
    LoggerFactory.logger(App.class).info(" >>> PrePersist count: " + getStars().size());
}

@PostPersist
public void postPersist() {
    LoggerFactory.logger(App.class).info(" >>> PostPersist count: " + getStars().size());
}

@PreRemove
public void preRemove() {
    LoggerFactory.logger(App.class).info(" >>> PreRemove count: " + getStars().size());
}

@PostRemove
public void postRemove() {
    LoggerFactory.logger(App.class).info(" >>> PostRemove count: " + getStars().size());
}
private List<Star> getStars() {
    EntityManager em = HibernateUtilJpa.getEntityManager();
    List<Star> l = new ArrayList<Star>();
    try {
        em.getTransaction().begin();
        l = em.createQuery("from Star", Star.class).getResultList();
        em.getTransaction().commit();
    } catch (Exception e) {
        em.getTransaction().rollback();
    } finally {
        em.close();
    }
    return l;
}
I'm using a separate API to insert/remove stars from the DB. I was expecting the post-persist count to be one more than the pre-persist count because of the added item, and the post-remove count to be one fewer than the pre-remove count. That is not the case: both post-persist and post-remove show the wrong number of items, with pre-persist matching post-persist and pre-remove matching post-remove. I'm sure it has to do with Hibernate's caching, but I'm using transactions and everything goes through the EntityManager, so I'm scratching my head.
From the documentation:
A callback method must not invoke EntityManager or Query methods!
In practice, the behaviour if you do so is undefined, hence the results you observe in your example.
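The practical consequence is to move the query out of the lifecycle callback and run it only after the transaction that persists or removes the entity has committed. A minimal sketch of that pattern, reusing getStars() from the question (the bare new Star() constructor is an assumption):

// Persist in its own transaction; do not query from inside the callbacks.
EntityManager em = HibernateUtilJpa.getEntityManager();
try {
    em.getTransaction().begin();
    em.persist(new Star());
    em.getTransaction().commit();
} finally {
    em.close();
}
// Only after the commit is the INSERT flushed: a fresh query now sees the new row.
LoggerFactory.logger(App.class).info(" >>> count after commit: " + getStars().size());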
I need to create one transaction and execute insertInto() for multiple unspecified fields of certain tables. My problem is that the transaction runs successfully, but no records are stored. I think the root cause is an improper execution of the insertInto() method, or a failing interaction between the DSLContext and the wrapping Configuration. I would be very grateful for any suggestion.
I have two methods in two classes (table processing and the DAL). The first creates the transaction and sends the fields to insert to the DAL class. The second, in the DAL class, inserts a new record.
public Boolean insertToMainDB(List<TableForMainDb> mainTables) throws AppDataAccessLayerException {
    Boolean insertTransactSuccessFlag = false;
    try {
        TransactionalCallable<Boolean> transactional = new TransactionalCallable<Boolean>() {
            @Override
            public Boolean run(Configuration configuration) throws Exception {
                for (TableForMainDb table : mainTables) {
                    table.getRecorder().recordToDB(table, configuration);
                }
                return true;
            }
        };
        insertTransactSuccessFlag = context.transactionResult(transactional);
    } catch (DataAccessException ex) {
        throw new AppDataAccessLayerException(ex);
    }
    return insertTransactSuccessFlag;
}
The second method, in the DAL class:
public boolean recordToDB(TableForMainDb mainDBtable, Configuration configuration) {
    boolean insertFlag = false;
    for (String key : mainDBtable.fields.keySet()) {
        // using(configuration).
        insertInto(
            table(mainDBtable.getTableName()),
            field(mainDBtable.fields.get(key).getFieldName()),
            value(mainDBtable.fields.get(key).getFieldValue())
        ).attach(configuration);
        insertFlag = true; // TBD
    }
    return insertFlag;
}
The transaction runs successfully, but the records are not inserted into the DB.
In debug mode I can see that the DSLContext and Configuration objects are populated and contain the record data (fields) to insert.
The dumps are below; note that the transactional field is false in the context.
Configuration
DefaultConfiguration [
connected=true,
transactional=true,
dialect=POSTGRES,
data={org.jooq.configuration....},
settings=...
DSLContext
DefaultConfiguration [
connected=true,
transactional=false,
dialect=POSTGRES,
data={},
settings=...
You're never calling Query.execute() on your Insert statement
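Applied to the recordToDB method from the question, that fix would look roughly like this (a sketch; it keeps the one-column-per-iteration shape of the original loop):

public boolean recordToDB(TableForMainDb mainDBtable, Configuration configuration) {
    for (String key : mainDBtable.fields.keySet()) {
        DSL.using(configuration)
           .insertInto(DSL.table(mainDBtable.getTableName()))
           .set(DSL.field(mainDBtable.fields.get(key).getFieldName()),
                mainDBtable.fields.get(key).getFieldValue())
           .execute(); // without execute(), the INSERT is built but never sent to the database
    }
    return true;
}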
Thank you Lukas for the comment and for the great jOOQ.
We implemented the insert as
.insertInto(Table<Record>, Field[], Object[]).execute()
We prepared the fields as an array of Field[] and the values as an array of Object[], and then passed them to insertInto().
fieldsAndValuesObject is an instance of a wrapper class for the fields and values (add/get/set).
DSL.using(configuration)
   .insertInto(DSL.table(tableName), fieldsAndValuesObject.getArrayAllFields())
   .values(fieldsAndValuesObject.getArrayAllValues())
   .execute();