Oracle database change notification not working when insert count exceeds 20 - java

public class Register {

    @Autowired
    private DataSource dataSource;

    @Autowired
    private DCNListener listener;

    private OracleConnection oracleConnection = null;
    private DatabaseChangeRegistration dcr = null;
    private Statement statement = null;
    private ResultSet rs = null;

    @PostConstruct
    public void init() {
        this.register();
    }

    private void register() {
        Properties props = new Properties();
        props.put(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
        props.setProperty(OracleConnection.DCN_IGNORE_DELETEOP, "true");
        props.setProperty(OracleConnection.DCN_IGNORE_UPDATEOP, "true");
        try {
            oracleConnection = (OracleConnection) dataSource.getConnection();
            dcr = oracleConnection.registerDatabaseChangeNotification(props);
            statement = oracleConnection.createStatement();
            ((OracleStatement) statement).setDatabaseChangeRegistration(dcr);
            rs = statement.executeQuery(listenerQuery);
            while (rs.next()) {
            }
            dcr.addListener(listener);
            String[] tableNames = dcr.getTables();
            Arrays.stream(tableNames)
                  .forEach(i -> log.debug("Table {}" + " registered.", i));
        } catch (SQLException e) {
            e.printStackTrace();
            close();
        }
    }
}
My Listener:
public class DCNListener implements DatabaseChangeListener {

    @Override
    public void onDatabaseChangeNotification(DatabaseChangeEvent databaseChangeEvent) {
        TableChangeDescription[] tableChanges = databaseChangeEvent.getTableChangeDescription();
        for (TableChangeDescription tableChange : tableChanges) {
            RowChangeDescription[] rcds = tableChange.getRowChangeDescription();
            for (RowChangeDescription rcd : rcds) {
                RowOperation op = rcd.getRowOperation();
                String rowId = rcd.getRowid().stringValue();
                switch (op) {
                    case INSERT:
                        //process
                        break;
                    case UPDATE:
                        //do nothing
                        break;
                    case DELETE:
                        //do nothing
                        break;
                    default:
                        //do nothing
                }
            }
        }
    }
}
In my Spring Boot application, I have an Oracle DCN Register class that listens for INSERTs into an event table in my database; I am listening for the insertion of new records.
In this event table, I have different types of events that my application supports, let's say EventA and EventB.
The application GUI allows you to upload these types of events in bulk, which translates into INSERTs into the Oracle table I am listening to.
For one of the event types, my application fails to capture the INSERTs only when 20 or more events are uploaded in bulk; for the other event type, I do not experience this problem.
So let's say a user inserts fewer than 20 EventA records: my application captures the inserts. But if the number of EventA inserts reaches 20 or more, it does not capture them.
This is not the case for EventB, which works smoothly. I'd like to understand whether I'm missing anything in terms of the registration, what I can look out for in the database, or what the issue could be here.

You should also check for the ALL_ROWS event from the table-level operations:
EnumSet<TableChangeDescription.TableOperation> tableOps = tableChange.getTableOperations();
if (tableOps.contains(TableChangeDescription.TableOperation.ALL_ROWS)) {
    // Invalidate the cache
}
Quote from the JavaDoc:
The ALL_ROWS event is sent when the table is completely invalidated and row level information isn't available. If the DCN_NOTIFY_ROWIDS option hasn't been turned on during registration, then all events will have this OPERATION_ALL_ROWS flag on. It can also happen in situations where too many rows have changed and it would be too expensive for the server to send the list of them.
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jajdb/oracle/jdbc/dcn/TableChangeDescription.TableOperation.html#ALL_ROWS
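To make that concrete, here is a minimal sketch of how the listener from the question could handle the ALL_ROWS case (the reloadEventsFromTable() fallback is a hypothetical method, not part of the original code):
@Override
public void onDatabaseChangeNotification(DatabaseChangeEvent databaseChangeEvent) {
    for (TableChangeDescription tableChange : databaseChangeEvent.getTableChangeDescription()) {
        EnumSet<TableChangeDescription.TableOperation> tableOps = tableChange.getTableOperations();
        if (tableOps.contains(TableChangeDescription.TableOperation.ALL_ROWS)) {
            // Row-level details were dropped (too many rows changed, or ROWIDs not requested),
            // so fall back to re-reading the event table instead of relying on ROWIDs.
            reloadEventsFromTable(tableChange.getTableName()); // hypothetical fallback
            continue;
        }
        for (RowChangeDescription rcd : tableChange.getRowChangeDescription()) {
            if (rcd.getRowOperation() == RowOperation.INSERT) {
                // process the single inserted row identified by rcd.getRowid()
            }
        }
    }
}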

Related

Spring-Redis connection not re-connecting between retries

I'm working on a Spring-Batch application, which uses a REDIS connection to populate data.
Here are some relevant dependencies:
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
implementation 'io.lettuce:lettuce-core:5.3.3.RELEASE'
RedisConnection is imported from org.springframework.data.redis.connection
PROBLEM STATEMENT:
There might be a case where the Redis connection is active when we start the application, but while the application is running we lose it. In that case, when execution enters the method below, it throws an error saying the Redis connection is lost, and we retry using the @Retryable logic.
But let's say that during the second retry the Redis connection is re-established; we want the retry to detect that, re-connect to Redis, and continue with the normal flow. However, THE REDIS RE-CONNECTION IS NOT GETTING DETECTED.
TRIED: I tried following https://github.com/lettuce-io/lettuce-core/issues/338 and added lettuceConnectionFactory.validateConnection(); to the defaultRedisConnection bean as below, but to no avail:
#Qualifier("defaultRedisConnection")
#Bean
public RedisConnection defaultRedisConnectionDockerCluster() {
RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration();
redisStandaloneConfiguration.setHostName("redis");
LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(redisStandaloneConfiguration);
lettuceConnectionFactory.validateConnection();
lettuceConnectionFactory.afterPropertiesSet();
return lettuceConnectionFactory.getConnection();
}
Here is the class:
@Slf4j
@Service
public class PopulateRedisDataService {

    @Qualifier("defaultRedisConnection")
    private final RedisConnection redisConnection;

    private RedisClientData redisClientData = new RedisClientData();

    public PopulateRedisDataService(
            @Qualifier("defaultRedisConnection") RedisConnection redisConnection,
            RedisDataUtils redisDataUtils) {
        this.redisConnection = redisConnection;
    }

    @Retryable(maxAttemptsExpression = "3", backoff = @Backoff(delayExpression = "20_000",
            multiplierExpression = "100_000", maxDelayExpression = "100_000"))
    public RedisClientData populateData() {
        try {
            byte[] serObj = Objects.requireNonNull(redisConnection.get("SOME_KEY".getBytes()));
            RedisClientData redisClientData = new RedisClientData();
            // Some operations to load data from Redis/serObj into redisClientData object.
        } catch (Exception e) {
            // If Redis doesn't have the key, return empty redisClientData
            redisClientData = new RedisClientData();
            log.error("Failed to get ClientRegList", e);
        }
        return redisClientData;
    }

    @Recover
    public void recover(Exception e) {
        // Some operations
    }
}
Any suggestions to handle this case would be much appreciated.
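One pattern worth trying (a minimal sketch, not a verified fix: it assumes Lettuce's default auto-reconnect behaviour and keeps the rest of the service as in the question) is to inject the LettuceConnectionFactory and obtain a fresh RedisConnection per attempt, rather than caching a single RedisConnection bean that can go stale:
@Service
public class PopulateRedisDataService {

    private final LettuceConnectionFactory connectionFactory;

    public PopulateRedisDataService(LettuceConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 20_000))
    public RedisClientData populateData() {
        // A connection is checked out per attempt, so a re-established server
        // connection is picked up on the next retry instead of reusing a dead one.
        RedisConnection connection = connectionFactory.getConnection();
        try {
            byte[] serObj = Objects.requireNonNull(connection.get("SOME_KEY".getBytes()));
            RedisClientData redisClientData = new RedisClientData();
            // ... load data from serObj into redisClientData ...
            return redisClientData;
        } finally {
            connection.close();
        }
    }
}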

Getting "NOT NULL constraint failed" when writing non-null data with Java, JDBC, Sqlite

I have a Java application which uses a SQLite database to capture "StatusEvent" objects. A StatusEvent looks like this:
public class StatusEvent
{
    private final String statusId;
    private final RefreshValue value;
    private final Instant createdAt;

    public StatusEvent(String id, RefreshValue currentValue)
    {
        Objects.requireNonNull(id, "StatusId must not be null");
        Objects.requireNonNull(currentValue, "RefreshValue must not be null");
        statusId = id;
        value = currentValue;
        createdAt = Instant.now();
    }

    // Simple getters omitted
}
Where RefreshValue is an object that can contain different types of status values such as int, float, or String. The important thing is that StatusEvent is immutable and that 'statusId' is checked for null during construction.
I am then using the code below, which receives StatusEvent objects and stores them in a LinkedBlockingQueue in order to create batches. Every 200 ms, the executor runs 'writeBatch', which drains the queue and tries to commit the data for all of the StatusEvents in the current batch.
The problem is that I can occasionally get an error when writing to the database: "org.sqlite.SQLiteException: [SQLITE_CONSTRAINT_NOTNULL] A NOT NULL constraint failed (NOT NULL constraint failed: status.status_id)". I am failing to see how this is possible given the immutable StatusEvent class. Is there some way, given how the database is configured, that data is lost between leaving the application and hitting the database? Additionally, it seems that depending on the number of StatusEvent objects being received, the write speed to the database is not fast enough to keep up: after sustained stressful periods there may be several seconds between the creation of the StatusEvent (i.e. the timestamp set in the constructor) and the 'created_at' time in the database. (StatusEvents are received by StatusEventDB.accept() almost immediately after they are created.) Looking for any help fixing either or both of these issues.
public class StatusEventDB
{
    private static final String DROP_TABLE = "DROP TABLE IF EXISTS status";
    private static final String MAKE_TABLES =
        "CREATE TABLE status (id INTEGER PRIMARY KEY AUTOINCREMENT, " +
        "status_id TEXT NOT NULL, " + "value TEXT NOT NULL, " +
        "time TIMESTAMP NOT NULL, " +
        "created_at TIMESTAMP DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f')))";
    private static final String SE_STRING =
        "INSERT INTO status (status_id, value, time) VALUES(?,?,?)";

    private final Connection connection;
    private final PreparedStatement insertSEStatement;
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    protected final LinkedBlockingQueue<StatusEvent> buffer =
        new LinkedBlockingQueue<>();

    StatusEventDB() throws SQLException
    {
        SQLiteConfig config = new SQLiteConfig();
        config.setJournalMode(SQLiteConfig.JournalMode.WAL);
        config.setCacheSize(32768);
        config.setTransactionMode(SQLiteConfig.TransactionMode.EXCLUSIVE);
        config.setSynchronous(SQLiteConfig.SynchronousMode.NORMAL);
        config.setTempStore(TempStore.MEMORY);
        config.setBusyTimeout(5000);
        this.connection = DriverManager
            .getConnection("jdbc:sqlite:/mydir/test.db",
                config.toProperties());
        try (Statement statement = connection.createStatement())
        {
            connection.setAutoCommit(false);
            statement.executeUpdate(DROP_TABLE);
            statement.executeUpdate(MAKE_TABLES);
        }
        insertSEStatement = connection.prepareStatement(SE_STRING);
        scheduler.scheduleAtFixedRate(this::writeBatch, 200, 200,
            TimeUnit.MILLISECONDS);
    }

    public void accept(StatusEvent se)
    {
        buffer.add(se);
    }

    private void writeBatch()
    {
        int bufferSize = buffer.size();
        if (0 < bufferSize)
        {
            List<StatusEvent> commitBatch = new ArrayList<>(bufferSize);
            buffer.drainTo(commitBatch, bufferSize);
            try
            {
                for (StatusEvent se : commitBatch)
                {
                    insertSEStatement.setString(1, se.getStatusId());
                    insertSEStatement.setString(2, se.getValue().getString());
                    insertSEStatement.setString(3, se.createdAt().toString());
                    insertSEStatement.addBatch();
                }
                insertSEStatement.executeBatch();
                connection.commit();
            }
            catch (SQLException e)
            {
                System.err.println(e.getMessage());
            }
        }
    }
}
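Not a root-cause diagnosis, but a small defensive variant of writeBatch() (reusing the names from the question; the null guard and clearBatch() call are additions) that can help narrow things down: it logs any event whose bound values are unexpectedly null before the driver sees them, and clears the statement's batch after every run so a failed batch cannot leave stale parameters behind:
private void writeBatch()
{
    List<StatusEvent> commitBatch = new ArrayList<>();
    buffer.drainTo(commitBatch);
    if (commitBatch.isEmpty())
    {
        return;
    }
    try
    {
        for (StatusEvent se : commitBatch)
        {
            // Guard: surface any unexpectedly null bindings instead of letting
            // the INSERT fail with a NOT NULL constraint violation.
            if (se.getStatusId() == null || se.getValue() == null)
            {
                System.err.println("Dropping malformed event: " + se);
                continue;
            }
            insertSEStatement.setString(1, se.getStatusId());
            insertSEStatement.setString(2, se.getValue().getString());
            insertSEStatement.setString(3, se.createdAt().toString());
            insertSEStatement.addBatch();
        }
        insertSEStatement.executeBatch();
        connection.commit();
    }
    catch (SQLException e)
    {
        System.err.println(e.getMessage());
    }
    finally
    {
        try
        {
            insertSEStatement.clearBatch();
        }
        catch (SQLException ignored)
        {
            // nothing useful to do here
        }
    }
}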

How to reset a variable in Flume's custom sink class for every batch

I have a Flume process that reads data from files in a spooldir and loads the data into a MySQL database. Multiple types of files can be processed by the same Flume process.
I have created a custom sink Java class (extending AbstractSink) that updates a local variable (sInterfaceType) after the initial/first read to decide the data format of the file.
I have to reset it once file processing completes, so that it starts by identifying the next batch/interface file.
I tried to do this in stop(), but it doesn't help. Has anybody done this?
My sink class looks like this:
public class MyFlumeSink2 extends AbstractSink implements Configurable {

    private String sInterfaceType; //tells file format of current load

    public MyFlumeSink2() {
        //my initialization of variables
    }

    public void configure(Context context) {
        //read context variables
    }

    public void start() {
        //create db connection
    }

    @Override
    public void stop() {
        //destroy connection
        sInterfaceType = ""; //This doesn't help me
        super.stop();
    }

    public Status process() throws EventDeliveryException {
        Channel channel = getChannel();
        Transaction transaction = channel.getTransaction();
        if ((sInterfaceType == "" || sInterfaceType == null)) {
            //Read first line & set sInterfaceType
        } else
            //Insert data in MySQL
            transaction.commit();
    }
}
We have to decide manually which event type it is; there is no specialized method called for every new file.
I revised my code to read the event line and set sInterfaceType based on the first field. My code looks like this:
public Status process() throws EventDeliveryException {
    //....other code...
    sEvtBody = new String(event.getBody());
    sFields = sEvtBody.split(",");

    //check first field to know record type
    enumRec = RecordType.valueOf(checkRecordType(sFields[0].toUpperCase()));
    switch (enumRec) {
        case CUST_ID:
            sInterfaceType = "T_CUST";
            bHeader = true;
            break;
        case TXN_ID:
            sInterfaceType = "T_CUST_TXNS";
            bHeader = true;
            break;
        default:
            bHeader = false;
    }

    //insert if not header
    if (!bHeader) {
        if (sInterfaceType == "T_CUST") {
            if (sFields.length == 14)
                this.bInsertStatus = daoClass.insertHeader(sFields);
            else
                throw new Exception("INCORRECT_COLUMN_COUNT");
        } else if (sInterfaceType == "T_CUST_TXNS") {
            if (sFields.length == 10)
                this.bInsertStatus = daoClass.insertData(sFields);
            else
                throw new Exception("INCORRECT_COLUMN_COUNT");
        }
        //if(!bInsertStatus)
        //    logTransaction(sFields);
    }
    //....Other code....
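One approach that may fit here (a sketch only; it assumes each file is fully drained from the channel before the next one starts arriving, which Flume does not strictly guarantee) is to treat an empty channel as the end of the current file and reset sInterfaceType there, rather than in stop():
public Status process() throws EventDeliveryException {
    Channel channel = getChannel();
    Transaction transaction = channel.getTransaction();
    try {
        transaction.begin();
        Event event = channel.take();
        if (event == null) {
            // Nothing buffered: assume the current file has been consumed, so the
            // next event seen should trigger interface detection again.
            sInterfaceType = null;
            transaction.commit();
            return Status.BACKOFF;
        }
        // ... existing detection / insert logic on event ...
        transaction.commit();
        return Status.READY;
    } catch (Throwable t) {
        transaction.rollback();
        throw new EventDeliveryException(t);
    } finally {
        transaction.close();
    }
}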

Threads and Hibernate with Spring MVC

I am currently working on a web application which is basically a portfolio site for different vendors.
I was working on a thread which copies the details of a vendor and puts them against a new vendor, pretty straightforward.
The thread generally works fine, but when selecting a particular Catalog object (this catalog object contains a Velocity template), execution stops and goes nowhere. Invoking the thread once again just hangs the whole application.
Here is my code.
public class CopySiteThread extends Thread {

    public CopySiteThread(ComponentDTO componentDTO, long vendorid, int admin_id) {
        /** Application specific business logic not exposed **/
    }

    public void run() {
        /** Application based Business Logic Not Exposed **/
        //Copy Catalog first
        List<Catalog> catalog = catalogDAO.getCatalog(vendorid);
        System.out.println(catalog);
        List<Catalog> newCat = new ArrayList<Catalog>();
        HashMap<String, Integer> catIdMapList = new HashMap<String, Integer>();
        Iterator<Catalog> catIterator = catalog.iterator();
        while (catIterator.hasNext()) {
            Catalog cat = catIterator.next();
            System.out.println(cat);
            int catId = catalogDAO.addTemplate(admin_id, cat.getHtml(), cat.getName(), cat.getNickname(), cat.getTemplategroup(), vendor.getVendorid());
            catIdMapList.put(cat.getName(), catId);
            cat = null;
        }
    }
}
And the thread is invoked like this.
CopySiteThread thread = new CopySiteThread(componentDTO, baseVendor, admin_id);
thread.start();
After a certain number of iterations, it gets stuck on line Catalog cat = catIterator.next();
This issue is rather strange because I've developed many applications like this without any problem.
Any help appreciated.
The actual problem was in the addCatalog method in CatalogDAO
Session session = sf.openSession();
Transaction tx = null;
Integer templateID = null;
Date date = new Date();
try {
    tx = session.beginTransaction();
    Catalog catalog = new Catalog();
    //Business Logic
    templateID = (Integer) session.save(catalog);
} catch (HibernateException ex) {
    if (tx != null) tx.rollback();
} finally {
    session.close();
}
return templateID;
Fixed by adding a finally clause and closing all sessions.
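That fix is consistent with the symptoms: each unclosed Session keeps a JDBC connection checked out, so once the pool is exhausted the next call that needs a connection simply blocks, which is why the copy hangs partway through the loop. A compact equivalent of the corrected method (a sketch assuming Hibernate 5.2+, where Session is AutoCloseable, and adding the missing commit) is:
Integer templateID = null;
try (Session session = sf.openSession()) {
    Transaction tx = session.beginTransaction();
    try {
        Catalog catalog = new Catalog();
        // Business logic
        templateID = (Integer) session.save(catalog);
        tx.commit();
    } catch (HibernateException ex) {
        tx.rollback();
    }
}
return templateID;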

Hibernate with Multithreading in Swing application

I am facing a problem with Hibernate and multithreading.
I am developing a Swing-based application with a number of POJO classes. The relations among the classes: a Category has a set of Protocols, a Protocol has a set of Steps, and a Step has a set of Modes. All the collections are loaded lazily with fetch = FetchType.LAZY. I am maintaining a single session for the application. After getting the list of all Categories, I need to start some threads to do some operations on the category list. Here I am getting a LazyInitializationException. The test code is as follows:
final List<Category> cats = protocolDao.getCategoryList();
for (int i = 0; i < 10; i++) {
    new Thread("THREAD_" + i) {
        public void run() {
            try {
                for (Category category : cats) {
                    Set<Protocol> protocols = category.getProtocols();
                    for (Protocol protocol : protocols) {
                        Set<Step> steps = protocol.getStep();
                        for (Step step : steps) {
                            System.out.println(step.getModes());
                        }
                    }
                }
                System.out.println(Thread.currentThread().getName() + " SUCCESS");
            } catch (Exception e) {
                System.out.println("EXCEPTION ON " + Thread.currentThread().getName());
            }
        }
    }.start();
}
The dao method is as follows:
public List<Category> getCategoryList() throws ProtocolException {
    try {
        Transaction transaction = session.beginTransaction();
        List list = session.createCriteria(Category.class)
            .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)
            .addOrder(Order.asc("categoryposition")).list();
        transaction.commit();
        return list;
    } catch (Exception e) {
        throw new ProtocolException(e);
    }
}
When I try to run the above code I get the following exception for some of the threads:
SEVERE: illegal access to loading collection
org.hibernate.LazyInitializationException: illegal access to loading collection
at org.hibernate.collection.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:363)
at org.hibernate.collection.AbstractPersistentCollection.read(AbstractPersistentCollection.java:108)
at org.hibernate.collection.PersistentSet.toString(PersistentSet.java:332)
at java.lang.String.valueOf(String.java:2826)
at java.lang.StringBuilder.append(StringBuilder.java:115)
at com.mycomp.core.protocol.dao.test.TestLazyLoading$1.run(TestLazyLoading.java:76)
So some of the tasks are not completed. I cannot avoid having multiple threads work on the same category list (it works fine with a single thread); every thread needs to do its own task. The database is too big to avoid lazy loading. Can anyone help me work with multiple threads on the same category list?
You need to ensure that only the thread that loads your entities uses them. If you have a 'get ids' and a 'get by id' method, or similar:
final int[] catIds = protocolDao.getCategoryIds();
for (int i : catIds) {
    new Thread("THREAD_" + i) {
        public void run() {
            Category category = protocolDao.getCategory(i);
            Set<Protocol> protocols = category.getProtocols();
            for (Protocol protocol : protocols) {
                Set<Step> steps = protocol.getStep();
                for (Step step : steps) {
                    System.out.println(step.getModes());
                }
            }
        }
    }.start();
}
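Alternatively (a sketch, not part of the original answer: it assumes the object graph is small enough to load eagerly), you can initialize the lazy collections on the thread that owns the Session before handing the objects to the worker threads:
List<Category> cats = protocolDao.getCategoryList();
for (Category category : cats) {
    for (Protocol protocol : category.getProtocols()) {
        for (Step step : protocol.getStep()) {
            // Force the lazy collection to load while the owning Session is still usable.
            Hibernate.initialize(step.getModes());
        }
    }
}
// The initialized graph can now be read from multiple threads without triggering lazy loads.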
