Error in write.lock in hibernate search - java

We use Hibernate Search to provide a search engine in my application. The same application runs on two different JBoss instances, and each instance uses its own folder for storing the index data; the two folders and the two JBoss instances are on different systems. From time to time the error below is logged. Please suggest a resolution.
16:45:58,184 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index in.issac.wisebank.systemadmin.customermanagement.entiry.WbSaCustomermaster-1) HSEARCH000058: Exception occurred
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/Folder_Path/write.lock
Primary Failure: Entity in.issac.wisebank.systemadmin.customermanagement.entiry.WbSaCustomermaster Id 49621 Work Type org.hibernate.search.backend.UpdateLuceneWork
: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/home/ccblcbs/wisebankenterprise/globalsearch/index12/in.issac.wisebank.systemadmin.customermanagement.entiry.WbSaCustomermaster/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84) [lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1098) [lucene-core-3.6.2.jar:3.6.2 1423725 - rmuir - 2012-12-18 19:45:40]
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:146) [hibernate-search-engine-4.4.4.Final.jar:4.4.4.Final]
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:113) [hibernate-search-engine-4.4.4.Final.jar:4.4.4.Final]
at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117) [hibernate-search-engine-4.4.4.Final.jar:4.4.4.Final]
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101) [hibernate-search-engine-4.4.4.Final.jar:4.4.4.Final]
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67) [hibernate-search-engine-4.4.4.Final.jar:4.4.4.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) [rt.jar:1.6.0_24]
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [rt.jar:1.6.0_24]
at java.util.concurrent.FutureTask.run(FutureTask.java:138) [rt.jar:1.6.0_24]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_24]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_24]
at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_24]
16:45:58,187 ERROR [org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask] (Hibernate Search: Index updates queue processor for index in.issac.wisebank.systemadmin.customermanagement.entiry.WbSaCustomermaster-1) HSEARCH000072: Couldn't open the IndexWriter because of previous error: operation skipped, index ouf of sync!

It's caused by Windows; you should read about the locking strategy:
https://docs.jboss.org/hibernate/search/3.2/reference/en/html_single/#search-configuration-directory-lockfactories
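For reference, the lock factory is selected through Hibernate Search configuration properties; a hedged example of what that looks like (whether to set it globally or per index, and which strategy fits your setup, is up to you):
# properties-style configuration (the same keys can be set in persistence.xml or hibernate.cfg.xml)
# valid values per the documentation: simple, native, single, none
hibernate.search.default.locking_strategy = native
# or per index, using the index name (the fully qualified entity class name by default):
hibernate.search.in.issac.wisebank.systemadmin.customermanagement.entiry.WbSaCustomermaster.locking_strategy = simple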

This version of Hibernate Search is extremely old; this issue could occasionally happen with it but has since been resolved. You need to upgrade.

Related

Getting deadlock for insert and select statements (SQL Server 2014) in Java application

In my application there are many threads which use insert and select statements to store/retrieve data from the database.
I am getting deadlocks in the PRD environment; I have observed the deadlock in the application log file:
Unable to read Cycle (ID: 1-A | PosID: 1 due to SQL Exception :
com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 129) was deadlocked on lock |
communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
at com.microsoft.sqlserver.jdbc.SQLServerResultSet$FetchBuffer.nextRow(SQLServerResultSet.java:4762)
at com.microsoft.sqlserver.jdbc.SQLServerResultSet.fetchBufferNext(SQLServerResultSet.java:1682)
at com.microsoft.sqlserver.jdbc.SQLServerResultSet.next(SQLServerResultSet.java:955)
at org.apache.tomcat.dbcp.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
at org.apache.tomcat.dbcp.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
at com.ilume.db.adapter.module.RoutingModul.getRoutes(RoutingModul.java:2867)
at com.ilume.db.adapter.module.RoutingModul.getAccounts(RoutingModul.java:2826)
at com.ilume.db.adapter.module.RoutingModul.getAccounts(RoutingModul.java:2651)
at com.ilume.db.adapter.module.RoutingModul.getEmployee(RoutingModul.java:2141)
at com.ilume.db.adapter.module.RoutingModul.getEmployee(RoutingModul.java:1925)
at com.ilume.db.adapter.module.RoutingModul.getCycle(RoutingModul.java:767)
at com.ilume.jti.logic.controller.tingController.readCycle(RoutingController.java:225)
at com.ilume.jti.logic.controller.tingController.<init>(RoutingController.java:137)
at com.ilume.jti.logic.controller.ukraine.tingController.<init>(tingController.java:30)
at com.ilume.jti.logic.controller.ukraine.eThread.run(UkraineThread.java:43)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
I am not able to reproduce the same situation in my development environment.
Please give me suggestions on how to resolve this kind of deadlock and how to get a clearer understanding of it so I can capture it.
Thank you

How to debug why the map job fails after multiple retries

I wrote a MapReduce job to scan an HBase table over a certain time range to count certain elements we need for analysis.
The mappers in the MR job keep failing, but I don't know why. Each time I run the job a different number of mappers seems to fail. The YARN log (see below) from Cloudera Manager isn't helpful in pointing out what the problem is, although someone said I might be running out of memory.
The job retries multiple times but fails each time. What do I need to do to make it stop failing, or how can I log things to help me better determine what is happening?
Below is the YARN log for one of the mappers that failed.
Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions: Thu Jun 15 16:26:57 PDT 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60301: row '152_p3401.db161139.sjc102.dbi_1496271480' on table 'dbi_based_data' at region=dbi_based_data,151_p3413.db162024.iad4.dbi_1476974340,1486675565213.d83250d0682e648d165872afe5abd60e., hostname=hslave35118.ams9.mysecretdomain.com,60020,1483570489305, seqNum=19308931
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:207)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:403)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:236)
at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:147)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:216)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60301: row '152_p3401.db161139.sjc102.dbi_1496271480' on table 'dbi_based_data' at region=dbi_based_data,151_p3413.db162024.iad4.dbi_1476974340,1486675565213.d83250d0682e648d165872afe5abd60e., hostname=hslave35118.ams9.mysecretdomain.com,60020,1483570489305, seqNum=19308931
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Call to hslave35118.ams9.mysecretdomain.com/10.216.35.118:60020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=12, waitTime=60001, operationTimeout=60000 expired.
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:291)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=12, waitTime=60001, operationTimeout=60000 expired.
at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
... 13 more
So it looks like in my case I needed to extend the timeout settings. In my Java program I had to add the following lines to make the exception go away:
conf.set("hbase.rpc.timeout","90000");
conf.set("hbase.client.scanner.timeout.period","90000");
The answer was found on this link on Cloudera's site
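For anyone hitting the same thing, a rough sketch of where those settings live when the scan job is wired up with TableMapReduceUtil (the mapper class, job name, and output types here are placeholders, not from the original job; the table name is the one from the trace above):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = HBaseConfiguration.create();
// raise the client-side RPC and scanner timeouts before the Job is created,
// so the mapper tasks pick them up
conf.set("hbase.rpc.timeout", "90000");
conf.set("hbase.client.scanner.timeout.period", "90000");

Job job = Job.getInstance(conf, "count-elements");
Scan scan = new Scan();
scan.setCaching(100);        // fewer rows per scanner RPC also keeps each call under the timeout
scan.setCacheBlocks(false);  // commonly recommended for full-table MR scans
TableMapReduceUtil.initTableMapperJob("dbi_based_data", scan,
        MyCountMapper.class, Text.class, LongWritable.class, job);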

Java Quartz Ibatis Cron Issues

I have a Java webapp using an iBatis row handler to load a very large dataset (1 million rows in an InnoDB table). The process is run as a nightly cron job by the Quartz scheduler. However, after processing for about 6 minutes, it dies with the following stack trace:
WARN [DefaultQuartzScheduler_Worker-8] MethodInvokingJobDetailFactoryBean$MethodInvokingJob.executeInternal(168) | Could not invoke method 'doBatch' on target object [org.myCron#4adb34]
org.springframework.jdbc.UncategorizedSQLException: SqlMapClient operation: encountered SQLException [
--- The error occurred in org/myCron/mySqlMap.xml.
--- The error occurred while applying a result map.
--- Check the mySqlMap.outputMapping.
--- The error happened while setting a property on the result object.
--- Cause: com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.io.EOFException
STACKTRACE:
java.io.EOFException
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1903)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2402)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2860)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:771)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1289)
at com.mysql.jdbc.RowDataDynamic.nextRecord(RowDataDynamic.java:362)
at com.mysql.jdbc.RowDataDynamic.next(RowDataDynamic.java:352)
at com.mysql.jdbc.ResultSet.next(ResultSet.java:6106)
at org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:168)
at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at com.ibatis.common.jdbc.logging.ResultSetLogProxy.invoke(ResultSetLogProxy.java:47)
at $Proxy10.next(Unknown Source)
at com.ibatis.sqlmap.engine.execution.SqlExecutor.handleResults(SqlExecutor.java:380)
at com.ibatis.sqlmap.engine.execution.SqlExecutor.handleMultipleResults(SqlExecutor.java:301)
at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQuery(SqlExecutor.java:190)
at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.sqlExecuteQuery(GeneralStatement.java:205)
at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithCallback(GeneralStatement.java:173)
at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithRowHandler(GeneralStatement.java:133)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryWithRowHandler(SqlMapExecutorDelegate.java:649)
at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryWithRowHandler(SqlMapSessionImpl.java:156)
at com.ibatis.sqlmap.engine.impl.SqlMapClientImpl.queryWithRowHandler(SqlMapClientImpl.java:133)
at org.springframework.orm.ibatis.SqlMapClientTemplate$5.doInSqlMapClient(SqlMapClientTemplate.java:267)
at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:165)
at org.springframework.orm.ibatis.SqlMapClientTemplate.queryWithRowHandler(SqlMapClientTemplate.java:265)
at org.myCron.doBatch(MyCron.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:248)
at org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean$MethodInvokingJob.executeInternal(MethodInvokingJobDetailFactoryBean.java:165)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:66)
at org.quartz.core.JobRunShell.run(JobRunShell.java:191)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:516)
** END NESTED EXCEPTION **
The stack trace is very vague. The only hint that I see is 'the error happened while setting a property on the result object'. There are only two properties on the result object: a String and an Integer. Both of them permit null values, but my select statements indicate that neither of them has any null values. Both have a proper getter/setter (which makes sense, since the process runs successfully for a while before dying). Every time the cron runs, it dies at a random point (so it isn't stuck on a particular row).
Note - The method 'doBatch' does exist, since that is the method that starts the cron process. If it couldn't find doBatch, it couldn't successfully process the first thousand rows.
I've also tried running the job outside of Quartz and it fails there as well. We tried increasing our MySQL net_read_timeout, net_write_timeout, and delayed_insert_timeout, but none of these settings helped with the problem. I also tried setting my log4j level to DEBUG and did not get any helpful info.
Any other ideas about what I could try?
Sounds like MySQL closed the connection for some reason. Check the MySQL log to see if anything shows up, and turn on additional MySQL logging options if necessary.
Also, start printing debug data (including timestamps) from your app - just print everything, then see what the last action was - perhaps some rarely triggered condition in your code has a bug.
In other words, every single time you talk to MySQL, log it before AND after.
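For example, a bare-bones version of that bracketing around the iBatis call (the statement id and the sqlMapClient/rowHandler fields are placeholders; only the idea of logging before and after each round trip, with timing, matters):
import org.apache.log4j.Logger;
import com.ibatis.sqlmap.client.SqlMapClient;
import com.ibatis.sqlmap.client.event.RowHandler;

private static final Logger LOG = Logger.getLogger(MyCron.class);

public void doBatch() throws Exception {
    long start = System.currentTimeMillis();
    LOG.debug("before queryWithRowHandler at " + new java.util.Date(start));
    try {
        // placeholder statement id; use the id from mySqlMap.xml
        sqlMapClient.queryWithRowHandler("outputMapping.selectAll", rowHandler);
    } finally {
        LOG.debug("after queryWithRowHandler, took "
                + (System.currentTimeMillis() - start) + " ms");
    }
}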

Apache Camel - Failing FTP Component

I wrote a little piece of Camel to consume an FTP server.
After it has been running for some time, it throws an exception; it keeps running but doesn't consume anything any more. Also, when I start it again and a larger number of files are waiting to be consumed, it crashes again. I already added an exception handler, but it doesn't seem to catch the exceptions.
This is the exception I receive:
Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - File operation failed: 150 Opening ASCII mode data connection for 2386442.XML(3895 bytes).
Accept timed out. Code: 150]
org.apache.camel.component.file.GenericFileOperationFailedException: File operation failed: 150 Opening ASCII mode data connection for 2386442.XML(3895 bytes).
Accept timed out. Code: 150
at org.apache.camel.component.file.remote.FtpOperations.retrieveFileToStreamInBody(FtpOperations.java:336)
at org.apache.camel.component.file.remote.FtpOperations.retrieveFile(FtpOperations.java:297)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:333)
at org.apache.camel.component.file.remote.RemoteFileConsumer.processExchange(RemoteFileConsumer.java:94)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:175)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:136)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:140)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:92)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
at java.net.ServerSocket.implAccept(ServerSocket.java:462)
at java.net.ServerSocket.accept(ServerSocket.java:430)
at org.apache.commons.net.ftp.FTPClient._openDataConnection_(FTPClient.java:560)
at org.apache.commons.net.ftp.FTPClient.retrieveFile(FTPClient.java:1442)
at org.apache.camel.component.file.remote.FtpOperations.retrieveFileToStreamInBody(FtpOperations.java:328)
... 16 more
Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot retrieve file: GenericFile[2386448.XML] from: Endpoint[ftp://1.1.1.1?delay=15000&delete=true&disconnect=true&exclude=((?i).*pdf$)&password=******&username=user]
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot retrieve file: GenericFile[2386448.XML] from: Endpoint[ftp://1.1.1.1?delay=15000&delete=true&disconnect=true&exclude=((?i).*pdf$)&password=******&username=user]
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:338)
at org.apache.camel.component.file.remote.RemoteFileConsumer.processExchange(RemoteFileConsumer.java:94)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:175)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:136)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:140)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:92)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
And this is the route I made using the Java DSL:
// XML Predicate
// only allows names without spaces
Predicate xmlPredicate = header(RssUtils.CAMEL_FILE_NAME).regex(
"([\\S]+(\\.(?i)(xml))$)");
// Images Predicate
// only allows names without spaces
Predicate imgPredicate = header(RssUtils.CAMEL_FILE_NAME).regex(
"([\\S]+(\\.(?i)(jpg|png|gif))$)");
onException(SchemaValidationException.class).to(
"file://" + props.getProperty(RssUtils.ROOT_DIR)
+ "/errors/SchemaValidationException");
onException(GenericFileOperationFailedException.class).to(
"file://" + props.getProperty(RssUtils.ROOT_DIR)
+ "/errors/GenericFileExceptions");
from(
"ftp://"
+ props.getProperty(RssUtils.FTP_URL)
+ "?username="
+ props.getProperty(RssUtils.FTP_USER)
+ "&password="
+ props.getProperty(RssUtils.FTP_PWD)
+ "&disconnect=true&delete=true&exclude=((?i).*pdf$)&delay="
+ props.getProperty(RssUtils.FTP_DELAY))
.choice()
.when(xmlPredicate)
.to("jms:xmlQueue")
.to("jms:archiveQueue")
.when(imgPredicate)
.to("file://" + props.getProperty(RssUtils.ROOT_DIR) + "/img")
.otherwise()
.to("file://" + props.getProperty(RssUtils.ROOT_DIR)
+ "/errors/other");
from("jms:xmlQueue").to("validator:FtpXmlValidator.xsd")
.to("xslt://XmlToRssConverter.xsl")
.process(rssFeedProcessor)
.to("file://" + props.getProperty(RssUtils.ROOT_DIR) + "/rss/");
from("jms:archiveQueue")
.to("file://" + props.getProperty(RssUtils.ROOT_DIR) + "/archive/");
Is there anything I can do to avoid this kind of behavior? It is really difficult to test, so I'm hoping somebody spots a flaw in my code. I have been searching for quite some time now but haven't found anything solid. Maybe there is some way I could debug this issue?
There are a few things I found that somebody could give their thoughts on:
use handled(true) when using onException
can I set the max batch size of the consumer? (can I use throttle for this?)
use explicit try/catch/finally, since I'm using the Java DSL
Don't shoot me if I'm saying anything wrong here, I'm just learning Camel.
So if anybody has suggestions on the code above, I would appreciate it!
Thanks a lot in advance!
What you have here is an FTP issue; the fact that it is occurring in Apache Camel is largely irrelevant.
The telltale part of the bomb is:
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
at java.net.ServerSocket.implAccept(ServerSocket.java:462)
at java.net.ServerSocket.accept(ServerSocket.java:430)
at org.apache.commons.net.ftp.FTPClient._openDataConnection_(FTPClient.java:560)
The _openDataConnection_ method of org.apache.commons.net.ftp.FTPClient is there to support active mode FTP: the client opens a listening socket and waits for the server to connect back for the data transfer, and that accept() is what is timing out here. In passive mode the client opens the data connection to the server instead, so it never has to accept an incoming connection.
Try switching to passive mode (passiveMode=true with Apache Camel).
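For example, the consumer endpoint from your route with that single option added (everything else unchanged; a sketch, not tested against your server):
from(
    "ftp://"
    + props.getProperty(RssUtils.FTP_URL)
    + "?username="
    + props.getProperty(RssUtils.FTP_USER)
    + "&password="
    + props.getProperty(RssUtils.FTP_PWD)
    + "&passiveMode=true"  // use PASV so the client never has to accept an inbound data connection
    + "&disconnect=true&delete=true&exclude=((?i).*pdf$)&delay="
    + props.getProperty(RssUtils.FTP_DELAY))
    // ... rest of the route as before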
On the surface, without looking at WHY your route is failing, it sounds like what you're looking to do is handle and continue, i.e. handle this exception and continue your route where you left off. Per the documentation:
Available as of Camel 2.3
In Camel 2.3 we introduced a new option continued which allows you to both handle and continue routing in the original route as if the exception did not occur.
For instance to just ignore and continue if the IDontCareException was thrown we can do this:
onException(IDontCareException.class).continued(true);
What happens here is:
Camel will catch the exception and . . . just ignore it and continue routing in the original route. However . . . it will route that [onException] route first, before it will continue routing in the original route.
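Applied to the exception from your trace, a minimal sketch would look like the following (illustrative only; whether swallowing this exception is acceptable depends on your requirements):
onException(GenericFileOperationFailedException.class)
    .continued(true)  // handle the exception and keep routing the rest of the batch
    .to("file://" + props.getProperty(RssUtils.ROOT_DIR)
        + "/errors/GenericFileExceptions");  // still record the failed exchange, as in your current route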
Give this a try and it may solve your problem. As I implied above, depending on what your root issue is, this may be more of a band-aid than a proper solution. The better approach might be to figure out why the FTP consumer is failing. At a glance, it appears that it can't find the file named 2386448.XML.
Once you determine the root cause, you can use a choice to behave differently at the right time as in:
.choice()
.when(isValidFtpResponse())
.to(DIRECT_CONTINUE_FTP_ROUTE)
.otherwise()
.setBody(constant(null))
.log(ERROR, "FTP failed: ${headers}")
.end()
Hopefully, that gives you a few ideas and helps you get past this issue.

Error using MongoDB with Java

I am using MongoDB to store data. In my project I connect multiple users (100-2000) to a server using a thread pool and store the responses of those users in the database. But when I do this I get the error below. Please help me with this; I can't figure it out and have already spent 45 hours on it.
Error:
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.avaya.onex.hss.requesthandlers.CommandExecutor.executeCommands(CommandExecutor.java:129)
at com.avaya.onex.hss.requesthandlers.CommandExecutor.run(CommandExecutor.java:59)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection
at com.mongodb.DBPortPool.get(DBPortPool.java:176)
at com.mongodb.DBTCPConnector$MyPort.get(DBTCPConnector.java:370)
at com.mongodb.DBTCPConnector.say(DBTCPConnector.java:149)
at com.mongodb.DBTCPConnector.say(DBTCPConnector.java:138)
at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:261)
at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:211)
at com.mongodb.DBCollection.insert(DBCollection.java:57)
at com.mongodb.DBCollection.insert(DBCollection.java:102)
at com.avaya.onex.hss.objects.LoginRequest.storeData(LoginRequest.java:152)
at com.avaya.onex.hss.requesthandlers.LoginHandler.handleRequest(LoginHandler.java:20)
... 8 more
com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection
at com.mongodb.DBPortPool.get(DBPortPool.java:176)
at com.mongodb.DBTCPConnector$MyPort.get(DBTCPConnector.java:370)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:212)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:305)
at com.mongodb.DB.command(DB.java:160)
at com.mongodb.DB.command(DB.java:183)
at com.mongodb.DB.command(DB.java:144)
at com.mongodb.DBCollection.drop(DBCollection.java:777)
at com.mongodb.DBApiLayer$MyCollection.drop(DBApiLayer.java:206)
at com.avaya.onex.hss.requesthandlers.CommandExecutor.executeCommands(CommandExecutor.java:118)
at com.avaya.onex.hss.requesthandlers.CommandExecutor.run(CommandExecutor.java:59)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Try increasing the number of connections in the connection pool.
Look here, at the connectionsPerHost.
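With the 2.x Java driver that the stack trace points to (com.mongodb.DBPortPool), the pool size is controlled through MongoOptions; a hedged sketch, where the host, port, and numbers are only examples (as far as I recall, the "Out of semaphores" limit is roughly connectionsPerHost multiplied by threadsAllowedToBlockForConnectionMultiplier):
import com.mongodb.Mongo;
import com.mongodb.MongoOptions;
import com.mongodb.ServerAddress;

MongoOptions options = new MongoOptions();
options.connectionsPerHost = 100;                          // default in old drivers was quite low
options.threadsAllowedToBlockForConnectionMultiplier = 10; // waiting threads allowed per pooled connection
Mongo mongo = new Mongo(new ServerAddress("localhost", 27017), options);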
