I have a requirement to read all "history" exception logs from a WebSphere server and load them into Hive.
Below is what a typical log looks like, but the message portion sometimes extends over 4-5 lines as well. I do not really care about the stack trace, but I definitely need the Timestamp, ThreadId, Short name, Event Type, and full error message in their individual columns.
[5/20/16 22:35:39:841 CDT] 00233723 SystemOut O 22:35:39,840 ERROR [com.xxx.app.yyy.hms.jms.receivers.impl.B2bTonnn278InReceiverImpl]
xxxRuntimeException{errorVO=com.xxx.app.yyy.nnn.mmm.data.mmmCompleteIntakeErrorVO(diagnosesMessagesExist:false, mmmMessagesExist:false, incrementedKey:null, numPagesWithMessages:1, primaryKeyFields:[], providersMessagesExist:false, requiredFields:[], servicesMessagesExist:true, changeDateTime:05-20-2016 10:35:39:840 PM CDT, changeUserID:SYSTEM, createDateTime:null, createUserID:null, dataSecured:false, dataSecurityTypeList:null, globalMessages:[], historyID:0, messages:{procedureUnitCount=[Field For Label: procedureUnitCount Message ID: 'ERR0010', Message Arguments: '[]']}, trackChanges:false, updateVersion:-1, messages={procedureUnitCount=[Field For Label: procedureUnitCount Message ID: 'ERR0010', Message Arguments: '[]']})}
at com.xxx.app.yyy.nnn.mmm.businesslogic.impl.mmmImpl.completemmm(mmmImpl.groovy:612)
at sun.reflect.GeneratedMethodAccessor4988.invoke(Unknown Source)
I tried doing this by reading one line at a time and parsing with a regex - which failed miserably (only 20% of the data matched the regex), and the quality of that data is also poor. I really do not know how to proceed here, or what delimiter to choose to break that exception string into columns (\t has already been tried - it does not work either).
Any help or pointer in the right direction?
Use Logstash to read and parse the WebSphere logs and post them into Elasticsearch for further processing (i.e. use the ELK stack).
Read related discussion here.
With Logstash, you can use Grok to parse any crappy unstructured log data into something structured and queryable.
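For example, a starting point for the log format shown above might look like the following. This is a rough sketch only - the file path is a placeholder, and the grok pattern and multiline settings will need tuning against your real logs:

input {
  file {
    # Placeholder path - point this at the WebSphere SystemOut logs
    path => "/opt/websphere/logs/SystemOut.log"
    codec => multiline {
      # Every new record starts with "[", so glue continuation lines onto the previous event
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}

filter {
  grok {
    # [5/20/16 22:35:39:841 CDT] 00233723 SystemOut O <error message...>
    match => { "message" => "\[%{DATA:timestamp}\] %{BASE16NUM:threadId} %{WORD:shortName} %{NOTSPACE:eventType} %{GREEDYDATA:errorMessage}" }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
}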
# grep each SystemOut line plus its continuation line, glue every group of three
# output lines (match, continuation, "--" separator) into one, then pull out the
# timestamp, thread ID, event type, and start of the message:
grep -A 1 SystemOut LogFile | awk 'NR%3{printf $0" ";next;}2' | awk '{print $2" "$4" "$8" "$10}'
Here is the application.yml I am using for my Spring WebFlux project:
redis:
  redisson:
    config: |
      clusterServersConfig:
        idleConnectionTimeout: 10000
        connectTimeout: ${REDISSON_CONNECT_TIMEOUT:20000}
        timeout: ${REDISSON_TIMEOUT:3000}
        retryAttempts: ${REDISSON_RETRY_ATTEMPTS:3}
        retryInterval: ${REDISSON_RETRY_INTERVAL:1500}
        subscriptionConnectionPoolSize: ${REDISSON_SUBSCRIPTION_POOL_SIZE:50}
        slaveConnectionMinimumIdleSize: ${REDISSON_SLAVE_MIN_IDLE_SIZE:24}
        slaveConnectionPoolSize: ${REDISSON_SLAVE_POOL_SIZE:48}
        masterConnectionMinimumIdleSize: ${REDISSON_MASTER_MIN_IDLE_SIZE:24}
        masterConnectionPoolSize: ${REDISSON_MASTER_POOL_SIZE:48}
        nodeAddresses:
          - "rediss://${APPS_REDIS:-}:${APPS_REDIS_PORT:6379}"
        password: ${APPS_REDIS_SECRET:-}
      threads: ${REDISSON_THREADS:16}
      nettyThreads: ${REDISSON_NETTY_THREADS:96}
But whenever I start the project on my laptop, this error comes up:
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'clusterServersConfig': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
I am not sure why it says clusterServersConfig is an unrecognized token. It is mentioned in the official docs as well, and there is an example of it there.
At first I thought it might be because I am running Redis locally on my M1 Mac, so Redis clusters aren't generated by default. I even tried enabling clusters in redis.conf and running a Redis cluster with 3 nodes using redis-cli, but this still happens. I have tried almost everything I could think of or find on the net. Any help appreciated :)
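Not an answer to the root cause, but one way to narrow it down is to feed the embedded block (with the placeholders resolved by hand) straight to Redisson and see whether it parses outside of Spring. A minimal sketch, assuming Redisson is on the classpath; the concrete values below are stand-ins:

import org.redisson.config.Config;

public class RedissonConfigCheck {
    public static void main(String[] args) throws java.io.IOException {
        // Same structure as the config block in application.yml, placeholders filled in by hand
        String yaml =
            "clusterServersConfig:\n" +
            "  idleConnectionTimeout: 10000\n" +
            "  connectTimeout: 20000\n" +
            "  nodeAddresses:\n" +
            "    - \"rediss://localhost:6379\"\n" +
            "threads: 16\n" +
            "nettyThreads: 96\n";
        // Fails with a parse exception if the block is not valid Redisson YAML
        Config config = Config.fromYAML(yaml);
        System.out.println("Parsed OK: " + config);
    }
}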
I'm using ActiveMQ Artemis 2.19.1, and I'm facing an issue in a 6-node (3 masters) cluster where redistribution fails for large messages with the warning logs below:
23:35:05,551 WARN [org.apache.activemq.artemis.core.server] AMQ222303: Redistribution by Redistributor[TEST_QUEUE/2244] of messageID = 196,950,715 failed: java.lang.UnsupportedOperationException: Method not supported with Large Messages
at org.apache.activemq.artemis.protocol.amqp.broker.AMQPLargeMessage.getData(AMQPLargeMessage.java:311) [artemis-amqp-protocol-2.19.1.jar:2.19.1]
at org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage.anyMessageAnnotations(AMQPMessage.java:1374) [artemis-amqp-protocol-2.19.1.jar:2.19.1]
at org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage.hasScheduledDeliveryTime(AMQPMessage.java:1352) [artemis-amqp-protocol-2.19.1.jar:2.19.1]
at org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl.processRoute(PostOfficeImpl.java:1499) [artemis-server-2.19.1.jar:2.19.1]
at org.apache.activemq.artemis.core.server.cluster.impl.Redistributor$1.run(Redistributor.java:169) [artemis-server-2.19.1.jar:2.19.1]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.19.1.jar:2.19.1]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.19.1.jar:2.19.1]
at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65) [artemis-commons-2.19.1.jar:2.19.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_322]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_322]
at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.19.1.jar:2.19.1]
Later I see the broker removing consumers with the warning below, since properties=null:
00:10:28,280 WARN [org.apache.activemq.artemis.core.server] AMQ222151: removing consumer which did not handle a message, consumer=ServerConsumerImpl [id=57, filter=null, binding=LocalQueueBinding [address=TEST_QUEUE, queue=QueueImpl[name=TEST_QUEUE, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::name=localhost], temp=false]#9c000b7, filter=null, name=TEST_QUEUE, clusterName=TEST_QUEUE1e359f55-c92b-11ec-b908-005056a3af3f]], message=Reference[157995028]:RELIABLE:AMQPLargeMessage( [durable=true, messageID=157995028, address=TEST_QUEUE, size=0, scanningStatus=SCANNED, applicationProperties={VER=05, trackingId=62701757c80c3004d037ded6}, messageAnnotations={}, properties=null, extraProperties = TypedProperties[_AMQ_AD=TEST_QUEUE]]: java.lang.IllegalArgumentException: Array must not be empty or null
at org.apache.qpid.proton.codec.CompositeReadableBuffer.append(CompositeReadableBuffer.java:688) [proton-j-0.33.10.jar:]
at org.apache.qpid.proton.engine.impl.DeliveryImpl.send(DeliveryImpl.java:345) [proton-j-0.33.10.jar:]
at org.apache.qpid.proton.engine.impl.SenderImpl.send(SenderImpl.java:74) [proton-j-0.33.10.jar:]
at org.apache.activemq.artemis.protocol.amqp.proton.ProtonServerSenderContext$LargeMessageDeliveryContext.deliverInitialPacket(ProtonServerSenderContext.java:686) [artemis-amqp-protocol-2.19.1.jar:2.19.1]
at org.apache.activemq.artemis.protocol.amqp.proton.ProtonServerSenderContext$LargeMessageDeliveryContext.deliver(ProtonServerSenderContext.java:587) [artemis-amqp-protocol-2.19.1.jar:2.19.1]
I have 6 consumers on this queue. If one message out of many (say 1,000) is large, the other messages should still be processed; instead, processing stops completely, with 0 consumers left on the queue.
When you send a large message, the broker parses the properties out of the first block sent. Most likely you are using properties large enough that the server cannot parse the first few bytes of the message. You should avoid large properties and leave the large portion of the message to the body.
We followed up with many tests on this JIRA, and the only plausible scenarios are large properties, or some incomplete message your client is generating in a way the server can't parse.
https://issues.apache.org/jira/browse/ARTEMIS-3837
If you provide a reproducer showing how the property fails to be parsed, we will follow up with a possible fix.
Please move any discussion regarding this to the JIRA. It could be a bug caused by your anti-pattern.
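As a rough illustration of the advice above (a sketch only - the broker URL, queue name, and payload size are made up, and it assumes the Qpid JMS client): keep the per-message properties small and fixed-size, and let the bulk travel in the message body.

import javax.jms.BytesMessage;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSException;
import javax.jms.Queue;
import org.apache.qpid.jms.JmsConnectionFactory;

public class LargeMessageSender {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        try (JMSContext context = factory.createContext()) {
            Queue queue = context.createQueue("TEST_QUEUE");
            BytesMessage message = context.createBytesMessage();
            // Small, fixed-size properties only - the broker parses these from the first packet
            message.setStringProperty("VER", "05");
            message.setStringProperty("trackingId", "62701757c80c3004d037ded6");
            // The large payload goes in the body, which the broker streams without parsing
            message.writeBytes(new byte[10 * 1024 * 1024]);
            context.createProducer().send(queue, message);
        }
    }
}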
I am very new to IBM Bluemix and Logstash.
My application is based on Spring Boot + Log4j and was deployed to IBM Bluemix.
Goal:
Read the entire Java stack trace + Bluemix log and write it into a file on a local Linux server.
I don't want to use Elasticsearch & Kibana; I think they are overkill for my requirement. I don't want a fancy GUI, just a basic text file that contains all the log information.
What I have done so far:
Installed & set up Logstash.
Wrote the Bluemix log into a file on my local server.
Below is my Logstash conf file. As I want the entire log, I am not using any filter (based on my one-day-old Logstash knowledge).
input {
  tcp {
    port => 5000
    type => syslog
  }
}

filter {
}

output {
  file {
    path => "/app/uot0/cloud/logstash/logstash-2.3.4/tmp/access_log"
  }
}
Problem:
The Logstash output file "access_log" does NOT contain the full stack trace from a Java exception; it contains only the name of the exception.
Part of the original log from the Bluemix console:
2016-07-15T19:31:34.184-0400[App/0]OUT23:31:34.183 [36m[http-nio-61430-exec-10][0;39m [39mDEBUG[0;39m [36mo.s.w.s.m.m.a.ExceptionHandlerExceptionResolver[0;39m [30m- Resolving exception from handler [public org.springframework.http.ResponseEntity<?> com.abc.xyx.rest.TaskController.getTasks(org.springframework.web.context.request.WebRequest)]: java.lang.IllegalArgumentException: Invalid enum name:Other in com.abc.xyx.service.task.TaskType
2016-07-15T19:31:34.187-0400[App/0]OUT[0;39m23:31:34.186 [36m[http-nio-61430-exec-10][0;39m [39mDEBUG[0;39m [36mo.s.b.f.s.DefaultListableBeanFactory[0;39m [30m- Returning cached instance of singleton bean 'exceptionHandlingAdvice'
2016-07-15T19:31:34.189-0400[App/0]OUT[0;39m23:31:34.189 [36m[http-nio-61430-exec-10][0;39m [39mDEBUG[0;39m [36mo.s.w.s.m.m.a.ExceptionHandlerExceptionResolver[0;39m [30m- Invoking #ExceptionHandler method: public void com.abc.xyx.rest.ExceptionHandlingAdvice.systemException(java.lang.Exception)
2016-07-15T19:31:34.193-0400[App/0]OUT[0;39m23:31:34.192 [36m[http-nio-61430-exec-10][0;39m [1;31mERROR[0;39m [36mc.r.t.r.ExceptionHandlingAdvice[0;39m [30m- Unexpected system exception
2016-07-15T19:31:34.193-0400[App/0]OUT[0;39mjava.lang.IllegalArgumentException: Invalid enum name:Other in com.abc.xyx.service.task.TaskType
2016-07-15T19:31:34.194-0400[App/0]OUT at com.abc.xyx.service.task.TaskUtils.getEnum(TaskUtils.java:30) ~[xyx-core-1.0-SNAPSHOT.jar:na]
2016-07-15T19:31:34.194-0400[App/0]OUT at com.abc.xyx.service.task.Task.init(Task.java:179) ~[xyx-core-1.0-SNAPSHOT.jar:na]
2016-07-15T19:31:34.194-0400[App/0]OUT at com.abc.xyx.service.task.TaskService.searchOnePageTasks(TaskService.java:558) ~[xyx-core-1.0-SNAPSHOT.jar:na]
2016-07-15T19:31:34.194-0400[App/0]OUT at com.abc.xyx.service.task.Task.<init>(Task.java:93) ~[xyx-core-1.0-SNAPSHOT.jar:na]
2016-07-15T19:31:34.194-0400[App/0]OUT at com.abc.xyx.service.task.TaskService$$FastClassBySpringCGLIB$$bb02ea04.invoke(<generated>) ~[xyx-core-1.0-SNAPSHOT.jar:na]
2016-07-15T19:31:34.194-0400[App/0]OUT at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-4.2.4.RELEASE.jar:4.2.4.RELEASE]
The (entire) equivalent log that Logstash wrote into access_log is below:
{"message":"549 <14>1 2016-07-15T23:31:34.208666+00:00 loggregator c022c216-4373-418e-bb4d-fbde0b41d720 [App/0] - - \u001B[0;39m23:31:34.208 \u001B[36m[http-nio-61430-exec-10]\u001B[0;39m \u001B[39mDEBUG\u001B[0;39m \u001B[36mo.s.w.s.m.m.a.HttpEntityMethodProcessor\u001B[0;39m \u001B[30m- Written [{timestamp=Fri Jul 15 23:31:34 UTC 2016, status=500, error=Internal Server Error, exception=java.lang.IllegalArgumentException, message=System exception, path=/api/tasksearch}] as \"application/json;charset=UTF-8\" using
Question:
How can I get the full stack trace for Java exceptions, so that it is useful for debugging?
Any help will be appreciated...
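One avenue worth exploring (a sketch, not a verified fix): Logstash treats each incoming line as a separate event, so a multi-line stack trace arrives as many one-line events and can end up split or truncated in the output. A multiline codec on the tcp input can stitch the trace back onto its first line; the pattern below assumes each new record starts with an ISO-8601 timestamp, as in the console output above:

input {
  tcp {
    port => 5000
    type => syslog
    codec => multiline {
      # Any line that does not start with a timestamp belongs to the previous event
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}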
I have a problem which I think is the same as the one described here:
Error when opening a lucene index: Map failed
However, the solution does not apply in this case, so I am providing more details and asking again.
The index was created using Solr 5.3.
The line of code causing the exception is:
IndexReader indexReader = DirectoryReader.open(FSDirectory.open(Paths.get("the_path")));
The exception stack trace is:
Exception in thread "main" java.io.IOException: Map failed: MMapIndexInput(path="/mnt/fastdata/ac1zz/JATE/solr-5.3.0/server/solr/jate/data_aclrd/index/_5t.tvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 434505698 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:265)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:239)
at org.apache.lucene.codecs.compressing.CompressingTermVectorsReader.<init>(CompressingTermVectorsReader.java:144)
at org.apache.lucene.codecs.compressing.CompressingTermVectorsFormat.vectorsReader(CompressingTermVectorsFormat.java:91)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:50)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
at uk.ac.shef.dcs.jate.app.AppATTF.extract(AppATTF.java:39)
at uk.ac.shef.dcs.jate.app.AppATTF.main(AppATTF.java:33)
The solutions suggested in the exception message do not work in this case, because I am running the application on a server and do not have permission to change those settings.
Namely,
ulimit -v unlimited
prints: "-bash: ulimit: virtual memory: cannot modify limit: Operation not permitted"
and
sysctl -w vm.max_map_count=10000000
gives: "error: permission denied on key 'vm.max_map_count'"
Is there any way I can solve this?
Thanks
I have found a solution, so I am answering my own question.
If you really cannot raise ulimit or vm.max_map_count, the only solution, according to http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html, is to configure Solr (or, if you work with the Lucene API directly, to choose explicitly) to use SimpleFSDirectory (on Windows) or NIOFSDirectory; both are slower than the default MMapDirectory.
For example:
DirectoryReader.open(new NIOFSDirectory(Paths.get("path_to_index"), FSLockFactory.getDefault()))
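If the index is served by Solr itself rather than opened through the Lucene API, the equivalent change (assuming you can edit solrconfig.xml) is to declare the directory factory there:

<directoryFactory name="DirectoryFactory" class="solr.NIOFSDirectoryFactory"/>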
I have a Java webapp that uses an iBATIS row handler to load a very large dataset (1 million rows in an InnoDB table). The process runs as a nightly cron job under the Quartz scheduler. However, after processing for 6 minutes, it dies with the following stack trace:
WARN [DefaultQuartzScheduler_Worker-8] MethodInvokingJobDetailFactoryBean$MethodInvokingJob.executeInternal(168) | Could not invoke method 'doBatch' on target object [org.myCron#4adb34]
org.springframework.jdbc.UncategorizedSQLException: SqlMapClient operation: encountered SQLException [
--- The error occurred in org/myCron/mySqlMap.xml.
--- The error occurred while applying a result map.
--- Check the mySqlMap.outputMapping.
--- The error happened while setting a property on the result object.
--- Cause: com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.io.EOFException
STACKTRACE:
java.io.EOFException
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1903)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2402)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2860)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:771)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1289)
at com.mysql.jdbc.RowDataDynamic.nextRecord(RowDataDynamic.java:362)
at com.mysql.jdbc.RowDataDynamic.next(RowDataDynamic.java:352)
at com.mysql.jdbc.ResultSet.next(ResultSet.java:6106)
at org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:168)
at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at com.ibatis.common.jdbc.logging.ResultSetLogProxy.invoke(ResultSetLogProxy.java:47)
at $Proxy10.next(Unknown Source)
at com.ibatis.sqlmap.engine.execution.SqlExecutor.handleResults(SqlExecutor.java:380)
at com.ibatis.sqlmap.engine.execution.SqlExecutor.handleMultipleResults(SqlExecutor.java:301)
at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQuery(SqlExecutor.java:190)
at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.sqlExecuteQuery(GeneralStatement.java:205)
at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithCallback(GeneralStatement.java:173)
at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithRowHandler(GeneralStatement.java:133)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryWithRowHandler(SqlMapExecutorDelegate.java:649)
at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryWithRowHandler(SqlMapSessionImpl.java:156)
at com.ibatis.sqlmap.engine.impl.SqlMapClientImpl.queryWithRowHandler(SqlMapClientImpl.java:133)
at org.springframework.orm.ibatis.SqlMapClientTemplate$5.doInSqlMapClient(SqlMapClientTemplate.java:267)
at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:165)
at org.springframework.orm.ibatis.SqlMapClientTemplate.queryWithRowHandler(SqlMapClientTemplate.java:265)
at org.myCron.doBatch(MyCron.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:248)
at org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean$MethodInvokingJob.executeInternal(MethodInvokingJobDetailFactoryBean.java:165)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:66)
at org.quartz.core.JobRunShell.run(JobRunShell.java:191)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:516)
** END NESTED EXCEPTION **
The stack trace is very vague. The only hint I see is 'The error happened while setting a property on the result object'. There are only two properties on the result object: a String and an Integer. Both permit null values, but my select statements indicate that neither of them contains any null values. Both have proper getters/setters (which makes sense, since the process runs successfully for a while before dying). Every time the cron job runs, it dies at a random point (so it isn't stuck on a particular row).
Note: the method 'doBatch' does exist, since it is the method that starts the cron process. If it couldn't find doBatch, it couldn't successfully process the first thousand rows.
I've also tried running the job outside of Quartz, and it fails there as well. We tried increasing MySQL's net_read_timeout, net_write_timeout, and delayed_insert_timeout, but none of these settings helped. I also set my log4j level to DEBUG and did not get any helpful info.
Any other ideas about what I could try?
Sounds like MySQL closed the connection for some reason. Check the MySQL log to see if anything shows up, and turn on additional MySQL logging options if necessary.
Also, start printing debug data (including timestamps) from your app - just print everything, then see what the last action was; perhaps some rarely triggered condition in your code has a bug.
I.e. every single time you talk to MySQL, log it before AND after.
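A sketch of that idea for this setup (the class name and log calls are illustrative, not a drop-in fix): wrap the iBATIS row handler so every row is logged before and after it is processed, which pinpoints the row in flight when the connection drops.

import java.util.Date;
import com.ibatis.sqlmap.client.event.RowHandler;

public class LoggingRowHandler implements RowHandler {
    private final RowHandler delegate;
    private long count = 0;

    public LoggingRowHandler(RowHandler delegate) {
        this.delegate = delegate;
    }

    public void handleRow(Object row) {
        // Timestamped before/after logging - the last "before" without a matching
        // "after" is the row being processed when the connection died
        System.out.println(new Date() + " before row " + count);
        delegate.handleRow(row);
        System.out.println(new Date() + " after row " + count);
        count++;
    }
}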