Send log4j2 stack traces over syslog - java

I am trying to log stack traces into Logstash.
The logging stack is ELK (ElasticSearch, Logstash, Kibana).
The application producing logs is a Java application, using slf4j as a logging interface, and log4j2 as the logging implementation.
The log4j2.xml declares this syslog Appender, with the RFC5424 format:
<Appenders>
<Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514"
protocol="TCP" appName="MyApp" includeMDC="true" mdcId="mdc"
facility="LOCAL0" enterpriseNumber="18060" newLine="true"
messageId="Audit" id="App">
<LoggerFields>
<KeyValuePair key="thread" value="%t"/>
<KeyValuePair key="priority" value="%p"/>
<KeyValuePair key="category" value="%c"/>
<KeyValuePair key="exception" value="%ex{full}"/>
</LoggerFields>
</Syslog>
</Appenders>
I log a Throwable from the Java application like so:
org.slf4j.LoggerFactory.getLogger("exception_test").error("Testing errors", new RuntimeException("Exception message"));
When an exception is logged, Logstash traces something like this to show me what it persists:
{
"@timestamp":"2016-11-08T11:08:10.387Z",
"port":60397,
"@version":"1",
"host":"127.0.0.1",
"message":"<131>1 2016-11-08T11:08:10.386Z MyComputer.local MyApp - Audit [mdc@18060 category=\"exception_test\" exception=\"java.lang.RuntimeException: Exception message",
"type":"syslog",
"tags":[
"_grokparsefailure"
]
}
And I confirm that Kibana displays exactly the same JSON within the _source field of one of its log entries.
There's a problem here: no stack trace is saved. And the message, "Testing errors", is lost.
The "tags":["_grokparsefailure"] is unfortunate but not related to this question.
I tried adding <ExceptionPattern/> to see if it would change anything:
<Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514"
protocol="TCP" appName="MyApp" includeMDC="true" mdcId="mdc"
facility="LOCAL0" enterpriseNumber="18060" newLine="true"
messageId="Audit" id="App">
<LoggerFields>
<KeyValuePair key="thread" value="%t"/>
<KeyValuePair key="priority" value="%p"/>
<KeyValuePair key="category" value="%c"/>
<KeyValuePair key="exception" value="%ex{full}"/>
</LoggerFields>
<ExceptionPattern>%ex{full}</ExceptionPattern>
</Syslog>
<ExceptionPattern/> replaces the log message, and also (sadly) omits all loggerFields. But it does give me a class name and line number:
{
"@timestamp":"2016-11-08T11:54:03.835Z",
"port":60397,
"@version":"1",
"host":"127.0.0.1",
"message":"at com.stackoverflow.LogTest.throw(LogTest.java:149)",
"type":"syslog",
"tags":[
"_grokparsefailure"
]
}
Again: no stack trace. And again: the message, "Testing errors", is lost.
How can I use log4j2 to log stack traces into Logstash? I don't necessarily have to use the syslog appender.
Essentially the constraints are:
Not be locked in to any particular logging infrastructure (this is why I used syslog)
Multi-line stack traces need to be understood as being a single log entry. It's undesirable for "each line of the stack trace" to be "a separate log message"
Stack traces must be able to be subjected to filters. A typical exception of mine can have a page-long stack trace. I want to filter out frames like Spring.

Log4j 2.5's SyslogAppender can only send stack traces over UDP.
<Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514"
protocol="UDP" appName="MyApp" includeMDC="true" mdcId="mdc"
facility="LOCAL0" enterpriseNumber="18060" newLine="true"
messageId="LogTest" id="App">
<LoggerFields>
<KeyValuePair key="thread" value="%t"/>
<KeyValuePair key="priority" value="%p"/>
<KeyValuePair key="category" value="%c"/>
<KeyValuePair key="exception" value="%ex{full}"/>
</LoggerFields>
<ExceptionPattern>%ex{full}</ExceptionPattern>
</Syslog>
With UDP: both ExceptionPattern and LoggerFields.KeyValuePair["exception"] start working as solutions for multiline stack traces.
This is what Logstash prints when I send an exception over UDP via syslog:
{
"@timestamp" => 2016-11-14T13:23:38.304Z,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "<131>1 2016-11-14T13:23:38.302Z BirchBox.local MyApp - LogTest [mdc@18060 category=\"com.stackoverflow.Deeply\" exception=\"java.lang.RuntimeException: Exception message\n\tat com.stackoverflow.Deeply.complain(Deeply.java:10)\n\tat com.stackoverflow.Nested.complain(Nested.java:8)\n\tat com.stackoverflow.Main.main(Main.java:20)\n\" priority=\"ERROR\" thread=\"main\"] Example error\njava.lang.RuntimeException: Exception message\n\tat com.stackoverflow.Deeply.complain(Deeply.java:10)\n\tat com.stackoverflow.Nested.complain(Nested.java:8)\n\tat com.stackoverflow.Main.main(Main.java:20)",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
Inside [mdc@18060 exception=\"…\"] we get the LoggerFields.KeyValuePair["exception"] stack trace.
In addition to this: the stack trace is inserted into the logged message itself, thanks to ExceptionPattern.
For reference: this is what logstash prints when I send the exception over TCP via syslog (i.e. the same SyslogAppender as described above, but with protocol="TCP" instead):
{
"@timestamp" => 2016-11-14T19:56:30.293Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "<131>1 2016-11-14T19:56:30.277Z BirchBox.local MyApp - Audit [mdc@18060 category=\"com.stackoverflow.Deeply\" exception=\"java.lang.RuntimeException: Exception message",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
{
"@timestamp" => 2016-11-14T19:56:30.296Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "at com.stackoverflow.Deeply.complain(Deeply.java:10)",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
{
"@timestamp" => 2016-11-14T19:56:30.296Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "at com.stackoverflow.Nested.complain(Nested.java:8)",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
{
"@timestamp" => 2016-11-14T19:56:30.296Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "at com.stackoverflow.Main.main(Main.java:20)",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
{
"@timestamp" => 2016-11-14T19:56:30.296Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "\" priority=\"ERROR\" thread=\"main\"] Example error",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
{
"@timestamp" => 2016-11-14T19:56:30.296Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "java.lang.RuntimeException: Exception message",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
{
"@timestamp" => 2016-11-14T19:56:30.297Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "at com.stackoverflow.Deeply.complain(Deeply.java:10)",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
{
"@timestamp" => 2016-11-14T19:56:30.298Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "at com.stackoverflow.Nested.complain(Nested.java:8)",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
{
"@timestamp" => 2016-11-14T19:56:30.298Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "at com.stackoverflow.Main.main(Main.java:20)",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
{
"@timestamp" => 2016-11-14T19:56:30.299Z,
"port" => 63179,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
It looks like TCP does actually "work", but splits the single log message into many syslog messages (for example when \n is encountered).
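One more note on the third constraint from the question (filtering frames such as Spring out of a page-long trace): if I remember the log4j2 PatternLayout documentation correctly, the extended throwable converter accepts a package filter, so the exception field could look roughly like the line below. Treat the filters option as an assumption to verify against the docs for your log4j2 version.
<KeyValuePair key="exception" value="%xEx{filters(org.springframework)}"/>
Frames from the listed packages should then be suppressed from the printed trace instead of being emitted one by one.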

Related

Wildfly Logs are not saved

After restarting my Ubuntu VM, Wildfly 18 automatically starts.
From ps aux
wildfly 1031 0.0 0.0 20048 3508 ? Ss Dez18 0:00 /bin/bash /opt/wildfly/bin/launch.sh standalone standalone.xml 0.0.0.0
wildfly 1067 0.0 0.0 4628 1756 ? S Dez18 0:00 /bin/sh /opt/wildfly/bin/standalone.sh -c standalone.xml -b 0.0.0.0
wildfly 1482 35.2 7.0 1658040 572176 ? Sl Dez18 0:36 /opt/jdk-13.0.1/bin/java -D[Standalone] -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs
But my server.log is empty (cat /opt/wildfly/standalone/log/server.log gives me no messages about the startup process of Wildfly, etc.).
When I run "service wildfly restart", no additional entries are written to server.log. How can I get access to my log?
I think I've changed nothing compared to the standard config.
/standalone/configuration/logging.properties
loggers=sun.rmi,io.jaegertracing.Configuration,org.jboss.as.config,com.arjuna
logger.level=INFO
logger.handlers=FILE,CONSOLE
logger.sun.rmi.level=WARN
logger.sun.rmi.useParentHandlers=true
logger.io.jaegertracing.Configuration.level=WARN
logger.io.jaegertracing.Configuration.useParentHandlers=true
logger.org.jboss.as.config.level=DEBUG
logger.org.jboss.as.config.useParentHandlers=true
logger.com.arjuna.level=WARN
logger.com.arjuna.useParentHandlers=true
handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.level=INFO
handler.CONSOLE.formatter=COLOR-PATTERN
handler.CONSOLE.properties=enabled,autoFlush,target
handler.CONSOLE.enabled=true
handler.CONSOLE.autoFlush=true
handler.CONSOLE.target=SYSTEM_OUT
handler.FILE=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler
handler.FILE.level=ALL
handler.FILE.formatter=PATTERN
handler.FILE.properties=append,autoFlush,enabled,suffix,fileName
handler.FILE.append=true
handler.FILE.autoFlush=true
handler.FILE.enabled=true
handler.FILE.suffix=.yyyy-MM-dd
handler.FILE.fileName=/opt/wildfly/standalone/log/server.log
formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.PATTERN.properties=pattern
formatter.PATTERN.pattern=%d{yyyy-MM-dd HH\:mm\:ss,SSS} %-5p [%c] (%t) %s%e%n
formatter.COLOR-PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.COLOR-PATTERN.properties=pattern
formatter.COLOR-PATTERN.pattern=%K{level}%d{HH\:mm\:ss,SSS} %-5p [%c] (%t) %s%e%n
Output from standalone-cli
[standalone@localhost:9990 /] /subsystem=logging:read-resource(recursive=true)
{
"outcome" => "success",
"result" => {
"add-logging-api-dependencies" => true,
"use-deployment-logging-config" => true,
"async-handler" => undefined,
"console-handler" => {"CONSOLE" => {
"autoflush" => true,
"enabled" => true,
"encoding" => undefined,
"filter" => undefined,
"filter-spec" => undefined,
"formatter" => "%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n",
"level" => "INFO",
"name" => "CONSOLE",
"named-formatter" => "COLOR-PATTERN",
"target" => "System.out"
}},
"custom-formatter" => undefined,
"custom-handler" => undefined,
"file-handler" => undefined,
"filter" => undefined,
"json-formatter" => undefined,
"log-file" => undefined,
"logger" => {
"com.arjuna" => {
"category" => "com.arjuna",
"filter" => undefined,
"filter-spec" => undefined,
"handlers" => undefined,
"level" => "WARN",
"use-parent-handlers" => true
},
"io.jaegertracing.Configuration" => {
"category" => "io.jaegertracing.Configuration",
"filter" => undefined,
"filter-spec" => undefined,
"handlers" => undefined,
"level" => "WARN",
"use-parent-handlers" => true
},
"org.jboss.as.config" => {
"category" => "org.jboss.as.config",
"filter" => undefined,
"filter-spec" => undefined,
"handlers" => undefined,
"level" => "DEBUG",
"use-parent-handlers" => true
},
"sun.rmi" => {
"category" => "sun.rmi",
"filter" => undefined,
"filter-spec" => undefined,
"handlers" => undefined,
"level" => "WARN",
"use-parent-handlers" => true
}
},
"logging-profile" => undefined,
"pattern-formatter" => {
"PATTERN" => {
"color-map" => undefined,
"pattern" => "%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"
},
"COLOR-PATTERN" => {
"color-map" => undefined,
"pattern" => "%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"
}
},
"periodic-rotating-file-handler" => {"FILE" => {
"append" => true,
"autoflush" => true,
"enabled" => true,
"encoding" => undefined,
"file" => {
"relative-to" => "jboss.server.log.dir",
"path" => "server.log"
},
"filter" => undefined,
"filter-spec" => undefined,
"formatter" => "%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n",
"level" => "ALL",
"name" => "FILE",
"named-formatter" => "PATTERN",
"suffix" => ".yyyy-MM-dd"
}},
"periodic-size-rotating-file-handler" => undefined,
"root-logger" => {"ROOT" => {
"filter" => undefined,
"filter-spec" => undefined,
"handlers" => [
"CONSOLE",
"FILE"
],
"level" => "INFO"
}},
"size-rotating-file-handler" => undefined,
"socket-handler" => undefined,
"syslog-handler" => undefined,
"xml-formatter" => undefined
}
}
My problem was that server.log was owned by root because of prior testing ... I just deleted it, and now it is re-created with owner wildfly.
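For anyone hitting the same symptom, a quick way to check and fix the ownership is sketched below (paths from the question; the wildfly user and group are assumed from the ps output above):
# see who owns the log file
ls -l /opt/wildfly/standalone/log/server.log
# hand it back to the user the server runs as, so it can write again
sudo chown wildfly:wildfly /opt/wildfly/standalone/log/server.log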

java.lang.OutOfMemoryError: Java heap space when transferring data from jdbc to elasticsearch via logstash [duplicate]

This question already has answers here:
How to deal with "java.lang.OutOfMemoryError: Java heap space" error?
(31 answers)
Closed 1 year ago.
I have a huge Postgres database with 20 million rows and I want to transfer it to Elasticsearch via Logstash. I followed the advice mentioned here and tested it with a simple database of 300 rows, and everything worked fine, but when I tested it with my main database I always run into this error:
nargess#nargess-Surface-Book:/usr/share/logstash/bin$ sudo ./logstash -w 1 -f students.conf --path.data /usr/share/logstash/data/students/ --path.settings /etc/logstash
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid3453.hprof ...
Heap dump file created [13385912484 bytes in 53.304 secs]
Exception in thread "Ruby-0-Thread-11: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/thread_pool.rb:216" java.lang.ArrayIndexOutOfBoundsException: -1
at org.jruby.runtime.ThreadContext.popRubyClass(ThreadContext.java:729)
at org.jruby.runtime.ThreadContext.postYield(ThreadContext.java:1292)
at org.jruby.runtime.ContextAwareBlockBody.post(ContextAwareBlockBody.java:29)
at org.jruby.runtime.Interpreted19Block.yield(Interpreted19Block.java:198)
at org.jruby.runtime.Interpreted19Block.call(Interpreted19Block.java:125)
at org.jruby.runtime.Block.call(Block.java:101)
at org.jruby.RubyProc.call(RubyProc.java:300)
at org.jruby.RubyProc.call(RubyProc.java:230)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:103)
at java.lang.Thread.run(Thread.java:748)
The signal INT is in use by the JVM and will not work correctly on this platform
Error: Your application used more memory than the safety cap of 12G.
Specify -J-Xmx####m to increase it (#### = cap size in MB).
Specify -w for full OutOfMemoryError stack trace
Although I went to /etc/logstash/jvm.options and set -Xms256m and -Xmx12000m, I still get these errors. I have 13 GB of memory free. How can I send my data to Elasticsearch with this memory?
This is the student-index.json that I use in Elasticsearch:
{
"aliases": {},
"warmers": {},
"mappings": {
"tab_students_dfe": {
"properties": {
"stcode": {
"type": "text"
},
"voroodi": {
"type": "integer"
},
"name": {
"type": "text"
},
"family": {
"type": "text"
},
"namp": {
"type": "text"
},
"lastupdate": {
"type": "date"
},
"picture": {
"type": "text"
},
"uniquename": {
"type": "text"
}
}
}
},
"settings": {
"index": {
"number_of_shards": "5",
"number_of_replicas": "1"
}
}
}
Then I try to insert this index into Elasticsearch with:
curl -XPUT --header "Content-Type: application/json" http://localhost:9200/students -d @postgres-index.json
And next, this is my configuration file, /usr/share/logstash/bin/students.conf:
input {
jdbc {
jdbc_connection_string => "jdbc:postgresql://localhost:5432/postgres"
jdbc_user => "postgres"
jdbc_password => "postgres"
# The path to downloaded jdbc driver
jdbc_driver_library => "./postgresql-42.2.1.jar"
jdbc_driver_class => "org.postgresql.Driver"
# The path to the file containing the query
statement => "select * from students"
}
}
filter {
aggregate {
task_id => "%{stcode}"
code => "
map['stcode'] = event.get('stcode')
map['voroodi'] = event.get('voroodi')
map['name'] = event.get('name')
map['family'] = event.get('family')
map['namp'] = event.get('namp')
map['uniquename'] = event.get('uniquename')
event.cancel()
"
push_previous_map_as_event => true
timeout => 5
}
}
output {
elasticsearch {
document_id => "%{stcode}"
document_type => "postgres"
index => "students"
codec => "json"
hosts => ["127.0.0.1:9200"]
}
}
Thank you for your help
This is a bit old, but I just had the same issue and increasing the heap size of logstash helped me here. I added this to my logstash service in the docker-compose file:
environment:
LS_JAVA_OPTS: "-Xmx2048m -Xms2048m"
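For context, here is roughly where that block sits in a docker-compose file; the service name, image and tag below are assumptions rather than something from the answer:
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.0   # image and tag are placeholders
    environment:
      # give the Logstash JVM a bigger heap, as suggested above
      LS_JAVA_OPTS: "-Xmx2048m -Xms2048m"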
Further reading: What are the -Xms and -Xmx parameters when starting JVM?

Java creating dynamic multidimensional array using recursion

Can anyone help me with creating a dynamic multidimensional array using recursion?
I need to create a function that takes the depth of the array and the number of children.
The depth of the array may vary.
class Item {
    public int id;
    public Map<Integer, Item>[] children;
}

Map<Integer, Item> items = new HashMap<>();

void build(int depth, int countChildren) {
...
}

Item createItem() {
    Item item = new Item();
    item.id = (int) (Math.random() * 10000); // random id between 0 and 9999
    return item;
}
The array looks like this:
[
[
'id' => 1,
'children' => [
[
'id' => 10,
'children' => [
[
'id' => 100,
'children' => [
[
'id' => 1000,
],
[
'id' => 2000,
],
]
],
[
'id' => 200,
'children' => [
[
'id' => 1000,
],
[
'id' => 2000,
],
]
],
]
],
[
'id' => 20,
'children' => [
[
'id' => 100,
'children' => [
[
'id' => 1000,
],
[
'id' => 2000,
],
]
],
[
'id' => 200,
'children' => [
[
'id' => 1000,
],
[
'id' => 2000,
],
]
],
]
],
]
],
[
'id' => 1,
'children' => [
[
'id' => 10,
'children' => [
[
'id' => 100,
'children' => [
[
'id' => 1000,
],
[
'id' => 2000,
],
]
],
[
'id' => 200,
'children' => [
[
'id' => 1000,
],
[
'id' => 2000,
],
]
],
]
],
[
'id' => 20,
'children' => [
[
'id' => 100,
'children' => [
[
'id' => 1000,
],
[
'id' => 2000,
],
]
],
[
'id' => 200,
'children' => [
[
'id' => 1000,
],
[
'id' => 2000,
],
]
],
]
],
]
],
];
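Here is a minimal sketch of how build(...) could be written recursively, assuming each node keeps its children in a single Map<Integer, Item> (rather than the array of maps declared in the question) and random ids as in createItem():
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

class Item {
    int id;
    Map<Integer, Item> children = new HashMap<>();
}

public class TreeBuilder {
    private static final Random RANDOM = new Random();

    // Creates one level of countChildren items and recurses until depth levels exist.
    static Map<Integer, Item> build(int depth, int countChildren) {
        Map<Integer, Item> level = new HashMap<>();
        if (depth == 0) {
            return level; // leaf level: no children below this point
        }
        for (int i = 0; i < countChildren; i++) {
            Item item = createItem();
            item.children = build(depth - 1, countChildren); // build the subtree under this item
            level.put(item.id, item); // keyed by id; random ids could collide in a real implementation
        }
        return level;
    }

    static Item createItem() {
        Item item = new Item();
        item.id = RANDOM.nextInt(10000); // random id between 0 and 9999
        return item;
    }

    public static void main(String[] args) {
        Map<Integer, Item> items = build(4, 2); // depth 4 with 2 children per node
        System.out.println("top-level items: " + items.size());
    }
}
Calling build(4, 2) produces a structure shaped like the listing above: four levels deep with two children per node.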

Log analysis to query logs based on a log message

I have a Java application that outputs logs in the following format:
timestamp UUID1 some information
timestamp UUID1 some more information
timestamp UUID1 x = 1
timestamp UUID2 some information
timestamp UUID2 some more information
timestamp UUID2 x = 2
timestamp UUID3 some information
timestamp UUID3 some more information
timestamp UUID3 x = 1
I want to implement a log analysis framework using Elasticsearch, Logstash and Kibana. Is it possible to get the logs only according to the X value?
For example:-
If I query X = 1, I should get only the following logs.
timestamp UUID1 some information
timestamp UUID1 some more information
timestamp UUID1 x = 1
timestamp UUID3 some information
timestamp UUID3 some more information
timestamp UUID3 x = 1
If I query X = 2, I should get only the following logs.
timestamp UUID2 some information
timestamp UUID2 some more information
timestamp UUID2 x = 2
I am in control of the log message format. If it is not directly possible to do this query, I can also change the message format.
UPDATE 1:
I will be a little more specific.
The following are my log statements.
MDC.put("uuid", UUID.randomUUID().toString());
logger.info("Assigning value to the variable : {}", name);
this.setVal(value.getVal());
logger.info("{} = {}", name, value.getVal());
logger.info("Assigned value {} to the variable : {}", value.getVal(),
name);
MDC.clear();
I received the log statements in Logstash over UDP, and I am getting messages like this:
{
"@timestamp" => "2015-04-01T10:23:37.846+05:30",
"@version" => 1,
"message" => "Assigning value to the variable : X",
"logger_name" => "com.example.logstash.Variable",
"thread_name" => "pool-1-thread-1",
"level" => "INFO",
"level_value" => 20000,
"HOSTNAME" => "pnibinkj-W7-1",
"uuid" => "ab17b842-8348-4474-98e4-8bc2b8dd6781",
"host" => "127.0.0.1"
}
{
"@timestamp" => "2015-04-01T10:23:37.846+05:30",
"@version" => 1,
"message" => "Assigning value to the variable : Y",
"logger_name" => "com.example.logstash.Variable",
"thread_name" => "pool-1-thread-2",
"level" => "INFO",
"level_value" => 20000,
"HOSTNAME" => "pnibinkj-W7-1",
"uuid" => "d5513e4c-de3b-4144-87e4-87b077ac8056",
"host" => "127.0.0.1"
}
{
"@timestamp" => "2015-04-01T10:23:37.862+05:30",
"@version" => 1,
"message" => "Y = 1",
"logger_name" => "com.example.logstash.Variable",
"thread_name" => "pool-1-thread-2",
"level" => "INFO",
"level_value" => 20000,
"HOSTNAME" => "pnibinkj-W7-1",
"uuid" => "d5513e4c-de3b-4144-87e4-87b077ac8056",
"host" => "127.0.0.1"
}
{
"@timestamp" => "2015-04-01T10:23:37.863+05:30",
"@version" => 1,
"message" => "X = 1",
"logger_name" => "com.example.logstash.Variable",
"thread_name" => "pool-1-thread-1",
"level" => "INFO",
"level_value" => 20000,
"HOSTNAME" => "pnibinkj-W7-1",
"uuid" => "ab17b842-8348-4474-98e4-8bc2b8dd6781",
"host" => "127.0.0.1"
}
{
"@timestamp" => "2015-04-01T10:23:37.863+05:30",
"@version" => 1,
"message" => "Assigned value 1 to the variable : X",
"logger_name" => "com.example.logstash.Variable",
"thread_name" => "pool-1-thread-1",
"level" => "INFO",
"level_value" => 20000,
"HOSTNAME" => "pnibinkj-W7-1",
"uuid" => "ab17b842-8348-4474-98e4-8bc2b8dd6781",
"host" => "127.0.0.1"
}
{
"@timestamp" => "2015-04-01T10:23:37.863+05:30",
"@version" => 1,
"message" => "Assigned value 1 to the variable : Y",
"logger_name" => "com.example.logstash.Variable",
"thread_name" => "pool-1-thread-2",
"level" => "INFO",
"level_value" => 20000,
"HOSTNAME" => "pnibinkj-W7-1",
"uuid" => "d5513e4c-de3b-4144-87e4-87b077ac8056",
"host" => "127.0.0.1"
}
There are 2 UUIDs
"d5513e4c-de3b-4144-87e4-87b077ac8056" for "Y = 1"
"ab17b842-8348-4474-98e4-8bc2b8dd6781" for "X = 1"
There are two other messages for each UUID. I want to combine them into a single event.
I am not sure how to write the multiline filter for this case.
filter {
multiline {
pattern => "."
what => "previous"
stream_identity => "%{uuid}"
}
}
"pattern" and "what" are required fields, it seems. What should I provide for these fields? How do I use stream identity?
Please point me in the right direction.
Thanks,
Paul
You would need to combine your messages (see multiline{} filter, which supports stream_identity), and then a regular query would return the appropriate message.
This should be possible using the Kibana filters if X is some unique value, but with the logs in the format shown you'd need to use the multiline filter to join the entries together.
With that in place, you could probably use a query something like
message: "X=1"
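To make that concrete, here is a hedged sketch of the multiline filter for the messages above, assuming the "Assigning value to the variable" message always starts a new group (option names as in the logstash-filter-multiline plugin):
filter {
  multiline {
    # a new event starts with the "Assigning value ..." message;
    # anything that does not match is appended to the previous event
    pattern => "^Assigning value to the variable"
    negate => true
    what => "previous"
    # group by the MDC uuid instead of the default host/path/type identity
    stream_identity => "%{uuid}"
  }
}
With the three messages merged into one event, a query such as message: "X = 1" should then return the whole group.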

HornetQ not honoring Last Value in a Queue when Wildfly is restarted

Following is the queue definition I have in my standalone.xml. The queue I have is persistent.
<jms-queue name="CEComputeQueue">
<entry name="queue/CEComputeQueue"/>
<entry name="java:jboss/exported/jms/queue/CEComputeQueue"/>
</jms-queue>
With the following address settings:
<address-setting match="jms.queue.CEComputeQueue">
    <last-value-queue>true</last-value-queue>
</address-setting>
While pushing to the queue, HornetQ is not retaining only the last value, as you can see in the output below (taken from the JMX console), where the "_HQ_LVQ_NAME" value "51" is repeated.
To reproduce this, follow the steps below:
1. First I pushed a value to the queue (51); it was not yet processed, and I stopped the Wildfly server.
2. I restarted the server.
3. I pushed another value to the queue (51).
Note: Although the queue is a last-value queue, it still has multiple entries with the same key, as shown below.
RESOLUTION?
How can I get this resolved? Is it a bug in HornetQ or expected behaviour, and what is a possible solution to the problem?
Output using JMX:
[standalone@localhost:9990 /] /subsystem=messaging/hornetq-server=default/jms-queue=CEComputeQueue:list-messages
{
"outcome" => "success",
"result" => [
{
"JMSMessageID" => "ID:b620436a-ce84-11e4-a3d7-f9d18c2c2348",
"JMSExpiration" => 0,
"messageID" => 34360041009L,
"_HQ_LVQ_NAME" => "51",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "60fe5c1a-ce52-11e4-a3d7-f9d18c2c2348",
"JMSPriority" => 4,
"JMSTimestamp" => 1426803011453L
},
{
"JMSMessageID" => "ID:c7a3aaee-ce8d-11e4-a3d7-f9d18c2c2348",
"JMSExpiration" => 0,
"messageID" => 34360041166L,
"_HQ_LVQ_NAME" => "49",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "6112f59e-ce52-11e4-a3d7-f9d18c2c2348",
"JMSPriority" => 4,
"JMSTimestamp" => 1426806906306L
},
{
"JMSMessageID" => "ID:4c4952f8-ce95-11e4-a3d7-f9d18c2c2348",
"JMSExpiration" => 0,
"messageID" => 34360041269L,
"_HQ_LVQ_NAME" => "51",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "60fe5c1a-ce52-11e4-a3d7-f9d18c2c2348",
"JMSPriority" => 4,
"JMSTimestamp" => 1426810135328L
},
{
"JMSMessageID" => "ID:2a4048fd-cea1-11e4-a3d7-f9d18c2c2348",
"JMSExpiration" => 0,
"messageID" => 34360041517L,
"_HQ_LVQ_NAME" => "51",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "61105d84-ce52-11e4-a3d7-f9d18c2c2348",
"JMSPriority" => 4,
"JMSTimestamp" => 1426815232187L
},
{
"JMSMessageID" => "ID:cdc0d5f8-cea5-11e4-a3d7-f9d18c2c2348",
"JMSExpiration" => 0,
"messageID" => 34360041946L,
"_HQ_LVQ_NAME" => "49",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "60fe5c1a-ce52-11e4-a3d7-f9d18c2c2348",
"JMSPriority" => 4,
"JMSTimestamp" => 1426817224485L
},
{
"JMSMessageID" => "ID:0e169a9e-cea7-11e4-a3d7-f9d18c2c2348",
"JMSExpiration" => 0,
"messageID" => 34360042115L,
"_HQ_LVQ_NAME" => "50",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "6112f59e-ce52-11e4-a3d7-f9d18c2c2348",
"JMSPriority" => 4,
"JMSTimestamp" => 1426817761918L
},
{
"JMSMessageID" => "ID:185fd030-cea7-11e4-a3d7-f9d18c2c2348",
"JMSExpiration" => 0,
"messageID" => 34360042124L,
"_HQ_LVQ_NAME" => "16",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "60fe5c1a-ce52-11e4-a3d7-f9d18c2c2348",
"JMSPriority" => 4,
"JMSTimestamp" => 1426817779175L
},
{
"JMSMessageID" => "ID:4c614265-cea7-11e4-a3d7-f9d18c2c2348",
"JMSExpiration" => 0,
"messageID" => 34360042157L,
"_HQ_LVQ_NAME" => "51",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "6112f59e-ce52-11e4-a3d7-f9d18c2c2348",
"JMSPriority" => 4,
"JMSTimestamp" => 1426817866426L
},
{
"JMSMessageID" => "ID:5b14c783-cead-11e4-92c2-e36be9318636",
"JMSExpiration" => 0,
"messageID" => 36507524460L,
"_HQ_LVQ_NAME" => "49",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "a3519e4e-cea8-11e4-92c2-e36be9318636",
"JMSPriority" => 4,
"JMSTimestamp" => 1426820468071L
},
{
"JMSMessageID" => "ID:5e94c684-cead-11e4-92c2-e36be9318636",
"JMSExpiration" => 0,
"messageID" => 36507524462L,
"_HQ_LVQ_NAME" => "51",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "a3519e4e-cea8-11e4-92c2-e36be9318636",
"JMSPriority" => 4,
"JMSTimestamp" => 1426820473943L
},
{
"JMSMessageID" => "ID:a5bed858-cea9-11e4-92c2-e36be9318636",
"JMSExpiration" => 0,
"messageID" => 36507523986L,
"_HQ_LVQ_NAME" => "50",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "a3519e4e-cea8-11e4-92c2-e36be9318636",
"JMSPriority" => 4,
"JMSTimestamp" => 1426818875350L
},
{
"JMSMessageID" => "ID:20a629be-ceaa-11e4-92c2-e36be9318636",
"JMSExpiration" => 0,
"messageID" => 36507524057L,
"_HQ_LVQ_NAME" => "16",
"address" => "jms.queue.CEComputeQueue",
"JMSDeliveryMode" => "PERSISTENT",
"__HQ_CID" => "a3519e4e-cea8-11e4-92c2-e36be9318636",
"JMSPriority" => 4,
"JMSTimestamp" => 1426819081548L
}
]
}
This is a bug; I raised it in the Wildfly Jira.
Here is the link:
https://issues.jboss.org/browse/WFLY-4479
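For reference, a hedged sketch of how a producer typically marks a message for last-value handling, using the "_HQ_LVQ_NAME" property that shows up in the JMX listing above (the JMS session and queue are assumed to come from the usual setup):
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class LastValueSender {
    static void sendLastValue(Session session, Queue queue, String key, String body) throws Exception {
        MessageProducer producer = session.createProducer(queue);
        TextMessage message = session.createTextMessage(body);
        // messages sharing this property value should replace each other in a last-value queue
        message.setStringProperty("_HQ_LVQ_NAME", key);
        producer.send(message);
        producer.close();
    }
}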
