Writing logs on App Engine Java to Stackdriver Logging - java

I have a Java app running on App Engine.
I log my messages in a JSON structure, and I can see them in Stackdriver Logging (as described in the docs):
package com.foo.bar;

import java.util.logging.Logger;

public class MyClass {
    private static final Logger log = Logger.getLogger(MyClass.class.getName());

    public void myFunc() {
        log.info("{msg: 'hello', corId: '123'}");
    }
}
Here is the message I get in Stackdriver Logging:
com.foo.bar.MyClass myFunc: {msg: 'hello', corId: '123'}
and in the log-request object:
protoPayload.line[].logMessage = "com.foo.bar.MyClass myFunc: {msg: 'hello', corId: '123'}"
How can I make the log message contain only the message I am logging, without the class prefix:
{msg: 'hello', corId: '123'}
protoPayload.line[].logMessage = "{msg: 'hello', corId: '123'}"

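For context: the `com.foo.bar.MyClass myFunc:` prefix is not part of the logged message; it is added by the java.util.logging formatter, which prints the source class and method in front of the message text. On a plain JVM you could install a custom formatter that emits only the raw message, along the lines of the hypothetical sketch below (MessageOnlyFormatter is a name I made up). On App Engine the runtime sets up its own log handler, so replacing the formatter may not be possible there, which is why I stripped the prefix downstream instead.

import java.util.logging.Formatter;
import java.util.logging.LogRecord;

// Hypothetical formatter: emits only the raw log message, without the
// "com.foo.bar.MyClass myFunc:" prefix added by the default formatter.
public class MessageOnlyFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
        // formatMessage() resolves {0}-style message parameters, if any
        return formatMessage(record) + System.lineSeparator();
    }
}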
I ended up shipping the logs from Stackdriver to Elasticsearch via Logstash.
In Logstash I parsed my logs; I also split them so that each log becomes its own record rather than an element of a nested array.
see:
How to ship logs from pods on Kubernetes running on top of GCP to elasticsearch/logstash?
My Logstash config for parsing the logs:
filter {
  if [resource][type] == "gae_app" {
    # split the protoPayload.line array, so each log message becomes a separate entry in Elasticsearch
    split {
      field => "[protoPayload][line]"
      target => "line"
      remove_field => [ "httpRequest", "operation", "protoPayload" ]
    }
    # extract the `line.logMessage` and `line.severity` fields
    mutate {
      add_field => { "logMessage" => "%{[line][logMessage]}" }
      replace => { "severity" => "%{[line][severity]}" }
      remove_field => [ "line" ]
    }
    # strip the `com.example.MyClass myFunc: ` prefix from the log message
    grok {
      match => { "logMessage" => "^%{DATA}: %{GREEDYDATA:parsedMessage}" }
    }
    # parse the log message as JSON; the JSON fields end up under jsonPayload
    json {
      source => "parsedMessage"
      target => "jsonPayload"
      add_field => { "[jsonPayload][level]" => "%{severity}" }
      remove_field => [ "parsedMessage", "logMessage" ]
    }
    # make the GAE logs uniform with the structure of GKE logs
    grok {
      match => {
        "[resource][labels][version_id]" =>
          "^%{DATA:[resource][labels][container_name]}-%{GREEDYDATA:[resource][labels][namespace_id]}"
      }
    }
  }
}
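With a valid-JSON version of the example message, {"msg": "hello", "corId": "123"}, and assuming the line's severity was INFO, the relevant part of each resulting event looks roughly like this (remaining fields omitted; note the json filter needs strictly valid JSON, so the single-quoted shorthand from the question would fail to parse):

{
       "severity" => "INFO",
    "jsonPayload" => {
          "msg" => "hello",
        "corId" => "123",
        "level" => "INFO"
    }
}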

Related

Filebeat is not transferring data to Logstash

The Beats input is not transferring data to Logstash. I have provided the Filebeat and Logstash configuration files below.
Input file Test.csv:
Date,Open,High,Low,Close,Volume,Adj Close
2015-04-02,125.03,125.56,124.19,125.32,32120700,125.32
2015-04-01,124.82,125.12,123.10,124.25,40359200,124.25
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    -C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv
output.logstash:
  hosts: ["localhost:5044"]
logstash.conf
input {
  beats {
    port => 5044
  }
}
filter {
  csv {
    separator => ","
    columns => ["Date","Open","High","Low","Close","Volume","Adj Close"]
  }
  mutate { convert => ["High", "float"] }
  mutate { convert => ["Open", "float"] }
  mutate { convert => ["Low", "float"] }
  mutate { convert => ["Close", "float"] }
  mutate { convert => ["Volume", "float"] }
}
output {
  stdout {}
}
Kindly check the filebeat.yml file, as there is an issue with the indentation.
From the filebeat documentation:
filebeat.inputs:
- type: log
  paths:
    - /var/log/messages
    - /var/log/*.log
Your filebeat:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    -C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv
output.logstash:
  hosts: ["localhost:5044"]
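For comparison, a corrected version would look like this (note the space after the dash and the indentation under paths; quoting the path is also safer since it contains a space):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - "C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv"
output.logstash:
  hosts: ["localhost:5044"]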
And for information: the log input is deprecated; use the filestream input instead.
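For reference, a rough filestream equivalent might look like the sketch below (the id value is one I made up; filestream inputs are supposed to have a unique id):

filebeat.inputs:
- type: filestream
  id: test-csv
  enabled: true
  paths:
    - "C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv"
output.logstash:
  hosts: ["localhost:5044"]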

XML data display in grid using Kibana and Logstash

I want to display XML data in a grid using Logstash and Kibana. With the conf file below I am able to display data in the grid, but I am not able to split the row data.
Example output: [screenshot omitted]
logstash.conf file:
input {
  file {
    path => "C:/ELK Stack/logstash-8.2.0-windows-x86_64/logstash-8.2.0/Test.xml"
    start_position => "beginning"
    sincedb_path => "NUL"
    codec => multiline {
      pattern => "^<?stations.*>"
      negate => "true"
      what => "previous"
      auto_flush_interval => 1
      max_lines => 3000
    }
  }
}
filter {
  xml {
    source => "message"
    target => "parsed"
    store_xml => "false"
    xpath => [
      "/stations/station/id/text()", "station_id",
      "/stations/station/name/text()", "station_name"
    ]
  }
  mutate {
    remove_field => [ "message" ]
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => "localhost:9200"
    index => "logstash_index123xml"
    workers => 1
  }
  stdout {
    codec => rubydebug
  }
}
xpath will always return arrays; to associate the members of the two arrays you are going to need a ruby filter. To get multiple events you can use a split filter on an array which you build in the ruby filter. If you start with
<stations>
  <station>
    <id>1</id>
    <name>a</name>
    <id>2</id>
    <name>b</name>
  </station>
</stations>
then if you use
xml {
  source => "message"
  store_xml => "false"
  xpath => {
    "/stations/station/id/text()" => "[@metadata][station_id]"
    "/stations/station/name/text()" => "[@metadata][station_name]"
  }
  remove_field => [ "message" ]
}
ruby {
  code => '
    ids = event.get("[@metadata][station_id]")
    names = event.get("[@metadata][station_name]")
    if ids.is_a? Array and names.is_a? Array and ids.length == names.length
      a = []
      ids.each_index { |x|
        a << { "station_name" => names[x], "station_id" => ids[x] }
      }
      event.set("[@metadata][theData]", a)
    end
  '
}
if [@metadata][theData] {
  split {
    field => "[@metadata][theData]"
    add_field => {
      "station_name" => "%{[@metadata][theData][station_name]}"
      "station_id" => "%{[@metadata][theData][station_id]}"
    }
  }
}
You will get two events:
{
"station_name" => "a",
"station_id" => "1",
...
}
{
"station_name" => "b",
"station_id" => "2",
...
}
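A side note on the pattern: everything intermediate is built under [@metadata], which Logstash never sends to outputs, so only the station_name and station_id fields added by the split filter show up in the final events.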

Failed to load CSV data into Elasticsearch, type conversion issue

I am trying to import a CSV file into Elasticsearch, but it fails and throws an error:
Pipeline aborted due to error {:pipeline_id=>"main",
 :exception=>#,
 :backtrace=>[
  "/usr/local/Cellar/logstash/7.6.1/libexec/vendor/bundle/jruby/2.5.0/gems/logstash-filter-mutate-3.5.0/lib/logstash/filters/mutate.rb:222:in `block in register'",
  "org/jruby/RubyHash.java:1428:in `each'",
  "/usr/local/Cellar/logstash/7.6.1/libexec/vendor/bundle/jruby/2.5.0/gems/logstash-filter-mutate-3.5.0/lib/logstash/filters/mutate.rb:220:in `register'",
  "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:56:in `register'",
  "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:200:in `block in register_plugins'",
  "org/jruby/RubyArray.java:1814:in `each'",
  "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:199:in `register_plugins'",
  "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:502:in `maybe_setup_out_plugins'",
  "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:212:in `start_workers'",
  "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:154:in `run'",
  "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:109:in `block in start'"],
 "pipeline.sources"=>["/Users/user/Document/Esk-Data/xudaxia.conf"],
 :thread=>"#"}
Below is the conf file
input {
  file {
    path => ["/test.csv"]
    start_position => "beginning"
  }
}
filter {
  csv {
    separator => ","
    columns => ["comment_time", "comment", "id", "video_time"]
  }
  mutate {
    convert => {
      "comment_time" => "date_time"
      "comment" => "string"
      "id" => "integer"
      "video_time" => "float"
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test"
  }
}
test.csv
comment_time comment id video_time
2020/03/22 15:59:41 バイ a 123.100
2020/03/22 15:59:45 บาย b 100.100
2020/04/22 15:59:50 ByeBye c 80.210
Can anyone help?
According to the documentation, the date_time option doesn't exist for the convert action of the mutate plugin (doc here).
However, that plugin is used to cast one type into another, which isn't your use case. If comment_time is not recognized as a date field, you should transform it with the date plugin (doc here).
So you should remove this block:
mutate {
  convert => {
    "comment_time" => "date_time"
    "comment" => "string"
    "id" => "integer"
    "video_time" => "float"
  }
}
and replace it with this one:
date {
  match => [ "comment_time", "yyyy/MM/dd HH:mm:ss" ]
}
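One thing to be aware of (not part of the original question): by default the date filter stores the parsed timestamp in @timestamp. If you want comment_time itself to hold the parsed date, add a target:

date {
  match => [ "comment_time", "yyyy/MM/dd HH:mm:ss" ]
  target => "comment_time"
}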

Best practices for managing and loading properties

I would like to know some good approaches for managing properties files.
We have a set of devices (say N). Each of these devices has certain properties.
e.g.
Device A has properties:
A.a11=valuea11
A.a12=valuea12
...
Device B has properties:
B.b11=valueb11
B.b12=valueb12
...
Apart from this, they have some common properties applicable to all devices:
X.x11=valuex11
X.x12=valuex12
I am writing automation for running some test suites on these devices. The test script runs on a single device at a time, and the device name is passed as an argument. Based on the device name, the code grabs the respective device-specific properties plus the common properties and updates the device with them; e.g. for device A, the code grabs A.a11 and A.a12 (device-A specific) and X.x11 and X.x12 (common) and uploads them to the device before running the test script.
So, in code, I need to manage these properties so that only the device-specific and common properties are uploaded to the device, ignoring the rest. I am currently managing it like this:
if ($device eq 'A') then
upload A's properties
elsif ($device eq 'B') then
upload B's properties
endif
upload Common (X) properties.
Managing devices this way is becoming difficult as their number keeps increasing.
So I am looking for a better approach to manage these properties.
This is a good case where roles (aka traits in the generalized OOP literature) will be useful.
Instead of the classical "an object is-a class", with roles an object *does* a role.
Check out the appropriate Moose docs for much more information.
Example:
package Device::ActLikeA;
use Moose::Role;

has 'attribute' => (
    isa     => 'Str',
    is      => 'rw',
    default => 'Apple',
);

sub an_a_like_method {
    my $self = shift;
    # foo
}

1;
So now I have a role called Device::ActLikeA, what do I do with it?
Well, I can apply the role to a class, and the code and attributes defined in ActLikeA will be available in the class:
package Device::USBButterChurn;
use Moose;

with 'Device::ActLikeA';

# now has an attribute 'attribute' and a method 'an_a_like_method'
1;
You can also apply roles to individual instances of a class.
package Device;
use Moose;

has 'part_no' => (
    isa      => 'Str',
    is       => 'ro',
    required => 1,
);

has 'serial' => (
    isa     => 'Str',
    is      => 'ro',
    lazy    => 1,
    builder => '_build_serial',
);

1;
And then main code that looks at the part and applies appropriate roles:
my @PART_MATCH = (
    [ qr/Foo/,               'Device::MetaSyntacticVariable' ],
    [ qr/^...-[^_]*[A][^-]/, 'Device::ActLikeA' ],
    [ qr/^...-[^_]*[B][^-]/, 'Device::ActLikeB' ],
    # etc
);

my $parts = load_parts($config_file);

for my $part (@$parts) {
    my $part_no = $part->part_number();
    for my $entry (@PART_MATCH) {
        my ($match, $role) = @$entry;
        # apply the role at runtime; Moose::Util ships with Moose
        Moose::Util::apply_all_roles($part, $role)
            if $part_no =~ /$match/;
    }
}
Here is a very direct approach.
First of all you need a way to indicate that A "is-a" X and B "is-a" X, i.e. X is a parent of both A and B.
Then your upload_properties routine will look something like this:
sub upload_properties {
    my $device = shift;
    ... upload the "specific" properties of $device ...
    for my $parent (parents of $device) {
        upload_properties($parent);
    }
}
One implementation:
Indicate the "is-a" relationship with a line in your config file like:
A.isa = X
(Feel free to use some other syntax - what you use will depend on how you want to parse the file.)
From the config file, create a hash of all devices that looks like this:
$all_devices = {
    A => { a11 => 'valuea11', a12 => 'valuea12', isa => [ 'X' ] },
    B => { b11 => 'valueb11', b12 => 'valueb12', isa => [ 'X' ] },
    X => { x11 => 'valuex11', x12 => 'valuex12', isa => [] },
};
The upload_properties routine:
sub upload_properties {
    my ($device) = @_;
    for my $key (keys %$device) {
        next if $key eq "isa";
        ... upload property $key => $device->{$key} ...
    }
    my $isa = $device->{isa};    # this should be an array ref
    for my $parent_name (@$isa) {
        my $parent = $all_devices->{$parent_name};
        upload_properties($parent);
    }
}

# e.g. to upload device 'A':
upload_properties( $all_devices->{'A'} );
You can eliminate the large if-else chain by storing the device properties in a hash.
Then you need only ensure that the particular $device appears in that hash.
#!/usr/bin/perl
use warnings;
use strict;

my %vals = (
    A => {
        a11 => 'valuea11',
        a12 => 'valuea12',
    },
    B => {
        b11 => 'valueb11',
        b12 => 'valueb12',
    },
);

foreach my $device (qw(A B C)) {
    if (exists $vals{$device}) {
        upload_properties($vals{$device});
    }
    else {
        warn "'$device' is not a valid device\n";
    }
}

sub upload_properties {
    my ($h) = @_;
    print "setting $_=$h->{$_}\n" for sort keys %$h;    # simulate upload
    print "\n";
}
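For what it's worth, running the sketch above should print something like this (with the warning for the unknown device 'C' going to stderr):

setting a11=valuea11
setting a12=valuea12

setting b11=valueb11
setting b12=valueb12

'C' is not a valid device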

Name of applications running on port in Perl or Java

XAMPP comes with a neat executable called xampp-portcheck.exe. It reports whether the required ports are free and, if not, which applications are running on those ports.
I can check whether something is running on a port by looking at the netstat details, but how do I find out which application is running on a port within Windows?
The CPAN module Win32::IPHelper provides access to GetExtendedTcpTable which provides the ProcessID for each connection.
Win32::Process::Info gives information about all running processes.
Combining the two, we get:
#!/usr/bin/perl
use strict;
use warnings;

use Win32;
use Win32::API;
use Win32::IPHelper;
use Win32::Process::Info qw( NT );
use Data::Dumper;

my @tcptable;
Win32::IPHelper::GetExtendedTcpTable(\@tcptable, 1);

my $pi = Win32::Process::Info->new;
my %pinfo = map { $_->{ProcessId} => $_ } $pi->GetProcInfo;

for my $conn (@tcptable) {
    my $pid = $conn->{ProcessId};
    $conn->{ProcessName} = $pinfo{$pid}->{Name};
    $conn->{ProcessExecutablePath} = $pinfo{$pid}->{ExecutablePath};
}

@tcptable =
    sort { $a->[0] cmp $b->[0] }
    map { [ sprintf("%s:%s", $_->{LocalAddr}, $_->{LocalPort}) => $_ ] }
    @tcptable;

print Dumper \@tcptable;
Output:
[
  '0.0.0.0:135',
  {
    'RemotePort' => 0,
    'LocalPort' => 135,
    'LocalAddr' => '0.0.0.0',
    'State' => 'LISTENING',
    'ProcessId' => 1836,
    'ProcessName' => 'svchost.exe',
    'ProcessExecutablePath' => 'C:\\WINDOWS\\system32\\svchost.exe',
    'RemoteAddr' => '0.0.0.0'
  }
],
...
[
  '192.168.169.150:1841',
  {
    'RemotePort' => 80,
    'LocalPort' => 1841,
    'LocalAddr' => '192.168.169.150',
    'State' => 'ESTABLISHED',
    'ProcessId' => 1868,
    'ProcessName' => 'firefox.exe',
    'ProcessExecutablePath' => 'C:\\Program Files\\Mozilla Firefox\\firefox.exe',
    'RemoteAddr' => '69.59.196.211'
  }
],
Phewwww it was exhausting connecting all these dots.
