Filebeat input is not being transferred to Logstash. I have provided the Filebeat and Logstash configuration files below.
Input Test.csv file
Date,Open,High,Low,Close,Volume,Adj Close
2015-04-02,125.03,125.56,124.19,125.32,32120700,125.32
2015-04-01,124.82,125.12,123.10,124.25,40359200,124.25
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    -C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv
output.logstash:
  hosts: ["localhost:5044"]
logstash.conf
input {
  beats {
    port => 5044
  }
}
filter {
  csv {
    separator => ","
    columns => ["Date","Open","High","Low","Close","Volume","Adj Close"]
  }
  mutate { convert => ["High", "float"] }
  mutate { convert => ["Open", "float"] }
  mutate { convert => ["Low", "float"] }
  mutate { convert => ["Close", "float"] }
  mutate { convert => ["Volume", "float"] }
}
output {
  stdout {}
}
Kindly check the filebeat.yml file, as there is an issue with the indentation.
filebeat documentation
filebeat.inputs:
- type: log
  paths:
    - /var/log/messages
    - /var/log/*.log
your filebeat
filebeat.inputs:
- type: log
  enabled: true
  paths:
    -C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv
output.logstash:
  hosts: ["localhost:5044"]
And for your information: the log input is deprecated; use the filestream input instead.
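Putting both points together, a corrected filebeat.yml would look something like this sketch (note the space after the - in the paths entry, which is the indentation problem above; the filestream id is just an arbitrary name I chose):

filebeat.inputs:
- type: filestream
  id: test-csv
  enabled: true
  paths:
    - "C:/ELK Stack/filebeat-8.2.0-windows-x86_64/filebeat-8.2.0-windows-x86_64/Test.csv"

output.logstash:
  hosts: ["localhost:5044"]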
Related
I am trying to import a CSV file into Elasticsearch, but it failed and threw an error.
Pipeline aborted due to error {:pipeline_id=>"main",
 :exception=>#,
 :backtrace=>["/usr/local/Cellar/logstash/7.6.1/libexec/vendor/bundle/jruby/2.5.0/gems/logstash-filter-mutate-3.5.0/lib/logstash/filters/mutate.rb:222:in `block in register'",
 "org/jruby/RubyHash.java:1428:in `each'",
 "/usr/local/Cellar/logstash/7.6.1/libexec/vendor/bundle/jruby/2.5.0/gems/logstash-filter-mutate-3.5.0/lib/logstash/filters/mutate.rb:220:in `register'",
 "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:56:in `register'",
 "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:200:in `block in register_plugins'",
 "org/jruby/RubyArray.java:1814:in `each'",
 "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:199:in `register_plugins'",
 "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:502:in `maybe_setup_out_plugins'",
 "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:212:in `start_workers'",
 "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:154:in `run'",
 "/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:109:in `block in start'"],
 "pipeline.sources"=>["/Users/user/Document/Esk-Data/xudaxia.conf"],
 :thread=>"#"}
Below is the conf file
input {
  file {
    path => ["/test.csv"]
    start_position => "beginning"
  }
}
filter {
  csv {
    separator => ","
    columns => ["comment_time", "comment", "id", "video_time"]
  }
  mutate {
    convert => {
      "comment_time" => "date_time"
      "comment" => "string"
      "id" => "integer"
      "video_time" => "float"
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test"
  }
}
test.csv
comment_time comment id video_time
2020/03/22 15:59:41 バイ a 123.100
2020/03/22 15:59:45 บาย b 100.100
2020/04/22 15:59:50 ByeBye c 80.210
Can anyone help?
According to the documentation, the date_time option doesn't exist for the convert action of the mutate plugin - doc here.
However, this plugin is used to cast one type into another, which isn't your use case. If comment_time is not recognized as a date field, you should transform it with the date plugin - doc here.
So you should remove this block:
mutate {
  convert => {
    "comment_time" => "date_time"
    "comment" => "string"
    "id" => "integer"
    "video_time" => "float"
  }
}
and replace it with this one:
date {
  match => [ "comment_time", "yyyy/MM/dd HH:mm:ss" ]
}
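With that change, the whole filter block would look something like the sketch below. The id and video_time conversions are kept, since integer and float are valid mutate conversion targets; by default the date plugin writes the parsed value into @timestamp.

filter {
  csv {
    separator => ","
    columns => ["comment_time", "comment", "id", "video_time"]
  }
  # parse comment_time with the date plugin (result goes into @timestamp by default)
  date {
    match => [ "comment_time", "yyyy/MM/dd HH:mm:ss" ]
  }
  # these conversions are valid target types for mutate
  mutate {
    convert => {
      "id" => "integer"
      "video_time" => "float"
    }
  }
}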
I am using the Ganymed SSH library in Java to connect to a Linux machine, execute some Unix scripts, and display their output.
I am running a parent shell script which in turn runs a few other sub-scripts and finally a Perl script. All works well for the shell scripts, but when it reaches the Perl script, I stop getting any output.
If I run the parent script manually on the Linux server, I see the output from the Perl script without issues.
Here's the relevant Java code, which connects to the machine, calls the shell script, and returns a BufferedReader from which the output can be read line by line:
try {
    conn = new Connection(server);
    conn.connect();
    boolean isAuthenticated = conn.authenticateWithPublicKey(user, keyfile, keyfilePass);
    if (isAuthenticated == false) {
        throw new IOException("Authentication failed.");
    }
    sess = conn.openSession();
    if (param == null) {
        sess.execCommand(". ./.bash_profile; cd $APP_HOME; ./parent_script.sh");
    }
    else {...}
    InputStream stdout = new StreamGobbler(sess.getStdout());
    reader = new BufferedReader(new InputStreamReader(stdout));
} catch (IOException e) {
    e.printStackTrace();
}
The parent shell script being called looks like this:
./start1 #script1 output OK
./start2 #script2 output OK
./start3 #script3 output OK
/u01/app/perl_script.pl # NO OUTPUT HERE :(
Would anyone have any idea why this happens?
EDIT: Adding the Perl script
#!/u01/app/repo/code/sh/perl.sh

use FindBin qw/ $Bin /;
use File::Basename qw/ dirname /;
use lib (dirname($Bin) . "/pm");
use Capture::Tiny qw/:all/;
use Data::Dumper;
use Archive::Zip;
use XML::Simple;
use MXA;

my $mx = new MXA;
chdir $mx->config->{$APP_HOME};

warn Dumper { targets => $mx->config->{RTBS} };

foreach my $target (keys %{ $mx->config->{RTBS}->{Targets} }) {
    my $cfg = $mx->config->{RTBS}->{Targets}->{$target};

    my @commands = (
        [
            ...
        ],
        [
            'unzip',
            '-o',
            "$cfg->{ConfigName}.zip",
            'Internal/AdapterConfig/Driver.xml'
        ],
        [
            'zip',
            "$cfg->{ConfigName}.zip",
            'Internal/AdapterConfig/Driver.xml'
        ],
        [
            'rm -rf Internal'
        ],
        [
            "rm -f $cfg->{ConfigName}.zip"
        ],
    );

    foreach my $cmnd (@commands) {
        warn Dumper { command => $cmnd };
        my ($stdout, $stderr, $status) = capture {
            system(@{ $cmnd });
        };
        warn Dumper { stdout => $stdout,
                      stderr => $stderr,
                      status => $status };
    }

=pod
    warn "runnnig -scriptant /ANT_FILE:mxrt.RT_${target}argets_std.xml /ANT_TARGET:exportConfig -jopt:-DconfigName=Fixing -jopt:-DfileName=Fixing.zip');
    ($stdout, $stderr, $status) = capture {
        system('./command.app', '-scriptant -parameters');
    }
    ($stdout, $stderr, $status) = capture {
        system('unzip Real-time.zip Internal/AdapterConfig/Driver.xml');
    };
    my $xml = XMLin('Internal/AdapterConfig/MDPDriver.xml');
    print Dumper { xml => $xml };
    [[ ${AREAS} == "pr" ]] && {
        ${PREFIX}/substitute_mdp_driver_parts Internal/AdapterConfig/Driver.xml 123 controller#mdp-a-n1,controller#mdp-a-n2
    } || {
        ${PREFIX}/substitute_mdp_driver_parts Internal/AdapterConfig/Driver.xml z8pnOYpulGnWrR47y5UH0e96IU0OLadFdW/Bm controller#md-uat-n1,controller#md-uat-n2
    }
    zip Real-time.zip Internal/AdapterConfig/Driver.xml
    rm -rf Internal
    rm -f Real-time.zip
    print $mx->Dump( stdout => $stdout,
                     stderr => $stderr,
                     status => $status );
=cut
}
The part of your Perl code that produces the output is:
warn Dumper { stdout => $stdout,
stderr => $stderr,
status => $status };
Looking at the documentation for warn() we see:
Emits a warning, usually by printing it to STDERR
But your Java program is reading from STDOUT.
InputStream stdout = new StreamGobbler(sess.getStdout());
You have a couple of options.
Change your Perl code to send the output to STDOUT instead of STDERR. This could be as simple as changing warn() to print().
When calling the Perl program in the shell script, redirect STDERR to STDOUT.
/u01/app/perl_script.pl 2>&1
I guess you could also set up your Java program to read from STDERR as well. But I'm not a Java programmer, so I wouldn't be able to advise you on the best way to do that.
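For reference, a minimal sketch of that third option, mirroring the STDOUT code already in the question (Ganymed's Session exposes getStderr() alongside getStdout()):

InputStream stderr = new StreamGobbler(sess.getStderr());
BufferedReader errReader = new BufferedReader(new InputStreamReader(stderr));

String line;
while ((line = errReader.readLine()) != null) {
    // Handle or display the STDERR output; here it is simply printed.
    System.out.println(line);
}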
Set<String> graphNames = JanusGraphFactory.getGraphNames();
for (String name : graphNames) {
    System.out.println(name);
}
The above snippet produces the following exception
java.lang.IllegalStateException: Gremlin Server must be configured to use the JanusGraphManager.
at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
at org.janusgraph.core.JanusGraphFactory.getGraphNames(JanusGraphFactory.java:175)
at com.JanusTest.controllers.JanusController.getPersonDetail(JanusController.java:66)
my.properties
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cql
storage.hostname=127.0.0.1
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.5
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
gremlin-server.yaml
host: 0.0.0.0
port: 8182
scriptEvaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs: {
  ConfigurationManagementGraph: conf/my.properties,
}
plugins:
  - janusgraph.imports
scriptEngines: {
  gremlin-groovy: {
    imports: [java.lang.Math],
    staticImports: [java.lang.Math.PI],
    scripts: [scripts/empty-sample.groovy]}}
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: {ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
processors:
- { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
- { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
metrics: {
  consoleReporter: {enabled: true, interval: 180000},
  csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
  jmxReporter: {enabled: true},
  slf4jReporter: {enabled: true, interval: 180000},
  gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
  graphiteReporter: {enabled: false, interval: 180000}}
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
The answer to this is similar to that of this other question.
The call to JanusGraphFactory.getGraphNames() needs to be sent to the remote server. If you're working in the Gremlin Console, first establish a remote sessioned connection then set remote console mode.
gremlin> :remote connect tinkerpop.server conf/remote.yaml session
==>Configured localhost/127.0.0.1:8182
gremlin> :remote console
==>All scripts will now be sent to Gremlin Server - [localhost:8182]-[5206cdde-b231-41fa-9e6c-69feac0fe2b2] - type ':remote console' to return to local mode
Then as described in the JanusGraph docs for "Listing the Graphs":
ConfiguredGraphFactory.getGraphNames() will return a set of graph names for which you have created configurations using the ConfigurationManagementGraph APIs.
JanusGraphFactory.getGraphNames(), on the other hand, returns a set of graph names for the graphs you have instantiated and whose references are stored inside the JanusGraphManager.
If you are not using the Gremlin Console, then you should be using a remote client, such as the TinkerPop gremlin-driver (Java), to send your requests to the Gremlin Server.
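For example, with gremlin-driver the call can be submitted as a script so that it is evaluated on the server, where the JanusGraphManager is configured. A minimal sketch (the host and port are assumptions matching the gremlin-server.yaml above):

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;
import org.apache.tinkerpop.gremlin.driver.ResultSet;

Cluster cluster = Cluster.build("localhost").port(8182).create();
Client client = cluster.connect();

// The script runs on the Gremlin Server, not in the local JVM.
ResultSet results = client.submit("JanusGraphFactory.getGraphNames()");
for (Result result : results) {
    System.out.println(result.getString());
}

cluster.close();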
I have a Java app running on App Engine.
I'm writing my logs in a JSON structure, and I can see them in Stackdriver (as in the docs).
package com.foo.bar;

import java.util.logging.Logger;

public class MyClass {
    private static final Logger log = Logger.getLogger(MyClass.class.getName());

    public void myFunc() {
        log.info("{msg: 'hello', corId: '123'}");
    }
}
Here is the message I get in Stackdriver Logging:
com.foo.bar.MyClass myFunc: {msg: 'hello', corId: '123'}
and in the log-request object:
protoPayload.line[].logMessage = "com.foo.bar.MyClass myFunc: {msg: 'hello', corId: '123'}"
How can I make the log message contain only the message I am logging, without the class prefix:
{msg: 'hello', corId: '123'}
protoPayload.line[].logMessage = "{msg: 'hello', corId: '123'}"
I ended up shipping the logs from Stackdriver to Elasticsearch via Logstash.
In Logstash I parsed the logs; I also split them so that each log entry is its own record rather than a nested array.
see:
How to ship logs from pods on Kubernetes running on top of GCP to elasticsearch/logstash?
My Logstash config for parsing the logs:
filter {
  if [resource][type] == "gae_app" {

    # split the protoPayload.line array, so each log message is a separate entry in Elasticsearch
    split {
      field => "[protoPayload][line]"
      target => "line"
      remove_field => [ "httpRequest", "operation", "protoPayload" ]
    }

    # extract `line.logMessage` and `line.severity` fields
    mutate {
      add_field => { "logMessage" => "%{[line][logMessage]}" }
      replace => { "severity" => "%{[line][severity]}" }
      remove_field => ["line"]
    }

    # remove the `com.example.MyClass myFunc: ` prefix from log
    grok {
      match => { "logMessage" => "^%{DATA}: %{GREEDYDATA:parsedMessage}" }
    }

    # parse the log message into json, json fields will be located in root
    json {
      source => "parsedMessage"
      target => "jsonPayload"
      add_field => { "[jsonPayload][level]" => "%{severity}" }
      remove_field => ["parsedMessage", "logMessage"]
    }

    # uniform GAE logs to the structure of GKE logs
    grok {
      match => { # check..
        "[resource][labels][version_id]" =>
          "^%{DATA:[resource][labels][container_name]}-%{GREEDYDATA:[resource][labels][namespace_id]}" }
    }
  }
}
I would like to know some good approaches to managing properties files.
We have a set of devices (say N). Each of these devices has certain properties.
e.g.
Device A has properties
A.a11=valuea11
A.a12=valuea12
.
Device B has properties
B.b11=valueb11
B.b12=valueb12
.
Apart from this they have some common properties applicable for all devices.
X.x11=valuex11
X.x12=valuex12
I am writing automation for running some test suites on these devices. The test script runs on a single device at a time, and the device name is passed as an argument. Based on the device name, the code grabs the device-specific properties along with the common properties and uploads them to the device. E.g. for device A, the code grabs A.a11, A.a12 (device-A-specific) and X.x11, X.x12 (common) and uploads them to the device before running the test script.
So, in code, I need to manage these properties so that only the device-specific and common properties are uploaded to the device, ignoring the rest. I am managing it like this:
if ($device eq 'A') then
    upload A's properties
elsif ($device eq 'B') then
    upload B's properties
endif
upload Common (X) properties.
Managing devices this way is becoming difficult as the number of devices keeps increasing.
So I am looking for a better approach to managing these properties.
This is a good case where roles (aka traits in the generalized OOP literature) will be useful.
Instead of the classical "an object is a class", with roles an object *does* a role.
Check out the appropriate Moose docs for way more information.
Example:
package Device::ActLikeA;
use Moose::Role;

has 'attribute' => (
    isa     => 'Str',
    is      => 'rw',
    default => 'Apple',
);

sub an_a_like_method {
    my $self = shift;
    # foo
}

1;
So now I have a role called Device::ActLikeA, what do I do with it?
Well, I can apply the role to a class, and the code and attributes defined in ActLikeA will be available in the class:
package Device::USBButterChurn;
use Moose;

with 'Device::ActLikeA';

# now has an attribute 'attribute' and a method 'an_a_like_method'

1;
You can also apply roles to individual instances of a class.
package Device;
use Moose;

has 'part_no' => (
    isa      => 'Str',
    is       => 'ro',
    required => 1,
);

has 'serial' => (
    isa     => 'Str',
    is      => 'ro',
    lazy    => 1,
    builder => '_build_serial',
);

1;
And then the main code looks at the part number and applies the appropriate roles:
# Map part-number patterns to the roles that should be applied.
my @PART_MATCH = (
    [ qr/Foo/,               'Device::MetaSyntacticVariable' ],
    [ qr/^...-[^_]*[A][^-]/, 'Device::ActLikeA' ],
    [ qr/^...-[^_]*[B][^-]/, 'Device::ActLikeB' ],
    # etc
);

my $parts = load_parts($config_file);

for my $part (@$parts) {
    my $part_no = $part->part_no();
    for my $entry (@PART_MATCH) {
        my ($match, $role) = @$entry;
        # apply the role to this particular instance when the part number matches
        Moose::Util::apply_all_roles($part, $role)
            if $part_no =~ /$match/;
    }
}
Here is a very direct approach.
First of all you need a way to indicate that A "is-a" X and B "is-a" X, i.e. X is a parent of both A and B.
Then your upload_properties routine will look something like this:
sub upload_properties {
    my $device = shift;

    ... upload the "specific" properties of $device ...

    for my $parent (parents of $device) {
        upload_properties($parent);
    }
}
One implementation:
Indicate the "is-a" relationship with a line in your config file like:
A.isa = X
(Feel free to use some other syntax - what you use will depend on how you want to parse the file.)
From the config file, create a hash of all devices that looks like this:
$all_devices = {
    A => { a11 => 'valuea11', a12 => 'valuea12', isa => ['X'] },
    B => { b11 => 'valueb11', b12 => 'valueb12', isa => ['X'] },
    X => { x11 => 'valuex11', x12 => 'valuex12', isa => [] },
};
The upload_properties routine:
sub upload_properties {
    my ($device) = @_;

    for my $key (keys %$device) {
        next if $key eq "isa";
        ... upload property $key => $device->{$key} ...
    }

    my $isa = $device->{isa};    # this should be an array ref
    for my $parent_name (@$isa) {
        my $parent = $all_devices->{$parent_name};
        upload_properties($parent);
    }
}

# e.g. to upload device 'A':
upload_properties( $all_devices->{'A'} );
You can eliminate the large if-else chain by storing the device properties in a hash.
Then you need only ensure that the particular $device appears in that hash.
#!/usr/bin/perl
use warnings;
use strict;

my %vals = (
    A => {
        a11 => 'valuea11',
        a12 => 'valuea12',
    },
    B => {
        b11 => 'valueb11',
        b12 => 'valueb12',
    },
);

foreach my $device (qw(A B C)) {
    if (exists $vals{$device}) {
        upload_properties($vals{$device});
    }
    else {
        warn "'$device' is not a valid device\n";
    }
}

sub upload_properties {
    my ($h) = @_;
    print "setting $_=$h->{$_}\n" for sort keys %$h;    # simulate upload
    print "\n";
}