I am trying to insert an extra key into the $_FILES array. I just want to know whether this works with the $_FILES array or not.
$files = $_FILES;
print_r($files);
Array
(
[image] => Array
(
[name] => 400.png
[type] => image/png
[tmp_name] => /tmp/php5Wx0aJ
[error] => 0
[size] => 15726
)
)
What I want is something like this:
$files = $_FILES;
print_r($files);
Array
(
[image] => Array
(
[name] => 400.png
[type] => image/png
[tmp_name] => /tmp/php5Wx0aJ
[error] => 0
[size] => 15726
[myid] => my value
)
)
Is it possible to push a new key into the $_FILES array with a PHP function?
Yes, you can do it as follows:
$files['image']['myid'] = 'my value';
IIRC, the 'image' key in the $_FILES array is the name of the file control in your HTML form, in case you were just looking for a way to identify which of multiple files is which.
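For illustration, a minimal sketch (the file names upload.html and upload.php are assumptions, not from the original post) showing how the input's name attribute becomes the key in $_FILES:
<!-- upload.html: an input named "image" shows up as $_FILES['image'] in upload.php -->
<form action="upload.php" method="post" enctype="multipart/form-data">
    <input type="file" name="image">
    <input type="submit" value="Upload">
</form>
On the PHP side, $_FILES is an ordinary array, so the assignment above simply adds a myid key next to name, type, tmp_name, error and size.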
I want to display XML data in a grid using Logstash and Kibana. With the conf file below I can get the data into the grid, but I am not able to split the row data.
My logstash.conf file:
input {
  file {
    path => "C:/ELK Stack/logstash-8.2.0-windows-x86_64/logstash-8.2.0/Test.xml"
    start_position => "beginning"
    sincedb_path => "NUL"
    codec => multiline {
      pattern => "^<?stations.*>"
      negate => "true"
      what => "previous"
      auto_flush_interval => 1
      max_lines => 3000
    }
  }
}
filter {
  xml {
    source => "message"
    target => "parsed"
    store_xml => "false"
    xpath => [
      "/stations/station/id/text()", "station_id",
      "/stations/station/name/text()", "station_name"
    ]
  }
  mutate {
    remove_field => [ "message" ]
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => "localhost:9200"
    index => "logstash_index123xml"
    workers => 1
  }
  stdout {
    codec => rubydebug
  }
}
xpath will always return arrays, so to associate the members of the two arrays you are going to need a ruby filter. To get multiple events you can use a split filter on an array which you build in the ruby filter. If you start with
<stations>
<station>
<id>1</id>
<name>a</name>
<id>2</id>
<name>b</name>
</station>
</stations>
then if you use
xml {
  source => "message"
  store_xml => "false"
  xpath => {
    "/stations/station/id/text()" => "[@metadata][station_id]"
    "/stations/station/name/text()" => "[@metadata][station_name]"
  }
  remove_field => [ "message" ]
}
ruby {
  code => '
    ids = event.get("[@metadata][station_id]")
    names = event.get("[@metadata][station_name]")
    if ids.is_a? Array and names.is_a? Array and ids.length == names.length
      a = []
      ids.each_index { |x|
        a << { "station_name" => names[x], "station_id" => ids[x] }
      }
      event.set("[@metadata][theData]", a)
    end
  '
}
if [@metadata][theData] {
  split {
    field => "[@metadata][theData]"
    add_field => {
      "station_name" => "%{[@metadata][theData][station_name]}"
      "station_id" => "%{[@metadata][theData][station_id]}"
    }
  }
}
You will get two events
{
"station_name" => "a",
"station_id" => "1",
...
}
{
"station_name" => "b",
"station_id" => "2",
...
}
I am trying to import a CSV file into Elasticsearch, but it fails and throws an error:
Pipeline aborted due to error {:pipeline_id=>"main",
:exception=>#,
:backtrace=>[
"/usr/local/Cellar/logstash/7.6.1/libexec/vendor/bundle/jruby/2.5.0/gems/logstash-filter-mutate-3.5.0/lib/logstash/filters/mutate.rb:222:in `block in register'",
"org/jruby/RubyHash.java:1428:in `each'",
"/usr/local/Cellar/logstash/7.6.1/libexec/vendor/bundle/jruby/2.5.0/gems/logstash-filter-mutate-3.5.0/lib/logstash/filters/mutate.rb:220:in `register'",
"org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:56:in `register'",
"/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:200:in `block in register_plugins'",
"org/jruby/RubyArray.java:1814:in `each'",
"/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:199:in `register_plugins'",
"/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:502:in `maybe_setup_out_plugins'",
"/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:212:in `start_workers'",
"/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:154:in `run'",
"/usr/local/Cellar/logstash/7.6.1/libexec/logstash-core/lib/logstash/java_pipeline.rb:109:in `block in start'"],
"pipeline.sources"=>["/Users/user/Document/Esk-Data/xudaxia.conf"],
:thread=>"#"}
Below is the conf file
input
{
file{
path => ["/test.csv"]
start_position => "beginning"
}
}
filter{
csv{
separator => ","
columns => ["comment_time","comment", "id", "video_time"]
}
mutate{
convert => {
"comment_time" => "date_time"
"comment" => "string"
"id" => "integer"
"video_time" => "float"
}
}
}
output{
elasticsearch{
hosts => ["localhost:9200"]
index => "test"
}
}
test.csv
comment_time comment id video_time
2020/03/22 15:59:41 バイ a 123.100
2020/03/22 15:59:45 บาย b 100.100
2020/04/22 15:59:50 ByeBye c 80.210
Can anyone help?
According to the documentation, the option date_time doesn't exist for the convert action of the mutate plugin (doc here).
However, this plugin is used to cast one type into another, which isn't your use case. If comment_time is not recognized as a date field, you should transform it with the date plugin (doc here).
So you should remove this block:
mutate{
convert => {
"comment_time" => "date_time"
"comment" => "string"
"id" => "integer"
"video_time" => "float"
}
}
and replace it with this one:
date {
  match => [ "comment_time", "yyyy/MM/dd HH:mm:ss" ]
}
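Putting it together, the filter section would then look something like this (a sketch based on the columns in your conf; it keeps only the casts that mutate's convert option actually supports and lets the date plugin handle comment_time):
filter {
  csv {
    separator => ","
    columns => ["comment_time", "comment", "id", "video_time"]
  }
  mutate {
    # cast the numeric columns; "comment" is already a string
    convert => {
      "id" => "integer"
      "video_time" => "float"
    }
  }
  # parse comment_time with the date plugin instead of a type cast
  date {
    match => [ "comment_time", "yyyy/MM/dd HH:mm:ss" ]
  }
}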
I read this document:
http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html
Maybe there is no condition equivalent to a relational database's NOT IN; only IN exists as a condition.
How can I implement NOT IN, or is there an equivalent to it?
Here is an implementation in PHP:
public function getRecordsByInExpression($tableName, $providerCode, $fieldName, $values, $logicalOp = self::IN_EXP)
{
    // DynamoDB needs one placeholder per value; a single comma-joined
    // string would be compared as one literal value, not as a list.
    $attributeValues = [':pc' => $providerCode];
    $placeholders = [];
    foreach (array_values($values) as $i => $value) {
        $placeholders[] = ":val$i";
        $attributeValues[":val$i"] = $value;
    }
    $list = implode(', ', $placeholders);
    $queryExp = [
        'TableName' => $tableName,
        'KeyConditionExpression' => 'ProviderCode = :pc',
        'FilterExpression' => $logicalOp == self::IN_EXP
            ? "$fieldName IN ($list)"
            : "NOT ($fieldName IN ($list))",
        'ExpressionAttributeValues' => $attributeValues
    ];
    return $this->query($queryExp)->get('Items');
}
In the end, a NOT IN is just NOT (field IN (:v0, :v1, ...)).
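A hypothetical call (the table name, provider code, field, and the NOT_IN_EXP constant are invented for illustration; only IN_EXP appears in the code above):
// fetch items whose Status is NOT one of the listed values
$items = $repo->getRecordsByInExpression(
    'Records',                 // hypothetical table name
    'provider-123',            // partition key value
    'Status',                  // field to filter on
    ['archived', 'deleted'],   // values for the NOT IN list
    MyRepo::NOT_IN_EXP         // hypothetical constant selecting the NOT branch
);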
I am getting this response from my web server. From what I have read, this is the json_decode'd format. How will I convert this back into the json_encode'd format?
stdClass Object
(
[id] => 4ffc88e7-1413-fa9c-423c-53fc701b1044
[entry_list] => stdClass Object
(
[first_name] => stdClass Object
(
[name] => first_name
[value] => dharmendra
)
[last_name] => stdClass Object
(
[name] => last_name
[value] => singh
)
[primary_address_city] => stdClass Object
(
[name] => primary_address_city
[value] => gwalior
)
[primary_address_street] => stdClass Object
(
[name] => primary_address_street
[value] => chinchwad
)
[primary_address_state] => stdClass Object
(
[name] => primary_address_state
[value] => mp
)
[phone_mobile] => stdClass Object
(
[name] => phone_mobile
[value] => 55555555
)
[primary_address_country] => stdClass Object
(
[name] => primary_address_country
[value] => in
)
[primary_address_postalcode] => stdClass Object
(
[name] => primary_address_postalcode
[value] => 4444444
)
)
)
This is the response that came from the SugarCRM REST API.
The solution is to write print(json_encode($set_result)); in place of
print_r($set_result);
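In other words, a minimal sketch ($set_result stands for the decoded SugarCRM response object shown above; $raw_response is a hypothetical name for the original JSON string):
// $set_result is the stdClass object printed above (already json_decode'd)
$json = json_encode($set_result);   // back to a JSON string
echo $json;

// if you would rather work with associative arrays than stdClass objects,
// decode the raw response with the second argument set to true:
// $data = json_decode($raw_response, true);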
I am looking for some good approaches to managing properties files.
We have a set of devices (say N). Each of these devices has certain properties.
e.g.
Device A has properties
A.a11=valuea11
A.a12=valuea12
.
Device B has properties
B.b11=valueb11
B.b12=valueb12
.
Apart from this they have some common properties applicable for all devices.
X.x11=valuex11
X.x12=valuex12
I am writing automation for running test suites on these devices. The test script runs on a single device at a time; the device name is passed as an argument. Based on the device name, the code grabs the respective device-specific properties plus the common properties and uploads them to the device before running the test script. For example, for device A, the code grabs A.a11 and A.a12 (device-A specific) plus X.x11 and X.x12 (common) and uploads them to the device.
So, in code, I need to manage these properties so that only the device-specific and common properties are uploaded to the device, ignoring the rest. I am currently managing it like this:
if ($device eq 'A') then
upload A's properties
elsif ($device eq 'B') then
upload B's properties
endif
upload Common (X) properties.
Managing devices this way is becoming difficult as the number of devices keeps increasing, so I am looking for a better approach to manage these properties.
This is a good case where roles (aka traits in the generalized OOP literature) will be useful.
Instead of the classical "object is-a class", with roles an object does a role.
Check out the appropriate Moose docs for much more information.
Example:
package Device::ActLikeA;
use Moose::Role;
has 'attribute' => (
    isa     => 'Str',
    is      => 'rw',
    default => 'Apple',
);
sub an_a_like_method {
my $self = shift;
# foo
}
1;
So now I have a role called Device::ActLikeA; what do I do with it?
Well, I can apply the role to a class, and the code and attributes defined in ActLikeA will be available in the class:
package Device::USBButterChurn;
use Moose;
with 'Device::ActLikeA';
# now has an attribute 'attribute' and a method 'an_a_like_method'
1;
You can also apply roles to individual instances of a class.
package Device;
use Moose;
has 'part_no' => (
isa => 'Str',
is => 'ro',
required => 1,
);
has 'serial' => (
    isa     => 'Str',
    is      => 'ro',
    lazy    => 1,
    builder => '_build_serial',
);
1;
And then the main code looks at the part numbers and applies the appropriate roles:
my @PART_MATCH = (
    [ qr/Foo/,               'Device::MetaSyntacticVariable' ],
    [ qr/^...-[^_]*[A][^-]/, 'Device::ActLikeA' ],
    [ qr/^...-[^_]*[B][^-]/, 'Device::ActLikeB' ],
    # etc
);
my $parts = load_parts($config_file);
for my $part (@$parts) {
    my $part_no = $part->part_number();
    for my $entry (@PART_MATCH) {
        my ($match, $role) = @$entry;
        # apply the role to this one instance at runtime
        Moose::Util::apply_all_roles($part, $role)
            if $part_no =~ $match;
    }
}
Here is a very direct approach.
First of all you need a way to indicate that A "is-a" X and B "is-a" X, i.e. X is a parent of both A and B.
Then your upload_device routine will look something like this:
sub upload_properties {
my $device = shift;
... upload the "specific" properties of $device ...
for my $parent (parents of $device) {
upload_properties($parent);
}
}
One implementation:
Indicate the "is-a" relationship with a line in your config file like:
A.isa = X
(Feel free to use some other syntax - what you use will depend on how you want to parse the file.)
From the config file, create a hash of all devices that looks like this:
$all_devices = {
A => { a11 => valuea11, a12 => valuea12, isa => [ 'X' ]},
B => { b11 => valueb11, b12 => valueb12, isa => [ 'X' ] },
X => { x11 => valuex11, x12 => valuex12, isa => [] },
}
The upload_properties routine:
sub upload_properties {
my ($device) = @_;
for my $key (keys %$device) {
next if $key eq "isa";
... upload property $key => $device->{$key} ...
}
my $isa = $device->{isa}; # this should be an array ref
for my $parent_name (@$isa) {
my $parent = $all_devices->{$parent_name};
upload_properties($parent);
}
}
# e.g. to upload device 'A':
upload_properties( $all_devices->{'A'} );
You can eliminate the large if-else chain by storing the device properties in a hash.
Then you need only ensure that the particular $device appears in that hash.
#!/usr/bin/perl
use warnings;
use strict;
my %vals = (
A => {
a11 => 'valuea11',
a12 => 'valuea12',
},
B => {
b11 => 'valueb11',
b12 => 'valueb12',
},
);
foreach my $device (qw(A B C)) {
if (exists $vals{$device}) {
upload_properties($vals{$device});
}
else {
warn "'$device' is not a valid device\n";
}
}
sub upload_properties {
my ($h) = @_;
print "setting $_=$h->{$_}\n" for sort keys %$h; # simulate upload
print "\n";
}