I have a Spring Boot app, and an Elastic stack (Elasticsearch + Kibana + Filebeat, no Logstash). I want to log some information whenever a request comes to my Spring app (say, the request url and request user id).
A first, naive way is to simply log it (log.info("request comes url={} user={}", url, user);). Filebeat will then happily ship that to Elasticsearch and I can visualize it in Kibana.
However, I do not want this data to be mixed with all the other logs in the filebeat-* index pattern. I want it to go to, say, a request-info (or request-info-*) index, while the normal log data stays in the filebeat-* index pattern. Is there any way to do so? Thank you very much!
You can use a conditional output in your filebeat.yml based on a string present in your message.
Something like:
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  indices:
    - index: "request-info-%{+yyyy.MM.dd}"
      when.contains:
        message: "request comes"
If your message contains the string request comes, it will be sent to an index like request-info-2020.11.28; if it does not, it will be sent to the default index.
You can read more about the options for the Elasticsearch output in the documentation.
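To make the condition robust, it can help to log a distinct marker token rather than matching on free-form text. A minimal sketch (the MARKER name and the use of java.util.logging are illustrative, not from the original setup):

```java
import java.util.logging.Logger;

public class RequestLogger {
    private static final Logger log = Logger.getLogger(RequestLogger.class.getName());

    // A distinct, greppable token makes the Filebeat condition robust:
    // it is unlikely to appear in unrelated log lines by accident.
    static final String MARKER = "REQUEST_INFO";

    static String format(String url, String user) {
        return MARKER + " url=" + url + " user=" + user;
    }

    public static void main(String[] args) {
        log.info(format("/api/orders", "alice"));
    }
}
```

The Filebeat condition would then be when.contains: message: "REQUEST_INFO" instead of matching on the free-text "request comes".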
I have sent a message attribute to AWS SQS along with the body using Apache Camel, with the command below:
to("aws-sqs://{{queue.name}}?amazonSQSClient=#sqsClient&attributeNames=#systemName")
The message was sent successfully. Now I want to retrieve the message attribute systemName using the Camel Java DSL, but I am not able to retrieve it. CamelAwsSqsAttributes and CamelAwsSqsMessageAttributes are both coming back blank in the headers. Below is the consumer code:
Main main = new Main();
main.bind("sqsAttributeNames", Collections.singletonList("All"));
main.bind("sqsMessageAttributeNames", Collections.singletonList("All"));

from("aws-sqs://{{queue.name}}?" +
        "amazonSQSClient=#sqsClient&attributeNames=#sqsAttributeNames&messageAttributeNames=#sqsMessageAttributeNames")
    .log("We have a failed request message in queue ${headers}");
Can someone please help me with this?
The config looks okay, except that the Collection should be replaced by a comma-separated String (make sure there are no spaces between the names).
Also, please specify the exact attributes that you want; All may not work.
Main main = new Main();
main.bind("sqsAttributeNames", "Attr1,Attr2");
main.bind("sqsMessageAttributeNames", "Attr1,Attr2");
Please refer to the latest documentation for the Camel AWS SQS component.
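The comma-separated value (with no spaces) can also be built from a list programmatically; a small sketch:

```java
import java.util.List;

public class SqsAttributeNames {
    // Camel expects a plain comma-separated String here, with no
    // spaces between the attribute names.
    static String join(List<String> names) {
        return String.join(",", names);
    }

    public static void main(String[] args) {
        System.out.println(join(List.of("Attr1", "Attr2")));
    }
}
```

The result can then be bound exactly like the literal "Attr1,Attr2" in the snippet above.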
I have a DB in which I save a list of messages that I have to send using Firebase Cloud Messaging.
If I want to send up to 100 messages in a batch for better efficiency, how can I tell which of the messages in my DB were sent correctly and which ones gave me an error?
I have seen that the error response looks something like this:
error:
{
  "error": {
    "code": 400,
    "message": "Invalid condition expression provided.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "message.condition",
            "description": "Invalid condition expression provided."
          }
        ]
      },
      {
        "@type": "type.googleapis.com/google.firebase.fcm.v1.FcmError",
        "errorCode": "INVALID_ARGUMENT"
      }
    ]
  }
}
For an accepted message, we get something like this:
id: projects/id_project/messages/0:1563809489349852%31bd1c9631bd1c96
How can I tell which message had an error, so that I can try to send it again or handle the error? I also want to know which messages were sent correctly.
Any advice?
Thanks in advance.
The order of the responses corresponds to the order of the input messages in a batch. From the API docs of the Java Admin SDK:
The responses list obtained by calling getResponses() on the return value corresponds to the order of input messages.
The same is true for the other implementations of the Admin SDK that support FCM batch messaging.
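In other words, matching responses back to input messages is a matter of iterating by index. A self-contained sketch of that correlation (the Resp record is a stand-in for the SDK's SendResponse; in real code you would call FirebaseMessaging.sendAll and inspect BatchResponse.getResponses()):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchCorrelation {
    // Stand-in for the Admin SDK's SendResponse type.
    record Resp(boolean success, String messageIdOrError) {}

    // The i-th response describes the fate of the i-th input message,
    // so the DB rows to retry are exactly those at the returned indices.
    static List<Integer> failedIndices(List<Resp> responses) {
        List<Integer> failed = new ArrayList<>();
        for (int i = 0; i < responses.size(); i++) {
            if (!responses.get(i).success()) {
                failed.add(i);
            }
        }
        return failed;
    }
}
```

Keeping the input messages in a list (in the same order you built the batch) lets you map each failed index straight back to the original DB record.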
In JMeter I created 7 threads to log in multiple users. I put 7 usernames and 7 passwords in a CSV file, then created a CSV Data Set Config in JMeter, and it works well; but when I start the test, all 7 threads fail (please see the image).
My expectation is that you need to perform correlation of the SessionId parameter, like:
1. Add a relevant Post-Processor as a child of the first request (the recommended one for an HTML response type is the CSS/JQuery Extractor) and configure it to fetch the SessionId value from the previous response.
2. Substitute the recorded value of 467132418 with the JMeter Variable originating from the Post-Processor in the subsequent request.
3. Repeat steps 1 and 2 for all samplers where the SessionId is being used.
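As an illustration, assuming the session id appears in the response body in the form SessionId=467132418, a Regular Expression Extractor (an alternative Post-Processor) could be configured as:

```
Reference Name:      sessionId
Regular Expression:  SessionId=(\d+)
Template:            $1$
Match No.:           1
```

The subsequent requests would then use ${sessionId} in place of the recorded 467132418. The field names above are JMeter's standard Regular Expression Extractor fields; the regular expression itself is an assumption about how the session id appears in this particular response.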
This is a problem I've been trying to deal with for almost a week without finding a real solution. Here's the problem:
On my Angular client's side I have a button to generate a CSV file which works this way :
User clicks a button.
A POST request is sent to a REST JAX-RS webservice.
Webservice launches a database query and returns a JSON with all the lines needed to the client.
The AngularJS client receives a JSON processes it and generates the CSV.
All good here when there's a low volume of data to return; the problems start when I have to return large amounts of data. Starting from 2000 lines, I feel like the JBoss server starts to struggle to send the data, as if I've reached some limit on data capacity (my Eclipse, where the server is running, becomes very slow until the end of the data transmission).
The thing is, after testing, I've found that it's not the database query or the formatting of the data that takes time, but rather the sending of the data (3000 lines, about 2 MB in size, take around 1 minute to reach the client), even though on my developer setup both the Angular client and the JBoss server are running on the same machine.
This is my Server side code :
@POST
@GZIP
@Path("/{id_user}/transactionsCsv")
@Produces(MediaType.APPLICATION_JSON)
@ApiOperation(value = "Transactions de l'utilisateur connecté sous forme CSV", response = TransactionDTO.class, responseContainer = "List")
@RolesAllowed(value = SecurityRoles.PORTAIL_ACTIVITE_RUBRIQUE)
public Response getOperationsCsv(@PathParam("id_user") long id_user,
                                 @Context HttpServletRequest request,
                                 @Context HttpServletResponse response,
                                 final TransactionFiltreDTO filtre) throws IOException {
    final UtilisateurSession utilisateur = (UtilisateurSession) request.getSession().getAttribute(UtilisateurSession.SESSION_CLE);
    if (!utilisateur.getId().equals(id_user)) {
        return genererReponse(new ResultDTO(Status.UNAUTHORIZED, null, null));
    }
    // database query
    transactionDAO.getTransactionsDetailLimite(utilisateur.getId(), filtre);
    // database query
    List<Transaction> resultat = detailTransactionDAO.getTransactionsByUtilisateurId(utilisateur.getId(), filtre);
    // format the list into the export format
    List<TransactionDTO> liste = Lists.transform(resultat, TransactionDTO.transactionToDTO);
    return Response.ok(liste).build();
}
Do you have any idea what is causing this problem, or know another way of doing things that might avoid it? I would be grateful.
Thank you :)
Here's the link for the JBOSS thread Dump :
http://freetexthost.com/y4kpwbdp1x
I've found in other contexts (using RMI) that the more local you are, the less worth it compression is. Your machine is probably losing most of its time on the processing work that compression and decompression require. The larger the amount of data, the greater the losses here.
Unless you really need to send this as one list, you might consider sending lists of entries. Requesting them page-wise to reduce the amount of data sent with one response. Even if you really need a single list on the client-side, you could assemble it after transport.
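The page-wise idea can be sketched with a plain helper (the names are illustrative; in JAX-RS the page index and size would typically arrive as query parameters):

```java
import java.util.List;

public class Paging {
    // Return the pageIndex-th page (0-based) of up to pageSize items.
    // Out-of-range pages simply come back empty.
    static <T> List<T> page(List<T> all, int pageIndex, int pageSize) {
        int from = Math.min(pageIndex * pageSize, all.size());
        int to = Math.min(from + pageSize, all.size());
        return all.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> data = List.of(1, 2, 3, 4, 5);
        System.out.println(page(data, 1, 2)); // second page of size 2
    }
}
```

The client requests page after page and concatenates them locally if it really needs the full list.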
I'm convinced that the problem comes from the server trying to send a big amount of data at once. Is there a way I can send the HTTP response in several small chunks instead of a single big one?
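One common JAX-RS approach for this is StreamingOutput, which writes the response body incrementally instead of materializing it in memory first. A framework-free sketch of the writing logic (the semicolon-separated row format is illustrative; in the real endpoint this loop would live inside StreamingOutput.write(OutputStream)):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class CsvStreamer {
    // Write rows one at a time so the server never has to hold the
    // whole response body in memory at once.
    static void writeCsv(List<String[]> rows, OutputStream out) throws IOException {
        Writer w = new BufferedWriter(new OutputStreamWriter(out, StandardCharsets.UTF_8));
        for (String[] row : rows) {
            w.write(String.join(";", row));
            w.write("\n");
        }
        w.flush();
    }
}
```

Streaming a CSV directly also sidesteps building the large JSON list that the Angular client currently has to reassemble.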
To measure performance, we need to check the complete trace. There are many ways to do this; here is one I find easy:
- Compress the output (e.g. ZIP/gzip); this reduces the data transferred over the network.
- Index the relevant columns in the database so that query execution time decreases.
- Check the processing time between the different layers of code (REST -> Service -> DAO -> DB and back).
- If the database does not change much, introduce a second-level cache and tune the cache eviction time, or choose the eviction policy that fits your requirements.
To find the exact reason, collect a thread dump from a single run of the process. From that thread dump, we can check the exact time consumed in each layer and pinpoint the problem.
Hope that helps!
[EDIT]
You should analyse the stack traces in the thread dump itself, not the one in the link.
If the request cannot process the larger portion of data, consider:
- Pagination: a page size and number of pages might help (only in the non-CSV case).
- A limit on the number of lines that can be processed.
- Additional query criteria, such as dates, users, etc.
Sample REST URLs:
http://localhost:8080/App/{id_user}/transactionCSV?limit=1000
http://localhost:8080/App/{id_user}/transactionCSV?fromDate=2011-08-01&toDate=2016-08-01
http://localhost:8080/App/{id_user}/transactionCSV?user=Admin
I was trying to see if there is a way to search an email inbox using javax.mail. Say I wanted to send a query and have it return emails to us. Can we parse the returned HTML and extract data? Also, if the above is possible, how would I "translate" the messages returned by that server into POP3 messages? E.g. we have extracted:
Subject: Foo
Body: Bar
but to open the same message using POP3, I need to know its POP3 UID or number. I don't think we'll be able to get the UID, but perhaps we can figure out the number.
I guess the question is:
Can I send a query to email server (such as Hotmail or Yahoo) and get returned emails?
Unfortunately, the POP3 protocol doesn't support that. It is not like SQL or so. You need to mirror the complete mailbox yourself in some kind of datastore (an SQL database?) and execute the search on that. You can keep/cache the data so that you don't need to retrieve the whole inbox every time, only the items you haven't fetched yet.
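A tiny sketch of that mirror-and-search idea (an in-memory map stands in for the real datastore, and uid stands for the POP3 UIDL value of each message):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MailMirror {
    // uid -> subject; a stand-in for a real datastore such as an SQL table.
    private final Map<String, String> store = new HashMap<>();

    // POP3 has no server-side search, so we only add messages whose
    // UIDs we haven't seen yet to the local mirror.
    void sync(Map<String, String> inbox) {
        inbox.forEach(store::putIfAbsent);
    }

    // Searching happens entirely against the local mirror; the returned
    // UIDs can then be used to open the messages over POP3.
    List<String> searchSubjects(String query) {
        List<String> uids = new ArrayList<>();
        store.forEach((uid, subject) -> {
            if (subject.contains(query)) {
                uids.add(uid);
            }
        });
        return uids;
    }
}
```

With javax.mail, the UIDs for the sync step would come from the POP3 folder's UIDL support rather than the Map used here.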