I want my Discord bot to send a message with an attached file and some text. Then the bot has to edit this text a couple of times, but the problem is that after the bot edits the message 5 times, it waits for a while and then edits 5 more times, and so on. How can I make it edit messages without stopping?
if(msg.content.includes("letter")){
msg.channel.send("alphabet", { files: ["/Users/48602/Videos/discordbot/aaa.png"]})}
if(msg.content === 'alphabet'){
msg.edit("**a**")
msg.edit("**b**")
msg.edit("**c**")
msg.edit("**d**") // Here bot stop for a 2 seconds and i dont know why
msg.edit("**e**")
msg.edit("**f**")
msg.edit("**g**")
msg.edit("**h**")
msg.edit("**i**")
msg.edit("**j**")// Here bot stop for a 2 seconds and i dont know why
msg.edit("**k**")
msg.edit("**l**")
msg.edit("**m**")
msg.edit("**n**")
msg.edit("**o**") // Here bot stop for a 2 seconds and i dont know why
msg.delete()
}
Discord rate-limits actions such as message edits to roughly 5 requests per 5 seconds. Trying to bypass this would be considered API abuse (the solutions below are not API abuse).
Exceeding this limit will pause further requests until a certain number of seconds has passed. During my research, I came across this simple explanation:
5 of anything per 5 seconds per server (in case the above was unclear).
Discord's developer guide on rate limits tells you this:
There is currently a single exception to the above rule [rate limits] regarding different HTTP methods sharing the same rate limit, and that is for the deletion of messages. Deleting messages falls under a separate, higher rate limit so that bots are able to more quickly delete content from channels (which is useful for moderation bots).
One workaround, without abusing the API, would be to send a new message for each frame and delete the previous one, since there is a higher limit for deleting messages.
Another workaround would be to add intermediate timeouts to your animation.
A simple helper such as:
const wait = require("util").promisify(setTimeout);
// syntax: await wait(1000); "pauses" for 1 second (must be called inside an async function)
You will need to play around with the timings so they fit your intended animation speed without pausing due to the rate limit.
I have a typical Kafka consumer/producer app that is polling all the time for data. Sometimes there might be no data for hours, but sometimes there could be thousands of messages per second. Because of this, the application is built so it is always polling, with a 500 ms poll timeout.
However, I've noticed that sometimes, if the Kafka cluster goes down, the consumer client, once started, won't throw an exception; it will simply time out after 500 ms and keep returning empty ConsumerRecords<K,V>. So, as far as the application is concerned, there is no data to consume, when in reality the whole Kafka cluster could be unreachable, but the app itself has no idea.
I checked the docs, and I couldn't find a way to validate consumer health, other than maybe closing the connection and re-subscribing to the topic every single time, but I really don't want to do that in a long-running application.
What's the best way to validate that the consumer is active and healthy while polling, ideally from the same thread/client object, so that the app can distinguish between "no data" and "unreachable Kafka cluster"?
I am sure this is not the best way to achieve what you are looking for.
But one simple way, which I had implemented in my application, is to maintain a static counter in the application indicating emptyRecordSetReceived. Whenever a poll operation returns an empty record set, I increment this counter.
This counter is emitted to Graphite at a periodic interval (say, every minute) with the help of the metric registry in the application.
Now let's say you know the maximum time frame for which messages may legitimately be unavailable to this application, for example 6 hours. Given that you are polling every 500 milliseconds, you know that if no message is received for 6 hours, the counter will have increased by:
2 polls per second * 60 seconds * 60 minutes * 6 hours = 43,200
We placed an alerting check based on this counter value as reported to Graphite. This metric gave me a decent idea of whether it was a genuine absence of data or whether something was down on the broker or producer side.
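A minimal sketch of this counter approach, assuming a recent Kafka client, the Dropwizard Metrics library, and its Graphite reporter; the metric name, Graphite host, and process() helper are illustrative:

import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.net.InetSocketAddress;
import java.time.Duration;
import java.util.concurrent.TimeUnit;

public class EmptyPollMonitor {

    public static void run(KafkaConsumer<String, String> consumer) {
        MetricRegistry registry = new MetricRegistry();
        // Incremented every time poll() comes back with no records.
        Counter emptyPolls = registry.counter("consumer.emptyRecordSetReceived");

        // Ship all metrics to Graphite once a minute (host/port are illustrative).
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(new Graphite(new InetSocketAddress("graphite.example.com", 2003)));
        reporter.start(1, TimeUnit.MINUTES);

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            if (records.isEmpty()) {
                emptyPolls.inc(); // no data: could be normal, could be an unreachable cluster
            } else {
                records.forEach(record -> process(record.value()));
            }
        }
    }

    private static void process(String value) {
        // application-specific handling
    }
}

An alert on this counter approaching 43,200 over a 6-hour window is what distinguishes "no data" from "no data for suspiciously long".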
This is just the naive way I had solved this use case to some extent. I would love to hear how it is actually done without maintaining these counters.
I am doing API load testing with JMeter. I have a MacBook Air (client) connected via Ethernet to the machine being load tested (server).
I wanted to do a simple test: hit the server with 5 requests per second (RPS). I create a Concurrency Thread Group with 60 threads and a Throughput Shaping Timer with 5 RPS for one minute, add my HTTP request, hit the play button, and run the test.
I expect to see my Hits per Second listener showing a flat line of 5 hits per second; instead I see a variable rate, starting at 5, then dropping to 2, then later to 4... Sometimes there is more than the specified 5 RPS (e.g. 6 RPS). The point is that it's not a constant 5; it's all over the place. And I don't get any errors.
My server takes between 500 ms and 3 s to return an answer depending on how much load is present - this is what I am testing. What I want to achieve with this test is for the server to respond in roughly 500 ms under load as consistently as possible, and I am not getting that. I have to start wondering if it's JMeter's fault in some way, but that's a topic for another day.
When I replace my HTTP Request sampler with a Dummy Sampler, I get the RPS I desire.
I thought I had a problem with JMeter resources, so I changed the heap size to 1 GB, used the -XX:+DisableExplicitGC and -d64 flags, and ran in CLI mode. I never got any errors, not before setting the flags and not after. Also, I believe that 5 RPS is a small number, so I don't expect resources to be a problem.
Something worth noting is that sometimes, the threads start executing towards the end of the test rather than at the start, I find this very odd behaviour.
What's next? Time to move to a new tool?
This question is a follow-up to How to implement an atomic integer in Java App Engine?. Basically I am creating a push task queue to implement SMS verification. I am using Twilio to send the SMS. Each SMS is a five-digit PIN. The following is my queue.xml file for App Engine.
<queue-entries>
  <queue>
    <name>sms-verification</name>
    <rate>200/s</rate>
    <bucket-size>100</bucket-size>
    <max-concurrent-requests>10</max-concurrent-requests>
  </queue>
</queue-entries>
I want the best rate I can get without creating a new instance. I believe instance creation is expensive on App Engine, though I am not sure if it's the same for task queues. So is this configuration file good? Is it missing anything? This is my first time creating one, so thanks for any guidance.
There is no right or wrong answer to this question. You will have to play with the configuration settings to get the optimal results for your requirements. You need to take the following into account:
Your load throughout the day/week: more or less even, or with sharp peaks.
Delay tolerance: how long it is acceptable to wait until the message is sent.
Obviously, it will be more expensive if you want to send all messages immediately, and less expensive if you can tolerate even a small delay (e.g. 1 minute), as that would smooth out at least some sudden peaks.
Note that the higher the volume, the less important these optimizations become, as 1 new instance on top of 20 live ones is not as expensive as 1 new instance on top of 1.
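For context, a rough sketch of how tasks would be enqueued onto this queue from Java; the worker URL and parameter names are illustrative. Note that <rate> and <bucket-size> only throttle how fast App Engine dispatches tasks to your worker, not how fast you can enqueue them:

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class SmsVerification {

    // Enqueue one verification SMS; a worker servlet mapped to /tasks/send-sms
    // (illustrative URL) will later call Twilio with these parameters.
    public static void enqueueSms(String phoneNumber, String pin) {
        Queue queue = QueueFactory.getQueue("sms-verification");
        queue.add(TaskOptions.Builder
                .withUrl("/tasks/send-sms")
                .param("phone", phoneNumber)
                .param("pin", pin));
    }
}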
Basic Info:
REST Request, Using Jersey (Java)
I'm working on a project where there's a list of numbers that refer to an individual item.
The user can click on an item number and the corresponding item/data is loaded and presented.
We're having this odd issue where after about the 14th click or so (direction is irrelevant), a single REST call takes forever.
We're talking another 500 ms to 1 s for each additional click after that 14th (or so) click.
I've been patient enough to drive it up to 15 seconds.
Chrome displays < 2 seconds for the "waiting" portion of the event and 2+ seconds in the receiving state for 360 bytes.
Any ideas on what could possibly cause this?
I wrote a test page that just hammered the server with dozens and dozens of requests. As expected, the browser prevented more than 6 at a time being loaded.
The individual set of 6 requests behaved normally.
I've also tried making the same REST request sequentially, waiting until one was done, then waiting 500 ms, then calling it again to simulate the user clicking on an additional item.
Behaved as expected.
There are only two differences between my test page and the actual deployed version.
1) We make 3 Ajax calls (2 to the same REST service, one to a different one) that always complete on time. These 3 finish before the 4th (the troublesome one) even begins.
2) We have an "auto" save feature that does the above on a 30-second timer. This never has issues and always completes on time as expected.
Thanks, SO community. I've been banging my head against this for a couple of days now and I'm at my wit's end. :P
Do you have access to the server side?
Have you seen anything unusual in the logs?
Did you try measuring the execution time of each method in the service tier? (A rough sketch of one way to do that is below.)
You might want to take a look:
http://codemate.wordpress.com/2009/05/08/cpu-profiling-explained/
Maybe also memory profiling, but not necessarily, since you don't have an out-of-memory exception.
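If you can touch the server code, here is a minimal sketch of one way to time every REST call, assuming JAX-RS 2.x filters are available in your Jersey version; the class name, property key, and logging are illustrative:

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;
import java.io.IOException;

@Provider
public class TimingFilter implements ContainerRequestFilter, ContainerResponseFilter {

    private static final String START = "timing.start"; // request-scoped property key

    @Override
    public void filter(ContainerRequestContext request) throws IOException {
        request.setProperty(START, System.nanoTime());
    }

    @Override
    public void filter(ContainerRequestContext request, ContainerResponseContext response)
            throws IOException {
        Object start = request.getProperty(START);
        if (start != null) {
            long elapsedMs = (System.nanoTime() - (Long) start) / 1_000_000;
            System.out.printf("%s %s took %d ms%n",
                    request.getMethod(), request.getUriInfo().getPath(), elapsedMs);
        }
    }
}

If the server-side numbers stay flat while the browser timings grow, the problem is more likely on the client or network side than in the service tier.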
I'm using C2DM in my application, and it works well, but sometimes, when I'm sending lots of messages, a delay appears, and it can be up to 5 minutes.
All of my messages have the same collapse key. Is this normal for C2DM?
EDIT: I'm sending approximately 1-2 messages per second
EDIT2: It is slow only for one device; another device receives notifications instantly
It is slow only for one device; another device receives notifications instantly
Probably due to network lag; you have to take network transience into account.
By the way, if you are sending 2 messages per second, you are sending 172,800 messages to one device per day. You have a limit of 200,000 messages per day for one C2DM account. Clearly you aren't using C2DM the way it's supposed to be used. :)
Keep the application state on the server, not on the device, and use a collapse key so that only the freshest result is delivered. Otherwise throttling (attenuation) will be applied to save battery.
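For reference, a rough sketch of a C2DM send that sets a collapse key, based on the classic (now long-deprecated) C2DM HTTP interface; the collapse key value, payload name, and auth-token handling are illustrative:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class C2dmSender {

    // Messages sharing the same collapse_key are collapsed on Google's side,
    // so the device only receives the freshest one when it is reachable again.
    public static int send(String authToken, String registrationId, String payload) throws Exception {
        String body = "registration_id=" + URLEncoder.encode(registrationId, "UTF-8")
                + "&collapse_key=" + URLEncoder.encode("latest_state", "UTF-8") // illustrative key
                + "&data.payload=" + URLEncoder.encode(payload, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://android.apis.google.com/c2dm/send").openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setRequestProperty("Authorization", "GoogleLogin auth=" + authToken);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }
}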
Yep, as Reno linked to:
There is throttling (attenuation). One post on the Google Group suggests that each device has 20 tokens, and a new token is created every three minutes. So when you hit the limit, it will take 3 minutes before you get the next token, hence the delay.
https://groups.google.com/forum/#!topic/android-c2dm/gY2RZBoFth4
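To make that concrete, here is a tiny token-bucket model of the behaviour described in that post (capacity 20, one new token every three minutes; those numbers come from the linked post, not official documentation):

public class C2dmTokenBucket {

    private static final int CAPACITY = 20;                     // per-device token cap (per the post)
    private static final long REFILL_INTERVAL_MS = 3 * 60_000L; // one new token every 3 minutes

    private long tokens = CAPACITY;
    private long lastRefill = System.currentTimeMillis();

    // Returns true if a message would be delivered immediately, false if it would be throttled.
    public synchronized boolean tryDeliver() {
        long now = System.currentTimeMillis();
        long refilled = (now - lastRefill) / REFILL_INTERVAL_MS;
        if (refilled > 0) {
            tokens = Math.min(CAPACITY, tokens + refilled);
            lastRefill += refilled * REFILL_INTERVAL_MS;
        }
        if (tokens > 0) {
            tokens--;
            return true;
        }
        return false;
    }
}

At 1-2 messages per second, the 20 tokens drain within seconds, after which only one message every three minutes gets through on that device, which would explain delays of several minutes.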