I am using the LatencyUtils package for tracking and reporting on the behavior of latencies across measurements:
https://github.com/LatencyUtils/LatencyUtils/blob/d8f51f39f6146e1ad9a263dc916bcbc0ec06e16d/src/main/java/org/LatencyUtils/LatencyStats.java#L196
For recording a time with this method, the unit should be nanoseconds, but in my case the recorded times are in milliseconds.
I want to know if there is a better way to record times in milliseconds.
The solution I use now is to multiply every recorded time by one million, and since I still want the results in milliseconds, I divide each result I get back by one million.
public void addValue(Long val, long sampleCount) {
    sum += val * sampleCount;
    // record each sample, converting milliseconds to the nanoseconds LatencyStats expects
    for (int i = 0; i < sampleCount; i++) {
        latencyStats.recordLatency(val * 1_000_000L);
    }
    histogram.add(latencyStats.getIntervalHistogram());
    max = Math.max(val, max);
    min = Math.min(val, min);
    updateValueCount(val, sampleCount);
}
@Override
public double getStandardDeviation() {
    // convert the histogram's nanoseconds back to milliseconds
    return histogram.getStdDeviation() / 1_000_000;
}
And the default constructor of LatencyStats looks like this:
private long lowestTrackableLatency = 1000L;           /* 1 usec */
private long highestTrackableLatency = 3600000000000L; /* 1 hr */
private int numberOfSignificantValueDigits = 2;
private int intervalEstimatorWindowLength = 1024;
private long intervalEstimatorTimeCap = 10000000000L;  /* 10 sec */
private PauseDetector pauseDetector = null;

public LatencyStats() {
    this(
        defaultBuilder.lowestTrackableLatency,
        defaultBuilder.highestTrackableLatency,
        defaultBuilder.numberOfSignificantValueDigits,
        defaultBuilder.intervalEstimatorWindowLength,
        defaultBuilder.intervalEstimatorTimeCap,
        defaultBuilder.pauseDetector
    );
}
So, in fact, the lowest trackable latency of LatencyStats is also in nanoseconds. If I pass in values that are actually milliseconds, say a 5 ms latency recorded as 5, they fall below the 1000 ns floor, and I am afraid that would distort the recorded results.
I can offer you another open-source library that provides a utility for converting values from one time unit to another. It's called MgntUtils. See the javadoc for the class TimeInterval: it is a convenience class that holds a time interval as a numerical value together with its associated TimeUnit. The class also provides methods for retrieving its value as a long in the needed scale (nanoseconds, milliseconds, seconds, minutes, hours or days); see the methods toNanos(), toMillis(), toSeconds(), toMinutes(), toHours() and toDays(). You can find the library itself on GitHub and on Maven Central as maven artifacts. Here is an article about the library.
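A usage sketch; the package name and constructor signature are my assumptions based on the javadoc, while toNanos() and toMillis() are the methods named above:

import java.util.concurrent.TimeUnit;
import com.mgnt.utils.entities.TimeInterval; // package name assumed

TimeInterval interval = new TimeInterval(42, TimeUnit.MILLISECONDS); // constructor assumed
long nanos = interval.toNanos();   // 42,000,000
long millis = interval.toMillis(); // 42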
This is just a hypothetical question, but it could be a way to get around an issue I have been having.
Imagine you want to time a calculation function based not on the answer but on the time it takes to calculate, so that instead of finding out what a + b is, you keep performing some calculation while time < x seconds.
Look at this pseudocode:
public static void performCalculationsForTime(int seconds)
{
    // Get start time (milliseconds need a long, not an int)
    long millisStart = System.currentTimeMillis();
    // Perform a calculation, e.g. find the 1000th digit of PI
    // Check if the given number of seconds has passed since millisStart
    // If it has not, redo the 1000th PI digit calculation
    // At this point the time has passed, so return from the function
}
Now I know that I am a horrible, despicable person for using precious CPU cycles simply to make time pass, but what I am wondering is:
A) Is this possible, and would the JVM start complaining about non-responsiveness?
B) If it is possible, what calculations would be best to try to perform?
Update - Answer:
Based on the answers and comments, the answer seems to be: "Yes, this is possible, but only if it is not done on the Android main UI thread, because otherwise the user's GUI will become unresponsive and trigger an ANR after 5 seconds."
A) Is this possible, and would the JVM start complaining about non-responsiveness?
It is possible, and if you run it in the background, neither the JVM nor Dalvik will complain.
B) If it is possible, what calculations would be best to try to perform?
If the objective is just to run any calculation for x seconds, keep adding to a sum until the required time has been reached. Off the top of my head, something like:
public static void performCalculationsForTime(int seconds)
{
    // Get the start time in seconds (currentTimeMillis() returns a long)
    long secondsStart = System.currentTimeMillis() / 1000;
    long requiredEndTime = secondsStart + seconds;
    double sum = 0;
    while (secondsStart < requiredEndTime) {
        sum = sum + 0.1;
        secondsStart = System.currentTimeMillis() / 1000;
    }
}
You can, and the JVM won't complain, as long as your code is not part of some complex system that actually tracks thread execution time.
long startTime = System.currentTimeMillis();
while (System.currentTimeMillis() - startTime < 100000) {
    // do something
}
Or even a for loop that checks the time only every 1000 cycles:
long startTime = System.currentTimeMillis();
for (int i = 0; ; i++) {
    if (i % 1000 == 0 && System.currentTimeMillis() - startTime >= 100000)
        break;
    // do something
}
As for your second question, the answer is probably to calculate some value that can always be improved upon, like your PI digits example.
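For instance, a minimal sketch (the method name is mine) that keeps refining an estimate of PI via the Leibniz series until the time budget is spent:

public static double estimatePiForTime(int seconds) {
    long startTime = System.currentTimeMillis();
    double pi = 0;
    long k = 0;
    while (System.currentTimeMillis() - startTime < seconds * 1000L) {
        // Leibniz series: 4/1 - 4/3 + 4/5 - 4/7 + ...
        pi += (k % 2 == 0 ? 4.0 : -4.0) / (2 * k + 1);
        k++;
    }
    return pi;
}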
So I have a problem I've been wracking my brain over for about a week now. The situation is:
Consider a checkout line at the grocery store. During any given second, the probability that a new customer joins the line is 0.02 (no more than one customer joins the line during any given second). The checkout clerk takes a random amount of time between 20 seconds and 75 seconds to serve each customer. Write a program to simulate this scenario for about ten million seconds and print out the average number of seconds that a customer spends waiting in line before the clerk begins to serve the customer. Note that since you do not know the maximum number of customers that may be in line at any given time, you should use an ArrayList and not an array.
The expected average wait time is supposed to be between 500 and 600 seconds. However, I have not gotten an answer anywhere close to this range. Given that the probability of a new customer joining the line is only 2%, I would expect the line to never have more than 1 person in it, so the average wait time would be about 45-50 seconds. I have asked a friend (who is a math major) for his view on this problem, and he agreed that 45 seconds is a reasonable average given the 2% probability. My code so far is:
package grocerystore;

import java.util.ArrayList;
import java.util.Random;

public class GroceryStore {
    private static ArrayList<Integer> line = new ArrayList();
    private static Random r = new Random();

    public static void addCustomer() {
        int timeToServe = r.nextInt(56) + 20;
        line.add(timeToServe);
    }

    public static void removeCustomer() {
        line.remove(0);
    }

    public static int sum(ArrayList<Integer> a) {
        int sum = 0;
        for (int i = 0; i < a.size(); i++) {
            sum += a.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        int waitTime = 0;
        int duration = 10000;
        for (int i = 0; i < duration; i++) {
            double newCust = r.nextDouble();
            if (newCust < .02) {
                addCustomer();
            }
            try {
                for (int j = 0; j < line.get(0); j++) {
                    waitTime = waitTime + sum(line);
                }
            } catch (IndexOutOfBoundsException e) {}
            if (line.isEmpty()) {}
            else {
                removeCustomer();
            }
        }
        System.out.println(waitTime / duration);
    }
}
Any advice about this would be appreciated.
Here's some pseudocode to help you plan it out:
for each second that goes by:
    generate probability
    if probability <= 0.02
        add customer
    if wait time is 0
        if line is not empty
            remove customer
            generate a new wait time
    else
        decrement wait time
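In Java, that pseudocode might look roughly like the sketch below (class and variable names are mine, and I tally the waits so the average can be printed):

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.Random;

public class TimeSteppedQueue {
    public static void main(String[] args) {
        Random r = new Random();
        Queue<Long> arrivals = new ArrayDeque<>(); // arrival second of each waiting customer
        long remainingService = 0;                 // seconds left for the customer being served
        long totalWait = 0, served = 0;
        for (long t = 0; t < 10_000_000L; t++) {
            if (r.nextDouble() < 0.02) {
                arrivals.add(t);                        // a new customer joins the line
            }
            if (remainingService == 0) {
                if (!arrivals.isEmpty()) {
                    totalWait += t - arrivals.remove(); // waited from arrival until now
                    served++;
                    remainingService = 20 + r.nextInt(56); // uniform 20..75 second service
                }
            } else {
                remainingService--;                     // the clerk keeps serving
            }
        }
        System.out.println("Average wait: " + (double) totalWait / served);
    }
}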
There's actually a very easy implementation of single server queueing systems where you don't need an ArrayList or Queue to stash customers who are in line. It's based on a simple recurrence relation described below.
You need to know the inter-arrival times' distribution, i.e., the distribution of times between one arrival and the next. Yours was described in time-stepped fashion as a probability of 0.02 of having a new arrival in a given tick of the clock. That equates to a Geometric distribution with p = 0.02. You already know the service time distribution - Uniform(20,75).
With those two pieces of info, and a bit of thought, you can deduce that for any given customer the arrival time is the previous customer's arrival-time plus a (generated) interarrival time; this customer can begin being served at either their arrival-time or the departure-time of the prior customer, whichever comes later; and they finish up with the server and depart at their begin-service time plus a (generated) service-time. You'll need to initialize the arrival-time and departure time of an imaginary zeroth customer to kick-start the whole thing, but then it's a simple loop to calculate the recurrence.
Since this looks like homework I'm giving you an implementation in Ruby. If you don't know Ruby, think of this as pseudo-code. It should be very straightforward for you to translate to Java. I've left out details such as how to generate the distributions, but I have actually run the complete implementation of this, replacing the commented lines with statistical tallies, and it gives average wait times around 500.
interarrival_time = Geometric.new(p_value)
service_time = Uniform.new(service_min, service_max)
arrival_time = depart_time = 0.0 # initialize zeroth customer
loop do
  arrival_time += interarrival_time.generate
  break if arrival_time > 10_000_000
  start_time = [arrival_time, depart_time].max
  depart_time = start_time + service_time.generate
  delay_in_queue = start_time - arrival_time
  # do anything you want with the delay_in_queue value:
  # print it, tally it for averaging, whatever...
end
Note that this approach skips over the large swathes of time where nothing is happening, so it's a quite efficient little program compared to time-stepping through every tick of the simulated clock and storing things in dynamically sized containers.
One final note - you may want to ignore the first few hundred or thousand observations due to initialization bias. Simulation models usually need a "warm-up" period to remove the effect of the programmatically necessary initialization of variables to arbitrary values.
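If it helps with the translation, here is a compact Java rendering of the same recurrence; the geometric sampler uses inverse-CDF sampling (one standard way to generate it), and the tally is kept minimal:

import java.util.Random;

public class SingleServerQueue {
    public static void main(String[] args) {
        Random rng = new Random();
        double p = 0.02;             // per-second arrival probability
        double horizon = 10_000_000; // simulated seconds
        double arrivalTime = 0, departTime = 0, totalDelay = 0;
        long customers = 0;
        while (true) {
            // geometric inter-arrival time via inverse-CDF sampling
            arrivalTime += Math.ceil(Math.log(1 - rng.nextDouble()) / Math.log(1 - p));
            if (arrivalTime > horizon) break;
            double startTime = Math.max(arrivalTime, departTime); // wait if the server is busy
            departTime = startTime + 20 + rng.nextDouble() * 55;  // Uniform(20, 75) service
            totalDelay += startTime - arrivalTime;
            customers++;
        }
        System.out.println("Average delay in queue: " + totalDelay / customers);
    }
}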
Instead of using an ArrayList, a Queue might be better suited for managing the customers. Also, remove the try/catch clause and add a throws IndexOutOfBoundsException to the main function definition.
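A sketch of the Queue suggestion (ArrayDeque is my choice of implementation; the values are illustrative):

import java.util.ArrayDeque;
import java.util.Queue;

Queue<Integer> line = new ArrayDeque<>();
line.add(47);             // a customer joins the back of the line with a 47-second service time
int next = line.remove(); // the clerk takes the customer at the front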
Apologies if this is seen as a duplicate. This question is very similar but not the same. I'm interested in controlling the overall number of files (really I want to control the total size used for all logs).
I want the following from logback:
For the current file, roll over every 12 hours or when the file hits 100MB.
No matter what, don't use more than 600MB of space in total (I could also put it this way: don't keep more than 5 backup files).
The first point seems easy. I'm aware of TimeBasedRollingPolicy, and I know I can limit the per-file size, but that's not enough. I need a way to limit the total number of log files, not just the total number of time periods.
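For what it's worth, later logback releases ship a SizeAndTimeBasedRollingPolicy that accepts a totalSizeCap, which covers most of this. A sketch, with file names and values as my assumptions; note that the %d{yyyy-MM-dd-HH} pattern rolls hourly, since logback derives the period from the date pattern and a 12-hour period isn't directly expressible:

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>logs/app.%d{yyyy-MM-dd-HH}.%i.log</fileNamePattern>
        <maxFileSize>100MB</maxFileSize>
        <maxHistory>12</maxHistory>
        <totalSizeCap>600MB</totalSizeCap>
    </rollingPolicy>
    <encoder>
        <pattern>%d %level %logger - %msg%n</pattern>
    </encoder>
</appender>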
It's been a long time, and someone asked for the code, so I thought I would post it as an answer. I haven't tested it, but it should serve as a reasonable example.
import java.io.File;

import ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy;

/**
 * Extends SizeBasedTriggeringPolicy to trigger based on size or time in minutes.
 *
 * Example:
 * <triggeringPolicy class="com.foo.bar.TimeSizeBasedTriggeringPolicy">
 *     <MaxFileSize>10MB</MaxFileSize>
 *     <RolloverInMinutes>35</RolloverInMinutes>
 * </triggeringPolicy>
 *
 * This would roll over every 35 minutes or 10MB, whichever comes first. Because
 * isTriggeringEvent() is only called when a log line comes in, these limits will
 * not be obeyed perfectly.
 */
public class TimeSizeBasedTriggeringPolicy<E> extends SizeBasedTriggeringPolicy<E> {

    private static final long MILLIS_IN_A_MINUTE = 60 * 1000;

    private long rolloverPeriod;
    private long nextRolloverTime;

    @Override
    public void start() {
        nextRolloverTime = System.currentTimeMillis() + rolloverPeriod;
        super.start();
    }

    @Override
    public boolean isTriggeringEvent(File activeFile, final E event) {
        long currentTime = System.currentTimeMillis();
        boolean isRolloverTime = super.isTriggeringEvent(activeFile, event)
                || currentTime >= nextRolloverTime;
        // Reset the rollover period regardless of why we're rolling over.
        if (isRolloverTime) {
            nextRolloverTime = currentTime + rolloverPeriod;
        }
        return isRolloverTime;
    }

    public long getRolloverInMinutes() {
        return rolloverPeriod / MILLIS_IN_A_MINUTE;
    }

    public void setRolloverInMinutes(long rolloverPeriodInMinutes) {
        this.rolloverPeriod = rolloverPeriodInMinutes * MILLIS_IN_A_MINUTE;
    }
}
import java.math.BigDecimal;

public class testtest {
    public static final BigDecimal TWO = new BigDecimal(2);
    public static final int digits = 1000;
    public static final BigDecimal TOLERANCE = BigDecimal.ONE.scaleByPowerOfTen(-digits);
    public static double MidpointMethod = 0;

    public static long MidpointMethod(int n) {
        BigDecimal k = new BigDecimal(n);
        BigDecimal a = BigDecimal.ONE; // set a to be one
        BigDecimal b = k;              // set b to be the input, k
        long start = System.nanoTime(); // start the timer
        // while our decimals aren't close enough to the square root of k
        while (a.multiply(a).subtract(k).abs().compareTo(TOLERANCE) >= 0) {
            if (a.multiply(a).subtract(k).abs().compareTo(b.multiply(b).subtract(k).abs()) > 0) // if a is farther from the root than b
                a = a.add(b).divide(TWO); // set a to be the average of a and b
            else                          // if a is closer to the root than b
                b = a.add(b).divide(TWO); // set b to be the average of a and b
        }
        return System.nanoTime() - start; // return the time taken
    }

    public static void main(String[] args) {
        System.out.println(MidpointMethod(2) / 10e6);
    }
}
This program outputs 6224.5209, but when I ran it, it took way, way over 20 seconds to run. Why does it display about 6 seconds when it actually took more than 20?
Is the 6 seconds an accurate and precise measure of how long the program took?
To convert nanoseconds to milliseconds (which I assume you were trying to do), divide by 1e6, not 10e6. You are off by a factor of 10.
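A quick sketch of the fix, reusing the question's own method (the TimeUnit line is an equivalent JDK alternative that truncates to whole milliseconds):

import java.util.concurrent.TimeUnit;

long elapsedNanos = MidpointMethod(2);
System.out.println(elapsedNanos / 1e6 + " ms");                          // plain division
System.out.println(TimeUnit.NANOSECONDS.toMillis(elapsedNanos) + " ms"); // JDK helper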
System.nanoTime() is fully accurate given that you are working on a decent PC, which I'll assume you are. The problem is all the initialisation that happens before you call the method for the first time: JVM start-up, stack set-up, heap set-up, and the BigDecimal initialisation all take some time. Also, if you are using a lot of your RAM and it is almost full, that start-up time can grow even more.
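One simple way to keep that start-up cost out of the measurement, a sketch reusing the question's own method, is to run it once before the timed run:

// run once to pay JIT-compilation and class-initialisation costs, then time the second run
MidpointMethod(2);
System.out.println(MidpointMethod(2) / 1e6 + " ms");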
I'm writing a fairly simple 2D multiplayer-over-network game. Right now, I find it nearly impossible to create a stable loop. By stable I mean a loop inside which certain calculations are done and which repeats over strict periods of time (let's say every 25 ms; that's what I'm fighting for right now). I haven't faced many severe hindrances so far except for this one.
In this game, several threads are running, in both the server and client applications, assigned to various tasks. Let's take for example the engine thread in my server application. In this thread, I try to create a game loop using Thread.sleep, trying to take into account the time taken by the game calculations. Here's my loop, placed within the run() method. The Tick() function is the payload of the loop; it simply contains ordered calls to other methods doing constant game updating.
long engFPS = 40;
long frameDur = 1000 / engFPS;
long lastFrameTime;
long nextFrame;

<...>

while(true)
{
    lastFrameTime = System.currentTimeMillis();
    nextFrame = lastFrameTime + frameDur;
    Tick();
    if(nextFrame - System.currentTimeMillis() > 0)
    {
        try
        {
            Thread.sleep(nextFrame - System.currentTimeMillis());
        }
        catch(Exception e)
        {
            System.err.println("TSEngine :: run :: " + e);
        }
    }
}
The major problem is that Thread.sleep just loves to betray your expectations about how long it will sleep. It can easily put the thread to rest for a much longer or much shorter time, especially on machines running Windows XP (I've tested it myself; WinXP gives really nasty results compared to Win7 and other OSes). I've poked around the internet quite a lot, and the results were disappointing. It seems to be the fault of the thread scheduler of the OS we're running on, and its so-called granularity. As far as I understood, the scheduler periodically checks the demands of every thread in the system and, in particular, puts them to sleep or awakens them. When the re-checking interval is low, like 1 ms, things may seem smooth. However, WinXP is said to have a granularity as high as 10 or 15 ms. I've also read that not only Java programmers but those using other languages face this problem as well.
Knowing this, it seems almost impossible to make a stable, sturdy, reliable game engine. Nevertheless, they're everywhere.
I'm highly wondering by which means this problem can be fought or circumvented. Could someone more experienced give me a hint on this?
Don't rely on the OS or any timer mechanism to wake your thread or invoke some callback at a precise point in time or after a precise delay. It's just not going to happen.
The way to deal with this is, instead of setting a sleep/callback/poll interval and assuming it is kept with a high degree of precision, to keep track of the amount of time that has elapsed since the previous iteration and use that to determine what the current state should be. Pass this amount through to anything that updates state based upon the current "frame". (Really, you should design your engine so that the internal components don't know or care about anything as concrete as a frame; instead there is just state that moves fluidly through time, and when a new frame needs to be sent for rendering, a snapshot of that state is used.)
So for example, you might do:
long maxWorkingTimePerFrame = 1000 / FRAMES_PER_SECOND; // this is optional
long lastStartTime = System.currentTimeMillis();
while(true)
{
    long elapsedTime = System.currentTimeMillis() - lastStartTime;
    lastStartTime = System.currentTimeMillis();
    Tick(elapsedTime);
    // enforcing a maximum framerate here is optional...you don't need to sleep the thread
    long processingTimeForCurrentFrame = System.currentTimeMillis() - lastStartTime;
    if(processingTimeForCurrentFrame < maxWorkingTimePerFrame)
    {
        try
        {
            Thread.sleep(maxWorkingTimePerFrame - processingTimeForCurrentFrame);
        }
        catch(Exception e)
        {
            System.err.println("TSEngine :: run :: " + e);
        }
    }
}
Also note that you can get greater timer granularity by using System.nanoTime() in place of System.currentTimeMillis().
You may get better results with
LockSupport.parkNanos(long nanos)
although it complicates the code a bit compared to sleep().
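A minimal sketch of that idea; running and tick() are stand-ins for your own loop flag and payload:

import java.util.concurrent.locks.LockSupport;

long frameNanos = 25_000_000L;            // 25 ms per frame
long next = System.nanoTime() + frameNanos;
while (running) {
    tick();
    long remaining = next - System.nanoTime();
    if (remaining > 0) {
        LockSupport.parkNanos(remaining); // may still wake early or late; re-check if it matters
    }
    next += frameNanos;
}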
Maybe this helps you. It's from David Brackeen's book Developing Games in Java and calculates an average granularity to fake a more fluent frame rate:
link
public class TimeSmoothie {

    /** How often to recalc the frame rate */
    protected static final long FRAME_RATE_RECALC_PERIOD = 500;

    /** Don't allow the elapsed time between frames to be more than 100 ms */
    protected static final long MAX_ELAPSED_TIME = 100;

    /** Take the average of the last few samples during the last 100ms */
    protected static final long AVERAGE_PERIOD = 100;

    protected static final int NUM_SAMPLES_BITS = 6; // 64 samples
    protected static final int NUM_SAMPLES = 1 << NUM_SAMPLES_BITS;
    protected static final int NUM_SAMPLES_MASK = NUM_SAMPLES - 1;

    protected long[] samples;
    protected int numSamples = 0;
    protected int firstIndex = 0;

    // for calculating frame rate
    protected int numFrames = 0;
    protected long startTime;
    protected float frameRate;

    public TimeSmoothie() {
        samples = new long[NUM_SAMPLES];
    }

    /**
     * Adds the specified time sample and returns the average
     * of all the recorded time samples.
     */
    public long getTime(long elapsedTime) {
        addSample(elapsedTime);
        return getAverage();
    }

    /** Adds a time sample. */
    public void addSample(long elapsedTime) {
        numFrames++;
        // cap the time
        elapsedTime = Math.min(elapsedTime, MAX_ELAPSED_TIME);
        // add the sample to the list
        samples[(firstIndex + numSamples) & NUM_SAMPLES_MASK] = elapsedTime;
        if (numSamples == samples.length) {
            firstIndex = (firstIndex + 1) & NUM_SAMPLES_MASK;
        }
        else {
            numSamples++;
        }
    }

    /** Gets the average of the recorded time samples. */
    public long getAverage() {
        long sum = 0;
        for (int i = numSamples - 1; i >= 0; i--) {
            sum += samples[(firstIndex + i) & NUM_SAMPLES_MASK];
            // if the average period is already reached, go ahead and return the average
            if (sum >= AVERAGE_PERIOD) {
                return Math.round((double) sum / (numSamples - i));
            }
        }
        return Math.round((double) sum / numSamples);
    }

    /**
     * Gets the frame rate (number of calls to getTime() or
     * addSample() in real time). The frame rate is recalculated
     * every 500ms.
     */
    public float getFrameRate() {
        long currTime = System.currentTimeMillis();
        // calculate the frame rate every 500 milliseconds
        if (currTime > startTime + FRAME_RATE_RECALC_PERIOD) {
            frameRate = (float) numFrames * 1000 / (currTime - startTime);
            startTime = currTime;
            numFrames = 0;
        }
        return frameRate;
    }
}
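A usage sketch, feeding the smoothie from a game loop; running and Tick() are stand-ins from the earlier snippets:

TimeSmoothie smoothie = new TimeSmoothie();
long last = System.currentTimeMillis();
while (running) {
    long now = System.currentTimeMillis();
    long smoothedElapsed = smoothie.getTime(now - last); // averaged over recent samples
    last = now;
    Tick(smoothedElapsed);
}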