I'm trying to solve problem 739, Daily Temperatures, on LeetCode.
https://leetcode.com/problems/daily-temperatures/
My code uses the Stack container provided by Java and takes 60 ms to run. This is my code:
class Solution {
    public int[] dailyTemperatures(int[] T) {
        int[] ret = new int[T.length];
        Stack<Integer> stack = new Stack<Integer>();
        for (int i = 0; i < T.length; i++) {
            while (!stack.isEmpty() && T[i] > T[stack.peek()]) {
                int index = stack.pop();
                ret[index] = i - index;
            }
            stack.push(i);
        }
        return ret;
    }
}
Here is code that takes only 6 ms to run:
class Solution {
    public int[] dailyTemperatures(int[] T) {
        int[] temperatures = T;
        if (temperatures == null) return null;
        int[] result = new int[temperatures.length];
        int[] stack = new int[temperatures.length + 1]; // +1 leaves room for the -1 sentinel
        int top = 0;
        stack[top] = -1;
        for (int i = 0; i < temperatures.length; i++) {
            while (stack[top] != -1 && temperatures[i] > temperatures[stack[top]]) {
                int index = stack[top--];
                result[index] = i - index;
            }
            stack[++top] = i;
        }
        return result;
    }
}
Why is building the stack with a plain array faster than using the Stack container?
Java's Stack is a very old class, introduced back in JDK 1.0. It extends Vector, and its data-manipulation methods are synchronized, which creates sizeable performance overhead. While it isn't officially deprecated, it's outdated, and you really shouldn't be using it in this day and age. The modern ArrayDeque provides the same functionality without the synchronization overhead.
Tested in the LeetCode environment:
The first Stack<Integer> solution takes 80 ms to run, and changing Stack<Integer> to ArrayDeque<Integer> brings it down to 31 ms. That is a great improvement, and it supports the claim that Stack is much slower than the modern ArrayDeque.
Note that only the pop and peek methods are declared synchronized, while push is not (though push delegates to Vector's synchronized addElement).
The second int[] solution takes 10 ms in my run, and changing the array to Integer[] takes 19 ms. So I think autoboxing is also a factor.
So there is no single reason for this.
Only profiling will show you exactly which effects are the source of the slower running time.
My best guess would be the creation of boxed Integer instances for the int values, and the much more complex implementation of stack.pop() / stack.peek() / stack.push() in contrast to elementary array accesses.
You could try changing your array version to use Integer instead and see how the performance changes.
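A sketch of that experiment: the fast array solution with its int[] stack swapped for an Integer[], so every push boxes and every read unboxes (everything else unchanged; as noted above, this roughly doubled the running time in one test):

class Solution {
    public int[] dailyTemperatures(int[] T) {
        int[] result = new int[T.length];
        Integer[] stack = new Integer[T.length + 1]; // boxed stack; +1 leaves room for the sentinel
        int top = 0;
        stack[top] = -1;                             // the sentinel is autoboxed once
        for (int i = 0; i < T.length; i++) {
            while (stack[top] != -1 && T[i] > T[stack[top]]) { // unboxing on every comparison
                int index = stack[top--];                      // unboxing on every pop
                result[index] = i - index;
            }
            stack[++top] = i;                                  // autoboxing on every push
        }
        return result;
    }
}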
LeetCode 485
Given a binary array nums, return the maximum number of consecutive 1's in the array.
Example 1:
Input: nums = [1,1,0,1,1,1]
Output: 3
Explanation: The first two digits or the last three digits are consecutive 1s. The maximum number of consecutive 1s is 3.
Solution:
public int findMaxConsecutiveOnes(int[] nums) {
    int maxConsSize = Integer.MIN_VALUE;
    int i = -1, j = -1, k = 0;
    while (k < nums.length) {
        while (k < nums.length && nums[k] == 1) {
            k++;
            i++;
        }
        if (nums[k] == 0) {
            maxConsSize = Math.max(maxConsSize, i - j);
            j = i;
        }
    }
    maxConsSize = Math.max(maxConsSize, i - j);
    return maxConsSize;
}
Warning: this is not a direct answer (to this "do my homework" question).
You should use (or learn to use) the debugger in your IDE (trust me, an IDE such as Eclipse will help you a lot when you're starting out).
The easiest (I'm not saying smartest) way to see what the program is doing, when you need to know (like in this case), is to add some print statements, e.g. add System.out.println("k=" + k) into your program (in a while loop).
You might want to watch this YouTube video.
You have an infinite loop. Try running this:
public class Test {
    public static void main(String[] args) {
        int maxConsSize = Integer.MIN_VALUE;
        int[] nums = {1, 1, 0, 1, 1, 1};
        int i = -1, j = -1, k = 0;
        System.out.println(nums.length);
        while (k < nums.length) {
            while (k < nums.length && nums[k] == 1) {
                k++;
                i++;
                System.out.println("k = " + k);
            }
            if (nums[k] == 0) {
                maxConsSize = Math.max(maxConsSize, i - j);
                j = i;
            }
        }
        maxConsSize = Math.max(maxConsSize, i - j);
        System.out.println(maxConsSize);
    }
}
Output:
6
k = 1
k = 2
After reading the first 0 you are stuck in an infinite loop: k never advances past the 0, so the outer while spins forever. You have made this task very complicated :)
It's probably not the best solution, but it should be faster:
public int findMaxConsecutiveOnes(int[] nums) {
    int maxCons = 0;
    int currentCons = 0;
    for (int i = 0; i < nums.length; i++) {
        if (nums[i] == 0) {
            if (currentCons > maxCons) {
                maxCons = currentCons;
            }
            currentCons = 0;
        } else {
            currentCons++;
        }
    }
    if (currentCons > maxCons) {
        maxCons = currentCons;
    }
    return maxCons;
}
There are two basic forms of loops:
for-each and for-i (sometimes called a ranged for)
Use these for a countable number of iterations, for example when you have an array or collection to loop through.
while and do-while (like until-loops in other programming languages)
Use these for something that has a dynamic exit condition. They bear the risk of infinite loops!
Your issue: infinite loop
You used the second form of loop for a typical use case of the first. When iterating over an array, you are better off using some kind of for loop.
The second form always bears the risk of infinite loops, either when there is no proper exit condition or when the exit condition is never fulfilled (a logical bug). The first is risk-free in that regard.
Recommendation to solve
I would recommend starting with a for-i here:
// called for-i because the first iterator variable is usually i
for (int i = 0; i < nums.length; i++) {
    // do something with nums[i]
    System.out.println(nums[i]);
}
because:
it is safer, no risk of infinite-loop
the iterations can be recognized from the first line (better readability)
no counting, etc. inside the loop-body
An even simpler and more idiomatic pattern is actually to use a for-each (applied to the original task in the sketch after the list below):
for (int n : nums) {
    // do something with n
    System.out.println(n);
}
because:
it is safer, no risk of infinite-loop
the iterations can be recognized from the first line (better readability)
no index required, suitable for arrays or lists
no counting at all
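Applied to the original task, a minimal for-each version of findMaxConsecutiveOnes might look like this (logically the same as the corrected answer above, just without index bookkeeping):

public int findMaxConsecutiveOnes(int[] nums) {
    int maxCons = 0;
    int currentCons = 0;
    for (int n : nums) {                            // no index, no infinite-loop risk
        currentCons = (n == 1) ? currentCons + 1 : 0;
        maxCons = Math.max(maxCons, currentCons);
    }
    return maxCons;
}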
See also:
Java For Loop, For-Each Loop, While, Do-While Loop (ULTIMATE GUIDE), an in-depth tutorial covering all about loops in Java, including concepts, terminology, examples, risks
Here are two ways of approaching a method which merges two ordered sub-decks of playing cards into one ordered deck:
Approach 1:
public static Deck merge(Deck d1, Deck d2) {
    Deck result = new Deck(d1.cards.length + d2.cards.length);
    int i = 0;
    int j = 0;
    for (int k = 0; k < result.cards.length; k++) {
        if (j >= d2.cards.length || i < d1.cards.length && d1.cards[i].compareTo(d2.cards[j]) <= 0) {
            result.cards[k] = d1.cards[i];
            i++;
        } else {
            result.cards[k] = d2.cards[j];
            j++;
        }
    }
    return result;
}
Approach 2:
public static Deck merge(Deck d1, Deck d2) {
    Card[] c1 = d1.getCards();
    Card[] c2 = d2.getCards();
    int l1 = c1.length;
    int l2 = c2.length;
    Deck result = new Deck(l1 + l2);
    Card[] sorted = new Card[l1 + l2];
    int i = 0;
    int j = 0;
    for (int k = 0; k < sorted.length; k++) {
        if (j >= c2.length || i < c1.length && c1[i].compareTo(c2[j]) <= 0) {
            sorted[k] = c1[i];
            i++;
        } else {
            sorted[k] = c2[j];
            j++;
        }
    }
    result.cards = sorted;
    return result;
}
Which method is more efficient? Is there actually any difference?
From what I can tell, the first method would have to generate a much larger number of objects to complete a run for, let's say, two 26-card sub-decks. However, the method itself would store less information, which makes me question which method would be more efficient.
I know on this scale it probably doesn't matter too much, but as someone who's very new to Java, I'm curious to know what is best practice and why. I've tried searching for similar scenarios but haven't managed to find any. If anyone could point me in the right direction, it'd be greatly appreciated.
Since getCards() simply returns a reference variable (i.e. it is not copying the cards array), the performance difference is likely to be minimal.
The only way to be sure is to benchmark the two versions of the application. But if the difference measured is anything more than a couple of percentage points, the benchmark[1] is probably flawed!
I would advise not to waste your time "optimizing at this level" unless you have clear evidence that:
the code is too slow, and
you have clear evidence that the code that you are about to optimize is a substantial contributor to the slowness.
In other words, benchmark then profile then optimize the parts of the code that are worth optimizing.
1 - It is advisable to read up on how to write a sound benchmark before you leap into coding. Start with: How do I write a correct micro-benchmark in Java?.
Your first approach creates two objects: the new Deck object, and the array storing the cards (the array created by the Deck constructor). Your second approach creates three objects: the new Deck, the array created by the Deck constructor, and also your array named sorted.
However, this does not really matter, because you are not creating Card objects at all. You are just copying references to the Card objects in the two decks into the new deck! Assigning Cards to arrays doesn't create new Cards. It's just like how the following won't create two objects:
Object obj = new Object();
Object obj2 = obj;
So practically the two approaches are the same. I suggest you use the one that you find most readable.
I am currently having heavy performance issues with an application I'm developing in natural language processing. Basically, given texts, it gathers various data and does a bit of number crunching.
And for every sentence, it does EXACTLY the same thing. The algorithms applied to gather the statistics do not evolve with previously read data and therefore stay the same.
The issue is that the processing time does not scale linearly at all: 1 minute for 10k sentences, 1 hour for 100k, and days for 1M...
I tried everything I could, from re-implementing basic data structures to object pooling to recycling instances. The behavior doesn't change. I get a non-linear increase in time that seems impossible to justify by a few more hashmap collisions, nor by IO waiting, nor by anything! Java starts to be sluggish as the data increases, and I feel totally helpless.
If you want an example, just try the following: count the number of occurrences of each word in a big file. Some code is shown below. It takes me 3 seconds over 100k sentences and 326 seconds over 1.6M, so a multiplier of 110 instead of 16. As the data grows, it just gets worse...
Here is a code sample:
Note that I compare strings by reference (for efficiency reasons); this is possible thanks to the String.intern() method, which returns a unique reference per string. And the map is never re-hashed during the whole process for the numbers given above.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class DataGathering
{
    SimpleRefCounter<String> counts = new SimpleRefCounter<String>(1000000);

    private void makeCounts(String path) throws IOException
    {
        BufferedReader file_src = new BufferedReader(new FileReader(path));
        String line_src;
        int n = 0;
        while (file_src.ready())
        {
            n++;
            if (n % 10000 == 0)
                System.out.print(".");
            if (n % 100000 == 0)
                System.out.println("");
            line_src = file_src.readLine();
            String[] src_tokens = line_src.split("[ ,.;:?!'\"]");
            for (int i = 0; i < src_tokens.length; i++)
            {
                String src = src_tokens[i].intern();
                counts.bump(src);
            }
        }
        file_src.close();
    }

    public static void main(String[] args) throws IOException
    {
        String path = "some_big_file.txt";
        long timestamp = System.currentTimeMillis();
        DataGathering dg = new DataGathering();
        dg.makeCounts(path);
        long time = (System.currentTimeMillis() - timestamp) / 1000;
        System.out.println("\nElapsed time: " + time + "s.");
    }
}
public class SimpleRefCounter<K>
{
    static final double GROW_FACTOR = 2;
    static final double LOAD_FACTOR = 0.5;

    private int capacity;
    private Object[] keys;
    private int[] counts;
    private int key_count; // number of distinct keys currently stored
    private int total;     // sum of all counts

    public SimpleRefCounter()
    {
        this(1000);
    }

    public SimpleRefCounter(int capacity)
    {
        this.capacity = capacity;
        keys = new Object[capacity];
        counts = new int[capacity];
    }

    public synchronized int increase(K key, int n)
    {
        int id = System.identityHashCode(key) % capacity;
        while (keys[id] != null && keys[id] != key) // if it's occupied, let's move to the next one!
            id = (id + 1) % capacity;
        if (keys[id] == null)
        {
            key_count++;
            keys[id] = key;
            if (key_count > LOAD_FACTOR * capacity)
            {
                resize((int) (GROW_FACTOR * capacity));
            }
        }
        counts[id] += n;
        total += n;
        return counts[id];
    }

    public synchronized void resize(int capacity)
    {
        System.out.println("Resizing counters: " + this);
        this.capacity = capacity;
        Object[] new_keys = new Object[capacity];
        int[] new_counts = new int[capacity];
        for (int i = 0; i < keys.length; i++)
        {
            Object key = keys[i];
            if (key == null)
                continue; // skip empty slots
            int count = counts[i];
            int id = System.identityHashCode(key) % capacity;
            while (new_keys[id] != null && new_keys[id] != key) // if it's occupied, let's move to the next one!
                id = (id + 1) % capacity;
            new_keys[id] = key;
            new_counts[id] = count;
        }
        this.keys = new_keys;
        this.counts = new_counts;
    }

    public int bump(K key)
    {
        return increase(key, 1);
    }

    public int get(K key)
    {
        int id = System.identityHashCode(key) % capacity;
        while (keys[id] != null && keys[id] != key) // if it's occupied, let's move to the next one!
            id = (id + 1) % capacity;
        if (keys[id] == null)
            return 0;
        else
            return counts[id];
    }
}
Any explanations? Ideas? Suggestions?
...and, as said in the beginning, it is not for this toy example in particular but for the more general case. This same exploding behavior occurs for no reason in the more complex and larger program.
Rather than feeling helpless, use a profiler! That will tell you exactly where in your code all this time is spent.
Bursting the processor cache and thrashing the Translation Lookaside Buffer (TLB) may be the problem.
For String.intern you might want to do your own single-threaded implementation.
However, I'm placing my bets on the relatively bad hash values from System.identityHashCode. It clearly isn't using the top bit, as you don't appear to get ArrayIndexOutOfBoundsExceptions. I suggest replacing that with String.hashCode.
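For illustration, the probe-index computation inside SimpleRefCounter could be changed along these lines (a sketch assuming String keys; the sign-bit mask is needed because hashCode can be negative):

// instead of: int id = System.identityHashCode(key) % capacity;
int h = key.hashCode();                // content-based hash instead of the identity hash
h ^= (h >>> 16);                       // spread the high bits into the low bits
int id = (h & 0x7fffffff) % capacity;  // mask the sign bit so id stays within [0, capacity)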
String[] src_tokens = line_src.split("[ ,.;:?!'\"]");
Just an idea: you are creating a new Pattern object for every line here (look at the String.split() implementation). I wonder if this is also contributing to the ton of objects that need to be garbage collected.
I would create the Pattern once, probably as a static field:
final private static Pattern TOKEN_PATTERN = Pattern.compile("[ ,.;:?!'\"]");
And then change the split line do this:
String[] src_tokens = TOKEN_PATTERN.split(line_src);
Or if you don't want to create it as a static field, at least create it only once, as a local variable at the beginning of the method, before the while loop.
In get, when you search for a nonexistent key, search time is proportional to the size of the set of keys.
My advice: if you want a HashMap, just use a HashMap. They got it right for you.
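For example, a minimal sketch of the counter rebuilt on top of HashMap (names are illustrative; merge() is a Java 8+ API):

import java.util.HashMap;
import java.util.Map;

class WordCounter {
    // HashMap handles hashing, collision resolution and resizing for you;
    // keys are compared with equals()/hashCode(), so no String.intern() is needed.
    private final Map<String, Integer> counts = new HashMap<String, Integer>(1000000);

    void bump(String word) {
        counts.merge(word, 1, Integer::sum); // insert 1, or add 1 to the existing count
    }

    int get(String word) {
        return counts.getOrDefault(word, 0); // 0 for words never seen
    }
}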
You are filling up the Perm Gen with the string intern. Have you tried viewing the -Xloggc output?
I would guess it's just memory filling up, growing outside the processor cache, memory fragmentation and the garbage collection pauses kicking in. Have you checked memory use at all? Tried to change the heap size the JVM uses?
Try to do it in Python, and run the Python module from Java.
Enter all the keys in the database, and then execute the following query:
select key, count(*)
from keys
group by key
Have you tried just iterating through the keys without doing any calculations? Is it faster? If yes, then go with option (2).
Can't you do this? You can get your answer in no time.
It's me, the original poster; something went wrong during registration, so I'm posting separately. I'll try the various suggestions given.
PS for Tom Hawtin: thanks for the hints; perhaps String.intern() takes more and more time as the vocabulary grows. I'll check that tomorrow, along with everything else.
I've been given some lovely Java code that has a lot of things like this (in a loop that executes about 1.5 million times):
code = getCode();
for (int intCount = 1; intCount < vA.size() + 1; intCount++)
{
    oA = (A) vA.elementAt(intCount - 1);
    if (oA.code.trim().equals(code))
        currentName = oA.name;
}
Would I see significant increases in speed from switching to something like the following?
code = getCode();
//AMap is a HashMap
strCurrentAAbbreviation = (String)AMap.get(code);
Edit: The size of vA is approximately 50. The trim shouldn't even be necessary, but it would definitely be nice to call it 50 times instead of 50 × 1.5 million times. The items in vA are unique.
Edit: At the suggestion of several responders, I tested it. Results are at the bottom. Thanks guys.
There's only one way to find out.
Ok, Ok, I tested it.
Results follow for your enlightenment:
Looping: 18391ms
Hash: 218ms
Looping: 18735ms
Hash: 234ms
Looping: 18359ms
Hash: 219ms
I think I will be refactoring that bit ..
The framework:
import java.util.HashMap;
import java.util.Random;
import java.util.Vector;

// StopWatch and A are small helper classes (not shown in the question)
public class OptimizationTest {
    private static Random r = new Random();

    public static void main(String[] args) {
        final long loopCount = 1000000;
        final int listSize = 55;
        long loopTime = TestByLoop(loopCount, listSize);
        long hashTime = TestByHash(loopCount, listSize);
        System.out.println("Looping: " + loopTime + "ms");
        System.out.println("Hash: " + hashTime + "ms");
    }

    public static long TestByLoop(long loopCount, int listSize) {
        Vector vA = buildVector(listSize);
        A oA;
        StopWatch sw = new StopWatch();
        sw.start();
        for (long i = 0; i < loopCount; i++) {
            String strCurrentStateAbbreviation;
            int j = r.nextInt(listSize);
            for (int intCount = 1; intCount < vA.size() + 1; intCount++) {
                oA = (A) vA.elementAt(intCount - 1);
                if (oA.code.trim().equals(String.valueOf(j)))
                    strCurrentStateAbbreviation = oA.value;
            }
        }
        sw.stop();
        return sw.getElapsedTime();
    }

    public static long TestByHash(long loopCount, int listSize) {
        HashMap hm = getMap(listSize);
        StopWatch sw = new StopWatch();
        sw.start();
        String strCurrentStateAbbreviation;
        for (long i = 0; i < loopCount; i++) {
            int j = r.nextInt(listSize);
            // look up by the String key the map was built with
            // (hm.get(j) would autobox j to an Integer and never match)
            strCurrentStateAbbreviation = (String) hm.get(String.valueOf(j));
        }
        sw.stop();
        return sw.getElapsedTime();
    }

    private static HashMap getMap(int listSize) {
        HashMap hm = new HashMap();
        for (int i = 0; i < listSize; i++) {
            String code = String.valueOf(i);
            String value = getRandomString(2);
            hm.put(code, value);
        }
        return hm;
    }

    public static Vector buildVector(long listSize)
    {
        Vector v = new Vector();
        for (int i = 0; i < listSize; i++) {
            A a = new A();
            a.code = String.valueOf(i);
            a.value = getRandomString(2);
            v.add(a);
        }
        return v;
    }

    public static String getRandomString(int length) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < length; i++) {
            sb.append(getChar());
        }
        return sb.toString();
    }

    public static char getChar()
    {
        final String alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        int i = r.nextInt(alphabet.length());
        return alphabet.charAt(i);
    }
}
Eh, there's a good chance that you would, yes. Retrieval from a HashMap is going to be constant time if you have good hash codes.
But the only way you can really find out is by trying it.
This depends on how large your map is, and how good your hashCode implementation is (such that you do not have collisions).
You should really do some real profiling to be sure if any modification is needed, as you may end up spending your time fixing something that is not broken.
What actually stands out to me a bit more than the elementAt call is the string trimming you are doing on each iteration. My gut tells me that might be the bigger bottleneck, but only profiling can really tell.
Good luck
I'd say yes, since the above appears to be a linear search over vA.size(). How big is vA?
Why don't you use something like YourKit (or insert another profiler) to see just how expensive this part of the loop is.
Using a Map would certainly be an improvement that helps with maintaining that code later on.
Whether you can use a map depends on whether the vector contains unique codes or not. The given for loop remembers the last object in the list with a given code; if codes repeat, a hash is not the solution.
For small (stable) list sizes, simply converting the list to an array of objects would show a performance increase, on top of better readability.
If none of the above holds, at least use an iterator to inspect the list, giving better readability and some (probable) performance increase.
Depends. How much memory you got?
I would guess much faster, but profile it.
I think the dominant factor here is how big vA is, since the loop needs to run n times, where n is the size of vA. With the map, there is no loop, no matter how big vA is. So if n is small, the improvement will be small. If it is huge, the improvement will be huge. This is especially true because even after finding the matching element the loop keeps going! So if you find your match at element 1 of a 2 million element list, you still need to check the last 1,999,999 elements!
Yes, it'll almost certainly be faster. Looping an average of 25 times (half-way through your 50) is slower than a hashmap lookup, assuming your vA contents are decently hashable.
However, speaking of your vA contents, you'll have to trim them as you insert them into your aMap, because aMap.get("somekey") will not find an entry whose key is "somekey ".
Actually, you should do that as you insert into vA, even if you don't switch to the hashmap solution.
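A sketch of that setup, using the names from the question (vA, A and getCode() are assumed from the original code):

// Build the lookup map once, trimming keys at insertion time,
// so the loop that runs 1.5 million times does no trimming at all.
Map<String, String> aMap = new HashMap<String, String>();
for (int i = 0; i < vA.size(); i++) {
    A oA = (A) vA.elementAt(i);
    aMap.put(oA.code.trim(), oA.name);
}

// In the hot loop:
String currentName = aMap.get(getCode());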
Write a program to print out all possible values of the int data type, from the smallest to the largest, using Java.
Some notable solutions as of 8th of May 2009, 10:44 GMT:
1) Daniel Lew was the first to post correctly working code.
2) Kris has provided the simplest solution for the given problem.
3) Tom Hawtin - tackline arguably came up with the most elegant solution.
4) mmyers pointed out that printing is likely to become a bottleneck and can be improved through buffering.
5) Jay's brute force approach is notable since, besides defying the core point of programming, the resulting source code takes about 128 GB and will blow compiler limits.
As a side note I believe that the answers do demonstrate that it could be a good interview question, as long as the emphasis is not on the ability to remember trivia about the data type overflow and its implications (that can be easily spotted during unit testing), or the way of obtaining MAX and MIN limits (can easily be looked up in the documentation) but rather on the analysis of various ways of dealing with the problem.
class Test {
    public static void main(String[] args) {
        for (int a = Integer.MIN_VALUE; a < Integer.MAX_VALUE; a++) {
            System.out.println(a);
        }
        System.out.println(Integer.MAX_VALUE);
    }
}
Am I hired?
Simplest form (minimum code):
for (long i = Integer.MIN_VALUE; i <= Integer.MAX_VALUE; i++) {
    System.out.println(i);
}
No integer overflow, no extra checks (just a little more memory usage, but who doesn't have 32 spare bits lying around?).
While I suppose
for (long i = Integer.MIN_VALUE; i <= Integer.MAX_VALUE; i++)
    System.out.println(i);
has fewer characters, I can't really say that it is simpler. Shorter isn't necessarily simpler; it does have less code, though.
I just have to add an answer...
public class PrintInts {
    public static void main(String[] args) {
        int i = Integer.MIN_VALUE;
        do {
            System.out.println(i);
            ++i;
        } while (i != Integer.MIN_VALUE);
    }
}
We don't want the body repeated (think of the maintenance!)
It doesn't loop forever.
It uses an appropriate type for the counter.
It doesn't require some wild third-party weirdo library.
Ah, and here I had just started writing
System.out.println(-2147483648);
System.out.println(-2147483647);
System.out.println(-2147483646);
Okay, just give me a few weeks to finish typing this up ...
The instructions didn't say I have to use a loop, and at least this method doesn't have any overflow problems.
Is there something tricky that I'm not catching? There probably is... (edit: yes, there is!)
class AllInts {
    public static void main(String[] args) {
        // wrong -- i <= Integer.MAX_VALUE will never be false, since
        // incrementing Integer.MAX_VALUE overflows to Integer.MIN_VALUE.
        for (int i = Integer.MIN_VALUE; i <= Integer.MAX_VALUE; i++) {
            System.out.println(i);
        }
    }
}
Since the printing is the bottleneck, a buffer would improve the speed quite a lot (I know because I just tried it):
class AllInts {
    public static void main(String[] args) {
        // a rather large cache; I did no calculations to optimize the cache
        // size, but adding the first group of numbers will make the buffer
        // as large as it will ever need to be.
        StringBuilder buffer = new StringBuilder(10000000);
        int counter = 0;
        // note that the termination check is now <
        // this means Integer.MAX_VALUE won't be printed in the loop
        for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++) {
            buffer.append(i).append('\n');
            if (++counter > 5000000) {
                System.out.print(buffer);
                buffer.setLength(0); // empty the buffer completely for the next batch
                counter = 0;
            }
        }
        // take care of the last value (also means we don't have to check
        // if the buffer is empty before printing it)
        buffer.append(Integer.MAX_VALUE);
        System.out.println(buffer);
    }
}
Also, this version will actually terminate (thanks to Daniel Lew for pointing out that there was, in fact, something tricky that I wasn't catching).
The total run time for this version (run with -Xmx512m) was 1:53. That's over 600000 numbers/second; not bad at all! But I suspect that it would have been slower if I hadn't run it minimized.
When I first looked at this, my first question was 'how do you define smallest and largest'. For what I thought was the most obvious definition ('smallest' == 'closest to 0') the answer would be
for (int i = 0; i >= 0; i++) {
    System.out.println(i);
    System.out.println(-i - 1);
}
But everyone else seems to read 'smallest' as 'minimum' and 'largest' as 'maximum'.
Come on folks, it said using java. It didn't say use an int in the for loop. :-)
public class Silly {
    public static void main(String[] args) {
        for (long x = Integer.MIN_VALUE; x <= Integer.MAX_VALUE; x++) {
            System.out.println(x);
        }
    }
}
Another way to loop through every value using an int type.
public static void main(String[] args) {
    int i = Integer.MIN_VALUE;
    do {
        System.out.println(i);
    } while (i++ < Integer.MAX_VALUE);
}
Given the overview of the best answers, I realized that we're seriously lacking in the brute-force department. Jay's answer is nice, but it won't actually work. In the name of Science, I present - Bozo Range:
import java.util.Random;
import java.util.HashSet;

class Test {
    public static void main(String[] args) {
        Random rand = new Random();
        HashSet<Integer> found = new HashSet<Integer>();
        long range = Math.abs(Integer.MAX_VALUE - (long) Integer.MIN_VALUE);
        while (found.size() < range) {
            int n = rand.nextInt();
            if (!found.contains(n)) {
                found.add(n);
                System.out.println(n);
            }
        }
    }
}
Note that you'll need to set aside at least 4 GB of RAM in order to run this program. (Possibly 8 GB, if you're on a 64-bit machine, which you'll probably require to actually run this program...). This analysis doesn't count the bloat that the Integer class adds to any given int, either, nor the size of the HashSet itself.
The maximum value for int is Integer.MAX_VALUE and the minimum is Integer.MIN_VALUE. Use a loop to print all of them.
Package fj is from here.
import static fj.pre.Show.intShow;
import static fj.pre.Show.unlineShow;
import static fj.data.Stream.range;
import static java.lang.Integer.MIN_VALUE;
import static java.lang.Integer.MAX_VALUE;

public class ShowInts {
    public static void main(final String[] args) {
        unlineShow(intShow).println(range(MIN_VALUE, MAX_VALUE + 1L));
    }
}
At 1000 lines/sec you'll be done in about 7 weeks. Should we get coffee now?
Just improving the StringBuilder approach a little:
Two threads + two buffers (i.e. StringBuilders): the main idea is that one thread fills one buffer while the other thread dumps the content of the other buffer, as sketched below.
Obviously, the "dumper" thread will always run slower than the "filler" thread.
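A rough sketch of that idea (illustrative only: the buffer size is arbitrary, and a BlockingQueue stands in for manually swapping the two buffers):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class DoubleBufferedPrinter {
    private static final int FLUSH_AT = 5_000_000; // characters per hand-off

    public static void main(String[] args) throws InterruptedException {
        // The "filler" publishes full buffers; the "dumper" drains and prints them.
        final BlockingQueue<StringBuilder> full = new ArrayBlockingQueue<>(2);

        Thread dumper = new Thread(() -> {
            try {
                while (true) {
                    StringBuilder buf = full.take();
                    if (buf.length() == 0) return; // an empty buffer signals "done"
                    System.out.print(buf);
                }
            } catch (InterruptedException ignored) {
            }
        });
        dumper.start();

        StringBuilder buf = new StringBuilder(FLUSH_AT + 16);
        for (long i = Integer.MIN_VALUE; i <= Integer.MAX_VALUE; i++) {
            buf.append(i).append('\n');
            if (buf.length() >= FLUSH_AT) {
                full.put(buf);                          // hand the full buffer to the dumper
                buf = new StringBuilder(FLUSH_AT + 16); // keep filling a fresh one
            }
        }
        if (buf.length() > 0) full.put(buf); // flush the final partial buffer
        full.put(new StringBuilder());       // poison pill: tell the dumper to stop
        dumper.join();
    }
}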
If the interviewer was looking for all the values possible in Java's long type rather than int, you might try giving him a solution using long:
class AllIntegers {
    public static void main(String[] args) {
        for (long i = Long.MIN_VALUE; i < Long.MAX_VALUE; i++) {
            System.out.println(i);
        }
        System.out.println(Long.MAX_VALUE);
    }
}
This should print a range from -9223372036854775808 to 9223372036854775807, which is way more than you would achieve using int.