I am writing a simulation that will use a lot of memory and needs to be fast. Instead of using ints I am using chars (16 bits in Java, not 32). I need to operate on them as if these chars were ints.
To achieve that I have done something like
char a = 1;
char b = 2;
System.out.println(a*1 + b*1); // it gives me 3 in the console, so it has int-like behavior
I don't know what's going on "under the hood" when I multiply a char by an integer. Is this the fastest way to do it?
Thank you for any help!
Performance-wise it's not worth using char instead of int, because all modern hardware architectures are optimized for 32- or 64-bit-wide memory and register access.
The only reason to use char would be to reduce memory footprint, i.e. if you work with large amounts of data.
Additional info: Performance of built-in types: char vs. short vs. int vs. float vs. double
A char is simply a number (not a 32-bit int, but a 16-bit unsigned number) that is normally interpreted as a character code (ASCII-compatible for the low range). When you multiply it by an integer, the compiler performs an implicit widening conversion from char to int; that's why 3 is printed on the console instead of the character representation.
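A minimal sketch of that promotion (values chosen for illustration):
char a = 'A';                        // numeric value 65
int widened = a * 1;                 // implicit widening: char -> int
System.out.println(widened);         // prints 65, the numeric value
System.out.println(a);               // prints A, the character
System.out.println((char) (a + 1));  // prints B: compute as int, cast back to char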
You should use ints. The size of the value will not affect the fetch speed of the data, nor the processing time, unless it spans multiple machine words, which is unlikely. There is no reason to do this. Moreover, chars are stored in integer-sized slots and are therefore effectively the same size. An alternative would be to use short ints, but again this is not helpful for what you want.
Also, I notice your code has System.out.println, which means you are using Java. There are several implications of this. The first is that, no, you are not going to be blazingly fast or use very little memory; there is a significant amount of overhead involved in running the JVM, the JIT, the garbage collector, and the other parts of the Java platform. If efficiency is a relevant factor, you are starting off on the wrong foot. The second implication is that your choice of primitive data type will have little impact on processing time, because the types map onto the same physical hardware; only the virtual machine distinguishes between them, and in the case of arithmetic on primitives there is no difference anyway.
In my Java application one of my objects has exactly one value from a set of values. Now I wonder how to define them to get the best performance:
private static final String ITEM_TYPE1 = "type1"; // option 1: a String constant
private static final int ITEM_TYPE1 = 1;          // option 2: an int constant
Is defining an int better than a String? (I will need to convert the value to a String anyway, so I would like to define it as a String, but I fear for performance, because comparing ints is presumably simpler than comparing Strings.)
EDIT: I am aware of enums, but I just want to know whether ints perform better than Strings or not. This depends on how the JDK and JRE handle things under the hood (on Android: Dalvik or ART...).
In my java application one of my objects has exactly one value from a set of values
That is what Java enums are for.
Regarding the question "do ints have more performance than Strings": that is almost nonsensical.
You are talking about static constants. Even if they are used a hundred or a thousand times in your app, performance doesn't matter here. What matters is writing code that is easy to read and maintain. Then the JIT can kick in and turn it into nicely optimized machine code.
Please understand: premature optimisation is the root of all evil! Good or bad performance of your app depends on many other factors, definitely not on representing constants as ints or strings.
Beyond that: the type of something in Java should reflect its nature. If it is a string, make it a String (for example when you mainly use it as text and concatenate it with other strings). When you have numbers and deal with them as numbers, make it an int.
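A minimal sketch of the enum approach (the type and constant names are illustrative):
enum ItemType { TYPE1, TYPE2 }

class Item {
    private final ItemType type;

    Item(ItemType type) {
        this.type = type;
    }

    boolean isType1() {
        return type == ItemType.TYPE1; // enum comparison is a plain reference check
    }
}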
First of all, an int always has a fixed size in memory; on most systems it's 4 bytes (in Java it is always 4 bytes).
A String is a complex type, which means it takes not only the bytes of the actual string data but also additional data such as the object header and the length of the string.
So if you have the choice between String and int, you should always choose int: it takes less space and is faster to operate on.
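A small sketch of why the comparison is cheaper (the type codes are hypothetical):
int typeA = 1;
int typeB = 1;
boolean sameInt = (typeA == typeB);    // a single integer compare

String typeC = "type1";
String typeD = "type1";
boolean sameStr = typeC.equals(typeD); // reference check, length check, then char-by-char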
Beginner here; I need some deeper insight. There are four integer types: byte, short, int, and long. So, apart from their range, what should I know about their behavior?
What is the difference between int i = 1000; and long l = 1000;?
By differences I mean the space allocated in memory, the speed when using them, and so on: anything I must keep in mind while designing an algorithm in real life.
In one line: why use int if long can do everything int can and more?
I searched on the internet but didn't find a precise answer.
long takes double the size of int, at least in Java and on most C++ platforms (in C++ the size of long is implementation-defined and, depending on the platform, might be 32 or 64 bits; that's why C++ also has long long).
Besides memory usage in general, this can also affect processing time, because more data may have to be moved over the memory bus; conversely, a 64-bit machine could move two ints at once.
But most likely you're not going to have to take all that into account since most systems are not that tight on resources, so choose whatever you find appropriate.
Edit:
If you're operating on huge datasets, it might save you some space to use int over long. In those cases, though, it might actually be wiser to design the algorithm so that it operates only on the data that is immediately necessary and releases it as fast as possible, i.e. don't keep everything in memory.
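As a rough illustration of the size difference (the array length is arbitrary):
int[] ints = new int[1_000_000];    // about 4 MB of heap for the elements
long[] longs = new long[1_000_000]; // about 8 MB for the same number of values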
I'm a pretty new programmer and I'm using Java. My teacher said that there were many types of integers. I don't know when to use them. I know they have different sizes, but why not use the biggest size all the time? Any reply would be awesome!
Sometimes, when you're building massive applications that could take up 2+ GB of memory, you really want to be restrictive about what primitive type you want to use. Remember:
int takes up 32 bits of memory
short takes up 16 bits of memory, 1/2 that of int
byte is even smaller, 8 bits.
See this java tutorial about primitive types: http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
The space taken up by each type really matters if you're handling large data sets. For example, if your program has an array of 1 million ints, then you're taking up 3.81 MB of RAM. Now let's say you know for certain that those 1,000,000 numbers are only ever going to be in the range 1-10. Why not, then, use a byte array? 1 million bytes take up only 976 kilobytes, less than 1 MB.
You always want to use the number type that is just "large" enough to fit, just as you wouldn't put an extra-large T-shirt on a newborn baby.
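A sketch of the trade-off described above (element storage only, ignoring array object headers):
int[] asInts = new int[1_000_000];    // ~3.81 MB of RAM
byte[] asBytes = new byte[1_000_000]; // ~976 KB of RAM
asBytes[0] = 7;                       // values 1-10 fit easily in byte's -128..127 range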
So you could. Memory is cheap these days and assuming you are writing a simple program it is probably not a big deal.
Back in the days when memory was expensive, you needed to be a lot more careful with how much of it you used.
Let's say you are word processing on an IBM 5100, one of the first PCs from the 70s, which shipped with as little as 16 KB of RAM (unimaginably small these days). If you used 64-bit values all day, you could keep at most 2048 characters, with no memory left for the word-processing program itself; that's not enough to hold what I'm typing right now!
Knowing that English has a limited set of characters and symbols, if you choose ASCII to represent the text, you use 8 bits (1 byte) per character, which lets you go up to about 16,000 characters; that's quite a bit more room for typing.
Generally you will use a data type that's just big enough to hold the biggest number you might need, to save on memory. Let's say you are writing a database for the IRS to keep track of all the tax IDs: if you are able to save even a few bits of memory per record, that adds up to billions of bits, i.e. hundreds of megabytes, of memory savings.
The ones that can hold higher numbers use more memory. Using more memory is bad. One int versus one byte is not a big difference right now, but if you write big programs in the future, the used memory adds up.
Also, you said double in the title. A double is not like an int: it does hold a number, but it can have decimal places (e.g. 2.36), unlike an int, which can only hold whole numbers like 8.
Because we like to be professional; also, byte uses less memory than int, and double can include decimal places.
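A tiny sketch of those distinctions:
byte small = 8;        // whole numbers only, 8 bits, range -128..127
int medium = 8;        // whole numbers only, 32 bits
double precise = 2.36; // can carry decimal places, 64 bits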
This may seem like a fairly basic question, for which I apologise in advance.
I'm writing an Android app that uses a set of predefined numbers. At the moment I'm working with int values, but at some point I will probably need to use float and double values, too.
The numbers are used for two things. First, I need to display them to the user, for which I need a String (I'm creating a custom View and drawing the String on a Canvas). Second, I will be using them in a sort of calculator, for which they obviously need to be int (or float/double).
Since the numbers are the same whether they are used as String or int, I only want to store them once (this will also reduce errors if I need to change any of them; I'll only need to change them in the one place).
My question is: should I store them as String or as int? Is it faster to write an int as a String, or to parse an int from a String? My gut tells me that parsing would take more time/resources, so I should store them as ints. Am I right?
Actually, your gut may be wrong (and I emphasise may, see my comments below on measuring). To convert a string to an integer requires a series of multiply/add operations. To convert an integer to a string requires division/modulo. It may well be that the former is faster than the latter.
But I'd like to point out that you should measure, not guess! The landscape is littered with the corpses of algorithms that relied on incorrect assumptions.
I would also like to point out that, unless your calculator is expected to do huge numbers of calculations each second (and I'm talking millions if not billions), the difference will almost certainly be irrelevant.
In the vast majority of user-interactive applications, 99% of all computer time is spent waiting for the user to do something.
My advice is to do whatever makes your life easier as a developer and worry about performance if (and only if) it becomes an issue. And, just to clarify, I would suggest that storing them in native form (not as strings) would be easiest for a calculator.
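In code, that advice might look like this minimal sketch (the class and method names are hypothetical):
class CalculatorValue {
    private final int value;              // stored once, in native form

    CalculatorValue(int value) {
        this.value = value;
    }

    String forDisplay() {
        return String.valueOf(value);     // convert only at the drawing boundary
    }

    int forCalculation() {
        return value;                     // no parsing needed
    }
}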
I did a test on arrays of 1,000,000 ints and Strings. I only timed the conversions, and the results say:
Case 1, from int to String: 1,000,000 conversions in an average of 344 ms
Case 2, from String to int: 1,000,000 conversions in an average of 140 ms
Conclusion: your gut was wrong :)!
And I join the others in saying: this is not what is going to make your application slow. Better to concentrate on making it simpler and safer.
I'd say that's not really relevant. What should matter more is type safety: since you have numbers, int (or float and double) would force you to use numbers and not store "arbitrary" data (which String would allow to some extent).
The best thing is to do a bench test. Write two loops:
one that converts 100,000 values from numeric to String
one that converts 100,000 values from String to numeric
and measure the time elapsed by reading System.currentTimeMillis() before and after each loop, as in the sketch below.
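A minimal version of that experiment (the loop count comes from the list above, the parsed value is arbitrary, and a serious benchmark would also warm up the JIT first; the printed accumulators keep the loops from being optimized away):
int n = 100_000;

long t0 = System.currentTimeMillis();
int chars = 0;
for (int i = 0; i < n; i++) {
    chars += String.valueOf(i).length();   // numeric -> String
}
long t1 = System.currentTimeMillis();
System.out.println("int -> String: " + (t1 - t0) + " ms (chars=" + chars + ")");

long t2 = System.currentTimeMillis();
int sum = 0;
for (int i = 0; i < n; i++) {
    sum += Integer.parseInt("12345");      // String -> numeric
}
long t3 = System.currentTimeMillis();
System.out.println("String -> int: " + (t3 - t2) + " ms (sum=" + sum + ")");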
But personally, if I needed to do calculations on these numbers, I would store them in their native format (int or float) and only convert them to String for display. This is more a question of design and maintainability than of execution speed. Focusing on execution speed is sometimes counterproductive: gaining a few microseconds nobody will notice is not worth sacrificing the design and the robustness (of course, some compromises may have to be made when a lot of CPU time is at stake). This reading may interest you.
A human using the calculator will not notice a performance difference but, as others have said, using Strings as your internal representation is a bad idea, since you don't get type safety.
You will most likely get into maintenance problems later on if you decide to use strings.
It's better design practice to have the view displayed to the user being derived from the underlying data, rather than the other way around - at some point you might decide to render the calculator using your own drawing functions or fixed images, and having your data as strings would be a pain here.
That being said, neither of these operations are particularly time consuming using modern hardware.
Parsing is a slow thing; printing a number is not. The internal representation as a number allows you to compute, which is probably what you intend to do with your numbers. Storing numbers as, well, numbers (ints, floats, doubles) also takes up less space than their string representations, so you'll probably want to go with storing them as ints, floats, or whatever they are.
You are writing an application for mobile devices, where memory consumption is a huge deal.
Storing an int is cheap, storing a String is expensive. Go for int.
Edit: more explanation. Storing an int between -2^31 and 2^31-1 costs 32 bits, no matter what the number is. Storing it in a String costs 16 bits per digit of its base-10 representation, plus object overhead: 1234567 is 32 bits as an int, but as the String "1234567" it needs 7 chars * 16 bits = 112 bits for the characters alone.
My question is:
If I got it right from the Java disassembly, when I use
byte a = 3, b = 5;
System.out.println(a + b);
would actually use int instead of byte. Also, all local-variable slots are 4 bytes, just like stack slots. I realize that allocating a byte array would probably be more memory-efficient, but is it true that using a single byte value is ultimately inefficient? (The same point applies to short.)
+ is an int operation; that is why
byte c = a + b; // compile error
you should use
int c = a + b;
or
byte c = (byte) (a + b);
My advice: use int so you don't have to cast every time. If you always deal with bytes, use byte; otherwise use int.
It's important to realize that in Java the existence of byte and short is not primarily about having an integer data type with a smaller range. In almost all cases where (sufficiently small) numeric values are stored, an int will be used, even if the valid range is just 0-100.
byte and short are used when some external factor restricts the data to be handled to those ranges. They simply exist to simplify interaction with such systems.
For example, file systems these days store byte streams. You could use int for all those reading/writing operations, but having a byte data type simplifies the operation and makes that distinction explicit.
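For instance, a minimal sketch of reading from a byte stream (the file name "data.bin" is just a placeholder):
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ByteStreamSketch {
    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream("data.bin")) {
            byte[] buffer = new byte[4096]; // file I/O naturally deals in bytes
            int n = in.read(buffer);        // the count comes back as an int
            System.out.println("read " + n + " bytes");
        }
    }
}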
The first rule of performance tuning should be to write simple, clear code.
In this example there is no performance difference, and even if there were, the println() takes about 10,000 times longer, making any difference notional.
How a program appears in bytecode and how it appears in native code are two different things.
Not all slots are 4 bytes in the JVM; e.g. a reference on a 64-bit machine can be 8 bytes, but it still uses one "slot".
Your machine doesn't have slots; it has registers, which are typically 32-bit or 64-bit.
In your example, byte operations are used; they are just as efficient as int operations, and since they can produce a different result (values wrap at 8 bits), they are still required.
Note: an object with byte or short fields can be smaller than one with int fields.
In this example, the JVM can calculate c once, so it doesn't need a or b.
Declare byte a = 3, b = 5; as final byte a = 3, b = 5; then byte c = a + b; compiles and stays a byte, because the right-hand side becomes a compile-time constant expression whose value fits in a byte.
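A minimal sketch of the difference:
byte a = 3, b = 5;
// byte c = a + b;     // compile error: a + b is an int expression

final byte fa = 3, fb = 5;
byte c = fa + fb;      // fine: constant expression, and 8 fits in a byte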
Define "inefficient".
Yes, the memory footprint might seem inefficient, but keep in mind that most processors today are 32-bit (or even 64-bit). Loading a byte stored in only 8 bits requires the processor to fetch the full 32-bit word and then shift and mask it so the needed 8 bits end up in the right spot.