Why is string concatenation faster in Delphi than in Java? - java

Problem
I wrote two programs, one in Delphi and one in Java, that concatenate strings, and I noticed much faster string concatenation in Delphi compared to Java.
Java
String str = new String();
long t0 = System.nanoTime();
for (int i = 0; i < 50000; i++)
    str += "abc";
long t1 = System.nanoTime();
System.out.println("String + String needed " + (t1 - t0) / 1000000 + "ms");
Delphi
Stopwatch.Start;
for i := 1 to 50000 do
  str := str + 'abc';
Stopwatch.Stop;
ShowMessage('Time in ms: ' + IntToStr(Stopwatch.ElapsedMilliseconds));
Question
Both measure the time in milliseconds, but the Delphi program is much faster: 1 ms vs. Java's 2 seconds. Why is string concatenation so much faster in Delphi?
Edit: Looking back at this question with more experience, I should have come to the conclusion that the main difference comes from Delphi being compiled to native code while Java is compiled to bytecode and then run in the JVM.

TLDR
There may be other factors, but certainly a big contributor is likely to be Delphi's default memory manager. It's designed to be a little wasteful of space in order to reduce how often memory is reallocated.
Considering memory manager overhead
When you have a straightforward memory manager (you might even call it 'naive'), your loop concatenating strings would actually be more like:
//pseudo-code for a naive allocator
for I := 1 to 50000 do
begin
  if CanReallocInPlace(Str) then
    //Great when True; but this might not always be possible.
    ReallocMem(Str, Length(Str) + Length(Txt))
  else
  begin
    //Unlucky case: allocate elsewhere, copy everything, free the old block.
    AllocMem(NewStr, Length(Str) + Length(Txt));
    Copy(Str, NewStr, Length(Str));
    FreeMem(Str);
    Str := NewStr;
  end;
  //Append the new text at the end of the (re)allocated block.
  Copy(Txt, Str[Length(Str) - Length(Txt)], Length(Txt));
end;
Notice that on every iteration you increase the allocation. And if you're unlucky, you very often have to (the cumulative cost is sketched right after this list):
Allocate memory in a new location
Copy the existing 'string so far'
Finally release the old string
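To get a feel for that cumulative cost, here is a small Java sketch (just bookkeeping, not a real allocator) that counts how many characters the naive scheme ends up copying in the worst case, assuming the whole 'string so far' is copied on every iteration:
public class NaiveConcatCost {
    public static void main(String[] args) {
        final int iterations = 50000;
        final int chunk = 3;   // length of "abc"
        long length = 0;       // current string length
        long copied = 0;       // total characters copied so far

        for (int i = 0; i < iterations; i++) {
            // Worst case: the whole existing string plus the new chunk gets copied.
            copied += length + chunk;
            length += chunk;
        }

        // Final string is only 150,000 characters, but ~3.75 billion were copied.
        System.out.println("length = " + length + ", copied = " + copied);
    }
}
That quadratic growth in copying is what the memory manager tricks described next try to avoid.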
Delphi (and FastMM)
However, Delphi has switched from the default memory manager used in its early days to a formerly third-party one (FastMM) that's designed to run faster, primarily by:
(1) Using a sub-allocator i.e. getting memory from the OS a 'large' page at a time.
Then performing allocations from the page until it runs out.
And only then getting another page from the OS.
(2) Aggressively allocating more memory than requested (anticipating small growth).
Then it becomes more likely that a slightly larger request can be reallocated in-place.
These techniques can, though it's not guaranteed, increase performance; a toy illustration of the over-allocation idea follows below.
But it definitely does waste space. (And with unlucky fragmentation, the wastage can be quite severe.)
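FastMM's actual growth policy is its own, but the pay-off of point (2) can be illustrated in Java with a toy comparison: a buffer grown to exactly the requested size must be reallocated on every append, while one that over-allocates by doubling needs only a handful of reallocations for the same workload.
public class GrowthPolicyDemo {
    public static void main(String[] args) {
        final int appends = 50000;
        final int chunk = 3;

        // Exact-fit policy: capacity always equals the current length,
        // so every single append forces a reallocation.
        long exactFitReallocs = appends;

        // Over-allocating policy: double the capacity whenever it runs out.
        long doublingReallocs = 0;
        long length = 0;
        long capacity = 16;
        for (int i = 0; i < appends; i++) {
            length += chunk;
            while (length > capacity) {
                capacity *= 2;
                doublingReallocs++;
            }
        }

        System.out.println("exact-fit reallocations: " + exactFitReallocs); // 50000
        System.out.println("doubling reallocations:  " + doublingReallocs); // 14
    }
}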
Conclusion
Certainly the simple app you wrote to demonstrate the performance benefits greatly from the new memory manager. You run through a loop that incrementally reallocates the string on every iteration, hopefully with as many in-place reallocations as possible.
You could attempt to circumvent some of FastMM's performance improvements by forcing additional allocations in the loop. (Though sub-allocation of pages would still be in effect.)
So the simplest way to demonstrate the point would be to try an older Delphi compiler (such as D5).
FWIW: String Builders
You said you "don't want to use the String Builder". However, I'd like to point out that a string builder obtains similar benefits. Specifically (if implemented as intended): a string builder wouldn't need to reallocate the substrings all the time. When it comes time to finally build the string, the correct amount of memory can be allocated in a single step, and all portions of the 'built string' copied to where they belong.
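For comparison, this is roughly what the question's Java loop looks like with a string builder; the only change is that the characters are appended into one growable buffer and the final String is produced once at the end:
StringBuilder sb = new StringBuilder();
long t0 = System.nanoTime();
for (int i = 0; i < 50000; i++)
    sb.append("abc");
String str = sb.toString();   // one final allocation of the right size
long t1 = System.nanoTime();
System.out.println("StringBuilder needed " + (t1 - t0) / 1000000 + "ms");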

In Java (and C#) strings are immutable objects. That means that if you have:
String s = "String 1";
then the compiler allocates memory for this string. If you then write
s = s + " String 2"
you get "String 1 String 2" as expected, but because strings are immutable, a new string is allocated with exactly the size needed to hold "String 1 String 2", and the content of both strings is copied to the new location. The original strings are then collected by the garbage collector. In Delphi a string is "copy-on-write" and reference counted, which is much faster.
C# and Java have the StringBuilder class, which behaves a lot like Delphi strings and is considerably faster when modifying and manipulating strings.
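A quick Java illustration of the difference: concatenation yields a brand-new String object, while a StringBuilder keeps modifying the same object.
String s = "String 1";
String before = s;
s = s + " String 2";                   // allocates a new String and copies both parts
System.out.println(s == before);       // false - s now refers to a different object

StringBuilder sb = new StringBuilder("String 1");
StringBuilder same = sb;
sb.append(" String 2");                // grows/modifies the existing buffer in place
System.out.println(sb == same);        // true - still the same object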

Related

Is LinkedList.toString().replace() O(2*k)?

I know that converting a suitable object (e.g., a linked list) to a string using the toString() method is an O(n) operation, where n is the length of the linked list. However, if you wanted to then replace something in that string using the replace() method, is that also an O(k) operation, where k is the length of the string?
For example, for the line String str = path.toString().replace("[", "").replace("]", "").replace(",", "");, does this run through the length of the linked list once, and then the length of the string an additional 3 times? If so, is there a more efficient way to do what that line of code does?
Yes, it would. replace has no idea that [ and ] are only found at the start and end. In fact, it's worse - you get another loop for copying the string over (the string has an underlying array and that needs to be cloned in its entirety to lop a character out of it).
If your intent is to replace every [ in the string, then no, there is no faster way. However, if your actual intent is simply not to have the opening and closing brace, then write your own loop to toString the contents. Something like:
LinkedList<Foo> foos = ...;
StringBuilder out = new StringBuilder();
for (Foo f : foos) out.append(out.length() == 0 ? "" : ", ").append(f);
return out.toString();
Or, if the elements are themselves CharSequences (for example a List<String>):
String.join(", ", foos);
Or even:
foos.stream().map(Object::toString).collect(Collectors.joining(", "));
None of this is the same thing as .replace("[", "") - after all, if a [ symbol is part of the toString() of any Foo object, it would be stripped out as well by .replace("[", "") - though you probably didn't want that to happen.
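To make that concrete, here is a small sketch (the Foo class and its toString() are made up for illustration) showing how the replace() chain also mangles brackets that belong to the elements, while joining the elements leaves them alone:
import java.util.LinkedList;
import java.util.stream.Collectors;

public class BracketStripDemo {
    static class Foo {
        final String name;
        Foo(String name) { this.name = name; }
        @Override public String toString() { return "Foo[" + name + "]"; }
    }

    public static void main(String[] args) {
        LinkedList<Foo> path = new LinkedList<>();
        path.add(new Foo("a"));
        path.add(new Foo("b"));

        // The replace() chain strips the elements' own brackets as well:
        String viaReplace = path.toString().replace("[", "").replace("]", "").replace(",", "");
        System.out.println(viaReplace);  // Fooa Foob

        // Joining only the elements leaves each toString() untouched:
        String viaJoin = path.stream().map(Object::toString).collect(Collectors.joining(", "));
        System.out.println(viaJoin);     // Foo[a], Foo[b]
    }
}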
Note that the way modern CPUs work, unless that list has over a few thousand elements in it, looping over it 4 times is essentially free and takes no measurable time. The concept of O(n) 'kicks in' only after a certain number of iterations, and on modern hardware it tends to be a lot of iterations before it matters. Often other concerns are much more important.
As a simple example: a linked list, in general? Horrible performance relative to something like ArrayList, even in cases where, O(k)-wise, it should be faster. That's due to the way linked lists create extra objects and how these tend to be non-contiguous (not near each other in memory). Modern CPUs can't read main memory directly at all; they can only ask the memory controller to take one of the on-die cache lines and replace it with the contents of another stretch of memory, which takes 500 to 1,000 cycles. The CPU will ask the memory controller to do that and then go to sleep for those cycles. You can see how reducing the number of times it does this can have a rather marked effect on performance, and yet the O(k) business doesn't and cannot take it into account.
Do not worry about performance unless you have a real-life scenario where the program appears to run slower than you think it should. Then use a profiler to figure out which 1% of the code is eating 99% of the resources (because it's virtually always a 1% 'hot path' that is responsible) and optimize just that 1%. It's pretty much impossible to predict what that 1% is going to be, so don't bother trying to do so while writing code; it just leads you to writing harder-to-maintain, less flexible code - which, ironically enough, tends to lead to situations where adjusting the hot path is harder. Worrying about performance up front, in essence, slows down the code. That is why it's very important not to worry about it, and to worry instead about code that is easy to read and easy to modify.

Difference between StringBuilder append(CONST) and append("new string")

Can I get a concrete explanation of the memory and runtime overhead of the two snippets below?
String CONST = "string constant";
StringBuilder sb1 = new StringBuilder();
sb1.append(CONST);
StringBuilder sb2 = new StringBuilder();
sb2.append("string constant");
Does the second create a String object and add it to the string pool?
Is there any scenario (consider many string appends as well) where one can be justified as better than the other?
There is no difference in memory or runtime overhead between these two versions.
Use whichever seems more readable or maintainable. If you're reusing the same string constant in many places, the constant is long, or might change, then pulling out a constant might be appropriate.
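If you want to convince yourself of the "no difference" part: string literals are interned, so a constant initialized from the literal and the inline literal itself refer to the same String object, and both append calls receive the same reference. A minimal sketch (the class name and printed checks are just for illustration):
public class InternDemo {
    static final String CONST = "string constant";

    public static void main(String[] args) {
        StringBuilder sb1 = new StringBuilder();
        sb1.append(CONST);

        StringBuilder sb2 = new StringBuilder();
        sb2.append("string constant");

        // Both the constant and the inline literal resolve to the same
        // interned String instance, so neither version allocates extra Strings.
        System.out.println(CONST == "string constant");              // true
        System.out.println(sb1.toString().equals(sb2.toString()));   // true
    }
}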
In reference to the runtime overhead, running a simulation of both methods yielded almost identical results.
My tests were done with 10,000,000,000 iterations and the runtime was:
Method 1 - 95109ms (~9.5ns average)
Method 2 - 95002ms (~9.5ns average)
So definitely no noticeable difference in performance.
Therefore, as @LouisWasserman said in their answer, just use the one that keeps your code clean and legible.

Does the Java String in the following code leak memory? How to fix it

Here is the function:
String gen() {
    String raw_content = "";
    String line;
    for (int i = 0; i < 10000; i++) {
        line = randString();
        raw_content += line + "\n";
    }
    return raw_content;
}
When I call gen() 100 times in main(), my program gets stuck.
I suspect this is related to a memory leak caused by Java Strings. Will the no-longer-used memory be freed by the JVM automatically? How can I fix this?
Thanks!
To make a long story short, in Java (and other JVM languages) you don't have to manage memory allocation at all. You really shouldn't be worrying about it: at some point after all references to an object have been lost, it will be freed by the garbage collector. See: Garbage Collection in Java.
Your problem has less to do with memory and more with the fact that your function is just really time-intensive (as Hot Licks said in a comment). Strings in Java are immutable, so when you write raw_content += line + "\n"; you're really creating a new string containing raw_content + line + "\n" and setting raw_content to refer to it. If randString() returns long results, this becomes an egregiously long string. If you really want to perform this function, StringBuilder is the way to go, to at least reduce it from O(N^2) to O(N). If you're just looking for a memory exercise, you don't have to change anything - just read the article above.
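For reference, a StringBuilder version of gen() could look like the sketch below (randString() is assumed to be the poster's existing helper):
String gen() {
    // All appends go into one growable buffer: O(n) work overall instead of O(n^2).
    StringBuilder rawContent = new StringBuilder();
    for (int i = 0; i < 10000; i++) {
        rawContent.append(randString()).append("\n");
    }
    return rawContent.toString();
}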

Performance comparison of immutable string concatenation between Java and Python

UPDATE: thanks a lot to Gabe and Glenn for the detailed explanations. The test was not written as a language comparison benchmark, just for my study of VM optimization technologies.
I did a simple test to understand the performance of string concatenation between Java and Python.
The test targets the default immutable String object/type in both languages, so I don't use StringBuilder/StringBuffer in the Java test.
The test simply appends strings 100k times. Java consumes ~32 seconds to finish, while Python only uses ~13 seconds for Unicode strings and 0.042 seconds for non-Unicode strings.
I'm a bit surprised by the results. I thought Java should be faster than Python. What optimization technology does Python leverage to achieve better performance? Or is the String object designed too heavily in Java?
OS: Ubuntu 10.04 x64
JDK: Sun 1.6.0_21
Python: 2.6.5
The Java test used -Xms1024m to minimize GC activity.
Java code:
public class StringConcateTest {
    public static void test(int n) {
        long start = System.currentTimeMillis();
        String a = "";
        for (int i = 0; i < n; i++) {
            a = a.concat(String.valueOf(i));
        }
        long end = System.currentTimeMillis();
        System.out.println(a.length() + ", time:" + (end - start));
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            test(1000 * 100);
        }
    }
}
Python code:
import time
def f(n):
start = time.time()
a = u'' #remove u to use non Unicode string
for i in xrange(n):
a = a + str(i)
print len(a), 'time', (time.time() - start)*1000.0
for j in xrange(10):
f(1000 * 100)
@Gabe's answer is correct, but needs to be shown clearly rather than hypothesized.
CPython (and probably only CPython) does an in-place string append when it can. There are limitations on when it can do this.
First, it can't do it for interned strings. That's why you'll never see this if you test with a = "testing"; a = a + "testing", because assigning a string literal results in an interned string. You have to create the string dynamically, as this code does with str(12345). (This isn't much of a limitation; once you do an append this way once, the result is an uninterned string, so if you append string literals in a loop this will only happen the first time.)
Second, Python 2.x only does this for str, not unicode. Python 3.x does do it for Unicode strings. This is strange: it's a major performance difference--a difference in complexity. It discourages using Unicode strings in 2.x, when the language should be encouraging them to help the transition to 3.x.
And finally, there can be no other references to the string.
>>> a = str(12345)
>>> id(a)
3082418720
>>> a += str(67890)
>>> id(a)
3082418720
This explains why the non-Unicode version is so much faster in your test than the Unicode version.
The actual code for this is string_concatenate in Python/ceval.c, and works for both s1 = s1 + s2 and s1 += s2. The function _PyString_Resize in Objects/stringobject.c also says explicitly: The following function breaks the notion that strings are immutable. See also http://bugs.python.org/issue980695.
My guess is that Python just does a realloc on the string rather than create a new one with a copy of the old one. Since realloc takes no time when there is enough empty space following the allocation, it is very fast.
So how come Python can call realloc and Java can't? Python's garbage collector uses reference counting so it can tell that nobody else is using the string and it won't matter if the string changes. Java's garbage collector doesn't maintain reference counts so it can't tell whether any other reference to the string is extant, meaning it has no choice but to create a whole new copy of the string on every concatenation.
EDIT: Although I don't know for sure that Python actually calls realloc on a concat, here's the comment for _PyString_Resize in stringobject.c indicating why it can:
The following function breaks the notion that strings are immutable:
it changes the size of a string. We get away with this only if there
is only one module referencing the object. You can also think of it
as creating a new string object and destroying the old one, only
more efficiently. In any case, don't use this if the string may
already be known to some other part of the code...
I don't think your test means a lot, since Java and Python handle strings differently (I am no expert in Python, but I do know my way around Java). StringBuilders/Buffers exist for a reason in Java. The language designers didn't make plain String concatenation any more efficient precisely because there are other tools than the String object for this kind of manipulation, and they expect you to use them when you code.
When you do things the way they are meant to be done in Java, you will be surprised how fast the platform is... But I have to admit that I have been pretty much impressed by the performance of some Python applications I have tried recently.
I do not know the answer for sure, but here are some thoughts. First, Java internally stores strings as char[] arrays containing the UTF-16 encoding of the string. This means that every character in the string takes at least two bytes. So just in terms of raw storage, Java has to copy around twice as much data as Python byte strings. Python Unicode strings are therefore the better test because they are similarly capable. Perhaps Python stores Unicode strings as UTF-8 encoded bytes; in that case, if all you are storing are ASCII characters, then again Java would be using twice as much space and therefore doing twice as much copying. To get a better comparison you should concatenate strings containing more interesting characters, ones that require two or more bytes in their UTF-8 encoding.
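To put rough numbers on the encoding point, the snippet below (plain Java, written with today's StandardCharsets API purely for illustration) compares the byte counts of an ASCII-only string and one containing non-Latin characters in UTF-16 versus UTF-8:
import java.nio.charset.StandardCharsets;

public class EncodingSizeDemo {
    public static void main(String[] args) {
        String ascii = "hello";                   // ASCII only
        String mixed = "h\u00e9llo \u4e16\u754c"; // accented Latin plus CJK characters

        // UTF-16 needs at least 2 bytes per character; UTF-8 needs 1 byte
        // for ASCII and 2 or more for everything else.
        System.out.println("ascii UTF-16: " + ascii.getBytes(StandardCharsets.UTF_16LE).length); // 10
        System.out.println("ascii UTF-8 : " + ascii.getBytes(StandardCharsets.UTF_8).length);    // 5
        System.out.println("mixed UTF-16: " + mixed.getBytes(StandardCharsets.UTF_16LE).length);
        System.out.println("mixed UTF-8 : " + mixed.getBytes(StandardCharsets.UTF_8).length);
    }
}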
I ran the Java code with a StringBuilder in place of a String and saw an average finish time of 10ms (high 34ms, low 5ms).
As for the Python code, using "Method 6" here (found to be the fastest method), I was able to achieve an average of 84ms (high 91ms, low 81ms) using Unicode strings. Using non-Unicode strings reduced these numbers by ~25ms.
As such, it can be said based on these highly unscientific tests that using the fastest available method for string concatenation, Java is roughly an order of magnitude faster than Python.
But I still <3 Python ;)

Compile time optimization of String concatenation

I'm curious how far the optimization of the following code snippet will go.
As far as I know, whenever the capacity of a StringBuilder is exceeded, it costs some CPU work because its content has to be reallocated. However, I would guess that Java compiler optimization could precalculate the required capacity instead of doing multiple reallocations.
The question is: will the following snippet of code be optimized so?
public static String getGetRequestURL(String baseURL, Map<String, String> parameters) {
    StringBuilder stringBuilder = new StringBuilder();
    parameters.forEach(
        (key, value) -> stringBuilder.append(key).append("=").append(value).append("&"));
    // drop the trailing '&' before prepending the base URL
    return baseURL + "?" + stringBuilder.deleteCharAt(stringBuilder.length() - 1);
}
In Java, most optimization is performed by the runtime's just in time compiler, so generally javac optimizations don't matter much.
As a consequence, a Java compiler is not required to optimize string concatenation, though all tend to do so as long as no loops are involved. You can check the extent of such compile-time optimizations by using javap (the Java class file disassembler included with the JDK).
So, could javac conceivably optimize this? To determine the length of the string builder, it would have to iterate the map twice. Since Java does not have const references, and the compiler has no special treatment for Map, the compiler cannot determine that such a rewrite would preserve the meaning of the code. And even if it could, it's not at all clear that the gains would be worth the cost of iterating twice. After all, modern processors can copy 4 to 8 characters in a single CPU instruction. Since memory access is sequential, there won't be any cache misses while growing the buffer. On the other hand, iterating the map a second time would likely cause additional cache misses, because the Map entries (and the strings they reference) can be scattered all over main memory.
In any case, I would not worry about the efficiency of this code. Even if your URL is 1000 characters long, resizing the buffer will take about 0.1 microseconds. Unless you have evidence that this really is a performance hotspot, your time is probably better spent elsewhere.
First of all:
You can find out what (javac) compile time optimizations occur by looking at the bytecodes using the javap tool.
You can find out what JIT compiler optimizations are performed by getting the JVM to dump the native code.
So, if you need to know how your code has been optimized (on a particular platform) for practical reasons, then you should check.
In reality, the optimizations performed by javac are pretty simple-minded and do not go to the extent of precalculating buffer sizes. I haven't checked, but I expect the same is true of the JIT compiler; I doubt that it makes any attempt to preallocate a StringBuilder with an "optimal" size.
Why?
The reasons include the following:
An inaccurate precalculation (on average) doesn't help, and may be worse than doing nothing.
An accurate precalculation typically involves measuring the (dynamic) lengths of the actual strings to be joined.
Implementing the optimization logic would be complicated, and would make the optimizers slower and more effort to maintain.
At runtime, measuring the strings introduces overhead. Whether you would come out ahead often enough to make a difference is difficult to determine. (People don't like optimizations that make their code run slower ...)
There are better (more efficient) ways to do large-scale text assembly than String concatenation. The programmer (who has more knowledge of the problem domain and application logic) can optimize this better than a compiler, if it matters enough to spend the developer effort on it; a hand-rolled presizing sketch follows after this list.
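As a sketch of that last point, a hand-rolled version of the question's method could measure the parameters first and presize the StringBuilder so it never has to grow. Note that, as the earlier answer points out, the second pass over the Map is not free, so this is not automatically a win:
import java.util.Map;

public class PresizedUrlBuilder {
    public static String getGetRequestURL(String baseURL, Map<String, String> parameters) {
        // First pass: measure, so the builder is allocated once at the right size.
        int size = baseURL.length() + 1;                                     // the '?'
        for (Map.Entry<String, String> e : parameters.entrySet()) {
            size += e.getKey().length() + 1 + e.getValue().length() + 1;     // key=value&
        }

        // Second pass: build into the presized buffer.
        StringBuilder sb = new StringBuilder(size);
        sb.append(baseURL).append('?');
        for (Map.Entry<String, String> e : parameters.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append('&');
        }
        if (!parameters.isEmpty()) {
            sb.setLength(sb.length() - 1);                                   // drop the trailing '&'
        }
        return sb.toString();
    }
}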
One optimization is to put the baseURL and the '?' separator into the StringBuilder instead of concatenating Strings at the end, such as:
public static String getGetRequestURL(String baseURL, Map<String, String> parameters) {
    StringBuilder stringBuilder = new StringBuilder(baseURL);
    stringBuilder.append("?");
    parameters.forEach((key, value) -> stringBuilder.append(key).append("=").append(value).append("&"));
    stringBuilder.setLength(stringBuilder.length() - 1); // drop the trailing '&'
    return stringBuilder.toString();
}
If you want a little more speed, and since javac or the JIT will not optimize based on the potential string size, you can track that yourself without incurring much overhead by adding a max-size tracker, such as this:
protected static int URL_SIZE = 256;

public static String getGetRequestURL(String baseURL, Map<String, String> parameters) {
    StringBuilder stringBuilder = new StringBuilder(URL_SIZE);
    stringBuilder.append(baseURL);
    stringBuilder.append("?");
    parameters.forEach((key, value) -> stringBuilder.append(key).append("=").append(value).append("&"));
    int size = stringBuilder.length();
    if (size > URL_SIZE) {
        URL_SIZE = size;
    }
    stringBuilder.setLength(size - 1); // drop the trailing '&'
    return stringBuilder.toString();
}
That said, with some testing of 1 million calls, I found that the different versions performed as follows (in milliseconds):
Your version: total = 1151, average = 230
Above version 1: total = 936, average = 187
Above version 2: total = 839, average = 167
