I was wondering: can a simple single-threaded program run faster on a computer with many cores, or on a cluster of Linux servers?
Recently I ran my algorithm, which has to process billions of IP packets, on my PC (Core i7 with 16 GB RAM), and it took 1881 minutes to finish. I then thought it would be better to run the algorithm on a cluster of Linux servers, each node with 10 processors and 48 GB RAM, to get the results quicker. However, there was no big difference between the two experiments.
Can someone comment on what I am missing?
Unless your algorithm actually makes use of those extra cores and extra memory, there shouldn't be much difference. Parallel programming is an art of its own, and a "regular", single-threaded program doesn't just turn into a parallel one by itself.
If you have a single thread of execution, more cores, CPUs, or machines won't help. Only a faster CPU would speed things up, and then only if your process is CPU-bound rather than IO-bound.
First you should check where your processing time is spent: in the CPU, or waiting for IO. If you see significant CPU usage, you can try to parallelize your work, i.e. split the data into chunks and have different threads or machines process them in parallel.
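As a rough sketch of the chunking idea (assuming the per-packet work is independent; the `process` method here is a hypothetical stand-in for the real analysis):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkedProcessing {
    // Hypothetical per-packet work; stands in for whatever the real analysis does.
    static long process(int packet) {
        return packet * 2L;
    }

    static long processAll(int[] packets, int nThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        int chunk = (packets.length + nThreads - 1) / nThreads;
        List<Future<Long>> parts = new ArrayList<>();
        for (int start = 0; start < packets.length; start += chunk) {
            final int from = start, to = Math.min(start + chunk, packets.length);
            // Each task handles one contiguous chunk, so there is no shared mutable state.
            parts.add(pool.submit(() -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += process(packets[i]);
                return sum;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // combine per-chunk results
        pool.shutdown();
        return total;
    }
}
```

This only pays off when the work is CPU-bound; if the bottleneck is reading the packets from disk, more threads will mostly just wait on IO.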
I'm new to the ForkJoinPool framework. I don't quite get how it is ensured that each thread in a ForkJoinPool runs on a separate processor/core to provide real parallelism, considering that many other threads outside the ForkJoinPool instance are executing concurrently at runtime. I have a hunch that it has something to do with thread affinity. Can anyone share some ideas/links?
P.S. Of course, I mean the case where the number of threads is no greater than Runtime.getRuntime().availableProcessors().
You asked:
how is that achieved that each thread in ForkJoinPool is certainly run by separate processor/core to provide real parallelism
There is no way within plain Java code to make certain cores run certain threads at certain times. A Java programmer has no direct control over parallelism.
When a Java thread is scheduled for execution on a CPU core, and for how long that execution runs, is up to the host OS thread technology being leveraged by your Java implementation.
As for processor affinity, also known as CPU pinning, see How to set a Java thread's cpu core affinity?. Beware the notice in the answer by rdalmeida:
… thread affinity is pointless unless you have previously isolated the core from kernel/user threads and hardware interrupts
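A small sketch of what plain Java *does* let you control: the parallelism level of a pool, but not which core runs which worker (thread-to-core placement remains the OS scheduler's decision):

```java
import java.util.concurrent.ForkJoinPool;

public class PoolSizing {
    public static void main(String[] args) {
        // The common pool typically defaults to availableProcessors() - 1 workers.
        // This caps concurrency; it does not pin any thread to any core.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("cores = " + cores);
        System.out.println("common pool parallelism = "
                + ForkJoinPool.commonPool().getParallelism());

        // You can choose a parallelism level explicitly, but which core runs
        // which worker thread is still up to the OS scheduler.
        ForkJoinPool pool = new ForkJoinPool(4);
        System.out.println("custom pool parallelism = " + pool.getParallelism());
        pool.shutdown();
    }
}
```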
If a distributed computing framework spins up nodes for running Java/ Scala operations then it has to include the JVM in every container. E.g. every Map and Reduce step spawns its own JVM.
How does the efficiency of this instantiation compare to spinning up containers for languages like Python? Is it a question of milliseconds, few seconds, 30 seconds? Does this cost add up in frameworks like Kubernetes where you need to spin up many containers?
I've heard that, much like Alpine Linux is just a few MB, there are stripped-down JVMs, but still, there must be a cost. Yet Scala is a first-class citizen in Spark, and MapReduce is written in Java.
Linux container technology uses layered filesystems, so bigger container images don't generally add much runtime overhead, though you do have to download the image the first time it is used on a node, which can add up on truly massive clusters. In general this is not something to worry about, aside from the well-known issue that most JVMs are a bit slow to start up. Spark, however, does not spin up a new container for every operation as you describe. It creates a set of executor containers (pods) which are used for the whole Spark execution run.
I'm currently rewriting a Ruby on Rails web app in Spring Boot. A big part of the move is for performance.
Whilst developing the app, when I hit run in IntelliJ, the first response time is typically around 1s, which I assume is JVM startup. After a refresh it drops to ~300ms, then 150ms for 4-5 further requests; after that it settles at 50-75ms for the most part. Occasionally, later on, I'll get a 150ms response again.
As a JVM novice I'm wondering what factors are at play in these varying response times, and which would be closer to the standard "hot" response times I could expect in production. I realise I'm unlikely to get an accurate picture of production performance on my local dev machine, but I would like to understand the variance above so I can at least gauge a little better what effect my incremental changes are having.
As a JVM novice I'm wondering what factors are at play here in the varying response times?
startup:
jit warmup
lazy initialization as part of your application
GC needing to settle on some heap size
steady state:
GC pauses
application behavior, e.g. cache entries expiring every now and then
varying load
JIT deoptimizations/recompilations due to some uncommon paths being taken
thermal CPU throttling, especially on but not exclusive to laptops
For server applications you should ignore the ramp-up behavior and focus on steady state. Guessing what the issue might be will not help; measurements are king.
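To see the JIT warm-up effect directly, a crude sketch like the following can help (for serious measurements, a harness such as JMH is the right tool, since it handles warm-up, dead-code elimination, and statistics for you):

```java
public class WarmupDemo {
    // A small, deterministic workload to time repeatedly.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i % 7;
        return sum;
    }

    public static void main(String[] args) {
        // Early rounds run interpreted or partially compiled; later rounds
        // usually run JIT-compiled code and are noticeably faster.
        for (int round = 0; round < 10; round++) {
            long t0 = System.nanoTime();
            work();
            long us = (System.nanoTime() - t0) / 1_000;
            System.out.println("round " + round + ": " + us + " us");
        }
    }
}
```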
We know that a processor can process only one thread at a time. So when we say multitasking on a single processor, it means the processor switches between threads/processes and gives the end user the feeling of multitasking. Related to this, I have some questions to better understand the concept of multi-threading:
1. I think it will take the same total amount of time to execute all the processes whether they run in a switching manner or one by one; am I right?
2. If yes, is there any benefit to dividing a task into multiple threads?
3. What does it mean when we say quad-core or octa-core processor? Does it mean a system has 4 or 8 processors respectively, and can process 4 or 8 threads at a time?
Yes, it will take the same amount of time. In fact, it will take a bit longer, because the switching takes time.
Dividing a task into multiple threads lets you use multiple processors.
Also, the CPU isn't always the slowest thing. Imagine you have a chat server - you could use one thread for each client, and each thread would spend most of its time doing nothing (waiting for the user to type a message).
A quad-core processor has 4 cores. An octa-core processor has 8 cores. Cores are pretty much separate processors, but on the same chip instead of separate chips.
1 - Imagine your task is to read several files. Running several reading threads may significantly improve performance.
2 - Imagine your task is to calculate the sum of a large array of numbers. If you run 2 threads on the 2 halves of the array in parallel, the speed can nearly double (assuming two cores are free and the work is CPU-bound).
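The two-thread array sum can be sketched like this (a minimal illustration; real code would size the thread count from the available cores):

```java
public class ParallelSum {
    public static long sum(long[] a) throws InterruptedException {
        final int mid = a.length / 2;
        long[] partial = new long[2];
        // Each thread sums its own half of the array; the halves are disjoint,
        // so there is no shared mutable state and no locking needed.
        Thread t1 = new Thread(() -> { for (int i = 0; i < mid; i++) partial[0] += a[i]; });
        Thread t2 = new Thread(() -> { for (int i = mid; i < a.length; i++) partial[1] += a[i]; });
        t1.start();
        t2.start();
        t1.join(); // wait for both halves before combining
        t2.join();
        return partial[0] + partial[1];
    }
}
```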
I've been writing some console applications in C++ for working with audio for a little while now, and I'm interested in running them on a website. Most of my programs are quite resource-hungry, however: some have execution times of up to 5-10 min, read and write several gigabytes to and from disk, and require several gigabytes of memory. I've done a few simple PHP/MySQL pages before, but nothing like this, so before I get my hopes up and dive into learning how to get an application running on a website, I figure I should ask a few questions:
Is it even feasible to run a program like this on the web? How would performance on a server compare to my PC?
Do web hosts typically allow a single user to use this kind of memory?
I realize C++ isn't usually the first choice for web programming, but since performance will be critical, would it be better than Java?
I know nothing about this, so I'm just trying to get my expectations straight.
This is my opinion:
1 - The user of your web application is probably not going to wait 5-10 min for a response. You can do the hard work in a separate process, and have your web app show the results to the user later in some way.
2 - Yes, some do, but it costs money. Look at Amazon EC2 and DigitalOcean (cheaper).
3 - The programming language in this case (C++ or Java) is not that important. Focus more on your problem, architecture, deferred tasks, batch processing, etc. That will really make a difference.
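A minimal sketch of point 1, moving the heavy work off the request path: one request submits the job and returns an id immediately, and a later request polls for the result. (All names here are illustrative; a real deployment would likely use a proper job queue or message broker, and persist results.)

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JobQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // The request handler submits the heavy task and returns a job id immediately.
    public String submit(Callable<String> heavyTask) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, worker.submit(heavyTask));
        return id;
    }

    // A later request polls for the result without blocking on the whole job.
    public Optional<String> poll(String id) {
        Future<String> f = jobs.get(id);
        if (f == null || !f.isDone()) return Optional.empty();
        try {
            return Optional.of(f.get());
        } catch (Exception e) {
            return Optional.empty();
        }
    }
}
```

The web tier stays responsive because `submit` returns at once; the 5-10 minute computation runs in the background worker.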
No, the programming language doesn't much matter. Java used to be slower than C++, I believe, but that gap has mostly closed as compilers and JITs have improved. If you want your applications to run better, design them to be efficient. Looking into time complexity may help, if you haven't already done so: the better your time complexity, the faster your program.