
Hi there!

The magical world of parallel computing awaits you!


We all know serial computing: our favourite mainstream programming language (C, C++, Java, Python) running on a single faithful CPU with a module of RAM in a standard PC, executing one machine instruction after another. (Don't fret even if you don't! We'll discuss the concepts gradually, transitioning from serial processing to concurrency to parallelism and beyond!)

Now, suppose we have a 2.0 GHz single-core CPU. That's about 2 billion clock cycles per second, so roughly 2 billion simple instructions per second if we assume one instruction per cycle.
Compare that to human calculation time: about 2 seconds per instruction!!! (Assume a simple addition or data-movement instruction.)
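To put that gap in perspective, here's a quick back-of-the-envelope sketch in Python (the one-instruction-per-cycle assumption is mine; a real CPU can retire more than that):

```python
# Rough comparison of CPU vs. human calculation speed.
# Assumption (mine): one simple instruction per clock cycle.
cpu_instructions_per_second = 2.0e9      # 2.0 GHz ~ 2 billion instructions/s
human_seconds_per_instruction = 2.0      # ~2 s per simple addition by hand

# How long would a human need to match one second of CPU work?
human_seconds = cpu_instructions_per_second * human_seconds_per_instruction
human_years = human_seconds / (60 * 60 * 24 * 365)
print(f"One CPU-second of work ≈ {human_years:,.0f} years of human calculation")
```

That's on the order of a hundred years of non-stop human arithmetic for a single second of CPU time.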

What if we want more than that? Just visit the closest computer hardware store and buy another PC with:

  • n processors with m cores each, running at x.yz GHz
  • Superscalar architecture, Hyper-Threading, etc.
  • r GB of RAM and an s GB SSD
  • Integrated graphics plus a discrete graphics card with abcd GPGPU cores

(Wow, that is probably going to hit your bank account hard...)

Now we're ready to go! We've got all the hardware necessary to improve the performance of our code! Right?

NOPE.

It is a common misconception that parallelization is the natural next step after serial computing. There are many optimizations and algorithmic changes that can yield impressive performance gains without ever touching parallelism. So remember: parallelization is not the only route to better performance.
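To make that concrete, here is a minimal sketch (my own hypothetical example): the same task, checking a list for duplicates, done first with an O(n²) pairwise scan and then with an O(n) set-based pass. The speedup comes purely from a better algorithm, with no parallelism involved.

```python
# Hypothetical example: same task, two algorithms, no threads or processes.
import random
import time

def has_duplicates_quadratic(values):
    # Compare every pair of elements: O(n^2)
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False

def has_duplicates_linear(values):
    # A set remembers what we've already seen: O(n) on average
    seen = set()
    for v in values:
        if v in seen:
            return True
        seen.add(v)
    return False

data = random.sample(range(1_000_000), 5_000)   # 5,000 distinct values

start = time.perf_counter()
has_duplicates_quadratic(data)
print(f"O(n^2) version: {time.perf_counter() - start:.4f} s")

start = time.perf_counter()
has_duplicates_linear(data)
print(f"O(n)   version: {time.perf_counter() - start:.4f} s")
```

On a typical machine the second version finishes in a fraction of a millisecond while the first takes seconds, and that gap only grows with the input size.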

Once the serial program is as correct and well-optimized as we can make it, we can dive into parallelizing and multitasking. I'll describe many common techniques in future posts, so let's save that for later.

In this blog, I'm going to help you understand the principles behind parallelization and improving computational performance.

There is a lot to convey in this vast subject, and there's something new to learn every day, so I'll keep the blog updated with the latest bits of general advice and code snippets (in mainstream programming languages)!

Here's my (rather silly) animation of serial and parallel computing!




Static Image Source: Lemon shooter by Balderek
