This code-base compares multiple libraries on a simple experiment of algebraic operations. The results favour certain libraries because they are more naturally suited to compute-bound tasks, whereas the others are better at I/O-bound tasks. Threading in Python is considered broken for compute-bound work because of the GIL, and I've also refrained from using the MPI library for C. I've made a driver.py program which creates 5 sets of test cases of increasing size to measure the execution times of each version of each program. The comparison is across different libraries and languages, with the following programs:

C:
- optimal serial code
- OpenMP directive-based parallelism
- pthread library

Python:
- optimal serial code
- pyMP
- Python multiprocessing module

Note that the gcc compiler automatically vectorizes the addition of arrays x and y on SIMD-compliant hardware with appropriate data types. I'll be adding the GPU comparison as soon as I can! Currently I don't have a GPU to test on.
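To make the experiment concrete, here is a minimal sketch of the kind of compute-bound kernel being compared: element-wise addition of two arrays with the OpenMP directive-based version. The array size and the exact kernel shape are my own assumptions; the real test cases are generated by driver.py.

```c
/* Minimal sketch of the compute-bound kernel: z = x + y.
 * Compile with:  gcc -O3 -fopenmp add.c -o add
 * (-O3 lets gcc auto-vectorize the loop; -fopenmp enables the directive.) */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 10000000   /* assumed problem size for illustration */

int main(void)
{
    double *x = malloc(N * sizeof *x);
    double *y = malloc(N * sizeof *y);
    double *z = malloc(N * sizeof *z);
    if (!x || !y || !z) return EXIT_FAILURE;

    for (long i = 0; i < N; i++) {      /* initialise inputs */
        x[i] = (double)i;
        y[i] = (double)(N - i);
    }

    double t0 = omp_get_wtime();
    #pragma omp parallel for            /* directive-based parallelism */
    for (long i = 0; i < N; i++)
        z[i] = x[i] + y[i];             /* gcc also vectorizes this loop */
    double t1 = omp_get_wtime();

    printf("parallel add of %d elements took %f s\n", N, t1 - t0);
    free(x); free(y); free(z);
    return EXIT_SUCCESS;
}
```

The serial and pthread versions compute the same loop; only the way the iterations are distributed across cores changes.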
I had used Scilab for learning Digital Image Processing. It is a good open-source alternative to Matlab. Although it has a good foundation, it was clear that improvement was still possible. At a recent image-processing workshop, I learnt that the Scilab community had open-sourced the code on GitHub. Since the software is continually evolving, I knew I could put my optimization skills to good use. So I dug into the code-base, found a few optimizable programs, and decided to introduce parallelism into the matrix-norm calculation program, which is quite simple yet used by many other subprograms and external function calls:

https://github.com/opencollab/scilab/blob/master/scilab/modules/linear_algebra/src/c/norm.c

There were many potential areas for optimization (see the sketch below):
- elimination of redundant comparisons
- use of an else-if construct to remove redundant checks
- removal of floating-point equality comparisons
- embarrassingly parallel (perfectly parallel) for loops
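The real norm.c handles more cases than shown here, so the following is only a minimal sketch, assuming a simple vector p-norm, of how those ideas combine: an else-if chain so mutually exclusive cases are checked once, a tolerance instead of an exact floating-point equality test (the tolerance value is my own assumption), and an OpenMP reduction over the embarrassingly parallel loop.

```c
/* Simplified sketch (not the actual Scilab norm.c) of a parallel p-norm. */
/* Compile with:  gcc -O3 -fopenmp norm_sketch.c -lm                      */
#include <math.h>
#include <stdio.h>
#include <omp.h>

double vector_norm(const double *x, long n, double p)
{
    double result = 0.0;
    const double tol = 1e-12;           /* assumed comparison tolerance */

    if (fabs(p - 1.0) < tol) {          /* 1-norm: sum of absolute values */
        #pragma omp parallel for reduction(+:result)
        for (long i = 0; i < n; i++)
            result += fabs(x[i]);
    } else if (fabs(p - 2.0) < tol) {   /* 2-norm: Euclidean norm */
        #pragma omp parallel for reduction(+:result)
        for (long i = 0; i < n; i++)
            result += x[i] * x[i];
        result = sqrt(result);
    } else if (isinf(p)) {              /* infinity norm: maximum |x[i]| */
        #pragma omp parallel for reduction(max:result)
        for (long i = 0; i < n; i++)
            if (fabs(x[i]) > result) result = fabs(x[i]);
    } else {                            /* general p-norm */
        #pragma omp parallel for reduction(+:result)
        for (long i = 0; i < n; i++)
            result += pow(fabs(x[i]), p);
        result = pow(result, 1.0 / p);
    }
    return result;
}

int main(void)
{
    double v[] = { 3.0, -4.0, 12.0 };
    printf("2-norm = %f\n", vector_norm(v, 3, 2.0));   /* prints 13.000000 */
    return 0;
}
```

The else-if chain means at most one branch is evaluated per call, and each loop is a clean OpenMP reduction because the iterations are independent.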