I recently wrote some software that performs a lot of calculations.
The calculations are organized in levels; within each level, the calculations are independent of one another. In other words, I could logically run them in parallel, since none of them relies on the result of another.
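To illustrate the structure, here is a rough sketch. It uses C++17 parallel algorithms on the CPU purely to show the per-level independence; the task type and the calculation itself are placeholders, not my real code:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

struct Task { double input; double result; };   // placeholder work item

void run_levels(std::vector<std::vector<Task>>& levels) {
    // Levels must run one after another...
    for (auto& level : levels) {
        // ...but the tasks inside a level are independent, so they may run in parallel.
        std::for_each(std::execution::par, level.begin(), level.end(),
                      [](Task& t) { t.result = t.input * t.input; });  // placeholder calculation
    }
}
```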
My question is: Is there a general C library for performing parallel math (matrix) operations on the GPU that works on all platforms (Windows/Unix/etc.)?
When I say general, I mean a library that would work with any modern GPU.
Plain C has no built-in way to offload calculations onto a GPU. To run your calculations in parallel on a GPU, for something like, oh I don't know, Bitcoin mining, you could consider the C++ AMP library. I'm not too sure about the details of how the code gets compiled to run on the GPU cores, but I do know that it will perform your calculations in parallel.
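For example, here is a minimal C++ AMP sketch of an element-wise addition on the GPU (AMP ships with Visual C++, so this builds with Visual Studio on Windows; the array size and data are just placeholders):

```cpp
#include <amp.h>
#include <iostream>
#include <vector>

int main() {
    const int n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // Wrap the host data in array_views so AMP can move it to the accelerator.
    concurrency::array_view<const float, 1> av(n, a);
    concurrency::array_view<const float, 1> bv(n, b);
    concurrency::array_view<float, 1> cv(n, c);
    cv.discard_data();  // c holds no input, so skip copying it to the GPU

    // The restrict(amp) lambda is compiled for and runs on the GPU,
    // once per element of cv.extent.
    concurrency::parallel_for_each(cv.extent,
        [=](concurrency::index<1> i) restrict(amp) {
            cv[i] = av[i] + bv[i];
        });

    cv.synchronize();               // copy the results back to the host vector c
    std::cout << c[0] << '\n';      // prints 3
    return 0;
}
```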