When do we need two-dimensional threads in CUDA?

I was wondering: when should we use x and y thread coordinates in CUDA? I've seen code that uses x and y coordinates when it contains nested loops. Are there any general rules for that? Thanks.
Asked by dibid · 2k views

1 Answer
The answer to the question in the title is simple: Never. You never really need the 2D coordinates.
However, there are several reasons why they are actually present. One of the main reasons is that they simplify the modelling of certain problems, particularly problems that GPUs are "good at" or that they have traditionally been used for. I'm thinking of things like image processing or matrix operations here. Writing an image processing or matrix multiplication CUDA kernel is far more intuitive when you can simply write something like:
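```
// Sketch: in a kernel launched with 2D blocks on a 2D grid, each thread
// computes its own pixel coordinates from the built-in index variables.
int pixelX = blockIdx.x * blockDim.x + threadIdx.x;
int pixelY = blockIdx.y * blockDim.y + threadIdx.y;
```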
and from then on only deal with the simple pixel coordinates. How much this simplifies the index hassle becomes even more obvious when shared memory is involved, for example during a matrix multiplication, where you want to slice a set of rows and columns out of a larger matrix and copy it into shared memory. If you only had 1D indices and had to fiddle around with offsets and strides, this would be error-prone.
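To make that concrete, here is a minimal sketch of such a tile load (the tile size, the kernel name, and the assumption of a square width-by-width matrix A are illustrative, not part of the original answer):

```
#define TILE 16

// Illustrative kernel fragment: load one TILE x TILE sub-block of a
// square width-by-width matrix A into shared memory using 2D indices.
__global__ void loadTileExample(const float* A, int width)
{
    __shared__ float tileA[TILE][TILE];

    // Global row/column handled by this thread: the 2D thread index
    // maps directly onto the 2D layout of the tile.
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;

    if (row < width && col < width)
        tileA[threadIdx.y][threadIdx.x] = A[row * width + col];
    __syncthreads();

    // ... the actual multiply-accumulate over the tile would follow here ...
}
```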
The fact that CUDA supports not only 2D but also 3D kernels probably stems from 3D textures being frequently used for things like volume rendering, which is another task that GPUs accelerate very well (web searches for keywords like "volume ray casting" will lead you to some nice demos).
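As an illustration of how such a 3D configuration is set up (the kernel, the volume layout, and all names here are assumptions made for the sketch, not part of the original answer):

```
// Illustrative 3D kernel: each thread handles one voxel of an
// nx x ny x nz volume stored as a flat array.
__global__ void scaleVolume(float* vol, int nx, int ny, int nz)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x < nx && y < ny && z < nz)
        vol[(z * ny + y) * nx + x] *= 2.0f;   // placeholder per-voxel work
}

// Host-side launch: dim3 takes up to three components, so the same
// mechanism covers 1D, 2D, and 3D grids.
void launchScaleVolume(float* devVol, int nx, int ny, int nz)
{
    dim3 block(8, 8, 8);   // 512 threads per block, arranged as an 8x8x8 brick
    dim3 grid((nx + block.x - 1) / block.x,
              (ny + block.y - 1) / block.y,
              (nz + block.z - 1) / block.z);
    scaleVolume<<<grid, block>>>(devVol, nx, ny, nz);
}
```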
(Side note: in OpenCL this idea has been generalized even further. While CUDA only supports 1D, 2D, and 3D kernels, OpenCL simply has N-dimensional kernels, where N is given explicitly as the work_dim parameter of clEnqueueNDRangeKernel.)

(Another side note: I'm fairly sure there are also lower-level, technical reasons related to the hardware architecture of GPUs or the caching of video memory, where the locality of 2D kernels can easily be exploited and benefit overall performance - but I'm not familiar with the details, so this remains a guess for now.)
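For comparison, a rough host-side OpenCL sketch (the helper function, queue, kernel, and work-group sizes are assumptions for illustration; the point is only the work_dim argument and the matching size arrays):

```
#include <CL/cl.h>

// Hypothetical helper: enqueue 'kernel' over a width x height domain.
// The third argument to clEnqueueNDRangeKernel is work_dim; here it is 2,
// and the size arrays give the extent in each of those two dimensions.
void enqueue2D(cl_command_queue queue, cl_kernel kernel,
               size_t width, size_t height)
{
    size_t global_size[2] = { width, height };
    size_t local_size[2]  = { 16, 16 };
    clEnqueueNDRangeKernel(queue, kernel, 2 /* work_dim */,
                           NULL, global_size, local_size,
                           0, NULL, NULL);
}
```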