Hi all, I am new to Spark RAPIDS. I was going through the basic introduction to Spark RAPIDS, where I found a figure (attached) explaining the difference between CPU-based and GPU-based query plans for a hash aggregate example. Everything in the plans is clear to me except the last phase, which converts the data back to row format. Can anyone explain the reason behind this?
I do not see the referenced figure, but I suspect what is happening in your particular query comes down to one of two cases.

If your query collects data back to the driver (e.g. via `.show` or `.collect` in Scala, or by otherwise directly displaying the query results), then the columnar GPU data needs to be converted back to rows before being returned to the driver. Ultimately the driver works with an `RDD[InternalRow]`, which is why a transition from `RDD[ColumnarBatch]` needs to occur in those cases.

If your query ends by writing the output to files (e.g. to Parquet or ORC), then the plan often still shows a final `GpuColumnarToRow` transition. Spark's Catalyst optimizer automatically inserts `ColumnarToRow` transitions when it sees operations that are capable of producing columnar output (i.e. `RDD[ColumnarBatch]`), and the plugin then updates those transitions to `GpuColumnarToRow` when the preceding node will operate on the GPU. In this case, however, the query node is a data write command, and those produce no output in the query-plan sense: the output is written directly to files when the node is executed, rather than being sent to a downstream node for further processing. This makes it a degenerate transition in practice, since the data write command sends no data to the columnar-to-row transition. I filed an issue against the RAPIDS Accelerator to clean up that degenerate transition, but it has no impact on query performance.
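You can see both cases for yourself by printing the physical plan with `explain()`. Below is a minimal sketch, assuming a `SparkSession` named `spark` launched with the RAPIDS Accelerator plugin enabled (`spark.plugins=com.nvidia.spark.SQLPlugin` and `spark.rapids.sql.enabled=true`); the exact operator names in the output depend on your Spark and plugin versions, and the output path is just a placeholder:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: assumes the session was started with the RAPIDS
// Accelerator plugin on the classpath and enabled.
val spark = SparkSession.builder()
  .appName("gpu-plan-demo")
  .getOrCreate()
import spark.implicits._

val df  = (1 to 1000).toDF("x")
val agg = df.groupBy(($"x" % 10).as("bucket")).count()

// Case 1: collecting results to the driver. The plan ends in a
// GpuColumnarToRow transition because the driver ultimately
// consumes RDD[InternalRow], not RDD[ColumnarBatch].
agg.explain()
agg.show()

// Case 2: writing to files. The plan may still show a final
// GpuColumnarToRow transition, but it is degenerate: the data
// write command emits no rows to a downstream node.
agg.write.mode("overwrite").parquet("/tmp/gpu-plan-demo")
```

Comparing the `explain()` output with and without the plugin enabled is also a quick way to see where `ColumnarToRow` nodes are replaced by their `Gpu`-prefixed counterparts.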