Hi all, I am new to Spark Rapids. I was going through a basic introduction to Spark Rapids, where I found a figure (attached) explaining the difference between CPU- and GPU-based query plans for a hash aggregate example. Everything in the plans is clear to me except the last phase, which converts back to the row format. Can anyone please explain the reason behind this?
I do not see the referenced figure, but I suspect what is happening in your particular query comes down to one of two possible cases:

1. If your query performs some kind of collection of the data back to the driver (e.g., `.show` or `.collect` in Scala, or otherwise directly displays the query results), then the columnar GPU data needs to be converted back to rows before being returned to the driver. Ultimately the driver works with `RDD[InternalRow]`, which is why a transition from `RDD[ColumnarBatch]` needs to occur in those cases (see the sketch below).

2. If your query ends by writing the output to files (e.g., to Parquet or ORC), then the plan often still shows a final `GpuColumnarToRow` transition. Spark's Catalyst optimizer automatically inserts `ColumnarToRow` transitions when it sees operations capable of producing columnar output (i.e., `RDD[ColumnarBatch]`), and the plugin then updates those transitions to `GpuColumnarToRow` when the previous node will operate on the GPU. In this case, however, the query node is a data write command, and those produce no output in the query-plan sense: the output is written directly to files when the node is executed, rather than being sent to a downstream node for further processing. This is therefore a degenerate transition in practice, as the data write command sends no data to the columnar-to-row transition. I filed an issue against the RAPIDS Accelerator to clean up that degenerate transition, but it has no impact on query performance.
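Here is a minimal sketch illustrating both cases, assuming the RAPIDS Accelerator jar is on the classpath and a compatible GPU is available; the app name and output path are just placeholders. Calling `explain()` lets you see where the transition node appears in each plan.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: enable the RAPIDS Accelerator (assumes the plugin jar
// is on the classpath and a compatible GPU is available).
val spark = SparkSession.builder()
  .appName("gpu-hashaggregate-demo") // placeholder name
  .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
  .config("spark.rapids.sql.enabled", "true")
  .getOrCreate()

import spark.implicits._

val agg = Seq(("a", 1), ("b", 2), ("a", 3))
  .toDF("key", "value")
  .groupBy("key")
  .sum("value")

// Case 1: collecting results back to the driver. The driver consumes
// RDD[InternalRow], so the columnar-to-row transition must actually run
// and convert the GPU's columnar batches back to rows.
agg.explain() // inspect the plan for the columnar-to-row transition
agg.show()

// Case 2: writing to files. The plan still shows a transition above the
// write command, but the write command sends no data downstream, so the
// transition is degenerate and costs nothing at runtime.
agg.write.mode("overwrite").parquet("/tmp/agg_out") // placeholder path
```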