Why are so many Parquet files created in Spark SQL? Can we limit the number of Parquet output files?
In general, when you write to Parquet, Spark writes one file (or more, depending on various options) per partition of the DataFrame. If you want to reduce the number of output files, you can call coalesce on the DataFrame before writing, e.g.:
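A minimal sketch of that pattern (assuming a Spark 2.x SparkSession; on Spark 1.x the same coalesce-then-write approach works through SQLContext/HiveContext). The app name and the input/output paths are placeholders, not from the original answer:

    import org.apache.spark.sql.SparkSession

    // Placeholder setup: replace the paths with your own locations.
    val spark = SparkSession.builder().appName("coalesce-parquet-sketch").getOrCreate()
    val df = spark.read.json("/path/to/input.json")

    // coalesce(1) merges all partitions into a single one before writing,
    // so the output directory ends up with a single Parquet part file.
    df.coalesce(1)
      .write
      .parquet("/path/to/output")

Using coalesce(N) instead of coalesce(1) gives you at most N part files while keeping some parallelism in the write.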
A couple of things to note:

- If you use options such as partitionBy, the number of files can still increase dramatically, because files are written per value of the partition column (see the sketch after this list).
- Coalescing to a very small number of partitions can make the write very slow, both because data is copied between partitions and because of the reduced parallelism if the number is small enough. You can also hit OOM errors if the data in a single partition becomes too large (when you coalesce, the partitions naturally get bigger).
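A hedged sketch of the partitionBy interaction, reusing the placeholder df from above; the column name "year" and the output paths are made up for illustration:

    // Without coalescing: up to (number of DataFrame partitions) x
    // (distinct values of "year") part files under the output directory.
    df.write
      .partitionBy("year")
      .parquet("/path/to/by_year")

    // Coalescing first caps the output at roughly one part file per
    // distinct "year", at the cost of writing everything from one task.
    df.coalesce(1)
      .write
      .partitionBy("year")
      .parquet("/path/to/by_year_coalesced")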