How to format datanodes after formatting the namenode on HDFS?

I've recently been setting up Hadoop in pseudo-distributed mode, and I created some data and loaded it into HDFS. Later, I formatted the NameNode because of a problem. Now the directories and files that were previously on the DataNodes no longer show up (so the word "formatting" makes sense). But this leaves me with a doubt: since the NameNode no longer holds the metadata for those files, is access to the previously loaded files cut off? And if so, how do we delete the data that is still sitting on the DataNodes?
Asked by Sai Darahaas Ayyangalam
Yes. Once the NameNode is formatted, the metadata for the previously loaded files is gone, so the block files still sitting on the DataNodes are stale and can no longer be reached through HDFS.
There is no format command for DataNodes in the Hadoop CLI, so you need to go through each DataNode manually and delete the contents of its data directories.
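A minimal sketch of that cleanup, using a scratch directory to stand in for the real DataNode storage (the default in pseudo-distributed mode is `/tmp/hadoop-<user>/dfs/data`). On a real node you would run `stop-dfs.sh` first, clear the directory on every DataNode, then run `start-dfs.sh`:

```shell
# Scratch directory standing in for dfs.datanode.data.dir; on a real
# DataNode, point this at your actual data directory instead.
DATA_DIR="$(mktemp -d)"

# Simulate leftover state from the old cluster: block storage plus the
# VERSION file that records the old (now mismatched) clusterID.
mkdir -p "$DATA_DIR/current"
touch "$DATA_DIR/current/VERSION"

# The actual cleanup: remove everything, including the VERSION file, so
# the DataNode re-registers cleanly with the freshly formatted NameNode.
# ${DATA_DIR:?} aborts if the variable is unset, guarding the rm -rf.
rm -rf "${DATA_DIR:?}"/*

# The directory is now empty; the DataNode recreates its structure on start.
ls -A "$DATA_DIR"
```

Clearing the whole directory (including the VERSION file) also avoids the common "Incompatible clusterIDs" error that a DataNode reports after a NameNode reformat.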
By default, the DataNode directory is a single folder under /tmp (in pseudo-distributed mode, typically /tmp/hadoop-&lt;username&gt;/dfs/data). Otherwise, the dfs.datanode.data.dir property in your hdfs-site.xml is what tells HDFS where to store block data, so check there for the directories you need to clear.
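For reference, this is what a typical override looks like in hdfs-site.xml (the property name is the real one; the path shown is just an example):

```xml
<!-- hdfs-site.xml: where each DataNode keeps its block files -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/data/dfs/datanode</value>
</property>
```

If this property is set, those are the directories whose contents must be deleted on each DataNode after reformatting the NameNode.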