Is there any way to delete the 100th (or last) row from a CSV file as a new row gets added?


Is there any way to delete the last row from a CSV file as a new row gets added, without having to create a separate/duplicate CSV file?

My goal is to keep the running CSV file from exceeding a set number of rows (and therefore a set size).

Ideally:

  1. Without creating an additional file
  2. Preferably csv.DictWriter/csv.DictReader, but csv.reader or csv.writer is OK

The errors I get are:
DictWriter is not iterable,
and DictReader does not write to the CSV.
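
(For context: a csv.DictReader object only reads and a csv.DictWriter object only writes, which is what those two errors are saying. A rough sketch of the usual read-everything / trim / rewrite pattern, using placeholder file and column names, is below; the answer that follows fills in the details.)

import csv

# Illustrative only (placeholder file and column names): iterate the
# DictReader to get the rows, adjust the list in memory, then hand the
# list to a DictWriter that rewrites the same file.
with open("data.csv", newline="") as f:
    rows = list(csv.DictReader(f))   # reading: the DictReader is iterated

rows = rows[1:]                      # e.g. drop the oldest row in memory

with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["col1", "col2"])
    writer.writeheader()
    writer.writerows(rows)           # writing: rows are passed to the DictWriter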

There is 1 answer below.

import csv

filename = "sampletable2.csv"
'''
 sampletable2.csv contains:
            high,low,precipitation
            22,53,0.12
            33,52,0.1
            44,54,0.1
            99,98,0.97
'''
new_row = {'high': '99', 'low': '98', 'precipitation': '0.97'}
fieldnames = ["high", "low", "precipitation"]
max_rows = 4  # maximum number of data rows to keep (header not counted)

# READER: load all existing rows into a list of dictionaries.
with open(filename, 'r', newline='') as csvfile:
    data = list(csv.DictReader(csvfile))

# Append the newly found record, then keep only the newest max_rows rows
# so the file never grows past the limit.
data.append(new_row)
data = data[-max_rows:]

# WRITER: rewrite the whole file with the trimmed data.
# 'w' truncates the file, so the header must be written again every time;
# 'a' would keep appending and the file would keep growing.
with open(filename, 'w', newline='') as f:
    csvwriter = csv.DictWriter(f, fieldnames=fieldnames)
    csvwriter.writeheader()
    csvwriter.writerows(data)
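
As a side note, the same read-then-rewrite idea can be written with collections.deque, whose maxlen argument discards the oldest rows automatically once the limit is reached. The sketch below is only illustrative and is not part of the original answer: append_row_capped, the example row values, and max_rows = 100 are assumed names and numbers.

import csv
from collections import deque

filename = "sampletable2.csv"
fieldnames = ["high", "low", "precipitation"]
max_rows = 100  # assumed cap; pick whatever limit the file should have

def append_row_capped(new_row):
    # Read existing rows into a bounded deque; anything older than the
    # newest (max_rows - 1) rows is dropped automatically.
    with open(filename, 'r', newline='') as f:
        rows = deque(csv.DictReader(f), maxlen=max_rows - 1)
    rows.append(new_row)  # the new row is always kept
    # Rewrite the file: at most max_rows data rows plus the header.
    with open(filename, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

append_row_capped({'high': '70', 'low': '55', 'precipitation': '0.02'})

Either way, the csv module cannot delete a row in place; the whole file has to be rewritten, even though no second file is created.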