How to count all the human readable files in Bash?


I'm taking an intro course to UNIX and have a homework question that follows:

How many files in the previous question are text files? A text file is any file containing human-readable content. (TRICK QUESTION. Run the file command on a file to see whether the file is a text file or a binary data file! If you simply count the number of files with the .txt extension you will get no points for this question.)

The previous question simply asked how many regular files there were, which was easy to figure out by doing find . -type f | wc -l.

I'm just having trouble determining what "human readable content" is, since I'm assuming it means anything besides binary/assembly, but I thought that's what -type f displays. Maybe that's what the professor meant by saying "trick question"?

This question has a follow up later that also asks "What text files contain the string "csc" in any mix of upper and lower case?". Obviously "text" is referring to more than just .txt files, but I need to figure out the first question to determine this!

There are 2 answers below.
Quotes added for clarity:

Run the "file" command on a file to see whether the file is a text file or a binary data file!

The file command will inspect files and tell you what kind of file they appear to be. The word "text" will (almost) always be in the description for text files.

For example:

desktop.ini:   Little-endian UTF-16 Unicode text, with CRLF, CR line terminators
tw2-wasteland.jpg: JPEG image data, JFIF standard 1.02

So the first part is asking you to run the file command and parse its output.

I'm just having trouble determining what "human readable content" is, since I'm assuming it means anything besides binary/assembly, but I thought that's what -type f displays.

find -type f finds files. It filters out other filesystem objects like directories, symlinks, and sockets. It will match any type of file, though: binary files, text files, anything.
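To see the difference concretely, here is a small sketch (the file and directory names are made up for the demo) showing that -type f matches the regular file but not the directory or the symlink:

```shell
# set up one of each kind of filesystem object
mkdir -p demo/subdir            # a directory
echo hello > demo/notes         # a regular file (no extension!)
ln -s notes demo/link           # a symbolic link

# only the regular file matches -type f
find demo -type f
```

Note that demo/notes matches even though it has no extension at all, which is exactly why counting *.txt files is the wrong approach.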

Maybe that's what the professor meant by saying "trick question"?

It sounds like he's just saying don't do find -name '*.txt' or some such command to find text files. Don't assume a particular file extension. File extensions have much less meaning in UNIX than they do in Windows. Lots of files don't even have file extensions!


I'm thinking the professor wants us to run the file command on all the files and count how many have 'text' in the output.

How about a multi-part answer? I'll give the straightforward solution in #1, which is probably what your professor is looking for. And if you are interested I'll explain its shortcomings and how you can improve upon it.

  1. One way is to use xargs, if you've learned about that. xargs runs another command, using the data from stdin as that command's arguments.

    $ find . -type f | xargs file
    ./netbeans-6.7.1.desktop: ASCII text
    ./VMWare.desktop:         a /usr/bin/env xdg-open script text executable
    ./VMWare:                 cannot open `./VMWare' (No such file or directory)
    (copy).desktop:           cannot open `(copy).desktop' (No such file or directory)
    ./Eclipse.desktop:        a /usr/bin/env xdg-open script text executable
    
  2. That works. Sort of. It'd be good enough for a homework assignment. But not good enough for a real world script.

    Notice how it broke on the file VMWare (copy).desktop because it has a space in it. This is due to xargs's default behavior of splitting the arguments on whitespace. We can fix that by using xargs -0 to split command arguments on NUL characters instead of whitespace. File names can't contain NUL characters, so this will be able to handle anything.

    $ find . -type f -print0 | xargs -0 file
    ./netbeans-6.7.1.desktop: ASCII text
    ./VMWare.desktop:         a /usr/bin/env xdg-open script text executable
    ./VMWare (copy).desktop:  a /usr/bin/env xdg-open script text executable
    ./Eclipse.desktop:        a /usr/bin/env xdg-open script text executable
    
  3. That is good enough for a production script, and it's a pattern you'll encounter a lot. An alternative is find's built-in -exec action, which needs no pipe at all:

    $ find . -type f -exec file {} \;
    ./netbeans-6.7.1.desktop: ASCII text
    ./VMWare.desktop:         a /usr/bin/env xdg-open script text executable
    ./VMWare (copy).desktop:  a /usr/bin/env xdg-open script text executable
    ./Eclipse.desktop:        a /usr/bin/env xdg-open script text executable
    

    To understand that: -exec runs file once per file, replacing {} with the current file name, and the escaped semicolon \; marks the end of the file command. Note that this spawns one file process per file; if you end the command with + instead of \;, find packs many file names into each invocation of file, much like xargs does, which is faster on large directory trees.
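To actually answer the homework question you still need to turn this into a count. A minimal sketch, assuming (per the caveat above) that file's description contains the word "text" for every human-readable file:

```shell
# run file on every regular file, batching names with '+' for speed;
# -b suppresses the file names so a file *named* "text.bin" can't
# produce a false match; grep -c counts the matching lines
find . -type f -exec file -b {} + | grep -c text
```

The number printed is the count of files whose file description mentions "text".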

There's a nice and easy way to determine whether a file is a human-readable text file: run file --mime-type <filename> and look for text/plain. This works whether or not the file has an extension, and regardless of what that extension is. (Strictly speaking, text/plain undercounts: shell scripts, for example, report text/x-shellscript, so depending on the assignment you may want to match any text/ type.)

So you would do something like:

fileTotal=0

# Read find's output one line at a time. A plain FILES=`find ...` variable
# would split file names on whitespace, and piping into the loop would run
# it in a subshell where fileTotal is lost; process substitution avoids both.
while IFS= read -r file; do
    # -b (--brief) prints just the mime type, so no sed is needed;
    # find already emits full paths, so don't prepend $YOUR_DIR again
    mime=$(file -b --mime-type "$file")
    if [ "$mime" = "text/plain" ]; then
        fileTotal=$(( fileTotal + 1 ))
        echo "$fileTotal - $file"
    fi
done < <(find "$YOUR_DIR" -type f)

echo "$fileTotal human readable files found!"

and the output would be something like:

1 - /sampledir/samplefile
2 - /sampledir/anothersamplefile
....
23 human readable files found!

If you want to extend this to more mime types that are human readable (e.g. do HTML and/or XML count?), have a look at http://www.feedforall.com/mime-types.htm
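If you decide that scripts, CSV, HTML, and the like should count as human readable, one approach (a sketch, not the only convention) is to match the whole text/ major type rather than only text/plain:

```shell
# count every file whose mime type starts with "text/"
# (covers text/plain, text/csv, text/x-shellscript, ...)
find "$YOUR_DIR" -type f -exec file -b --mime-type {} + | grep -c '^text/'
```

This still misses a few human-readable types that live outside text/ (e.g. application/json on some file versions), so check what your particular file reports before relying on it.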