
Counting files in a directory from the terminal
Sometimes, it's the little things. I was trying to replicate an error someone was experiencing involving a large number of files in a directory. So I made my large directory, opened a terminal, then did an "ls" on the directory. Everything scrolled by, and then I noticed that there's no total file count or total size information. Size information is easy to get (type "du directory_name"), but how do you find out how many files are in a directory from the terminal?

Given my basic UNIX skills, I headed to the "man" pages for "ls", but found nothing useful there. Same thing with "man du". I finally had to use a lifeline and phoned a friend ;-). The answer definitely speaks to the sometimes non-intuitive nature of UNIX, but also shows how you can pretty much make it do what you want by combining commands.

To count the number of files in a directory, enter:
cd directory_to_count
ls | wc -l
That's the "ls" directory listing command, the vertical bar (which 'pipes' the output of "ls" to the next command), and then the "wc" word count command with the "l" (lower-case L) option to count the number of lines instead of characters. The output of this command is the number of files in the directory. Subdirectories count as one entry; the files in the subdirectory are not counted.
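If you want to see this behavior without touching real data, a quick sketch against a throwaway scratch directory (the names here are made up for illustration):

```shell
# scratch directory with three files and one subdirectory
dir=$(mktemp -d)
touch "$dir/a.txt" "$dir/b.txt" "$dir/c.txt"
mkdir "$dir/sub"

# three files plus the subdirectory entry itself
count=$(ls "$dir" | wc -l)
echo $count

rm -rf "$dir"
```

Note that when "ls" is writing to a pipe rather than a terminal, it prints one name per line, which is exactly what makes the "wc -l" count work.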

Of course the GUI is much easier, but if you're connecting remotely via SSH, you won't have that option available!

Counting files in a directory from the terminal | 4 comments
File count
Authored by: Anonymous on May 08, '01 06:42:45PM
Rob, I also recently needed a file count. I found a solution in UNIX Secrets, 2nd Ed. by James C. Armstrong, Jr.

find / -print | wc -l

Substitute your directory name for the slash. This command acts recursively, and so will count all the files in the nested directories. Worked great for counting the total number of files in my web site.
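Since the recursive version counts directory entries too, you may want to add "-type f" so only regular files are counted; a small sketch against a throwaway tree:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/nested"
touch "$dir/one" "$dir/nested/two" "$dir/nested/three"

# plain "find | wc -l" would count the directories as well;
# -type f restricts the listing to regular files
total=$(find "$dir" -type f | wc -l)
echo $total

rm -rf "$dir"
```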


File count
Authored by: babbage on May 09, '01 02:33:11PM
Clever. If you wanna wedge your current directory in there, an easy way would be via the backticks trick: "find `pwd` -print | wc -l".

The Unix shell is built around using a series of utilities that each do one small task, and do it very well. By plugging these together in pipes, as you've shown here, you use these commands as filters which, well, filter out the data stream that you send to them. Once you get used to it, you'll probably find yourself using these little tricks all the time.

As an example, one that I use all the time is to generate a list with "grep", rearrange it with the "sort" command, then use "uniq -c" to count & collapse repeated lines, and one more "sort -n" or "sort -rn" (reverse numeric, so the leading counts are compared as numbers rather than text) to again rearrange the list, this time by frequency. Thus:

cd /private/var/log/httpd
grep "does not exist" error_log | sed 's#^.*exist: ##' | sort | uniq -c | sort -rn | more

A line like that will parse over your Apache error log, find all the 404 "does not exist" hits, strip the beginning of each line (assuming that we're not interested in dates here -- just the names of the missing files), then arrange the list into a useful order. I'd paste in a sample here if my logs were doing anything, but they're not (yet).
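You can see the sort/uniq part of that pipeline work on any repetitive input; here's a sketch with five made-up lines standing in for the stripped error_log output:

```shell
# count and rank duplicate lines: sort groups them,
# uniq -c prefixes each with its count, sort -rn puts
# the most frequent line first
top=$(printf '%s\n' /missing.gif /foo.html /missing.gif /missing.gif /foo.html \
    | sort | uniq -c | sort -rn | head -n 1)
echo "$top"
```

The most frequent name ends up on the first line, prefixed with its count.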

Another cool trick is conditional or short-circuit execution. Here, you use the command line's "and" (&&) and "or" (||) operators to control command execution. Say for example you want to pat yourself on the back if you're having no 404 errors on your logs, and either way, mail a listing of the results you find. You could do it something like this:

cd /private/var/log/httpd
grep -q "does not exist" error_log || echo "no errors!"
grep -q "does not exist" error_log && echo "some errors!"
grep "does not exist" error_log | mail [address]

Etc. These are kind of weak examples, but I hope they get the point across. You can check on the status of something, and depending on whether that status is "true" or "false", can conditionally take further action, all within one command.
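The key detail is that "&&" and "||" look at the exit status of the command to their left; "grep -q" is handy here because it prints nothing and only reports via its status. A self-contained sketch on a throwaway file:

```shell
log=$(mktemp)
printf 'GET /index.html 200\nFile does not exist: /old.gif\n' > "$log"

# grep -q is silent; it only sets the exit status for && and ||
grep -q "does not exist" "$log" && msg="some errors!" || msg="no errors!"
echo "$msg"

rm -f "$log"
```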

Another related trick, and the one mentioned up above, is to nest one command within another by use of the backtick trick. Here, you have one command -- say, one that formats the date for you, or gets a list of files matching a pattern, or whatever -- and then hand off the results of that command to another wrapper command. For example:

vi ~/`date '+%Y.%m.%d'`.diary   # current date as file name
vi `grep -ils chris *`          # edit all files that mention my name
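The date example can be tried on its own; "$(...)" is the POSIX spelling of the backtick trick and nests more cleanly:

```shell
# build a date-stamped string, e.g. for use as a diary file name
stamp=$(date '+%Y.%m.%d')
echo "$stamp"    # something like 2001.05.09
```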

Counting files in a directory from the terminal
Authored by: xSmurf on Feb 15, '06 05:04:19PM
I know this topic is about 5 years old, but whatever; if someone finds it looking for this info (like I did), it might still help. If you are trying to count the total number of lines in the files in a directory, you can use
$ cat `find . | grep -v \.jpg` | wc -l

Note that I made sure I didn't include "data" files, in this case I knew they were all jpg but it might be easier to do the opposite and include only certain extensions.

PM G4 DP 800 / 1.25gb / 120Gb+80Gb / CD/DVD±RW/RAM/DL
- The only APP Smurf


Counting files in a directory from the terminal
Authored by: sjk on Feb 16, '06 07:37:54PM
That's broken in several ways (unnecessary cat; includes directories; doesn't handle filenames with embedded spaces; matched .jpg anywhere in the filename). Try something like this:

find . -not -type d | egrep -v '\.jpg$' | wc -l

The "-not -type d" excludes directories but would find symlinks and other non-regular files; change it to "-type f" to only find regular files. There should be a single backslash before the dot in the .jpg string for egrep; I always seem to get that quoting wrong here because the preview doesn't match the post.
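A sketch showing the fixed command coping with the cases the original missed (a subdirectory and filenames with embedded spaces; the names are throwaway):

```shell
dir=$(mktemp -d)
mkdir "$dir/album"
touch "$dir/photo.jpg" "$dir/notes one.txt" "$dir/notes two.txt"

# directories are skipped, and .jpg is only excluded when it
# appears at the end of the name; spaces are fine because find
# emits one path per line
n=$(find "$dir" -not -type d | egrep -v '\.jpg$' | wc -l)
echo $n

rm -rf "$dir"
```

(Filenames containing embedded newlines would still throw off any "wc -l" count, but those are rare in practice.)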
