Fixing issues with large directories on Linux using the EXT4 file system via glob patterns

If you’re experiencing slow website or server performance, it may be due to a directory containing too many files. Here is a YouTube video showing this in action: https://youtu.be/OZXok1Nb7Yc

I created a test directory with 1.5 million files, and getting a directory listing took over 5 minutes. That means any time something interacts with the directory, it can hit a 5+ minute delay.

This is often caused by an errant crontab script that has been writing a file to the directory every minute for years, or perhaps an email script, or some type of plugin creating too many images.

To start, go to the problem directory and get a file listing. I recommend doing this in a screen session, because it may take some time, and if it takes too long your SSH session could time out.

screen -S searchfiles

After starting the screen session, use ls to get a directory listing.

ls -l
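
If you just want a sense of how bad the directory is first, you can count the entries without waiting on ls to stat and sort everything, which is most of what makes ls -l slow here. A minimal sketch; the path is just an example:

ls -1 -f /home/temp/testing | wc -l    # -f skips sorting; the count includes the . and .. entries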

After ls finally produces some output, you can use this key sequence to scroll up in the screen session: press CTRL + A, then hit ESC. Now you can use PGUP to scroll up and see the file names. Once you have the file names, you can start removing them with the rm -f command.

rm -f filename.[1][0-9][0-9][0-9][0-9][0-9]

This says to forcefully remove all files matching filename.100000 through filename.199999. The [0-9] is a shell glob character class matching a single digit, 0 through 9 (the same syntax regular expressions use, which is why these patterns often get called regex). The reason for working in batches is the kernel's limit on argument-list length (ARG_MAX): once the expanded glob passes roughly 100,000 typical-length filenames, rm fails with an "Argument list too long" error.
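
To walk through every batch without typing each pattern by hand, you can loop over the leading digit. A minimal sketch, assuming the files really are named filename. followed by six digits:

for d in 1 2 3 4 5 6 7 8 9; do
    # each pattern expands to at most 100,000 names, keeping rm under ARG_MAX
    rm -f filename.${d}[0-9][0-9][0-9][0-9][0-9]
done

If you would rather sidestep the argument limit entirely, GNU find deletes matches one at a time instead of expanding them onto a command line: find . -maxdepth 1 -name 'filename.*' -delete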


After going through all the glob iterations and removing the files you intended to, you will want to shrink the directory itself. On ext4, a directory's index does not shrink when files are deleted, so even a nearly empty directory stays bloated until it is rebuilt. Rebuild it by rsyncing the remaining files to a new directory, and then moving that new directory into place. If there are any ownership / permissions that need correcting, do that as well.
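
Here is a minimal sketch of that rebuild, assuming the problem directory is /home/temp/testing; the paths and the user/group are placeholders for your own:

rsync -a /home/temp/testing/ /home/temp/testing.new/   # copy remaining files, preserving permissions and times
mv /home/temp/testing /home/temp/testing.bak           # set the bloated directory aside
mv /home/temp/testing.new /home/temp/testing           # drop the compact copy into place
chown -R youruser:yourgroup /home/temp/testing         # correct ownership if needed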

I would recommend removing the old directory at this point, as it's just a huge directory holding a copy of the files and is no longer needed. Make sure you use the full path when using rm -rf.

rm -rf /home/temp/testing.bak/

Now that you have this issue solved, make sure you also fix whatever was creating these files. If it's a crontab issue, you can add this redirection to the end of the crontab entry so the output is discarded instead of being written to files.

> /dev/null 2>&1
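
For example, a hypothetical crontab entry running a script every minute would look like this (the script path is a placeholder):

* * * * * /home/user/scripts/job.sh > /dev/null 2>&1

The > /dev/null discards standard output, and 2>&1 sends standard error to the same place, so nothing lands on disk or in cron mail.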

I’ve seen this issue at least a hundred times, and sometimes the system's inodes will even be full from this type of issue. Full inodes prevent Linux from creating any new files, which breaks the system almost entirely, so that issue needs to be solved as fast as possible. If you need to do an inode investigation, use my previous article: https://disc4life.com/blog/?p=138
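
As a quick first check, df can report inode usage per filesystem:

df -i    # an IUse% at or near 100% means that filesystem is out of inodes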

If you found this useful, consider subscribing on YouTube, or following me on Twitch and Twitter: https://twitch.tv/djrunkie and https://twitter.com/djrunkie
