Skip to main content

Duplicate file finder/Remover using perl and SHA1

When you are using a computing devices (either a laptop or PC or a Tab) for your personal use after some time (let take some years) you will realise that your disk is full and most of the space are occupied by duplicate files (Same copy of file located in different locations).

For ex: You might have a favourite music file in "My Favourite" folder as well as in the "Album" folder. But finding this duplicate manually is a biggest process. That too if the file names are different OMG!.

There are lot of free utilities available to do this in automated way, but if you are a programmer, you will always prefer to do it on your own.

Here are the steps we are going to do. This is purely on a linux - Ubuntu system.  (for windows you might need to change the path as per conventions )
  • Getting SHA1 for all the files recursively in a given directory
  • Compare SHA1 with other files
  • Remove the duplicate file
Getting SHA1 of a file

Using cpan module  Digest::SHA1 we can get SHA1 for a file data as follows

use Digest::SHA1 'sha1_hex';
use File::Slurp;
my $fdata = read_file($file);
my $hash = sha1_hex($fdata);

In the above code I used read_file method which is provided by File::Slurp module.

To find SHA1 for all the files recursively in a directory. There are many modules available in www.cpan.org for iterating a directory but my favourite is always File::Find module which works same like a unix find command.

use File::Find;
use File::Slurp;
use Digest::SHA1 'sha1_hex';

my $dir = "./";

# Calls process_file subroutine for each file
find({ wanted => \&process_file, no_chdir => 1 }, $dir);

sub process_file {
    my $file = $_;
    print "Taking file $file\r\n";

    if( -f $file and $file ne '.' and $file ne '..' ){
        my $fdata = read_file($file);
        my $hash = sha1_hex($fdata);
    }
}
Finding the duplicates

Our next step is to find the duplicates based on the SHA1 values found above. I am going to use Hash ref with key as a SHA1 value and values as an Array ref with list of file Path. So once we process all the files we can easily get the list of duplicate files by just getting length of the array.

use File::Find;
use File::Slurp;
use Digest::SHA1 'sha1_hex';

my $dir = "./";
my $file_list;

# Calls process_file subroutine for each file
find({ wanted => \&process_file, no_chdir => 1 }, $dir);

sub process_file {
    my $file = $_;
    print "Taking file $file\r\n";
    if( -f $file and $file ne '.' and $file ne '..' ){
        my $fdata = read_file($file);
        my $hash = sha1_hex($fdata);

     push(@{$file_list->{$hash}}, $file );
    }
}

Removing the duplicates

Now we have we have list of duplicate files found. Only thing left is removing the those files by keeping only one copy of them. Perl has a default command called unlink which will remove the file from that location.

unlink "$file"

Now combine everything and add some printing statements and options you will get a nice utility script to remove the duplicate files.

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Slurp;
use Digest::SHA1 'sha1_hex';


my $dir = shift || './';
my $count = 0;
my $file_list = {};
my $dup_dir_list = {};
my $dup_file_count = 0;
my $dup_dir_count = 0;
my $removed_count = 0;

find({ wanted => \&process_file, no_chdir => 1 }, $dir);


foreach my $sha_hash (keys %{$file_list}){
    if(scalar(@{$file_list->{$sha_hash}} > 1)){

        # Number of duplicate files
        $dup_file_count = $dup_file_count + scalar(@{$file_list->{$sha_hash}}) - 1;
        my $first_file = 1;
        foreach my $file (@{$file_list->{$sha_hash}}){
            # Don't delete the first file
            if($first_file){
                $first_file = 0;
                next;
            }
            if((unlink "$file") == 1){
                print "REMOVED: $file\n";
                $removed_count = $removed_count + 1;
            }
        }
    }
}

print "********************************************************\n";
print "$count files/dir's traced\n";
print "$dup_dir_count duplicate name directories found\n";
print "$dup_file_count duplicate files found\n";
print "$removed_count duplicate files removed\n";
print "********************************************************\n";

sub process_file {
    my $file = $_;

    #print "Taking file $file\r\n";
    if( -f $file and $file ne '.' and $file ne '..'){
        my $fdata = read_file($file);
        my $hash = sha1_hex($fdata);

        push(@{$file_list->{$hash}}, $file );
        $count = $count + 1;

        local $| = 1;
        print "Processing file: $count\r";
    }
}


The above code will remove any duplicate files in a given directory based on SHA1 value for the data. Keep in mind that if you are having audio or video files which are downloaded from different sources might have different SHA1 values based on various conditions. So this script will remove only computer identical files and it does not have any AI to identify same video/audio/images. When we see an image as a human we can identify it easily but computer will see it as different files based on various properties like that image might be compressed or resolution might have changed etc.

Comments

Popular posts from this blog

Git and Github useful commands

Working on a personal forked repository Most of the time when you are working in an organisation who uses git, you will be requested to fork the project and create pull for your feature or bug fixes (Most of the open source contribution in git will be happening like this).

In this note I will let you know some useful git commands which will be very helpful for you in your day to day life with git. I am not going to explain any git technical terms. I am assuming you already know about it or all those terms are linked to a proper source for more information.
Lets start. You can fork any github project by clicking the fork icon on the top right of a repository.  git clone git@github.com:<your-github-id>/Project.git - to clone your forked Project (personal repo)cd Project/ - get into your repositorygit fetch origin - will fetch all the details/indexes (branches and commits) from remote origin but will not apply.git branch - is used to check all the available branches and current work…

Make use of JavaScript console methods

When I say JavaScript console methods, the one and only thing that hit most of our mind is console.log() and we use it a lot and only that some might be familiar with error() and debug(). Did you ever think of checking if there are any other methods available in console other than log? Most of us don't do that, but knowing those methods might have saved lot of our development time. In this post I will show you some of the console methods which will be very useful in our day to day coding.
Log based on conditionconsole.assert(assertion, log_message); To print something on the console only if the assertion failed - assertion can be any expression which returns a boolean.
Note: In chrome this log will print as an error with message saying "Assertion failed:" with the log_message passed. In FireFox its a normal log message.
Log number of occurrence console.count([label]); If we want to count something, it may be a click, may be a callback, or event triggers. We don't …

My First Post

After lot of thinking and research me too started a blog finally to share my knowledge :-).

Not everybody know everything but also everybody else don't know what we know. So lets share and gain knowledge with Win Win approach.