Whether you use a Windows computer or a Linux computer, duplicate files inevitably accumulate over time. These files not only take up disk space but can also slow the system down, so it is worth hunting them down and deleting them. This article introduces 6 ways to find duplicate files on the system so that you can quickly free up hard disk space!

1. Use the diff command

In everyday work, the easiest way to compare two files is probably the diff command. Its output uses symbols to mark the differences between the two files, and we can use this behavior to spot identical files. When two files differ, diff prints the differences:

$ diff index.html backup.html

If diff produces no output, the two files are identical:

$ diff home.html index.html

The drawback of diff is that it can only compare two files at a time. If we want to compare many files, checking them pair by pair is very inefficient.

2. Use the cksum command

The checksum command, cksum, reduces the contents of a file to a long number (for example, 2819078353 228029) according to a fixed algorithm. The result is not absolutely unique, but the probability of two different files producing the same checksum is about that of China's men's football team qualifying for the World Cup. In the operation above, we can see that the checksums of the second and third files are the same, so we can treat those two files as identical.

3. Use the find command

Although the find command has no option for finding duplicate files, it can still be used to search for files by name or type and run the cksum command on each match.

4. Use the fslint command

The fslint command can be used specifically to find duplicate files. One caveat: we have to give it a starting point. If it has to work through a large number of files, the command can take quite a long time to finish. Tip: fslint must be installed on the system and added to the search path:

$ export PATH=$PATH:/usr/share/fslint/fslint

5. Use the rdfind command

The rdfind command also looks for duplicate (same-content) files. Known as "redundant data lookup", this command can determine which files are the originals based on the file date, which helps when choosing which duplicates to delete, because it deletes the newer files.

Now scanning "/home/alvin", found 12 files.
Removed 1 files due to nonunique device and inode.
Removed 9 files due to unique sizes from list. 2 files left.
Now eliminating candidates based on first bytes: removed 0 files from list. 2 files left.
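As a quick, self-contained illustration of the diff comparison from section 1 (the file names are hypothetical, echoing the article's examples):

```shell
# Create three throwaway files: two identical, one different.
printf '<p>hello</p>\n' > home.html
cp home.html index.html
printf '<p>changed</p>\n' > backup.html

# diff prints nothing and exits 0 when the files match, so its exit
# status doubles as an "identical" check.
diff home.html index.html && echo "home.html and index.html are identical"

# When files differ, diff prints the differing lines and exits nonzero.
diff index.html backup.html || echo "index.html and backup.html differ"
```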
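A minimal demonstration of the cksum comparison described in section 2 (the file names are made up for the example):

```shell
# Files with identical content yield identical checksums and byte counts;
# a file with different content almost certainly yields a different checksum.
printf 'same content\n' > a.txt
printf 'same content\n' > b.txt
printf 'other content\n' > c.txt
cksum a.txt b.txt c.txt   # a.txt and b.txt share the same checksum
```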
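Section 3's idea of combining find with cksum can be sketched as a small pipeline that groups files by checksum and flags the repeats. This is a sketch only: filenames containing spaces would need extra care.

```shell
# Build a small demo tree with one duplicated file.
mkdir -p dupdemo
printf 'copy\n'   > dupdemo/one.txt
printf 'copy\n'   > dupdemo/two.txt
printf 'unique\n' > dupdemo/three.txt

# cksum prints "checksum size filename"; after sorting, any file whose
# checksum was already seen is a duplicate of the remembered file.
find dupdemo -type f -exec cksum {} + \
  | sort -n \
  | awk '$1 in seen { print seen[$1], "==", $3 } { seen[$1] = $3 }'
# prints: dupdemo/one.txt == dupdemo/two.txt
```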
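For section 4, here is a guarded sketch of invoking fslint's duplicate finder after the PATH export. The `findup` script name and install location are assumptions about fslint's layout, and the `command -v` guard keeps the sketch runnable even when fslint is absent:

```shell
# Assumes fslint is installed under /usr/share/fslint/fslint (as in the
# article's PATH tip) and that findup is its duplicate-file finder.
export PATH=$PATH:/usr/share/fslint/fslint
if command -v findup >/dev/null 2>&1; then
  findup .          # starting point: the current directory
else
  echo "fslint (findup) is not installed"
fi
```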
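Finally, a guarded sketch of running rdfind from section 5 against a throwaway directory (the directory name is made up). If my reading of rdfind's options is right, `-dryrun true` only previews the decisions, while `-deleteduplicates true` would actually remove the newer copies:

```shell
# Build a tiny directory with one duplicate pair.
mkdir -p rdfind_demo
printf 'dup\n' > rdfind_demo/a
printf 'dup\n' > rdfind_demo/b

if command -v rdfind >/dev/null 2>&1; then
  # Preview only; swap in "-deleteduplicates true" to delete for real.
  rdfind -dryrun true rdfind_demo
else
  echo "rdfind is not installed"
fi
```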