The previous Learning Perl Challenge asked you to find duplicate files. This challenge uses some of what you did there, but for a different purpose.
Write a program that monitors a directory to find any file changes. Programs such as tripwire do this by recording metadata about each file on the first run, then checking that the same information still matches on later runs. For instance, the size and SHA1 digest should stay the same. You could also just store the original content, but that’s not very convenient.
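Getting the size and digest is the easy part. Here’s a small sketch using Digest::SHA, which has been in the core distribution since Perl 5.10 (the subroutine name is my own invention, not part of the challenge):

```perl
use strict;
use warnings;
use Digest::SHA;

# Return the size in bytes and the SHA1 hex digest for a file.
sub file_fingerprint {
    my ($file) = @_;
    my $size = -s $file;                                   # size from stat
    my $sha1 = Digest::SHA->new(1)->addfile($file)->hexdigest;  # SHA-1
    return ( $size, $sha1 );
}

# As a demonstration, fingerprint this very script.
my ( $size, $sha1 ) = file_fingerprint($0);
print "Size: $size bytes SHA1: $sha1\n";
```

Store those two values on the first run and you have everything you need to compare against later.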
Since you’re at the Learning Perl level, we can’t ask too much here or judge you too harshly. A lot of the problem is storing the data and reading it later. Here’s a hint: create a flat file to store the “good” data on the first run, then read this file on the second run.
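That hint might look something like this. It’s only a sketch: the store file name and its tab-separated “name, size, digest” layout are assumptions I made for illustration, and it only looks at plain files directly inside the directory:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA;

my $store = 'checksums.txt';   # the flat file of "good" data
my $dir   = shift // '.';      # directory to watch

# Read the previous run's data, if there is any.
my %known;
if ( -e $store ) {
    open my $in, '<', $store or die "Can't read $store: $!";
    while (<$in>) {
        chomp;
        my ( $name, $size, $sha1 ) = split /\t/;
        $known{$name} = { size => $size, sha1 => $sha1 };
    }
}

# Fingerprint every plain file, report changes, and rewrite the store.
open my $out, '>', "$store.new" or die "Can't write $store.new: $!";
opendir my $dh, $dir or die "Can't open $dir: $!";
for my $name ( sort readdir $dh ) {
    my $path = "$dir/$name";
    next unless -f $path;
    my $size = -s $path;
    my $sha1 = Digest::SHA->new(1)->addfile($path)->hexdigest;
    print {$out} join( "\t", $name, $size, $sha1 ), "\n";

    if ( my $old = $known{$name} ) {
        if ( $old->{size} != $size or $old->{sha1} ne $sha1 ) {
            print "$name changed. ",
                "Was: Size: $old->{size} bytes SHA1: $old->{sha1} ",
                "Is now: Size: $size bytes SHA1: $sha1\n";
        }
    }
}
close $out;
rename "$store.new", $store or die "Can't replace $store: $!";
```

Writing to a temporary file and renaming it over the old store means a crash mid-run doesn’t leave you with a half-written data file.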
How are you going to handle the security on this file after you create it? As an example, you might look at CPAN::Checksums, which handles the same task for the modules on CPAN.
There are many ways that you can use this. You can run it periodically from cron, for instance, but you might also make a daemon that runs continually and is always checking. Once you find a change, you can report it in many ways, but we’ll only ask you to print a line to the terminal that might look something like:
	file.txt changed. Was: Size: 1023 bytes SHA1: 53a0935982ae11a4784d51aa696733c947c0614f Is now: Size: 2001 bytes SHA1: 730c6983bb9f942ef5cf6c174d76ad0c1594c1a7