Thanks @hydn for the explanation.
I don’t remember which was my worst “oops” moment, but I’m a pretty hasty guy and sometimes I don’t pay enough attention while doing critical operations.
I would say one of the most effective ways to manage a disaster is to keep several incremental backup copies of your hard drive, or backup images of very critical / important files.
To give an example, I tried to manage a backup of a Linux hard drive with Timeshift. It went well, but when I restored the image the system was not very responsive and some modules were missing.
So I think it is essential to have a complete image of the hard drive, with all your installed software and your important data.
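A minimal sketch of what a full raw disk image could look like with plain `dd` plus compression. The device names and destination path are placeholders, and the source disk should be unmounted (e.g. run from a live USB) before imaging:

```shell
# Sketch only: image and restore a whole disk with dd + gzip.
# Device and path names below are placeholders - adjust for your setup.

image_disk() {
    src="$1"     # e.g. /dev/sdX (the whole disk, not a partition)
    dest="$2"    # e.g. /mnt/backup/disk.img.gz
    # dd streams the raw bytes; gzip compresses empty regions well
    dd if="$src" bs=4M 2>/dev/null | gzip > "$dest"
}

restore_disk() {
    img="$1"     # the compressed image created above
    dst="$2"     # the disk to write back to (destructive!)
    # Reverse direction: decompress and write the image back
    gzip -dc "$img" | dd of="$dst" bs=4M 2>/dev/null
}

# Example (destructive - triple-check device names first):
#   image_disk /dev/sdX /mnt/backup/disk.img.gz
#   restore_disk /mnt/backup/disk.img.gz /dev/sdX
```

Tools like Clonezilla do the same thing with more safety checks, but the principle is identical: the whole disk goes into one image you can write back later.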
Talking about Linux, think for example of a broken update, one that completely breaks your system, maybe with a broken greeter or lightdm: you won’t be able to get into the desktop anymore.
In this case the possible solutions are:
- Wait for the developers to push a fixed update to the repos => this is bad because you don’t know how long your system will be down
- Log in to the CLI and see if you can roll back some updates => this might be tricky
- Take a previous snapshot and restore the entire disk, then redo the update later => you can recover the system, but you will lose the work done since the last backup.
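For the CLI rollback route, the first step is finding out what the update actually changed. A small sketch, assuming a Debian/Ubuntu-style system where dpkg logs to `/var/log/dpkg.log` (the log path argument is there so you can point it elsewhere):

```shell
# Sketch: list the most recent package upgrades, so you know what to
# downgrade. Assumes a Debian-style dpkg log; pass another path if needed.
recent_upgrades() {
    log="${1:-/var/log/dpkg.log}"
    # dpkg logs lines like: "2024-05-01 10:00:00 upgrade pkg:amd64 OLD NEW"
    grep ' upgrade ' "$log" | tail -n 10 |
        awk '{print $4, "old:", $5, "new:", $6}'
}

# Then pin a suspect package back to the old version from a TTY
# (Ctrl+Alt+F3), e.g.:
#   sudo apt install lightdm=<old-version>
```

This is exactly the “might be tricky” part: dependencies can drag other packages along with the downgrade, which is why the snapshot route is often safer.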
I’m talking about a full system disaster where you are not even able to log in to the desktop environment.
For just a few critical files, for example crontab or fstab, I think the best practice is to keep a backup copy of each important file; if a file breaks, you can recover it from the backup.
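The single-file practice can be as simple as a timestamped copy, so older versions survive too. A sketch, where the backup directory `~/config-backups` is my own assumption:

```shell
# Sketch: keep timestamped copies of single important files.
# The backup directory is an assumption - use whatever location you prefer.
backup_file() {
    src="$1"
    dir="${2:-$HOME/config-backups}"
    mkdir -p "$dir"
    # cp -a preserves permissions; the timestamp keeps older versions around
    cp -a "$src" "$dir/$(basename "$src").$(date +%Y%m%d%H%M%S)"
}

# e.g.  backup_file /etc/fstab
# crontab has no file path of its own, so dump it instead:
#   crontab -l > "$HOME/config-backups/crontab.$(date +%F)"
```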
But yes, with this practice you need to be consistent with backups, at least one a week, or better, by scheduling them.
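Scheduling can be a one-line cron entry. A sketch, where `~/bin/backup.sh` is a hypothetical wrapper around whatever backup tool you use (Timeshift also has its own built-in scheduler, if you prefer that):

```shell
# Install with `crontab -e`. Runs every Sunday at 03:00.
# ~/bin/backup.sh is a hypothetical script wrapping your backup commands.
0 3 * * 0  $HOME/bin/backup.sh >> $HOME/backup.log 2>&1
```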