I recently started to use deja-dup.
As long as no errors happen, everything works well, in accordance with Deja Dup's design guidelines: the setup was very fast and easy to understand.
I back up to a remote SSH location.
Now, I had to change my remote back-end and some errors occurred.
If you need an example, they were very similar to this bug:
While I was troubleshooting, I ran into this problem: Deja Dup does time-consuming work, then tries to connect and fails. The only option is to close the “Giving up after 5 attempts” dialog and start over. This was very frustrating and time-consuming.
So my workflow was:
Try to fix config → Start backup → Wait an hour → See it did not work → Repeat.
When I only upload one folder, the waiting is reduced to a tolerable amount. But in my case, the mistake was somewhere in the configuration that includes all folders.
Ideas for how this could be made more user friendly:
- Allow a quick manual retry of the upload process
- Maybe in the “Giving up after 5 attempts” dialog
- Improve caching for the compression etc., so one does not have to wait between tries.
- The error message is not very helpful, but that is already discussed in a bug report.
If you need any clarification, please reach out.
Ubuntu 20.04.5 LTS
Hello, thank you for your report!
I think I detect a few distinct user experience issues:
- Deja Dup is slow. Sadly, I agree, but the most likely path to a fix is the experimental restic backend (added in 43.0).
- Bad error message. This is largely out of our hands, unfortunately. The error message comes directly from GVFS, GNOME’s filesystem abstraction. It’s not always the kindest.
- A retry button might be helpful. This depends on why the error happened. Did you eventually back up successfully after retrying, or was the error 100% reproducible? If you did eventually succeed, it would be interesting to find out how to be more reliable, so that we don’t need a retry button at all and can retry ourselves.
Thanks for your response.
- I look forward to improved speed then
- OK, I understand.
- I succeeded by excluding problematic folders from the backup. Now I am trying to bring them back in, but it takes some time, so I am going by trial and error. I brought the idea up because the error message seems to pop up now and then and others are clueless too, for example in the bug I linked.
The actual problem at hand I solved by keeping the remote end active: in the folder mounted in Nautilus, I clicked a different folder every 3 minutes. Then it worked in my case.
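For anyone hitting the same issue, the manual “click a different folder every 3 minutes” workaround can be scripted. This is only a sketch, under the assumption that listing the GVFS mount point generates enough traffic to keep the SFTP connection alive; the `keepalive` function name and the example mount path are illustrative, not anything provided by Deja Dup.

```shell
#!/bin/sh
# Keep-alive sketch for the workaround above: periodically list the
# mounted remote folder so the connection is not dropped while
# Deja Dup is busy scanning and compressing locally.
# The real GVFS SFTP mount lives under $XDG_RUNTIME_DIR/gvfs/ and
# its exact directory name depends on the host and user.

# List the mount point every $2 seconds until it becomes unreachable
# (or until interrupted with Ctrl+C once the backup has finished).
keepalive() {
    mount_point=$1
    interval=${2:-180}   # default: every 3 minutes, as in the workaround
    while ls "$mount_point" > /dev/null 2>&1; do
        sleep "$interval"
    done
}

# Example invocation (hypothetical host and user):
# keepalive "$XDG_RUNTIME_DIR/gvfs/sftp:host=example.com,user=me" 180
```

If the backend is plain SSH/SFTP, setting `ServerAliveInterval` in `~/.ssh/config` might achieve a similar effect, though I am not certain GVFS honors that setting in every case.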
With an option to “re-activate” or “re-connect” the remote end once all the scanning etc. is done, this would have been easier. That’s why I wrote the suggestions above.