Thanks for the effort on a fix. Seems to me, raising the default -blocksize to a larger value could be a good thing moving forward; I'm thinking 100KB just isn't quite enough for the amounts of data people are now backing up. This is the developer section, so I'll toss a bit more onto the fire…

1- As mentioned above, changing the default -blocksize to a larger value (YTBD) is likely a simple change that could be implemented in the next release (I know, also YTBD). Short of re-compiling with the new default, there shouldn't be anything else required: no re-writing of code, no conflicts. It's just a default change that would likely prevent a lot of user frustration moving forward, as this sort of thing seems to be coming up more and more often. If that's not in the books for the foreseeable future, could we create a "hot fix" of sorts, something users can run post-install to adjust the -blocksize setting to a larger default value? This leads to item 3, but stop by item 2 first.

2- Change the wording in the Duplicati documentation around -blocksize to emphasize how it should be scaled up based on data size. Maybe a note like: if you're backing up more than 0.5TB, you absolutely should increase the value to "xyzKB/xyzMB" or more (a back-of-the-envelope sketch of why is at the end of this post). I started re-writing the manual page on the topic but lost the edit in a stray reboot; I've been trying to get back to it, but things have been busy as of late.

3- Create a routine that can change the -blocksize for an existing backup set. I get that the whole DB and backup set would probably have to be recreated, and storage would likely increase following the process. Now, I haven't really looked into this all that hard, but I know there are many threads around here on the topic, and at the end of the day it can't currently be done. Holy moly, would it be nice, though.
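For anyone wondering why the scaling matters, here's the rough block-count math behind item 2. This is my own illustration, not anything from the Duplicati codebase; the data sizes and candidate blocksizes below are assumptions picked just to show the ratios. The key point is that the local database has an entry per block, so it's the block count that grows painful with source size:

```python
# Hypothetical back-of-the-envelope sketch (not Duplicati code): how many
# blocks the local database would have to track at different -blocksize values.

KiB, MiB, TiB = 2**10, 2**20, 2**40

def block_count(data_bytes: int, blocksize_bytes: int) -> int:
    """Ceiling division: every started block still counts as a block."""
    return -(-data_bytes // blocksize_bytes)

# Example source sizes and blocksizes, chosen only for illustration.
for data in (int(0.5 * TiB), 1 * TiB, 4 * TiB):
    for bs in (100 * KiB, 1 * MiB, 5 * MiB):
        print(f"{data / TiB:4.1f} TiB at {bs // KiB:>5} KiB blocks "
              f"-> {block_count(data, bs):>12,} blocks to track")
```

At the current 100KB default, a 1TB source already comes out to roughly ten million tracked blocks, while a 1MB blocksize cuts that by a factor of ten; that ratio is the whole argument for scaling the value with data size. And in the meantime, anyone creating a new backup can already set a larger value up front (e.g. `--blocksize=1MB` on the job); it's only changing it on an existing set, item 3 above, that can't be done today.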