Yeah when I was describing it I was thinking “this will be a massive pain in the arse to do”. I’m prepping a new build to go live as soon as the current build’s 7 day staged rollout has finished (it’s on day 4 now). So it’ll definitely be less fuss to just wait until that update arrives next week.
Is the entire File Importer really sluggish too? I suspect what’s happening is the “file system change observing” that turns on when the importer view is open is clogging up the whole system. It was definitely a hindrance when I first built the File Importer, but on iOS 16 I’m finding it almost intolerably slow. So I’m working on ditching the use of iOS’s built in file system change observing, and just doing periodic checks every minute or so instead.
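Roughly this sort of shape is what I mean - a minimal sketch only, with made-up names (ImportFolderWatcher, rescanImportFolder), just to illustrate the periodic-check idea rather than the actual implementation:

```swift
import Foundation

// Sketch of periodic checking in place of live file system observing.
// Names here are hypothetical, not Arc's real internals.
final class ImportFolderWatcher {
    private var timer: Timer?
    private let folderURL: URL

    init(folderURL: URL) {
        self.folderURL = folderURL
    }

    // Re-scan the folder roughly once a minute instead of reacting to every change
    func startPeriodicChecks(interval: TimeInterval = 60) {
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            self?.rescanImportFolder()
        }
    }

    func stopPeriodicChecks() {
        timer?.invalidate()
        timer = nil
    }

    private func rescanImportFolder() {
        // List the folder contents and diff against the last known state
        let contents = (try? FileManager.default.contentsOfDirectory(
            at: folderURL, includingPropertiesForKeys: nil)) ?? []
        // ... update the importer's file list from `contents`
        print("Found \(contents.count) files in import folder")
    }
}
```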
Ugh. That’ll be a bug due to Dark Mode sneaking through. Arc is supposed to ignore Dark Mode completely (at least in current builds - I’m working on an update that supports Dark Mode right now). But some views in the app somehow ignore the app-wide setting and try to use Dark Mode, failing miserably in the attempt.
I think I know the answer to this, and it’s a convoluted one. The short version is: I suspect the missing Place files are totally irrelevant Places and aren’t even needed.
But then how is there backup data referencing irrelevant/unnecessary Place files? The failure there is in me not noticing that iOS at some point changed how it names files that aren’t synced locally to the phone.
At some stage earlier on they were named “normal_filename.icloud”. So I could detect whether a file needed downloading by looking for that “.icloud” on the end, and distinguish between entirely missing files and files that just need downloading in the same way. But at some point iOS started naming them “.normal_filename.icloud”. Notice the dot at the start - I didn’t!
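For the curious, the detection really is just string matching on the filename. Something like this sketch (helper names are made up, but the “.icloud” suffix and the sneaky leading dot are the relevant bits):

```swift
import Foundation

// Rough sketch: does this URL point at an iCloud placeholder that still needs downloading?
func isICloudPlaceholder(_ url: URL) -> Bool {
    // Older iOS: "normal_filename.icloud"
    // Newer iOS: ".normal_filename.icloud" (note the leading dot)
    return url.lastPathComponent.hasSuffix(".icloud")
}

// The original filename can be recovered by stripping the placeholder wrapping
func originalFilename(fromPlaceholder url: URL) -> String {
    var name = url.lastPathComponent
    if name.hasPrefix(".") { name.removeFirst() }
    if name.hasSuffix(".icloud") { name.removeLast(".icloud".count) }
    return name
}
```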
So at some time in the past, the backup system went from dutifully downloading existing backup files before updating them, to thinking the file was missing completely, and recreating it as new. That has resulted in masses of duplicate backup files, eg:
2016-W17.json.gz
2016-W17 2.json.gz
2016-W17 3.json.gz
- etc, etc
Now, the restore system doesn’t care about the duplicates. Whenever it imports one of those files it checks each object’s lastSaved, and skips over the import data if it’s older than the data in the database. No harm done.
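In simplified Swift the skip logic boils down to something like this (hypothetical types, not the actual import code):

```swift
import Foundation

// Each imported object carries a lastSaved date; the database copy wins if it's newer
struct ImportedItem {
    let id: String
    let lastSaved: Date
}

func shouldImport(_ item: ImportedItem, existingLastSaved: Date?) -> Bool {
    guard let existing = existingLastSaved else { return true } // nothing in the db yet
    return item.lastSaved > existing // only import if the backup copy is newer
}
```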
But, annoyingly, the checks for dependent files happen before the check for newest lastSaved. So the importer will error out, saying “ugh, missing dependent” before it realises it didn’t even need that out of date import data anyway.
So yeah, I think those missing Place files are ones that are referenced by out of date data, but not referenced by up to date data. So the backup system at some stage probably deleted the Place file, thinking “don’t need that anymore - no one uses it”.
Aside: All these duplicate files are also slowing down the restore/import process massively. On my test device (that I’m still trying to get the restore finished on!) I’m sometimes seeing up to 30 or 40 duplicate files, each of which the managed restore has to try to import, even if it ends up skipping every sample inside the file for being out of date.
Oh, don’t go deleting those duplicates by the way! There’s no way of knowing which will be the newest. Well… I assume the newest is the one with the highest number. But I wouldn’t want to gamble on that.
Heh. Yeah, I am really bad at waffling on endlessly. Though these longwinded explanations do kind of act as rubber duck debugging for me - I’m selfishly using them as a way to think through the problems and make sure I’ve definitely understood them correctly myself.