Timeline status on past days never stabilising

I’ve been using Arc for about a month, and something that I’ve been seeing consistently is apparent thrashing of the status of the timeline when viewing days in the past. Is this normal?

For example, right now if I view the timeline for yesterday, every second or two the banner at the top of the timeline changes: “Updating metadata”, “1 unconfirmed item”, “3 unconfirmed items”, “Processing timeline”, “2 unconfirmed items”. I get a similar constantly-changing status no matter what day I choose.

Surely there must be some point at which the processing and re-processing of past days’ data ends and stabilises?

Hi @sentience!

What you describe is partly an annoying bug that I’ve yet to find, and partly just the UI not being entirely sensible.

For the second case - the UI communicating poorly - what’s happening is that the processing engine is probably updating metadata on items in a different day (most likely today), but the app is still showing that “updating metadata” bar even though nothing in the day you’re looking at is being updated. I need to fix that, because it’s misleading.

The annoying bug case is one where … well, if I knew how to describe it well I’d probably know enough to be able to find it and fix it :joy: But basically yes, sometimes it does appear that old data starts getting reprocessed when seemingly it shouldn’t. And also sometimes it gets reprocessed repeatedly, on a seemingly endless loop every few seconds, while you’re sitting there watching it. That’s the case that worries me the most, because it’s a waste of energy/battery.

If you see the unconfirmed count on an old day start high then gradually nudge down until it disappears (eg 3, 2, 1, gone), and it doesn’t get stuck repeating forever, then that could actually be a good thing (albeit one I’d like to see happen less). In those cases what might be happening is that there are some items in the timeline that weren’t confirmed at the time - and maybe did need confirmation at the time - but now that the processing engine and classifier models are having a fresh look, they’re concluding that they have enough model data to be confident in their choices for those items, so they don’t need confirmation anymore.

It really depends what your life is like. If your life is one of familiar routines with little change, then yes, what needs or doesn’t need confirmation will stabilise and not change over time, with the classifier models becoming very stable and accurate.

But if you have a more complex life (in terms of places, modes of transport, common routes, new routes, etc), then the models will be constantly changing over time based on your new confirmations and corrections, and that will then change how those models perceive old data.

For example, if for a month you walked from home to a particular Starbucks every day, the activity type model for that neighbourhood would learn the walking route and become confident in it, and the place model would learn those Starbucks visits and become confident in them. No more asking for confirmation.

But then the next month you also start cycling to the Starbucks some days, you go there at different times, and you also stop in at a bakery next to the Starbucks before heading home. Now the activity type model for that neighbourhood is aware of both walking and cycling along that route, at different times of day, and there are also multiple competing places very close together, all adding up to reduced confidence in each classification.

Then, with that more complex model data, if you browse back to a day in the previous month - one that needed no confirmation at the time, because the models were simpler and more certain - those unconfirmed items won’t look so certain to the classifiers anymore, and you’ll get asked to confirm them. Basically a month ago it could confidently say “you walked, and you went to Starbucks”, but now, with the new knowledge, looking at that old data it has to say “it looks more like walking than cycling, but the difference in this case isn’t significant enough for me to be highly confident. And it looks like Starbucks more than the bakery, but again, I can’t be as sure as I was at the time”.
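The confidence shift above can be sketched with a toy margin check. To be clear, all the names, numbers, and the rule itself here are invented for illustration - nothing in this sketch comes from Arc’s actual classifier:

```python
def needs_confirmation(scores, margin_threshold=0.2):
    """Toy rule: an item needs user confirmation when the top candidate
    doesn't beat the runner-up by a comfortable margin."""
    ranked = sorted(scores.values(), reverse=True)
    if len(ranked) < 2:
        return False  # only one plausible candidate: auto-confirm
    return (ranked[0] - ranked[1]) < margin_threshold

# Month one: walking is the only plausible type on this route
print(needs_confirmation({"walking": 0.95}))                   # False
# Month two: cycling is now also plausible, so the margin collapses
print(needs_confirmation({"walking": 0.55, "cycling": 0.45}))  # True
```

The same old day’s data produces different confirmation requests purely because the set of plausible candidates has grown, shrinking the winner’s margin.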

A pretty common effect is to go back to old data, then get asked to confirm some stuff that you feel sure you must’ve confirmed at the time. But in reality you didn’t confirm those items at the time - the classifiers/models were confident enough in their auto assignments at the time, so didn’t bother you with asking. But now that the models know more about your life, they need to ask.

There’s a UX angle to this that I think is an open question: Whether the app should bother asking for confirmation for old data, or whether it should just leave it be. “The past is the past”, kind of thing. My personal preference is to have it ask, because I’d rather get it all right and not leave any little mistakes in my old timeline data. But I recognise that a lot of the time it’s annoying and tedious. So yeah, still an open question.

Thanks for that explanation! I can understand the UX issue with the “updating metadata” status being global across days. I agree that would be nice to fix up.

But I think the more problematic issue I’m facing is the annoying/unsolved bug you described. Just sitting at my desk looking at the timeline for yesterday, I see the classification of a car trip I took constantly flickering between different detected movement modes, and the number of “unconfirmed items” going up and down and back up again in the process.

Here’s a screen recording I just took of this issue on my phone (map cropped for privacy). Again, this is what I’m seeing just sitting at my desk with the app open.

Ah that’s likely my least favourite variation of the bug :grimacing: I suspect what it is is the processing engine’s “edge stealing” logic getting stuck in a loop.

Whenever timeline item processing is done, each item in the target period is evaluated for whether its “edges” (ie its first and last samples) are appropriate, or whether they should be moved to the adjacent item. In the context of Visit items, this means looking at whether there are some moving samples at the start or end of the visit that should really be pushed out into the preceding or following trip item, based on whether they’re really “inside” the visit or not.

What can happen is that the calculation says “yeah, this sample should be pushed out” or “the edge sample from the adjacent trip should be pulled in”, but then after it’s done, and the calculation is repeated, the conclusion becomes the opposite. The “edge stealing” changes the nature of the items, potentially changing the result of the decision. Then it’s stuck in an endless loop of shunting the sample back and forth :upside_down_face:
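To make the feedback loop concrete, here’s a toy sketch (invented names and a deliberately flawed scoring rule - this isn’t Arc’s actual code). The heuristic asks whether the edge sample would fit the adjacent item better *with the sample included there*, and because moving the sample shifts both items’ averages, the answer flips on every pass:

```python
def mean(xs):
    return sum(xs) / len(xs)

def edge_steal_pass(visit, trip):
    """One processing pass over two adjacent items (lists of sample speeds).
    Evaluates the visit's trailing edge, then the trip's leading edge.
    Returns True if a sample was moved."""
    # Should the visit's last sample be pushed out into the trip?
    e = visit[-1]
    if len(visit) > 1 and abs(e - mean(trip + [e])) < abs(e - mean(visit[:-1])):
        trip.insert(0, visit.pop())
        return True
    # Should the trip's first sample be pulled into the visit?
    e = trip[0]
    if len(trip) > 1 and abs(e - mean(visit + [e])) < abs(e - mean(trip[1:])):
        visit.append(trip.pop(0))
        return True
    return False

# A borderline sample (speed 5) sitting between a stationary visit (speeds 0)
# and a moving trip (speeds 10) gets shunted back and forth forever:
visit, trip = [0, 0, 5], [10, 10]
for _ in range(4):
    edge_steal_pass(visit, trip)
    print(visit, trip)
# [0, 0] [5, 10, 10]
# [0, 0, 5] [10, 10]
# [0, 0] [5, 10, 10]
# [0, 0, 5] [10, 10]
```

Each move changes the statistics the next decision is based on, so no single decision is wrong in isolation - the loop only shows up across passes.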

The same can happen with edge stealing between two trip items, with the decision based on which item better matches the sample’s activity type, speed, etc.

There’s already “infinite loop breaker” logic in there to stop this happening. But I think the bug is that in some cases it becomes a two or three step infinite loop, so instead of it being one sample getting shunted back and forth endlessly, two or three samples get shunted around, so no two consecutive steps look like a repeating pattern.

Apologies for the longwinded explanation! I’m basically using this as a way to explain out the problem for my own benefit :wink: It often helps me to see potential solutions I hadn’t thought of before.

Though in this case the solution seems obvious: just extend the infinite loop breaking logic to cope with more than one step. No idea why I haven’t done that yet. I’ll hunt out the task for it in the task tracker and see why it’s never got done. I bet there’s a note in there like “actually that’s totally not what’s happening. cool theory, but not true. oh well”. Heh.


Uh, I suppose I should offer an actual useful way to work around the problem!

I think though that I’ve never found one, other than “come back later and it’ll have stopped”. Top grade problem solving and customer support eh :joy:

Thanks for the explanation - I like to understand the internals of problems like these.

FWIW, “come back later and it will have stopped” isn’t working for me either. After recording the video I shared above, I did my best to confirm all unconfirmed items in the day (although it continued to flicker 1-2 unconfirmed items on and off), then closed the app. Today, more than a day later, I opened the app and went to view the same day in the timeline, and found it still flickering between different numbers of unconfirmed items. :frowning:

Hm. I bet it’s the endless loop of edge stealing then.

I had a look at that code yesterday, and it’s designed to cope with multiple samples getting swapped back and forth, but only if it’s the same samples in two consecutive edge stealing runs. Like if 3 samples get shoved one way, then the same 3 samples get shoved the other way, it’ll catch that. But if it’s a sequence like say +3, -2, -1 it won’t catch it, because it’s only comparing the +3 and -2, or the -2 and -1. Sigh

Though that doesn’t sound impossible to fix. I’ll cogitate on it a bit. Can probably just do something like maintain a stack of samples that’ve been shoved around, and not allow any of them to get moved more than once in a single processing loop.
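A minimal sketch of that idea (again, everything here is invented for illustration, not Arc’s actual code): record every sample moved during the current processing loop and refuse to move any of them a second time, which catches multi-step cycles that comparing only consecutive passes can’t see:

```python
def run_edge_stealing(items, evaluate_edge, max_passes=50):
    """evaluate_edge(items, forbidden) performs one edge move and returns
    the moved sample's id, or None when everything is stable."""
    moved_this_loop = set()
    for _ in range(max_passes):
        sample_id = evaluate_edge(items, forbidden=moved_this_loop)
        if sample_id is None:
            return True                 # stable: nothing (allowed) wants to move
        moved_this_loop.add(sample_id)  # each sample may move only once per loop
    return False                        # hit the pass limit anyway

# A toy evaluator that, without the forbidden set, would shunt three samples
# around forever - like the +3/-2/-1 cycle described above:
def cycling_evaluator(items, forbidden):
    for sample_id in ("a", "b", "c", "a", "b", "c"):
        if sample_id not in forbidden:
            return sample_id
    return None

print(run_edge_stealing([], cycling_evaluator))  # True: loop broken after 3 moves
```

The trade-off is that a sample legitimately needing two moves in one loop gets deferred to the next processing run, which seems far preferable to the endless shunting.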

Of course then something else could trigger another processing loop, and it all starts again :grimacing: But hopefully that extra layer of loop breaking would be enough to push it from being a rare edge case to being an extremely rare edge case.

Oh, on the upside: It’ll only happen when you’re looking at that day’s timeline. Timeline processing is typically triggered by the UI. The rest of the time it sits dormant, deferring processing until someone’s looking, to save energy/battery.

Aside: That’s why when you open the app at the end of a busy day you’ll see a whole lot of shuffling around in the timeline for a little bit, as the processing engine picks up from the fairly raw state the recording engine left it in, and starts applying all the various timeline rules.