All segments cycling

All segments of my day are recognized as cycling events, but I was not cycling at all that day. I was just in my car. How can this happen?

Hi @Marc!

There are a few ways that could happen, though they all come down to the same thing: the activity type models don’t have enough information yet about your travel patterns in that area.

Typically this is something you’d see in the first day or two in a new area. The models won’t have enough information yet, so they’ll make some bad guesses, and then ask you to confirm/correct them.

Then once you’ve given that day or two of corrections, the models will be updated overnight, and the next time you take a similar trip in the same area or nearby, the models will have a better idea of how to guess, and will probably get it right.
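
Very roughly, and purely as an illustration (all of these names are invented, not the app’s actual code), the overnight step amounts to folding the day’s confirmed samples back into per-area training data before each area’s model is rebuilt:

```swift
import Foundation

// Illustrative only: confirmed/corrected samples get grouped by a coarse area
// key, so that a nightly rebuild can refresh each area's model.
struct ConfirmedSample {
    let activityType: String
    let latitude: Double
    let longitude: Double
}

func groupForNightlyUpdate(_ samples: [ConfirmedSample]) -> [String: [ConfirmedSample]] {
    Dictionary(grouping: samples) { sample in
        // A crude "neighbourhood" key: coordinates rounded to roughly 1 km cells.
        let lat = (sample.latitude * 100).rounded() / 100
        let lon = (sample.longitude * 100).rounded() / 100
        return "\(lat),\(lon)"
    }
}
```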

To go into a bit more detail: the models are split by geographic scale, into neighbourhood sized models, city/state sized models, and a single global model, with all three sizes used together when classifying the samples inside a trip item (there’s a rough sketch of this after the list below).

  • The neighbourhood sized models have the clearest understanding of the finer details like specific roads, train lines, etc, and how you tend to travel on them (including times of day).
  • The city/state sized models have a more bird’s eye view of things, and can pick up the slack when you travel into a new neighbourhood but are still travelling in a similar way to how you do in more familiar areas.
  • The global model does its best work when you travel far away, perhaps to a new country or state you’ve never been to before. There won’t be any useful data in the neighbourhood or city level models, but the global model can still recognise your general behaviours, for example if you tend to walk more often than cycle, or take taxis but never drive cars. It can also recognise things like airplane flights, based on speed and altitude not being plausible for anything else.
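
To make the tiered idea a bit more concrete, here’s a rough sketch of how three models at different scales could be blended into one guess. The type names, the weighting scheme, and the scoring are all illustrative assumptions, not the app’s actual code.

```swift
import Foundation

// Illustrative only: a hypothetical sample and three score functions standing
// in for the neighbourhood, city/state, and global models.
struct Sample {
    let speedMetresPerSecond: Double
    let altitudeMetres: Double
}

enum ActivityType: String {
    case walking, cycling, car, train, airplane
}

struct ScaledModel {
    let name: String
    let weight: Double
    // Returns a probability per activity type; an empty result means this
    // model has no data for the sample's area (e.g. a brand new country).
    let score: (Sample) -> [ActivityType: Double]
}

// Blend whatever the models have to say, weighting the more local models
// more heavily, then pick the best combined score.
func classify(_ sample: Sample, with models: [ScaledModel]) -> ActivityType? {
    var combined: [ActivityType: Double] = [:]
    for model in models {
        for (type, probability) in model.score(sample) {
            combined[type, default: 0] += probability * model.weight
        }
    }
    return combined.max { $0.value < $1.value }?.key
}
```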

Oh, I should add: it’s important to do the confirmations/corrections that the app asks you to do! It will ask about choices it’s made that it’s unsure about, cases where the model data doesn’t cleanly match up. It then uses your feedback to update the models, so that next time it’ll have a better idea and hopefully won’t need to ask.
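
As a hedged sketch of the “only ask when unsure” part (again with made-up names and a made-up threshold): if the top two scores come out close together, the trip item gets flagged for the user to confirm.

```swift
// Illustrative only: treat a classification as "unsure" when the best and
// second-best scores are within an arbitrary margin of each other.
func needsConfirmation(scores: [String: Double], margin: Double = 0.15) -> Bool {
    let ranked = scores.values.sorted(by: >)
    guard ranked.count >= 2 else { return false }
    return ranked[0] - ranked[1] < margin
}

// "cycling" and "car" score almost the same here, so the app would ask.
print(needsConfirmation(scores: ["cycling": 0.46, "car": 0.41, "walking": 0.13]))  // true
```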