Car journeys are always tracked as bike rides

Arc Timeline tracks every car journey as a bike ride, and I don’t know where or how I can change anything. And when I tried to change an old “bike ride”, the route broke into what felt like 50 pieces. Perhaps someone has the ultimate tip for me on how I can get car journeys recognized as car journeys.

Hi @ner3y!

How long have you been using the app? What you describe is typically something you’d only see within the first few days or weeks of use. Once it’s built up enough data, it’ll know your own patterns well and won’t make those kinds of silly mistakes anymore.

Make sure you’re confirming/correcting any items it marks as “uncertain”. Those are the ones that the classifier is having the most difficulty with, so confirming/correcting those is the best way to train it to understand your data better.

Hi @Matt,
I have the same problem as @ner3y, although I have been using Arc since 2017. I try to correct every incorrectly recognized item on the same day or evening. Sometimes there is a backlog of a few days (rarely weeks), which I then work through. But since 2020, my calendar has had almost no gray days.

I use the car almost every day, mainly for work in the city (Berlin, Germany). I commute by bike and train and use both in my spare time - sometimes on similar or parallel routes to the car.

So I think something is wrong with the app/model (I only use Arc Timeline and Arc Recorder — no Arc Mini anymore).

That’ll be the problem in your case then. It’s not about there being something wrong with the models, it’s just that they don’t have enough information to distinguish between the two, and there’s not really anything more we can tell them.

To us, a car seems very different from a bicycle. But to the accelerometer and other metrics they look almost identical, especially in built-up city areas. Cars don’t actually go fast in cities; they spend almost all their time either stationary or going at about the same speed as bicycles.

So when you put together accelerometer, speed, locations… there’s no way for the model to tell the difference.

The models do, however, have other features that can help. Time of day and step/pedalling cadence are the remaining hopefuls. But if you tend to cycle at similar times of day to when you drive the same routes, that one’s gone. And if you mount your phone on the bike instead of having it in your pocket, then pedalling cadence is gone too.
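
For reference, the cadence feature corresponds to the kind of signal Core Motion exposes through CMPedometer, which reports cadence in steps per second and also picks up pedalling when the phone is in a pocket. A minimal sketch of reading it (illustrative only, not Arc’s actual recording code):

    import CoreMotion

    let pedometer = CMPedometer()

    // Cadence is only available on devices with the motion coprocessor
    if CMPedometer.isCadenceAvailable() {
        pedometer.startUpdates(from: Date()) { data, error in
            guard let data = data, error == nil else { return }
            // currentCadence is in steps (or pedal strokes) per second
            if let cadence = data.currentCadence {
                print("cadence: \(cadence.doubleValue) Hz")
            }
        }
    }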

There aren’t really any sensors left on the phone that we could use to distinguish further. Possibly detecting nearby Bluetooth devices, to identify the car, though I’m not sure that’s technically feasible within iOS’s limits.
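
If that Bluetooth idea were ever explored, it would amount to something like scanning for a known peripheral (e.g. the car’s head unit) with Core Bluetooth. A rough sketch of the idea, with a hypothetical device name, and not something Arc actually does; iOS’s background scanning restrictions are exactly where it would likely fall down:

    import CoreBluetooth

    // Illustrative only: look for a known car head unit while scanning is allowed
    final class CarDetector: NSObject, CBCentralManagerDelegate {
        private var central: CBCentralManager!
        private let carPeripheralName = "My Car Head Unit"  // hypothetical

        override init() {
            super.init()
            central = CBCentralManager(delegate: self, queue: nil)
        }

        func centralManagerDidUpdateState(_ central: CBCentralManager) {
            guard central.state == .poweredOn else { return }
            // Background scanning requires specific service UUIDs, which many
            // car systems don't advertise, hence the feasibility doubts above
            central.scanForPeripherals(withServices: nil, options: nil)
        }

        func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                            advertisementData: [String: Any], rssi RSSI: NSNumber) {
            if peripheral.name == carPeripheralName {
                // Seeing this nearby would be a strong hint the trip is by car
                print("Known car detected nearby")
            }
        }
    }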

If the phones had noise or temperature sensors then that’d possibly do it. The Apple Watch’s ambient light sensor could come in handy, but it’d be delayed data that’d have to be added to the recorded samples during a later sync.

Actually heart rate data could help. Though that again would be a later sync, so immediate classification wouldn’t have it available. If I were to add anything more it’d probably be heart rate data, when available. It would still mean the initial classification would miss the distinction, but going into the edit views later, after the heart rate data had synced, could potentially trigger better classifier results.
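
For the curious, that heart rate data would come from HealthKit, and it only reaches the phone once the Apple Watch has synced, which is why it could only help after the fact. A minimal sketch of that kind of delayed fetch, assuming read authorization for heart rate has already been granted (not Arc’s actual code):

    import HealthKit

    let healthStore = HKHealthStore()
    let heartRateType = HKQuantityType.quantityType(forIdentifier: .heartRate)!

    // Fetch heart rate samples covering a timeline item's date range, so they
    // can be attached to the already-recorded samples afterwards
    func fetchHeartRates(from start: Date, to end: Date,
                         completion: @escaping ([HKQuantitySample]) -> Void) {
        let predicate = HKQuery.predicateForSamples(withStart: start, end: end, options: [])
        let sort = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: true)
        let query = HKSampleQuery(sampleType: heartRateType, predicate: predicate,
                                  limit: HKObjectQueryNoLimit, sortDescriptors: [sort]) { _, samples, _ in
            completion(samples as? [HKQuantitySample] ?? [])
        }
        healthStore.execute(query)
    }

    // Reading a value in beats per minute:
    // let bpm = sample.quantity.doubleValue(for: HKUnit.count().unitDivided(by: .minute()))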

Bit of a tangent, but this actually has parallels with a lot of the problems people tend to have with chatbots. There’s this sense of “this is obvious - why aren’t you getting it”, but we’re often failing to notice that the robots/models are working on much less contextual information than we are.

We take for granted our real time, continuous vision, our ambient senses, all sorts of things that are feeding in continuously for us, making various things blatant and obvious, but that the models/robots don’t have any awareness of (yet).

For reference, here’s the full list of model features the activity type models use:

    var stepHz: Double?              // step / pedalling cadence
    var xyAcceleration: Double?      // accelerometer signal in the horizontal plane
    var zAcceleration: Double?       // accelerometer signal in the vertical axis
    var movingState: Int             // stationary / moving
    var verticalAccuracy: Double?    // location data accuracy (vertical)
    var horizontalAccuracy: Double?  // location data accuracy (horizontal)
    var speed: Double?
    var course: Double?              // direction of travel
    var latitude: Double?
    var longitude: Double?
    var altitude: Double?
    var timeOfDay: Double
    var sinceVisitStart: Double
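
Most of these map straight onto CLLocation and Core Motion values; the last two are derived from timestamps, roughly like this (an illustrative sketch only, and the exact semantics of sinceVisitStart are assumed here, not confirmed):

    import Foundation

    // Seconds since local midnight, so the model can pick up time-of-day patterns
    func timeOfDay(for timestamp: Date, calendar: Calendar = .current) -> Double {
        timestamp.timeIntervalSince(calendar.startOfDay(for: timestamp))
    }

    // Seconds since the enclosing visit began (assumed meaning of sinceVisitStart)
    func sinceVisitStart(for timestamp: Date, visitStartedAt: Date) -> Double {
        timestamp.timeIntervalSince(visitStartedAt)
    }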

“course” can be useful for cases where you travel the same route but by different modes in each direction, like if you cycle to work but take a bus home. Though “time of day” helps a lot with those cases too.

“altitude” can help marginally when there’s train travel along a similar route to car trips, if the train line is elevated or underground. Though in practice the vertical accuracy of the location data often isn’t good enough.

But then in those cases sometimes “horizontal/vertical accuracy” can pick up the slack, noticing differences in location data accuracy between trains and cars / cycling, etc. Still pretty hit and miss though.

Basically it’s a hard problem, and the models are already making use of almost all of the available information. The remaining information would be delayed data like heart rate. Which is still hopefully a worthwhile improvement to add someday.

Just wanted to update that I’ve now added heart rate data to the activity type models in Arc Editor. And seeing interesting results already!

One emergent behaviour that I didn’t expect is that it can now sometimes classify walking inside long visits, i.e. when recording is in sleep mode and thus no accelerometer or pedometer data is available. Previously it would all be detected as stationary, because all the classifier had to go on was location and nothing else. But now, with heart rate data added to the samples (during a delayed sync), the classifier can in some cases recognise that it might be walking.

It managed to pick up that I went downstairs to my hotel’s buffet breakfast this morning, classifying some of that as walking, even though it’s been in sleep mode continuously since I arrived last night. Cool!

I suspect it’ll also help considerably with the car vs cycling issue, when travelling on the same routes. Though we’ll have to wait and see. But yeah, promising results so far!