Editing & Symbolization


I just started using Arc. My situation is similar to the market-stall scenario mentioned here in the forum. I’m at an amusement park every weekend - Walt Disney World. So, I generate sequential, short dwell locations in attraction queues and movement on rides while walking between attractions. Additionally, I transit between parks and resorts via car, truck, tram, shuttle, bus, gondola (cable car), steamboat, ferry, and monorail (train) punctuated by many types of transit stations.

At the end of the day, I can have 50+ unconfirmed items. Currently, since I only started using Arc last week, I'm confirming items after Arc's recording and processing engines have done their thing. This takes a long time. Are there any pros/cons to confirming items in real time while Arc is processing?


Due to the many modes of travel, is there a way to better randomize (or assign?) the colors across transit types? It looks like both monorail train (below left) and gondola/cable car (below right) are gold in color.


Hi @Brian! Looks like you’ve got some fun data to work with :grinning_face_with_smiling_eyes:

These days, nah there’s not really any difference - just personal preference. Personally I either wait until processing has finished or do my own edits inline (due to impatience) depending on whether I’m working on the app at the time.

If I’m working on the app at the time, then I’m probably wanting to keep an eye on things like how much Arc gets right on its own, how much it fails to get high enough confidence with, etc. But when I’m just playing with the app on my own time I don’t care about that stuff, so I rush ahead and do editing even if the processing engine is lagging behind.

Potentially the two of you (ie yourself and the processing engine) both fiddling with the data at the same time can cause some extra mess, if both of you happen to fiddle with the exact same bits - you’ll both end up seeing results that you don’t expect. That can sometimes explain why when you’re doing edits you see changes that make no sense - you and the processing engine both went at the same data, and tripped each other up. But it’s all harmless enough.

Luckily you should see it getting faster and faster over time, and more accurate and detailed. Especially if you’re recording similar kinds of things each day - Arc will learn the monorail lines, your walking routes, etc, and get much smarter and faster about cleaning that stuff up.

Sorry about that! When adding a bunch of new activity types in the last round of additions I ran out of spare colours, and made some particularly uninspired choices about which colours to reuse for which types. I think probably we need at least one extra colour, at least for the gondola / cable car / chair lift types, and maybe ideally 2-3 more colours. Having those reuse the same colour as train and tram is fairly nonsensical. I'll add that to my todos!

After reading this forum and the public roadmap, this is how I thought Arc would work. Thanks for clarifying, Matt.

Kudos on this app, by the way. I used a combination of Moves and Gyroscope from 2015 to 2020. They could never capture the granular location detail I needed for my Disney World travels. I’m looking forward to seeing what Arc can do in this fun yet challenging spatial environment.


Thanks Brian! That’s basically the reason why I started Arc in the first place :grinning_face_with_smiling_eyes: I loved using Moves to keep track of my nomadic lifestyle, but it wasn’t trying to capture the full detail, and I wanted something that did. So Arc’s founding goal was “Moves app, but with as much detail as possible given current smartphone technology”.

You’ll definitely butt up against the technological limits eventually. Though the results that are possible within those limits are already quite fun.

Location data accuracy on phones has its limits, and there are also certain tradeoffs that have to be made for battery life reasons. Aside: The main one that still bothers me is Arc having to go into "sleep mode" while stationary (to get energy use down to near zero when you're not going anywhere), then the recording engine missing the first bits of detail after you start moving again, due to Arc waking up seconds too late and not having the accelerometer and other sensors up and running fast enough.

But with phones getting increasingly powerful, and battery life getting better and better, I'm often testing those limits, and seeing where I can push things to get more detail and accuracy.

It does. My work experience covers a lot of these areas.

Given the precision of Arc and nature of location data, I’d suggest maybe even posting some links to required reading for new users. With Arc, understanding things like UERE, GDOP, multipath, etc. may help users better differentiate between Arc app, Apple iPhone, and GPS location data behavior.

Well…Apple :roll_eyes: What they’re up to sometimes has to be discovered first. They’re not exactly forthcoming with everything they’re doing with a user’s location data when they’re doing it.

Understood. I saw a few mentions of it in the forum here. I came across the “sleep mode” issue already, and I’ve observed the missed recording detail you mentioned. So far, this has only occurred when abruptly taking my iPhone from an hours-long 100% stationary position directly outside for a walk. The first part of the walk isn’t immediately recorded. But, I’ve seen the processing engine catch up and correct the missing information. It’s the same walking path every day. That seems reasonable given the current battery-saving measures.

Edit: Arc didn't record the first seven minutes or 0.3 miles of my walk yesterday. I've seen Arc "repair" missing data like this many hours later before, but this morning that journey leg is still missing. Stationary > Walking is the issue, like you said. I wonder if there's a quick fix for that. I don't ever swipe-kill the app, but that may be the only way to wake it up before a walk.

Does Arc go into “sleep mode” if the phone is connected to a power source?

Unfortunately the technical details that users most often end up needing to know about are much more boring :disappointed: Things like the fact that iOS / iPhones make no promises about letting apps keep running in the background, so random terminations (and thus data gaps) are always possible, and sometimes likely. I spend more time dealing with that kind of technical (and support) headache than anything else!

Arc lives in a grey zone on iPhones that Apple doesn’t really want apps to be living in. There’s allowances for long running location data recording, but they’re targeted at low resolution data (ie deferred updates, triggered once location has changed X metres or Y seconds), which don’t fit with Arc’s goals at all.

But those problems are gradually getting better over the years (with notable massive setbacks here and there, when new iOS releases contain nasty regressions), as iPhones get more powerful and battery life improves.

I barely get time to think about those things myself :smirk: They’re more in the zone of stuff that’s interesting to me but that would bore someone to death if I brought them up at a party :joy:

I think the most technical stuff that end users tend to be passingly interested in is the ML aspects. And that’s an area that I have started preparing some brief “onboarding articles” for, in the app. But I haven’t had the time to finish up the articles, so that extra optional onboarding reading is all turned off in the App Store release builds. Something for me to finish up some rainy day.

Yes. Though perhaps the less said about that the better. I’ve already lost too much sanity dealing with Apple’s secrecy on matters that I really need them to be open about, in order to do my job properly. Sigh.

Luckily Arc will get better at those wake-from-sleep starts over time, as long as it's things that fit into detectable patterns. Arc adjusts its sleep cycle durations (between about 12 and 60 seconds) depending on the current predicted leaving likelihood (or falls back to 30 second cycles if no sensible prediction is possible). So if it's close to a time (or visit duration) for a place that you regularly visit, the recording engine will be waking more frequently and be more likely to catch the start of the path when leaving.
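As a rough illustration of that adaptive sleep cycle (a Python sketch, not Arc's actual Swift code; the function name and linear mapping are assumptions, only the 12/30/60 second values come from the description above):

```python
# Illustrative sketch: choose a sleep-cycle duration from a predicted
# likelihood (0..1) of leaving the current place soon.
MIN_SLEEP_SECONDS = 12       # wake often when departure seems imminent
MAX_SLEEP_SECONDS = 60       # wake rarely when definitely staying put
FALLBACK_SLEEP_SECONDS = 30  # used when no sensible prediction exists

def sleep_cycle_duration(leaving_probability):
    """Map a leaving likelihood to a sleep-cycle length in seconds.

    High likelihood -> short cycles (more likely to catch the departure);
    low likelihood -> long cycles (save energy). A simple linear
    interpolation is assumed here for illustration.
    """
    if leaving_probability is None:
        return FALLBACK_SLEEP_SECONDS
    span = MAX_SLEEP_SECONDS - MIN_SLEEP_SECONDS
    return MAX_SLEEP_SECONDS - leaving_probability * span
```

So a 100% leaving likelihood would give 12 second cycles, 0% would give 60 second cycles, and no prediction at all falls back to 30.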

Though there's almost always going to be at least one or two samples recorded right at the start that lack useful info like accelerometer data, so you'll often see the very first segment of each path (that starts from a visit) being classified as stationary type, because the ML classifier just didn't have enough info to go on. That's a personal bugbear of mine.

Short answer for that: That sounds like an actual data gap - ie the app wasn’t even alive (likely terminated by iOS). So it only got back alive and recording again once iOS’s Significant Location Change service (or CLVisit monitoring service) triggered, and restarted the app in the background.

About the “repair” process you’ll see sometimes: When the recording engine is live recording, the processing engine only does the simplest of passes over the data, splitting the incoming samples up into TimelineItems (Visits and Paths) based on the real time moving state detection within the samples. So the initial almost-unprocessed timeline items tend to be more back and forth between Path and Visit. Then once the app is in the foreground (or sleep mode starts) a more thorough processing pass is done, which steals samples back and forth between Visit-Path edges, based on the centre of gravity of the visits. So that’s where you’ll see paths getting backdated so that they connect back to the previous visit.
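The "edge stealing" step can be sketched roughly like this (Python for illustration only; the real logic lives in Arc's processing engine and is certainly more involved - the function names and the simple centre-plus-radius rule here are assumptions):

```python
import math

def distance(a, b):
    # Flat-plane approximation; fine at the scale of tens of metres
    return math.hypot(a[0] - b[0], a[1] - b[1])

def steal_edges(visit_samples, path_samples, visit_radius):
    """Reassign samples at a Visit -> Path edge based on the visit's
    centre of gravity: leading path samples that fall inside the visit's
    radius get stolen back into the visit, and trailing visit samples
    that have drifted outside the radius get given to the path."""
    cx = sum(s[0] for s in visit_samples) / len(visit_samples)
    cy = sum(s[1] for s in visit_samples) / len(visit_samples)
    centre = (cx, cy)
    # Steal leading path samples that are still inside the visit
    while path_samples and distance(path_samples[0], centre) <= visit_radius:
        visit_samples.append(path_samples.pop(0))
    # Give away trailing visit samples that are outside the visit
    while len(visit_samples) > 1 and distance(visit_samples[-1], centre) > visit_radius:
        path_samples.insert(0, visit_samples.pop())
    return visit_samples, path_samples
```

This is also why paths appear to get "backdated": samples that the quick first pass left in the visit get handed to the path, so the path's start time moves earlier to connect back to the previous visit.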

Though in the situation of your 7 minutes of missing data, there’d be no samples to “edge steal” between Visit and Path, so it’s just plain lost :disappointed: If iOS kept a cache of recent location data for apps to access after the fact (like they do for some Core Motion data), that’d allow for a way around that. But the privacy issues that would raise would be horrific, so I don’t expect Apple to ever do that (nor would I want them to - privacy really needs to come first when it comes to location data).

Right, but you shouldn’t be getting blame for “jitter” and then having to spend time explaining how GPS works. Maybe most users understand it already. Who knows?

Arc learning tutorials would be useful, yes. I'm even figuring out how best to split segments right now. Then again, my spatial behavior at Disney World is bananas.

I had the app open before I left for the walk - open the entire time. I was even moving between screens. It just didn’t recognize the path movement until well into the walk. :man_shrugging:

Would swipe-terminating Arc before a walk fix this issue? I don’t normally swipe Arc closed for reasons you’ve stated here in the forum.

Hmm. That sounds like one of the rare cases of the phone itself failing to provide updated location data.

In the past when that happened, it would happen to all apps at the same time, and you were probably better off restarting the whole phone to snap it out of it. But in more recent iOS versions it’s become a more mysterious glitch, happening to only one app at a time (I run three different LocoKit based apps on my test phones - Arc v3, Arc v4, and Arc Mini, so I get to see the exact same recording engine getting fed different data).

If it persists and doesn’t quickly self resolve, then restarting the phone will always fix it, but that’s a bit drastic. Unfortunately restarting the affected app doesn’t always fix it, but if it’s got to the point where it’s obvious the app isn’t getting given updated location data, I’d still try swiping the app closed first, before restarting the whole phone.

But yeah, in general swiping Arc closed is a bad idea, and will just worsen the data you get, due to causing a data gap and giving the processing engine more mess to clean up, as well as wasting energy on the app having to go through its full startup sequence again. (A common cause of poor battery life is people swiping apps closed all the time, and Arc especially hates it, because it causes an unavoidable data gap).

Arc’s LocoKit recording engine (on GitHub) doesn’t ever get stuck in sleep mode. As long as the app is alive, LocoKit will always be recording new samples every 2-60 seconds (2-10 seconds while actively recording, and 12-60 seconds while in sleep mode), so all incoming location data will be captured in those samples. A gap of no samples at all can only happen when either the app has been terminated/crashed or the phone isn’t providing updated location data. Aside: a valid / not-a-glitch case for the phone not providing any updated location data is when on underground trains - those are a nightmare to record cleanly, and still a weak spot for Arc.
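Given that cadence, a data gap is detectable simply as two consecutive samples spaced further apart than the longest legitimate sleep cycle. A minimal sketch (Python for illustration; the function name is hypothetical, the 60 second bound comes from the cadence described above):

```python
def find_data_gaps(timestamps, max_interval=60):
    """Return (start, end) timestamp pairs where consecutive samples
    are further apart than the longest legitimate sleep cycle (60 s),
    implying the app was terminated or the phone stopped providing
    location data during that window."""
    gaps = []
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > max_interval:
            gaps.append((prev, curr))
    return gaps
```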

Aside: There was a bug in some iOS versions that caused the CLLocationManager to just go sour and die, providing no new location data even though other CLLocationManagers (even in the same app) would be still getting updates. So LocoKit has a safety net for that - if it’s waiting for location data and nothing arrives in 30 seconds it’ll discard the location manager and make a new one. That trick successfully got around that iOS bug well enough that I’m not even sure if that iOS bug still exists. And if that workaround doesn’t succeed, then it means iOS has glitched out more severely, and it’s at the point where possibly even restarting the app won’t fix it, and rebooting the phone might be necessary. It’s all just guesswork though as to what’s really happening and what will help to fix it - it’s not as if Apple are going to document these intermittent bugs anywhere.
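That safety net amounts to a watchdog: if nothing has arrived within the timeout, throw the manager away and build a new one. A sketch of the shape of it (Python, hypothetical class and method names; not LocoKit's actual API):

```python
import time

class LocationWatchdog:
    """If no location update arrives within `timeout` seconds, discard
    the location manager and create a fresh one, as a workaround for a
    manager that has silently gone sour."""

    def __init__(self, make_manager, timeout=30, clock=time.monotonic):
        self.make_manager = make_manager  # factory producing a new manager
        self.timeout = timeout
        self.clock = clock
        self.manager = make_manager()
        self.last_update = clock()
        self.restarts = 0

    def on_location(self, location):
        # A location arrived, so the current manager is healthy
        self.last_update = self.clock()

    def tick(self):
        # Call periodically; replaces a manager that has gone silent
        if self.clock() - self.last_update > self.timeout:
            self.manager = self.make_manager()  # discard the sour one
            self.last_update = self.clock()
            self.restarts += 1
```

If the replacement manager also stays silent, that's the signal the glitch is deeper than the app, which matches the "restart the phone" escalation described above.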

A couple of tips for segment splitting:

  • I find it super useful to temporarily use an obviously nonsense activity type such as scooter, when doing fiddly splits that need to be done in multiple steps. Scooter is an obvious red, so it's easy to see which bits you're still fiddling with. So for example if I'm being obsessive and want to split out a short walk and stop that's inside a larger building, I might first split off all the likely samples into a scooter segment, then split that scooter segment up again into walking/scooter to get the first walking bit out, then stationary/walking to get the rest out. It makes it much easier to see what you're working with, and also avoids the segments merging back together before you're ready (segments are just contiguous groups of samples of the same activity type, so if there's for example a walking segment directly after another walking segment, it'll merge those two into one, potentially messing up some fiddly stuff that you weren't finished with yet).

  • Segments within the current visit (ie where you are now) are liable to be rebuilt at any moment, due to new samples being recorded and added to the timeline item. So it’s sometimes quite frustrating trying to do fiddly segment splitting on the current visit, due to the segment you’re working with getting changed or even disappearing, then the view kicking you out due to the change. If the recording engine is still alive and recording new samples every 2-10 seconds, it’s a pain in the arse trying to carefully edit data that doesn’t stay still. So I tend to not bother doing anything in the current visit. The exception being when the recording engine is in sleep mode and it knows you’re definitely not going anywhere, so it’s doing 60 second sleep cycles. Then it’s calm enough to get some things done.
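The segment-merging behaviour behind the scooter trick can be sketched in a few lines (Python for illustration; the function name is hypothetical, but the rule - adjacent same-type samples collapse into one segment - is as described above):

```python
def build_segments(sample_types):
    """Group consecutive samples into (activity_type, count) segments.
    Adjacent same-type runs collapse into a single segment, which is why
    a temporary nonsense type like 'scooter' keeps a work-in-progress
    split from merging back together."""
    segments = []
    for sample_type in sample_types:
        if segments and segments[-1][0] == sample_type:
            segments[-1] = (sample_type, segments[-1][1] + 1)
        else:
            segments.append((sample_type, 1))
    return segments
```

With the scooter placeholder in the middle, the two walking runs stay separate; replace it with walking and they'd merge into one segment.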


Round two of Arc’s theme park training at Disney World this past weekend. The “scooter edit” works well. Thanks for that tip, Matt. :+1:

Just a little feedback as I learn Arc.


One challenge is balancing near-real-time edits with editing at the end of the day. As you said, editing while Arc is processing can produce unwanted results. But, given all the activity I'm logging at the parks, waiting until the end of the day to do all the edits requires about an hour. I'm manually logging all dwell times and cross-referencing them with Arc (i.e., editing by timestamp), but, in some cases, I'm finding it more expedient to edit visually (i.e., spatially) using the "scooter edit".

Overall, the editing is fairly easy. As easy as it can be while Arc is learning, at least. It gets a little hairy when I edit too fast. Arc needs time to replicate the changes no matter when you edit. So, I’m doing a small batch of edits then giving Arc some time to catch up before proceeding. I noticed the changes replicate if I swipe/kill Arc but I don’t want to get into the habit of doing that.

When adding visits, it can be tricky when I’m dwelling for short periods at discrete locations in close proximity to each other (see below). I visited many animal habitat locations on Discovery Island and the Oasis (south of it) in Disney’s Animal Kingdom. They are the most challenging to edit. However, no app can do it except for Arc (not counting GPS Tracks but that’s a different type of app). Now that I have them entered into Arc, I suppose it’s just a matter of choosing the right one that corresponds to any future visits and avoiding multiple place names for the same location.

Common Path Data

What is the number of visits for a path to generate common path data and a place to generate place statistics? I visited the Harambe Wildlife Reserve twice this weekend (Sat and Sun) by way of the Kilimanjaro Safaris (labeled as “songthaew” just to differentiate it from “car” even though the actual safari truck isn’t one). No common path data yet, but they didn’t unload the safari vehicle at the usual location the day before. Now, I’m wondering (1) how much of the path has to be the same to generate stats and (2) if the path will need named visit locations book-ending it.

Yeah, unfortunately there's an unavoidable time cost to some of this stuff. The catch with Arc is that it lets you get near complete detail and accuracy, down to the nearest second and nearest metre in a lot of cases, but the automatic processing and ML classifying can't achieve the finest details at that level, especially for short visits and paths which fall under the merge rules' "keeper" thresholds (eg minimum 2 minutes for a visit, and minimum 20 metres + 20 seconds for a path).

Aside: You can see these thresholds at the top of the Visit and Path source files:
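As a rough illustration only (the actual source is Swift, and these Python constant and function names are hypothetical; the values are the ones mentioned above), the keeper thresholds amount to:

```python
# Hypothetical rendering of the "keeper" thresholds: a visit must last
# at least 2 minutes; a path must cover at least 20 metres AND last at
# least 20 seconds, or the merge rules are liable to absorb it.
VISIT_MIN_DURATION = 120   # seconds
PATH_MIN_DISTANCE = 20     # metres
PATH_MIN_DURATION = 20     # seconds

def is_keeper_visit(duration_seconds):
    return duration_seconds >= VISIT_MIN_DURATION

def is_keeper_path(distance_metres, duration_seconds):
    return (distance_metres >= PATH_MIN_DISTANCE
            and duration_seconds >= PATH_MIN_DURATION)
```

Anything under these thresholds (a 90 second queue dwell, a 15 metre stroll) is exactly the kind of item the automatic processing tends to merge away, which is why the finest detail needs manual edits.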

It will get faster over time, with the activity types classifier making much less noisy / random segments, and the places classifier getting smarter too (although the place ML learns exceptionally fast, so you’ll probably already be seeing the near full benefits of that). But there’s always some time cost, if your goal is to get down to the grittiest details.

I do the same. It’s easier to eyeball the segments, then double check the timestamps afterwards.

This is often due to the UI intentionally not immediately updating to match the underlying data changes. The UI in a bunch of places defers its updates to avoid wasting CPU/energy with constant repeated UI rebuilds when the underlying data is potentially getting thrown around in all sorts of ways. But the side effect of that is that sometimes the UI is painfully behind the data, and misleads us into making edits that no longer make sense.

That’s something that I’d hoped a move to SwiftUI would alleviate, and is part of the goals for the Arc Mini project (an open source rebuild of Arc’s core UI in pure SwiftUI). But it turned out to be a bit premature, trying to build something in pure SwiftUI that sits on top of ever shifting data. SwiftUI is still a bit too young and there’s too many nasty traps, so I haven’t been able to get it to a shippable state yet.

Yeah a tricky part there is the minimum visit radius that's enforced (I think it's maybe 10 metres), which then causes adjacent paths to get merged in if the next visit is too close. For example Visit → Path → Visit, where the distance between the visits is only ~20 metres: the minimum 10 metre visit radius leaves no space for the in-between Path to exist, so it gets merged into one or both of the visits. Mucky, but a trade-off that had to be made at the time. Perhaps that minimum radius could be reassessed now though, given that the location data from iPhones has come a long way since I set that threshold (probably ~4 years ago). When I compare data from back then to now, the quality difference is night and day. Current gen iPhones (and iOS versions) are dramatically better at providing clean data.
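The geometry of that constraint is simple to sketch (Python for illustration; the function name and the value of the radius are assumptions based on the "maybe 10 metres" figure above):

```python
import math

MIN_VISIT_RADIUS = 10.0  # metres; approximate enforced minimum

def path_can_exist_between(centre_a, centre_b, min_radius=MIN_VISIT_RADIUS):
    """A path between two visits only has room to exist if the gap
    between the visit centres exceeds both enforced minimum radii;
    otherwise the path's samples fall inside one visit or the other
    and get merged away."""
    gap = math.hypot(centre_b[0] - centre_a[0], centre_b[1] - centre_a[1])
    return gap > 2 * min_radius
```

So two animal-habitat stops ~20 metres apart leave zero room for the walk between them, which matches the merging behaviour described above.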

Ugh. This feature is painfully neglected. I built it in a weekend, then realised that to get it right would take quite a bit more fuss. So I set it aside and decided to come back to it when I had more time (which is something that never happens).

It only requires two matching trips to generate stats, but the catch is that it only looks for exact PlaceA → Path → PlaceB matches. It doesn't know how to detect common paths that contain more than one path item (eg PlaceA → walking → bicycle → PlaceB). Dealing with the multiple-paths-between-visits cases would be much more database query intensive, so I'll have to do it carefully (and probably persist results to db), whereas the super basic PlaceA → Path → PlaceB query is trivially easy and fast.
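The matching rule can be sketched like so (Python, hypothetical names and simplified data shapes; the real version is a database query over timeline items, but the pattern - exact Place → single Path → Place triples, needing at least two occurrences - is as described above):

```python
from collections import Counter

def common_paths(timeline, min_trips=2):
    """Find PlaceA -> Path -> PlaceB trips occurring at least `min_trips`
    times. `timeline` is an ordered list of ("visit", place_name) and
    ("path", activity_type) items. Multi-leg trips like
    PlaceA -> walking -> bicycle -> PlaceB are deliberately NOT matched,
    mirroring the limitation described above."""
    trips = Counter()
    for a, b, c in zip(timeline, timeline[1:], timeline[2:]):
        if a[0] == "visit" and b[0] == "path" and c[0] == "visit":
            trips[(a[1], b[1], c[1])] += 1
    return [trip for trip, count in trips.items() if count >= min_trips]
```

That's also why your Saturday and Sunday safari trips wouldn't match: if either end of the trip differs (like the unusual unload location), the triple isn't an exact repeat.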

Beyond that, if it’s not showing up for a specific exact Place → Path → Place trip that you know it should, it’s probably a UI caching issue. The CommonPath objects aren’t persisted to the database and are just rebuilt on demand, as requested by the UI. If it’s already in memory it’ll appear in the item details view, but if not, then it might not appear until the second time you go into that view.

That feature doesn’t really meet the MVP threshold, in my opinion. Something I’d love to get back to eventually and really flesh out and do properly.
