The Arc Reader build is 875. This version may include Stationary as a heatmap option (I’m not sure if I pushed it to GitHub yet). That particular feature still has issues, but the other activities should still be OK!
Re: Python PhotoKit wrapper to access my Photos library on Mac.
I have a few other projects that run local servers, but for Arc Reader it’s a setup hassle for users who just want to click the GitHub link. Fortunately, I can still access the photo library and make multiple selections via the normal Apple Open dialog by scrolling down to the media section.
I’m about to add a heatmap feature to my photo-locations program that I’ve been using for mapping koalas. Unfortunately, a PhotoKit wrapper would not help with that one, because I always take several photos of the same koala, and that would mess up a frequency heatmap.
I also added the heatmap function to the photo-mapping program I use for tracking koalas on Mt Gravatt in Brisbane. Unfortunately, Arc Reader doesn’t support 3D mode.
I’m developing an LLM interface on the Analysis Page. I will push it up to GitHub later tonight, hopefully. At the moment you will need a Claude API key to use it.
I’m currently looking at optimisations to reduce the tokens used, but it depends on the type of query. I also included the option to use Claude Haiku 4.5 since most queries shouldn’t need a lot of ‘intelligence’. Sonnet 4.6 is overkill. BTW, using Haiku reduced the cost for the same query down to $0.0058; much more reasonable!
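To make the per-query cost comparison concrete, here’s a minimal sketch of how a cost like that $0.0058 figure can be computed from token usage. The per-million-token prices below are placeholder assumptions for illustration, not Anthropic’s actual pricing; check the current pricing page before relying on them.

```python
# Rough per-query cost estimate from token counts.
# PRICES BELOW ARE HYPOTHETICAL EXAMPLE FIGURES, not real pricing.
PRICING_USD_PER_MTOK = {
    # model: (input price, output price) per million tokens
    "cheap-model": (1.00, 5.00),
    "big-model": (3.00, 15.00),
}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one API call."""
    in_price, out_price = PRICING_USD_PER_MTOK[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# e.g. a smallish query on the cheaper model
cost = query_cost("cheap-model", 3000, 500)
```

With the Anthropic Python SDK, the real token counts come back on the response as `usage.input_tokens` and `usage.output_tokens`, so the same arithmetic can be done per call.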
I’m now adding a simple map interface so that Claude can show results on the map, e.g. “Show me on the map all the locations I visited this week. Exclude work and home.” I’m not sure what that will do to the token count, but I guess I’ll soon find out!
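One way to wire that up is Anthropic’s tool-use mechanism: the model returns structured arguments and the app does the plotting. The tool name and fields below are hypothetical, just to show the `input_schema` shape the Messages API expects.

```python
# A hypothetical tool definition (Anthropic tool-use JSON Schema shape)
# letting the model return place names for the app to plot. The name
# and fields are illustrative, not the actual Arc Reader code.
show_on_map_tool = {
    "name": "show_on_map",
    "description": "Display a list of visited places on the app's map.",
    "input_schema": {
        "type": "object",
        "properties": {
            "place_names": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Place names only; the app resolves GPS locally.",
            },
            "exclude": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Categories to omit, e.g. 'work' or 'home'.",
            },
        },
        "required": ["place_names"],
    },
}
```

Passed in the `tools` list of a `messages.create(...)` call, this lets the model answer “show me on the map…” by emitting a `tool_use` block rather than prose.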
I’m really excited about this feature!
PS: One potential issue is that the function results are uploaded to Anthropic, so a user should not ask some questions, e.g. “where do I live?”; that would send the exact GPS location to Claude. I’ll need to add a warning.
This is exactly what I want to add to Arc Editor!! I trialled it in a new side project the other week, and it worked out fantastically well. So it’s high on my todos to do something similar in Arc.
And now you’ve gone ahead and beaten me to it again. Though I think there’s some specific benefits to also doing it outside of the app. The constraints are different, so both approaches hold value.
I found the same in my side project app. Also if you haven’t already, make sure to have extended_thinking enabled. Haiku really excels when it can use thinking tokens to plan out its approach.
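For reference, extended thinking is switched on via a `thinking` parameter on the Messages API. This sketch builds the request as a plain dict so it can be inspected without a network call; you’d pass these as kwargs to `client.messages.create(...)`. The model id and budget values here are assumptions.

```python
# Sketch of request parameters for enabling extended thinking on
# Anthropic's Messages API. Built as a dict (no API call made here).
request = {
    "model": "claude-haiku-4-5",  # model id is an assumption; check the docs
    "max_tokens": 4096,
    # budget_tokens must be at least 1024 and less than max_tokens
    "thinking": {"type": "enabled", "budget_tokens": 2048},
    "messages": [
        {"role": "user", "content": "Plan this week's survey route."}
    ],
}
```

The thinking tokens are billed as output tokens, so the budget is worth tuning; Haiku plus a modest budget can still come in far cheaper than a larger model without thinking.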
With extended thinking, I’ve found so far that there’s nothing Haiku can’t nail. It doesn’t struggle at all. And as you say, it’s also absurdly cheap compared to the others.
Me too. It’s something I’ve been thinking about for a while, but had filed as “later; maybe never”. But with my recent side project and the significantly better than expected results there (and Haiku’s capabilities being also far better than I’d expected)… It now feels like a high priority, extremely high value feature.
I am effectively creating an Arc Reader API for the LLM to use, so it’s a game of ‘Whack-a-Mole’ to expose all the data the LLM will possibly need to answer user questions. I’m also doing this due to privacy issues of potentially sending exact GPS co-ordinates to Anthropic. To get around this, I only send location names and cache the GPS data locally. When the LLM comes back with the results, the app merges the GPS data to create the map, table etc.
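The redact-then-merge approach described above can be sketched in a few lines. This is a minimal illustration, not the actual Arc Reader code; names and data shapes are hypothetical.

```python
# Privacy sketch: cache GPS coordinates locally keyed by place name,
# send only the names to the API, then merge coordinates back into
# whatever place names the model returns.
gps_cache = {}  # place name -> (lat, lon), kept on-device only

def redact_for_llm(visits):
    """Strip coordinates before sending visit data to the API."""
    names = []
    for name, lat, lon in visits:
        gps_cache[name] = (lat, lon)
        names.append(name)
    return names

def merge_results(llm_place_names):
    """Re-attach cached coordinates to place names the LLM returned."""
    return [(name, *gps_cache[name]) for name in llm_place_names if name in gps_cache]

visits = [("Mt Gravatt summit", -27.552, 153.075), ("Cafe", -27.480, 153.020)]
sent = redact_for_llm(visits)                  # only names leave the device
mapped = merge_results(["Mt Gravatt summit"])  # LLM answer -> map pins
```

The model only ever sees strings like “Mt Gravatt summit”; the coordinates never leave the machine, and the app re-joins them for the map and tables.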
Note: Haiku 3 goes away in April
Here’s an example ‘chat’. It’s doing things that the main program can’t do as easily. Note the costs are shown after each interaction.
Yeah the workaround I did with Arc Editor’s Haiku powered search view is that Haiku interprets the user’s query and turns it into filters, then the app applies those filters on the app/client side. So Haiku never sees the user’s data, only the user’s search query.
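That query-to-filters pattern looks roughly like this in miniature. The filter spec shape and field names here are hypothetical, just to show the idea: the model outputs a small filter dict, and all the actual data filtering happens client-side.

```python
# Sketch: the model translates a natural-language query into a filter
# spec; the app applies it locally, so the model never sees the data.
def apply_filters(items, filters):
    """Apply an LLM-produced filter spec on the client side."""
    out = items
    if "activity" in filters:
        out = [i for i in out if i["activity"] == filters["activity"]]
    if "min_minutes" in filters:
        out = [i for i in out if i["minutes"] >= filters["min_minutes"]]
    return out

timeline = [
    {"activity": "walking", "minutes": 42},
    {"activity": "cycling", "minutes": 15},
    {"activity": "walking", "minutes": 8},
]
# e.g. the model might turn "show me my long walks" into:
filters = {"activity": "walking", "min_minutes": 30}
result = apply_filters(timeline, filters)
```

The nice property is that the only thing sent to the API is the user’s search query; the timeline items stay on-device.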
You’re not using Haiku 4.5? It’s a champ!
Yeah exactly. To build all this power into the main app (Arc Editor or Arc Reader) would be unworkable to impossible. But give the AI the necessary tools and it can apply intelligence to unlock a breadth of functionality we could never build otherwise.
Oh now I see Haiku 3.0 is significantly cheaper. Hmm. I guess usage and pricing will be something we feel out over time, in both apps. Hard to predict ahead of time what token consumption is going to look like.
I have other models available in a dropdown menu. I can up the model for more complex questions, but so far Haiku 3 has been effective, and my whole development of the feature has cost me USD $0.47 in tokens. I’ve also spent a lot of effort optimising the efficiency of the query process. There’s also a Claude option called prompt caching, where subsequent calls within the same conversation are 90% less expensive.
Yeah, prompt caching is worth a lot. Though you have to be careful to move all the dynamic content as far down the context as possible, and make sure it doesn’t change between API calls. That’s the first failure I always see: cache hits not happening because something in the prompt is subtly changing on each call.