Hi again @extrawdw!
It could also be that your Android phone has better GPS/GNSS hardware, depending on the specific iPhone and Android models. Newer iPhones and Android phones often have dual frequency GNSS hardware, which does a lot to improve these kinds of situations, and that’s typically been the biggest differentiator in location data quality in recent years.
This is often a sign of single frequency GPS struggling with building shadows. It’s an almost universal problem in built up city areas! The signals get reflected, or line of sight to satellites on one consistent side of the sky is blocked, and the location ends up consistently offset. Dual frequency GNSS hardware goes a long way towards fixing that.
I’m not sure which iPhone model first introduced dual frequency, or which Android phones have it. But it’s definitely worth checking, because the messy data can be quite an annoyance, as I’m sure you’re already well aware.
Anyway, leaving aside the technicals of what can cause this kind of problem, let’s look at solutions!
Arc’s built in workarounds for this are 1) the “bogus” activity type, and 2) the Trust Factor system.
When there’s location data that’s more than, let’s say, 100 metres from the real location, your best bet is to split those segments out using the segment splitting view and mark those portions as “bogus”. That trains the classifier to automatically recognise location data in those areas with those patterns as bogus. Those samples are then excluded from various calculations, which also helps the timeline processing engine keep the timeline clean and sensible.
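If it helps to picture what “bogus” does under the hood, it’s essentially this (a simplified sketch with stand-in types, not Arc’s actual internals):

```swift
import CoreLocation

// Simplified stand-ins for illustration, not Arc’s real types.
enum ActivityType {
    case stationary, walking, cycling, bogus
}

struct LocomotionSample {
    let location: CLLocation
    let activityType: ActivityType
}

// Samples classified as bogus get dropped before distance, speed,
// and path calculations, and before timeline processing decisions.
func usableSamples(from samples: [LocomotionSample]) -> [LocomotionSample] {
    samples.filter { $0.activityType != .bogus }
}
```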
When the location data is drifty but within about 100 metres of the correct location, explicitly mark it as “stationary”. That trains the classifiers to recognise drifting data in that area as stationary rather than as a moving type, and it also trains the Trust Factor system to treat moving location data in that area as untrustworthy. The Trust Factor system then adjusts the reported accuracy of the data accordingly.
For example, if the phone is reporting “this location data is accurate to within 30 metres”, but Trust Factor knows that location data around there drifts all over the place, it’ll change that value to, let’s say, 130 metres. All the algorithms then treat that “30 metres accuracy” as “130 metres accuracy” instead, so much lower trust. That allows the moving/stationary state detection to do a much better job, and tells the filtering algorithms to filter it more aggressively for nonsense.
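To make that adjustment concrete, here’s a rough sketch of the idea. The names and the formula (`adjustedAccuracy`, `trustFactor`, `maxPenalty`) are illustrative assumptions for this example, not the actual LocoKit implementation:

```swift
import CoreLocation

/// Trust factor between 0.0 (no trust) and 1.0 (full trust),
/// learned per area from how much past data there has drifted.
/// Low trust inflates the reported accuracy radius, so downstream
/// algorithms treat the sample as less precise than the phone claims.
func adjustedAccuracy(reported: CLLocationAccuracy,
                      trustFactor: Double,
                      maxPenalty: CLLocationAccuracy = 100) -> CLLocationAccuracy {
    return reported + (1.0 - trustFactor) * maxPenalty
}

// Example: the phone claims 30 metres accuracy, but the area has
// near-zero trust, so it gets treated as 130 metres instead.
let effective = adjustedAccuracy(reported: 30, trustFactor: 0.0)  // 130.0
```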
The combination of those two systems can often make this problem go away entirely, or at least significantly reduce it.
Oh, and on the 100 metres threshold for whether to use bogus or stationary, that’s more of a rule of thumb. In practice I tend to use geographic boundaries. Like, if the samples are completely past the other side of the road, then I think “nah, that’s shit, I’m gonna mark it bogus”. So in that case I use the wide road outside my hotel as the boundary. I basically play it by ear, deciding a case by case threshold for which samples are “close enough” and which fall into the “nah, that’s useless nonsense” basket.
Oh also, if you’re not already using the Arc Editor public betas, I recommend trying those. Arc Editor’s totally rebuilt LocoKit2 recording engine is significantly better at intelligently filtering location data. So it’d be interesting to see how much of the problem goes away simply by having a better recording engine dealing with it at the lower levels.