The paper Categorizing Variants of Goodhart's Law takes a deeper look at the now well-known Goodhart's law.
The definition of Goodhart's Law from the paper:
"a Goodhart effect is when optimization causes a collapse of the statistical relationship between a goal which the optimizer intends and the proxy used for that goal"
I'd always thought of Goodhart's law in terms of gaming of metrics: a metric may not represent the goal perfectly, and agents tasked with moving the metric can game it. But the paper shows there are other types of Goodhart effects, e.g. 'Extremal Goodhart', where the metric simply stops being a good proxy for the goal once it is pushed outside a certain range.
As an example of Extremal Goodhart in technology, consider Daily/Monthly Active User metrics, which many tech companies use to measure the growth of a product. These metrics can be good proxies for growth in usage of the product (our goal), but over time the statistical relationship can weaken. You might have a large number of fake users driving growth in daily actives: accounts that log in regularly but do nothing useful, and only create spam on your product.
When you're dealing with metrics, it's important to regularly check that they remain correlated with the outcomes you care about.