One of the frustrations that drove this research project was the desire to better understand news alert performance. We had clickthrough rates, but that felt limited compared to the multi-dimensional data available for websites or email newsletters. Imagine knowing only about clicks on your emails, without the context of opens, unsubscribes and forwards.
This has been a problem for several years. A 2017 report from the Tow Center for Digital Journalism and the Guardian U.S. Mobile Lab described the difficulties with metrics:
“In some ways, the lack of any shared consensus around (a) which metrics to prioritize and strategize around; (b) the best tools to use (some use in-house tools, while others use third-party offerings from the likes of Urban Airship, Google Analytics, Adobe/Omniture); and (c) the frequency with which data should be analyzed, fed back, and acted upon reflects a general feeling of uncertainty about—and dissatisfaction with—the current state of push-based analytics.”
Unfortunately, there is no easy way to generate better data. If a consumer reads a news alert on a lock screen that has three other notifications on it, without clicking on any of them, how would the device even report that? Is that the equivalent of a reader scanning your subject line in their email inbox, or of opening your email?
Acknowledging this uncertainty and dissatisfaction with analytics, here are two ways we evaluated AP’s news alerts besides open rates.
Using publicly available Google Trends data, publishers can compare what they alerted with the most popular Google search terms over a given period and in a given location. It's a useful way to check alerting decisions against broader trends in the news. In hindsight, was anything not alerted that should have been?
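For publishers who want to run this comparison programmatically rather than by eye, here is a minimal sketch in Python. It assumes a CSV log of your own alerts with a "topic" column and a hand-copied list of top search terms; the file name, column name, and keywords are illustrative (not AP's actual data), and the matching is deliberately crude.

```python
# A minimal sketch of the Google Trends audit described above, assuming you have
# (a) a list of top search terms for the period (e.g., copied from Google Trends'
#     "Year in Search" page) and
# (b) a CSV log of your own push alerts with a "topic" column.
# File names, column names, and keywords here are illustrative, not AP's data.

import csv


def load_alert_topics(path):
    """Read an alert log CSV and return the set of topics alerted on, lowercased."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["topic"].strip().lower() for row in csv.DictReader(f)}


def audit(top_search_terms, alert_topics):
    """Print each top search term and whether at least one alert covered it."""
    for term in top_search_terms:
        covered = any(term.lower() in topic or topic in term.lower()
                      for topic in alert_topics)
        status = "ALERTED" if covered else "NO ALERT"
        print(f"{status:<9} {term}")


if __name__ == "__main__":
    # Illustrative subset of 2020's top search terms
    top_terms_2020 = ["coronavirus symptoms", "stimulus checks", "murder hornets"]
    audit(top_terms_2020, load_alert_topics("alerts_2020.csv"))
```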
Below is a chart of the most searched terms of 2020, according to Google Trends. Terms highlighted in yellow are topics AP sent at least one news alert about; terms in white are topics AP did not push an alert about.
The purpose of auditing isn’t to ensure that we checked all the boxes but rather to make sure we are happy with our decision-making.
Based on this chart, the AP could evaluate whether it could have sent more “news-you-can-use” alerts, such as an alert about coronavirus symptoms. The AP produced stories on all of the deaths in the chart, but did not send push alerts about all of them. As for the stimulus checks, we alerted when the bill was passed but did not send an alert on how consumers could get their checks, which is what people were actually searching for.
We are happy with – even proud of – our decision not to alert about murder hornets.
The AP found it was easier to identify trends in our alerting behavior over longer time horizons, such as a month or a year, than on a daily or even weekly basis.
Another way to audit past alerts is to score them against key measures. This exercise is useful for identifying potential areas to eliminate and for ensuring your alerts align with your editorial mission and brand.
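As a rough illustration of what this exercise can look like mechanically, here is a minimal sketch that averages 1-to-5 scores per alert and flags low scorers for review. The alert names and criteria are placeholders, not AP's rubric; the actual areas AP scored against are described below.

```python
# A minimal sketch of scoring past alerts against key measures.
# The alerts, criteria, and 1-5 scores below are placeholders for illustration.

from statistics import mean

scored_alerts = {
    "Example alert A": {"criterion_1": 5, "criterion_2": 4, "criterion_3": 3},
    "Example alert B": {"criterion_1": 2, "criterion_2": 1, "criterion_3": 2},
}

for headline, scores in scored_alerts.items():
    average = mean(scores.values())
    flag = "review" if average < 3 else "ok"
    print(f"{average:.1f}  {flag:6}  {headline}")
```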
The AP chose to score alerts in three areas: