Barb’s building blocks: A brief history of Barb’s panel structure and how to win at Jenga
Joe Lewis, Project Director
Across the first half of 2014, BBC iPlayer enjoyed a 16% year-on-year increase in requests, from 1.1bn to 1.3bn. Now I’m not a fan of “numberwanging”, and even less of a fan of this often-used VOD requests metric, but the figure is still worth considering. While in total-minutes terms iPlayer still accounts for only a small fraction of BBC television viewing, it is a stark reminder of how technological change is reshaping how we watch and consume TV content.
As the ways in which content can be delivered continue to grow, so does concern over how they are tracked and measured. How can the classic sample survey provide the granular insight demanded by modern media planning?
Well, I often think of what we do at Barb as a little like playing Jenga. To explain, I need to go back to the early 1980s, which witnessed the creation not only of Jenga but, more importantly, of Barb. In 1983, just a couple of years after Barb launched, there were four broadcast channels, all carried on a single analogue platform and broadcasting for only part of the day. In that environment it was relatively straightforward to maintain a structurally sound panel.
By the turn of the century, nearly 20 years on, times and technologies were changing. There were now more than 100 channels, delivered on six platforms across analogue and digital signals, and Barb was reporting both live and timeshifted viewing up to seven days after broadcast. Reflecting these growing demands, Barb’s sample had increased from just over 3,000 homes in 1981 to nearly 4,500 in 2000, with a further increase to 5,100 homes a couple of years later.
So where are we now? Barb and the Barb panel are certainly keeping pace, continuing to develop what they can measure and report. There are now 280 reported channels, each with up to 28 days of timeshifted viewing after broadcast. Barb also reports viewing by type of device and delivery mechanism, for instance distinguishing Sky On-Demand from Sky+ playback.
Furthermore, our data are not only used to account for the £7bn invested each year in the production and distribution of programme and commercial content. Increasingly, they are also used to reconcile carriage deals, assess planning strategies and understand the nature of smaller, more fragmented target audiences. This places more pressure on the standard sample survey, increasing the risk of instability.
In statistical terms, this precariousness shows up as increased sampling variation as the data are cut into ever more granular and fragmented slices. Aware of the growing pile of devices and demands being stacked on top of the Jenga tower, Barb has been deliberating over its next move.
An obvious response would be to increase the sample size even further. But Barb already spends over £25m a year on the service, and it is well known that sample size offers diminishing returns: sampling error falls only with the square root of the sample size, so we would need to quadruple the Barb panel in order to halve the sampling error. At current service costs, this is unrealistic.
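To put rough numbers on the square-root relationship, here is a minimal sketch. It assumes a simple random sample, which understates the complexity of a real panel design, and the panel size and audience shares are illustrative figures rather than Barb data:

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of an audience share p estimated from a
    simple random sample of n homes (illustrative assumption)."""
    return math.sqrt(p * (1 - p) / n)

n = 5_100  # a panel of the same order of magnitude, for illustration

# A broad network-campaign audience vs a niche, fragmented one
for share in (0.10, 0.005):
    se = standard_error(share, n)
    se_4x = standard_error(share, 4 * n)
    print(f"share {share:.1%}: SE {se:.3%} -> {se_4x:.3%} with 4x homes "
          f"(relative error {se / share:.0%})")
```

Quadrupling the sample only halves the error in both cases, while the relative error on the niche audience is roughly five times worse than on the broad one. That is exactly the instability that ever more granular cuts of the data expose.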
Our strategy has therefore evolved, leading to the launch of Project Dovetail. The concept is to supplement Barb’s current panel measurement with additional data sources. These will most likely take the form of device-based data, whether return-path data from set-top boxes or site-centric “census” data from IP delivery.
Barb is working on both, with the aim of integrating these data with the outputs from our representative panel of homes. This combines the strengths of the panel’s observed behavioural data with the scale and granularity of device-based data. Integration of census data and, in time, set-top box data will allow us to strengthen and rebuild our foundations, keeping them as relevant and stable as ever.
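As a purely illustrative sketch, and emphatically not Barb’s published methodology, one common way such a dovetailing can work in principle is to use the panel to turn device-level census counts into people: estimate a viewers-per-stream factor from panel homes where both signals are observed, then apply it to the census stream counts. All figures and names below are invented for illustration:

```python
# Hypothetical dovetailing sketch: every figure and variable name here is
# invented for illustration; this is not Barb's actual integration method.

panel_streams = 420         # streams observed on panel homes' devices
panel_viewers = 630         # people the panel attributes to those streams
census_streams = 1_250_000  # site-centric "census" stream count for a programme

# Panel-derived factor: on average, how many people watch each stream
viewers_per_stream = panel_viewers / panel_streams  # 1.5 in this example

# Census provides the scale; the panel provides the people behind the devices
estimated_audience = census_streams * viewers_per_stream
print(f"Estimated audience: {estimated_audience:,.0f} viewers")  # 1,875,000
```

The point of the sketch is the division of labour: census data count every stream but cannot say who is watching; the panel knows who is watching but on a small sample. Combining the two is what lets the foundations carry more granular questions.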
As a final thought, it is worth highlighting that the Barb panel continues to offer the same level of confidence for network campaigns as it did back in 1981. We must also appreciate, however, that users of Barb data are asking ever more detailed questions, and we must evolve in order to preserve the trusted, authoritative value and accuracy of Barb measurement.