One Rank to Rule them All: The Politics of Benchmarking
Almost a decade ago, the EU Commission started to measure the eGovernment progress of its member states (now 27) and select other countries. Whenever a new edition is published, the survey receives a lot of media attention. Headlines scream "Country X is a leader in eGovernment, it ranked 2nd behind country Y." Whenever I attend EU conferences that are in some way connected to eGovernment, representatives of Member States like to point out their country's position in the EU eGovernment ranking to underline how far they have come – it matters in politics. When politicians or high-level administrators from EU member states talk about eGovernment, they refer mostly to one particular result of the EU eGovernment benchmark – online sophistication. So clearly, the benchmark has positively influenced eGovernment policies in EU Member States and beyond. Yet, what does it actually tell us?
The EU eGovernment benchmark measures 20 public services and the national portal, using four indicators: online sophistication (5 stages), online availability, user centricity, and national portals. So in its essence the eGovernment benchmark only tells us what is happening on the supply side of eGovernment in 20 areas. eGovernment, of course, is much more complex than that. Other eGovernment benchmarks, like the one conducted by the United Nations, face similar difficulties. How do you measure a complex issue with a limited budget? How do you include new trends such as Government 2.0 in a benchmark? How can you compare or align benchmarks? They tend to differ in scope (EU = 20 public service indicators; UN = mix of information society indicators), underlying cause-effect framework, and transparency of methodology. Results differ widely, and politicians tend to pick and choose what to point at. Why not agree on one global, cross-financed benchmark, or at least a standardized set of indicators?
The EU and the United Nations are currently revising their respective eGovernment benchmark methodologies. This happens in smoke-filled backroom dealings between government representatives and select academics: there is no opportunity for the general public to participate, no platform for suggestions, no wiki to collaborate, no ranking/feedback mechanism, and the dataset is not available on a website in machine-readable format (think www.data.gov – read more about it in the Wired data.gov wiki). How can we change this? What indicators would you want to see included? How would you weight them?
This text is an expanded version of an entry published on the Harvard Kennedy School Complexity and Social Networks Blog.