Code in action: Closing the black box of WCAG 2.0, A Latourian reading of Web accessibility
First Monday

by David Kreps and Mhorag Goff

The focus of much academic work on Web accessibility has been concerned with the lack of implementation of the Web Content Accessibility Guidelines. There seems, as yet, however, to have been little critical reflection on the Guidelines themselves — save perhaps some awareness of the heterogeneous nature of the Web, and the difficulties facing Web developers trying to ensure their work displays true to their intentions across a wide range of different browsers and devices, making use of continually evolving and contested code. Yet, as this paper highlights, the long-drawn-out process by which version 2.0 of the WCAG came into being hides many skeletons, including aspects of the process of developing standards that bear closer scrutiny, and reveal much when viewed through Latourian eyes. The findings of this paper suggest that WCAG 2.0 is almost irrelevant today — to the detriment of those for whom it was made — and that the process of creating it was at fault.


1. Introduction — The digital divide
2. Web accessibility
3. A Latourian reading
4. Code in action
5. Actors, black-boxing, inscription and translation
6. Conclusion



1. Introduction — The digital divide

The concept of the ‘digital divide’ between the ‘information haves’ and the ‘information have-nots’ (Servon, 2002) has been the focus both of policy concerns (U.S. Department of Commerce, 2000; Selwyn, 2004), and of much academic writing, exploring perceived inequalities between those who have access to digital networked technologies and those who do not (e.g., Loader, 1998; Marshall, et al., 2003; Servon, 2002; Ellis and Kent, 2011). Many have also focused on the question of disabled access to the World Wide Web, often known as Web accessibility (Goggin and Newell, 2003; Kreps and Adam, 2006; Adam and Kreps, 2006a, 2006b; Goggin and Newell, 2007; Wheeler and Kreps, 2008; Wentz, et al., 2011; Ellis and Kent, 2011; Jaeger, 2012). The techniques to make the Web accessible were developed only a few years after the Web itself, in the late 1990s — in the form of the Web Content Accessibility Guidelines (WCAG), version 1.0 of which was published as a formal recommendation by the World Wide Web Consortium (1999) in May 1999.

The World Wide Web Consortium (W3C), established by Berners-Lee in 1994, is a non-profit, academic body. In the political climate of global capitalism, however, the W3C is a cautious organisation. It publishes formal recommendations rather than standards, and does not engage in any direct lobbying of the industry concerning compliance. Industry players, rather, participate as paying members of the W3C.

Arguably, the focus of much academic work on Web accessibility has been primarily concerned with the lack of implementation of these Guidelines (Kurniawan, et al., 2001; McMullin, 2002; Ritchie and Blanck, 2003; Marincu and McMullin, 2004; Guo, et al., 2005). There seems, as yet, in the academic literature at least, however, to have been little critical reflection on the Guidelines themselves — save perhaps some awareness of the heterogeneous nature of the Web, and the difficulties facing Web developers trying to ensure their work displays true to their intentions across a wide range of different browsers, making use of continually evolving and contested code (Kreps and Adam, 2008). Yet, as this paper highlights, there has amongst the practitioner community been much debate over the last several years concerning these Guidelines, in particular over the long-drawn-out process by which version 2.0 of the WCAG came into being (Clark, 2006; Holzschlag, 2008). In particular, the evolution of these Guidelines has brought to the fore an aspect of the process of developing standards that bears closer scrutiny. It was remarked by many, at the time, and remains both salutary and extraordinary, that disabled people themselves were not, in fact, involved much in the process of the creation of these standards; the process itself, indeed, was often quite inaccessible.

The work of Bruno Latour on the ‘black boxing’ of scientific artefacts, discoveries, etc., proves a useful lens through which to consider this story. ‘Black box’ is an often-used term for any device, system or object when seen primarily with regard to its input and output characteristics. Almost anything can be referred to as a black box, and Latour himself uses examples as varied as a weapons system, a hormone, and an early computer (Latour, 1987). Once black-boxing has been achieved, only the inputs and outputs are of note, and — as Latour points out — much that occurred prior to the closure of the box is effaced by its new status. Prior to black-boxing, in fact, everything is contestable. The process by which W3C formal recommendations attempt to black-box the specifications of the coding languages used on the Web, and the specific example of the WCAG, proves to be highly revealing of the contingent, heterogeneous nature of all such processes, and highlights the contentious quality of the moments of the process prior to ‘closure.’ Indeed, this story shows how in a world of ‘permanent beta’ such as the World Wide Web, where closure seems very hard to come by, such processes need above all to accommodate this high level of uncertainty — perhaps better than they currently do!

Research was carried out initially by the first author during the closing stages of the completion of WCAG 2.0, and included a wide range of published and publicly available material including postings on A List Apart, the personal blogs of relevant parties, and more formal academic papers. In the early months of 2014, the first author enlisted the help of the second to improve, in particular, the actor-network theory approach of the first version of this paper, which has been rewritten for this special issue. During the spring of 2014, to supplement the original secondary research, further data were gathered using a short questionnaire. The publicly available link to this online questionnaire was sent directly to as many of those people involved in the process of the creation of WCAG 2.0 as could be contacted, and to a number of other interested parties involved in the ‘community’ of Web accessibility, known to the first author. This material was then analysed from the theoretical perspective of actor-network theory, in order to present this paper.

The paper falls into four sections: first, a recap of the key issues around Web accessibility; second, a brief introduction to the actor-network theory perspective the authors applied to the research data. The third section begins by relating the story of the creation of WCAG 2.0, identifies some of the principal actors in the story, and addresses the key controversy at the heart of the tale: the exclusionary nature of the closure. The paper then introduces the follow-up research undertaken in 2014 and relates more of its findings. This is followed in the fourth section by analysis.



2. Web accessibility

2.1. How the Web discriminates

An inaccessible Web site is one which simply does not make the information it contains available to people with a range of impairments. An inaccessible site is like an elevator with no voice-over, or a building with no ramp access to a raised front door. Web accessibility can be a complex and highly technical area, but for those unfamiliar with it the following example may assist with understanding the problem.

Visually impaired people use speech synthesis software that reads out the text on Web pages. ‘Screen readers’ are perhaps the most commonly known ‘assistive technologies’ used by disabled people to surf the Web. The <img> element of HTML is used to place an image on a Web page. The “alt” attribute of this element was part of the very earliest common version of HTML, v.2, allowing Web authors to provide a text equivalent for images. The UK’s RNIB (Royal National Institute for the Blind) recommends around five words as the usual number required to produce a meaningful text alternative, e.g., alt="dog leaps for a stick". Speech synthesis software then reads the “alt” text back to the user. Unfortunately, automatic checkers will accept, for example, alt="image.jpg" in the code as a valid alt attribute. They cannot check whether the text supplied is actually meaningful.
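By way of illustration, the following hypothetical markup (the file names are invented for this sketch) contrasts an alt attribute that both an automatic checker and a human listener would find acceptable with one that only a checker would pass:

```html
<!-- Meaningful: a screen reader announces "dog leaps for a stick" -->
<img src="dog.jpg" alt="dog leaps for a stick">

<!-- Passes an automatic checker, but a screen reader announces
     the unhelpful text "image.jpg" -->
<img src="image.jpg" alt="image.jpg">

<!-- Flagged by checkers: the alt attribute is missing entirely,
     so some screen readers fall back to reading the file name -->
<img src="image.jpg">
```

Only a human reader — ideally a disabled user testing the page — can judge whether the first form, rather than the second, has actually been achieved.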

2.2. Graphic paradigm vs. semantic information

Web pages, originally merely text with the odd image added to spice things up, increasingly became, during the mid-1990s, a ‘virtual’ extension of the already mature desktop publishing revolution, which had seen the printing industry massively computerised over a very short period of time. HTML 3.2, the first formal recommendation of the W3C in 1996, contained a wide range of new visual formatting properties, in response to the increasing interest in what could be achieved presentationally on the Web. There were four main players in this online development: Netscape, Microsoft, the W3C, and, later, the Web Standards Project. These were the browser wars of the mid-to-late 1990s, and the W3C’s victory was achieved more through the vigorous lobbying of external organisations such as the Web Standards Project (WaSP, 1998) than by the W3C itself. WaSP formed to enable Web developers to avoid the increasingly necessary expense of creating multiple versions of their Web sites individually tailored to increasingly different browsers. While Netscape and Microsoft vied for control of the Web with their own, proprietary, unwieldy new versions of HTML, and the WaSP lobbied for clear and universally recognised standards, the W3C focused on the much leaner HTML 4, and a new presentational language: cascading style sheets (CSS).

Core to this development of the Web was the shift from a graphical to a semantic paradigm. The design-led Web of the early desktop publishers focused on how information is presented on a display — the screen. But to the W3C it was clear that this was not the way forward, and that the presentation of information needed to be secondary to what could then become content that was much more flexible and maneuverable — capable of being presented in many places and in many ways, from a single source. Thus HTML had to focus exclusively — ‘strictly’ — on the underlying information, the text, and not the presentation of that information. The transferability of content in this way depends upon it being machine-readable — accessible to software — and that machine readability depends on it being uncluttered and unhindered with visual formatting. Screen readers and other assistive technologies, similarly, depend on exactly the same machine readability, in order to access the semantic information. Thus a modern Web page is made up of two files: HTML with content, and CSS files with presentation information — the style. Assistive technologies don’t work properly on Web pages based on the graphical paradigm: text-to-speech technologies must have text to read.
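The separation described above can be sketched in a deliberately minimal, hypothetical example (the file names and content are invented for illustration): one HTML file carrying only the semantic content, and one CSS file carrying only the presentation.

```html
<!-- page.html: content only — machine-readable by browsers
     and assistive technologies alike -->
<link rel="stylesheet" href="style.css">
<h1>Annual report</h1>
<p>Sales rose in the second quarter.</p>
```

```css
/* style.css: presentation only; a screen reader ignores it */
h1 { font-size: 2em; color: navy; }
p  { line-height: 1.5; }
```

A screen reader works from the HTML alone, while the same content could be restyled for print or a small screen simply by swapping the style sheet — the transferability of content from a single source that the W3C was pursuing.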



3. A Latourian reading

3.1. ANT and its relevance

This paper recounts how an artefact, WCAG 2.0, has come into being, by focusing on the activities of the WCAG working group in developing the guidelines. Actor-network theory (ANT) theorises entities — such as the WCAG — as heterogeneous networks of human and non-human actors (Latour, 1993). Any given phenomenon being studied emerges as an effect of a particular configuration of entities and their inter-relationships within the network (Law, 1992). Actor-network theory therefore provides a means of describing phenomena; explaining how they come to exist and continue existing (Latour, 1996). A full and proper ANT analysis of the development of the WCAG is not possible in a journal paper. Nonetheless, an ANT-inspired reading is offered, here, because it lends itself extremely well to the analysis of standards formation, and discussing the formation of the WCAG in the context of ANT proves to be highly revealing.

A set of standards, or guidelines, such as the WCAG, might be regarded simply as an artefact whose development is unproblematic and unworthy of attention as an object of research. However, it is argued in ANT circles that the nature of processes within standards bodies is not democratic (Lee and Oh, 2006), and may actually be complex and problematic (Hanseth and Monteiro, 1997). ANT’s approach has value in denaturalizing artefacts and phenomena by rejecting the assumption of a ‘natural’ or ‘given’ order (Alcadipani and Hassard, 2010). In doing so, using ANT is arguably a means of producing an alternative and more authentic account of how standards come about.

Bruno Latour, French anthropologist and sociologist of science, is recognised as the father of actor-network theory. The most important distinction in Latour (1987) is between ready-made science and ‘science in the making’. Science and technology studies, as a discipline, must, for Latour, enter the scene of whatever facts/machines are to be studied whilst they are in the making, in order to understand the sociology of what is occurring. The closure of black boxes has, for Latour, two explanations: one offered once the box is finished, and another while closure is still being attempted.

3.2. Standards, inscription and translation

Those who have investigated standards development from an ANT perspective suggest that whilst intended by definition to be universal to their domain and comprehensive in their effects, standards do not simply emerge fully formed; they have to be made to fit (Bowker and Star, 1996; Timmermans and Berg, 1997). This means that they do not work for everyone, and that there are winners and losers; standards development entails a balance to be struck in terms of accommodating local practices and providing definitive and global rules and guidelines (Bowker and Star, 1996). The role of black boxing, therefore, is to represent an unequivocal order, achieved through negotiations, implying acceptance by stakeholders, and presented as fact and a solid basis for action. Whether those negotiations have really been successful in translating the interests involved, and whether the standards are comprehensive in their scope within the domain of application — and durable — is arguably the crux of the matter in the case of WCAG 2.0.

Two further concepts from Latour, of particular relevance for this story, are inscription (Akrich and Latour, 1992) and translation (Callon, 1990; Latour, 1987). Inscription refers to the way artefacts embody patterns or scenarios of use. This is not to suggest that action is hard-wired into an artefact. Inscription sits halfway between a perspective that would suggest artefacts determine the use and, contrastingly, a perspective suggesting an artefact is always interpreted and used flexibly. According to Latour, there is a process in society of continual negotiation; of aligning multiple and disparate interests. Stability therefore rests on the ability to translate, “that is, re-interpret, re-present or appropriate, others’ interests to one’s own.” (Hanseth and Monteiro, 1998) In this sense, all design is translation.

Latour (1990) provides an example of this: getting hotel guests to leave their keys behind. This is a ‘scenario of use’ in which there is a ‘desired pattern of behaviour’ and the problem is how to inscribe this pattern into the network of hotel guests, keys, and staff. In Latour’s story, management first tried to inscribe the pattern with a sign behind the counter requesting that guests return their keys. Then they tried having a human doorkeeper. Finally they inscribed it into a key with a metal knob, the weight of which they gradually increased, until the desired behaviour was finally achieved. Thus the inscription went through a series of translations until it became effective.

The content, creation and updating of the Web Content Accessibility Guidelines provide a plethora of examples of such inscription and translation — far more technical than the hotel key example, or than space in this paper will allow. In summary, ‘scenarios of use’ — in particular through the intermediation of assistive technologies — by disabled actors in the actor network of Web use, are the kernel of the Web Accessibility Initiative. The ‘desired patterns of behaviour’ are the coding practices of Web developers that can make such scenarios easy. But the delicate balance between how an assistive technology embodies such a scenario, and the flexible use of that technology by a human actor, lies at the core of many of the thorniest technical questions that were discussed over the decade-long process of moving from v.1 to v.2 of the guidelines. How these much contested scenarios were then translated and inscribed into a set of formal recommendations proved to be the key delaying issue in the entire process, in particular focussing on the problems of i) who inscribes them, and ii) how strong these inscriptions can or should be. Crucially, throughout this process, those for whom these inscriptions were being designed — those human actors in the scenarios of use — were effectively excluded from voicing their opinions.



4. Code in action

4.1. The story

As the originating briefing package for the creation of the W3C’s Web Accessibility Initiative (WAI) succinctly summarised in January 1997, “Part of the W3C’s commitment to realize the full potential of the Web is to promote a high degree of usability for people with disabilities.” (World Wide Web Consortium, 1997) Just over two years later, in May 1999, the WAI published WCAG 1.0. It is perhaps worthy of note that the W3C International Programme Office which launched the WAI did so at a “meeting hosted (and called for) by the U.S. Government at the White House” (World Wide Web Consortium, 1997). The W3C was chosen as the “ideal host for such a program,” because, “Since its inception, [in 1994] the W3C has had an official activity area devoted to accessibility for people with disabilities”. Representatives from Microsoft and other companies were at the meeting in the White House, too, however.

Between January 2001 and March 2006 numerous working drafts of a second version of the WCAG were published, as part of the W3C’s sometimes rather tortuous process (World Wide Web Consortium, 2005), dogged by disagreements and in-fighting, until finally in April 2006 WCAG 2.0 was published as a Last Call Working Draft. It was at this point that the fiercest controversies began to rage, delaying the next step of the process by fully two years. Not until April 2008 was the WCAG 2.0 Candidate Recommendation published, with the Proposed Recommendation published in November 2008, and the next and Final Stage — W3C Formal Recommendation — on 11 December 2008.

4.2. The actors

The main actors in this story were, of course, the WCAG Working Group (WCAG WG) at W3C, whose membership inevitably changed on frequent occasions between 2001 and 2008; and the range of invited experts who made contributions to the work of the WG, including Charles Munat, Christophe Strobbe, Joe Clark and many others. On the sidelines were the leading names in the practitioner community who were neither members nor invited experts but well-read commentators who posted their opinions on blogs, expressed themselves at conferences, and influenced the opinions of those who did have a voice: ‘standardistas’ such as Molly Holzschlag (WaSP), Ian Lloyd, and many others. This lengthy process of consensus building, as we have pointed out before, did not very obviously include actual disabled computer users, however, to the chagrin of many.

4.3. The core controversy

Following the publication of the Last Call Working Draft in April 2006, Joe Clark published, on A List Apart, a posting entitled “To hell with WCAG 2” (Clark, 2006). This was titled in the spirit of a classic post, also published on A List Apart, by Jeffrey Zeldman (2001), entitled “To hell with bad browsers”, which exhorted designers to create their sites for the best and latest, standards-compliant browsers, and stop trying to make them backwardly compatible with the browsers of the 1990s. Both posts represent a move by members of the Web development community to represent or appropriate the inscribed pattern of use of HTML, CSS, and browsers by the Web development community. The first had the likes of Netscape and Microsoft in its sights, the second the W3C. The inscribed pattern of use attacked by Clark was the design of standards for the Web development community by what he saw as the conjoined corporate interests and academic computing intelligentsia represented at the WCAG 2.0 WG — all ignoring the disabled people it was actually meant to serve.

Clark’s tirade against the WCAG 2.0 Guidelines did not simply criticise the Last Call document, but heavily critiqued the activities of the WCAG WG and the W3C standardisation process itself. His post, indeed, reflected and provoked popular criticism of the WCAG 2.0 Last Call and accrued a hundred posted comments in the A List Apart discussion area by 31 August, when the discussion was closed by the host Web site.

Clark had himself been a ‘contributor’ to the WCAG WG process, but had been ‘ejected’ from the committee (Clark, 2006). In his post, Clark accuses the entire W3C standardisation process of being “stacked in favour of multinationals with expense accounts who can afford to talk on the phone for two hours a week and jet to world capitals for meetings.” (Clark, 2006) He attacks the process not only as elitist, but as “inaccessible to some people with disabilities, notably anyone with a reading disability (who must wade through ill-written standards documents and e-mail messages — there’s already been a complaint) and anyone who’s deaf (who must listen to conference calls). Almost nobody with a learning disability or hearing impairment contributes to the process — because, in practical terms, they can’t.” (Clark, 2006) Clark and those he represents are struggling here with the impact they are being allowed to have upon the design of the inscription of the pattern of use being written into the forthcoming black box of WCAG 2. The translations that have been occurring over the standardisation process seem, to Clark, to be all too heavily weighted toward the needs of W3C bureaucracy and of multinational companies.

Clark proceeds to make many technical criticisms of the new version of the Guidelines. The outcome is the creation, by Clark, of the WCAG Samurai — a title that again recalled Web history, this time the CSS Samurai, a WaSP project of the turn of the millennium which sought to force the major browser makers to ensure their browsers complied with the W3C CSS specifications, and largely succeeded. Hoping to repeat this successful interjection of a new ‘actor’ (in the form of an expert panel) into the actor network of Web use, to win more power for the Web development community in the translations of the network, Clark’s WCAG Samurai were tasked with producing an “errata” document to WCAG 1.0. This was a very popular move amongst the standardistas, WCAG 1.0 being by 2006 so hopelessly out of date that something was needed by the Web development community if WCAG 2.0 could not foreseeably be completed in a short space of time. The errata, first aired in June 2007, were finally published in February 2008 (Clark, 2008), only shortly before the WCAG 2.0 Candidate Recommendation was published by the W3C, in April 2008, to a generally warm reception. WCAG 2.0 was finally moved to formal recommendation at the end of 2008.

4.4. Authoritative?

So, how did WCAG 2.0 get such a warm reception, once released in April 2008, and how did the W3C process manage to get the new version so close to becoming ‘scientific’ and ‘authoritative,’ to becoming black-boxed, after such a poor showing two years earlier?

Arguably, from a Latourian perspective, this happened only when its claims to usefulness and accuracy stopped being isolated, and when the number of people engaged in promoting it became many. Articles and blog posts from members of the Web development community warmly applauded the progress between the Last Call document of 2006 and the Candidate Recommendation document of 2008 (Pickard, 2007; Faulkner, 2008). Equally, the voices against it were newly silent. Joe Clark ‘retired’ from writing about Web accessibility with the publication of the Errata, his final salvo at the W3C establishment, so quickly overshadowed by the Candidate Recommendation. His attempt to influence the translation of the design of the new version of the Guidelines might be said, perhaps, to have failed.

Latour talks about the ‘context of citation’ with respect to the authority with which scientific work becomes black-boxed. The authority of a piece of scientific work is underpinned by who cites the work, and how many others cite the work. In 2008, the voices of support became numerous, and the voices of opposition silent as the process neared its completing stages, and the actors with the loudest voices carried the weight to appropriate the translation of the design of the Guidelines to their own purposes.

Comments on Joe Clark’s 2006 post on A List Apart had included many of the players — still available on the Web (A List Apart, 2006) — some of whose voices attempted to silence such dissension. For example, Christophe Strobbe roundly criticises Clark’s entire approach in his post “Joe Clark versus the WCAG Working Group, again” [1], and Michael Landis, in his post, “Screaming fire In a theater” [2] from 28 May 2006, attacks what he sees as Clark’s unnecessary grandstanding. Perhaps most pertinently to the concerns of this paper Kliehm (2006) notes that many prominent voices in the Web community were raised “confirming the bad state of the consortium.”

The whole W3C process from 2001 to 2006 is dissected in Clark’s posting. There are many references to grave discomfort with the way in which the Working Group treated members. For example, Gian Sampson-Wild praised Clark for doing what she was too intimidated to do due to “the internal scare tactics of the Working Group” [3]. Charles Munat, in siding with Clark, noted that the entire W3C was an organisation effectively in the pocket of multinationals [4]. From a Latourian perspective, only certain voices were heard in the process, with a select subset of opinions respected in the process of accepting or rejecting amendments as the process of drafts unfolded.

Indeed, it may be concluded that the voices representing the W3C establishment were successful in silencing dissent, and the Candidate Recommendation, albeit an improvement on the Last Call, a success with a community already tired of dissent. Holzschlag, on her blog in September 2008, summed up the situation as it then stood between the practitioner community and the scientists, academics and corporations of the W3C neatly: the proprietary interests had, for the time being, won. It is important to remember that the W3C is a member organisation, whose corporate and individual memberships are expensive. Holzschlag noted that she “witnessed a member company representative” of the W3C “shut down an entire line of discussion simply by saying, ‘This compromises several of our patents. We will remove ourselves from the W3C if you proceed.’ With a history of no viable long-term economic model, the W3C cannot afford to lose members, particularly when they are mission critical to many evolving specifications.” (Holzschlag, 2008) She distinguished three ‘circles’ of interest: scientists and academics of the W3C; the “revolutionary and disruptive independent working groups”; and a circle of “self-interest and profiteering” and “proprietary technologies.” She concluded that “We do not have an interoperable Web. What we have is a glut of proprietary, closed, and protected stuff. While it’s sophisticated and interesting sometimes, it goes against the heart of what we came here to build in the first place: an accessible, interoperable Web for all.” (Holzschlag, 2008).

The story then, can perhaps be summed up in a similar manner as Holzschlag has done, viewing the evolution of the Web Content Accessibility Guidelines v.2, (within the wider evolution of the Web itself) as a heterogeneous web of competing interests and conflictual relations, between i) a scientific/academic class intent on high-level abstraction and adhering to rules of citation-based reputational power, ii) a vocal and sometimes unnecessarily pugnacious practitioner community struggling to have a say in the formation of the standards they must work with, and iii) the financial muscle of multinational corporate interests. Tellingly, the voices of disabled people for whom all this work is supposedly being done, were barely heard at all.

These competing interests are easy to spot. The minutiae and wording of guidelines at the W3C, and the forum arguments amongst the practitioner community about techniques, are a world apart, and the commercial and proprietary technologies are by and large at cross-purposes with accessibility altogether. Once in the laboratory of praxis where Web accessibility is tested, everything is far more complex than it appears in the specifications. Does the technique attached to a guideline work in all (versions of all) user agents (browsers and assistive technologies), and in all CSS positioned layouts, at all resolutions, on the same page as certain AJAX implementations, on a mobile phone screen? The answer (e.g., Kurniawan, et al., 2001; Ritchie and Blanck, 2003; Guo, et al., 2005; Kreps and Wheeler, 2008) is that only disabled user testing can isolate the anomalies that inevitably crop up in actual situations. Disabled user voices, in other words, are crucial to the formation of usable standards and guidelines.

As for the practitioner community’s frequent complaint that they are kept out of the process, this is clearly well founded. Membership of the W3C Working Groups, and the lengthy process of recommendations and consensus building, undoubtedly requires extraordinary computing skills, diplomatic skills, and access: one needs to be already recognised in the field to gain the special status of being listened to, and have budgetary support behind one to fund that access. Numerous comments and articles allude to this effective restriction on who can contribute to the process (Clark, 2006; Holzschlag, 2008). The Invited Experts and Committee members are clearly such leaders. Yet from the text of the complaints that are available to read on A List Apart and other forums, it is clear that “there is plenty of backstabbing, grandstanding, dirty politics, whining, and worse on the committee, as there is on every committee in which humans participate.” [5]

4.5. Spring 2014: Looking back at WCAG 2.0

In the spring of 2014, as part of the preparation of this paper for this special issue, the authors compiled a brief online questionnaire, and invited several of the key players, and onlookers, to contribute their reflections. Joe Clark, Charles Munat, and Molly Holzschlag contributed their thoughts, along with other luminaries in the field: Mike Paciello, Gez Lemon and Patrick Lauke of the Paciello Group, Klaus Reich from the University of Innsbruck, Austria, and Gill Whitney from Middlesex University, U.K. Others were invited and either declined or ignored the invitation.

Asked about the process of creating WCAG 2.0, the respondents are unanimous. Perhaps unsurprisingly, Joe Clark remains unapologetic. “I was deemed a problem member of WCAG about halfway through my tenure,” he recalls, and “to my knowledge am the only person ever to have his Invited Expert status revoked.” His memory of the experience remains raw. Echoing the assessment above, he recalls being “surrounded by corporate shills” and he remains “really alienated from and divorced from the whole industry.” In hindsight, for Munat, albeit that WCAG 2.0 might be deemed a ‘technical success’, “It was, in my opinion, a political failure. In the end the documents that were produced were pretty good, but ... it was too little, too late, and it failed to address the real problem of Internet accessibility, which is not lack of knowledge, but lack of will.” Holzschlag, similarly, recalls “way too much infighting.” Mike Paciello, Patrick Lauke, Klaus Reich and Gill Whitney similarly felt the process took too long.

But what of the use, all these years later, of the WCAG 2.0 Guidelines? Munat does not believe the new guidelines have been much help at all. Having spoken recently to a group of developers, he recounts that “not one of them had any real understanding of any of it ... . They’d heard of it, and they were curious. And this is SIX YEARS after 2008.” Even Gill Whitney, who has worked tirelessly and enthusiastically in the field for many years, admits that, “There is insufficient political pressure on Web developers and their clients and insufficient education for future and current developers on the need to use it,” and that, “It is still in many ways easier to buy an inaccessible Web site than an accessible one.” The guidelines, in short, seem to speak to a small number of experts, for whom they are a sound document, and to fail in their larger aim. As Lemon puts it, in answer to the above question, “To some extent, at least from those who want to understand.”

Asked whether WCAG2.0 was sufficiently future-proof, all the respondents feel that it was not. Clark is concerned about captioning. Munat feels the guidelines are irrelevant. Holzschlag suggests that they were future-proof “in theory,” not “in practice.” Paciello remarks that the guidelines fail to be future-proof because they were not based “strictly on principles,” and that inevitably “technology will continue to require agile development of standards.” Yet, as Lauke points out, the guidelines did at least split the “technology-specific advice and the tech-agnostic ‘philosophy’ of the success criteria,” meaning that they have aged “far more gracefully” than the first version. Many of the respondents pointed to changes in technology use that have made (at least the technical aspects of) the specifications outdated, such as their focus on “keyboard accessibility” compared to today’s ubiquity of touch screens and other interfaces, and the focus on user experience (UX) in today’s market.
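Lauke’s distinction between the tech-agnostic success criteria and the technology-specific technique layer can be illustrated with a small, hypothetical markup sketch (not drawn from the guidelines themselves). WCAG 2.0’s Success Criterion 2.1.1, for instance, states in technology-neutral terms that all functionality must be operable through a keyboard; whether a given piece of markup satisfies it is a technology-specific question:

```html
<!-- A native HTML button is focusable and keyboard-operable by default,
     so it satisfies the technology-agnostic criterion with no extra work. -->
<button type="button">Search</button>

<!-- A styled div is invisible to the keyboard unless the developer manually
     adds a role, a tab stop, and key handling.
     (search() is a hypothetical handler, for illustration only.) -->
<div class="button" role="button" tabindex="0"
     onclick="search()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') search()">
  Search
</div>
```

The principle itself has aged well; it is the technology-specific layer, written before touch screens became ubiquitous, that dates most quickly.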

Clearly there is much room for better understandings of such notions as ‘accessibility 2.0’ — the “capacity to access information in the format of choice when working within the largely unstructured environment of user-generated content” [6]. Yet, from the experience of WCAG2.0, it is the opinion of this paper’s authors that new guidelines from the W3C are unlikely to be forthcoming in the near future, or indeed to be of much help in this regard.



5. Actors, black-boxing, inscription and translation

5.1. An actor-network analysis

Participants’ responses indicate a perception that WCAG 2.0 fails in its primary aim of improving Web accessibility, rendering the standards somewhat redundant. This is — if for no other reason, of which there were plenty — a consequence of the fundamental failure to enrol the disabled user community.

The construction of facts, from a Latourian perspective, is a collective process, with a thousand-odd faithless allies between a contributor’s statement and its inclusion in the WCAG 2.0. Each ally is an actor, doing something to it as they pass it on. In this respect actors are not passive nodes, they have influence on what they touch, and in so doing modify the networks of which they are a part (Latour, 1999).

Negotiations inevitably entail winners and losers, as different translations may be offered of which only some are taken up. When a translation is refused, the losing stakeholders will go on to define their interests and goals in another manner (Callon, 1986). The implication in both Clark’s and Holzschlag’s posts, and in later responses to the questionnaire, is that this experience is repeated across other committees and processes at the W3C. It is the organisation itself, and its member-based financing model, that is responsible for this problem. Some of Holzschlag’s other remarks on her blog, including “Dear W3C Dear WASP” and “Dear WHATWG and HTML5 WG,” make it very clear that the Web development community, at least as voiced by Molly Holzschlag, was mightily tired of the disempowerment it faced at the hands of a process in which the principal actors seemed bent on working against consensus and for proprietary diktat.

The W3C tries to do two things at once: to “enrol others so that they participate in the construction of the fact,” and to “control their behaviour in order to make their actions predictable.” [7] It is thus caught in a very difficult web of relations, in which those it enrols often feel that their participation is ignored, or that their opinions are overridden by others with commercial clout, and that the ‘translation’ of the original design — to make the Web more accessible — is all too often a translation that makes the outcome a poor one.

A consistent theme amongst respondents was that WCAG 2.0 failed to enrol developers as a key stakeholder group. The findings suggest that developers do not have compelling reasons to adhere to the guidelines and, in ANT terms, this means their interests have not been translated. This has resulted in lack of awareness of WCAG 2.0 in the developer community and a resulting lack of implementation of the guidelines in practice, indicating a degree of irrelevance. Whilst there is an artefact known as ‘WCAG 2.0’ its achievement is contentious because of this failure.

For WCAG 2.0 to exist ‘on paper’ and nevertheless be deemed irrelevant and of little value to the community it is intended to serve, namely disabled Internet users, who were not directly included in the process, and for it to fall short in engaging the technical practitioners involved in Web development, is a failure in terms of legitimacy.

We may therefore argue that the thing that has been black-boxed as ‘WCAG 2.0’ is a narrow, technical thing, quite different from what the WCAG Working Group set out to achieve. Whilst it is acknowledged that complex projects inevitably evolve (Ramiller, 2007; Cho, et al., 2008; Hanseth and Lyytinen, 2010), and that ANT is a valuable approach to understanding the dynamics of such processes (Whitley and Pouloudi, 2001; Cho, et al., 2008), nevertheless the case of WCAG 2.0 would appear to be a particularly stark failure of the guidelines’ fundamental raison d’être.



6. Conclusion

This paper concludes that competing and conflicting interests are at work, such that the black box of the final formal WCAG2.0 guidelines attempted to hide three primary truths:

  1. Above all, what this ‘code in action’ story serves to remind us is that the World Wide Web is in permanent beta, and that black-boxing the code standards upon which the Web is built is proving nigh on impossible, and the strategies of the technical debate useless. Code, essentially, is likely to remain ‘in action.’
  2. However, what this story also reveals is that ‘off-line’ power relations, between academics, artisans, and corporations, are reproduced ‘online’ in the actor-network of the World Wide Web, even in the process of creating guidelines for making the Web more accessible to disabled people.
  3. Crucially, those for whom this entire process was begun, undertaken, and completed — disabled Internet users — were disempowered throughout, and have not greatly benefited from it.

The core controversy in relation to WCAG 2.0 is the breakdown in the processes of negotiation and enrolment of allies in the network, manifested in disagreements, which ultimately led to certain group members leaving or being asked to leave. The inevitable simplification of actor networks into black boxes means that we experience a given entity as a unified thing, such that the component actors are opaque until and unless breakdown, failure or controversy arises (Law, 1992). This is exactly what has been observed in this study of WCAG 2.0, whereby the expulsion of Joe Clark and the subsequent, highly public, disagreements have been exposed to all. Whereas tensions and negotiations between actors in the name of network building might otherwise have taken place ‘behind closed doors’ so to speak, the issue of note is that the airing of dirty laundry arising from breakdown of the WCAG 2.0 development process has revealed the contingent and highly political nature of these guidelines.

This has led to the situation where these Web accessibility standards are not necessarily serving the audiences they are intended to.

In the final analysis — and perhaps this story provides evidence towards further research on this issue — it is possible that it is the ‘self-regulation’ of the World Wide Web that is at fault, and clearer government direction is required — with all the attendant problems that itself may raise!


About the authors

Dr. David Kreps is a Senior Lecturer in Information Systems and Society at the University of Salford.
Direct comments to: d [dot] g [dot] kreps [at] salford [dot] ac [dot] uk

Mhorag Goff is a Research Associate at Manchester Business School.
E-mail: mhorag [dot] goff [at] manchester [dot] ac [dot] uk



1. Comment 4, at, accessed 23 August 2015.

2. Comment 61, at, accessed 23 August 2015.

3. Comment 54, at, accessed 23 August 2015.

4. Comment 22, at, accessed 23 August 2015.

5. Charles Munat, comments of 24 May 2006, at, accessed 23 August 2015.

6. Ellis and Kent, 2011, p. 25.

7. Latour, 1987, p. 108.



A List Apart, 2006. “Comments on Joe Clark’s posting, ‘To hell with WCAG 2’,” at, accessed 13 June 2014.

A. Adam and D. Kreps, 2006a. “Web accessibility: A digital divide for the disabled?” In: E. Trauth, D. Howcroft, T. Butler, B. Fitzgerald and J. DeGross (editors). Social inclusion: Societal and organizational implications for information systems. IFIP International Federation for Information Processing, volume 208. Boston: Springer, pp. 217–228.

A. Adam and D. Kreps, 2006b. “Enabling or disabling technologies? A critical approach to Web accessibility,” Information Technology & People, volume 19, number 3, pp. 203–218.
doi:, accessed 23 August 2015.

M. Akrich and B. Latour, 1992. “A summary of a convenient vocabulary for the semiotics of human and nonhuman assemblies,” In: W. Bijker and J. Law (editors). Shaping technology/building society: Studies in sociotechnical change. Cambridge, Mass.: MIT Press, pp. 259–265.

R. Alcadipani and J. Hassard, 2010. “Actor-network theory, organizations and critique: Towards a politics of organizing,” Organization, volume 17, number 4, pp. 419–435.
doi:, accessed 23 August 2015.

M. Callon, 1990. “Techno-economic networks and irreversibility,” Sociological Review, volume 38, number S1, pp. 132–161.
doi:, accessed 23 August 2015.

M. Callon, 1986. “Domestication of the scallops and the fishermen of St Brieuc Bay,” In: J. Law (editor). Power, action and belief: A new sociology of knowledge? London: Routledge, pp. 196–223.

S. Cho, L. Mathiassen and A. Nilsson, 2008. “Contextual dynamics during health information systems implementation: An event-based actor-network approach,” European Journal of Information Systems, volume 17, number 6, pp. 614–630.
doi:, accessed 23 August 2015.

J. Clark, 2008. “Errata to WCAG 1.0,” WCAG Samurai, at, accessed 13 June 2014.

J. Clark, 2006. “To hell with WCAG 2,” A List Apart (23 May), at, accessed 13 June 2014.

K. Ellis and M. Kent, 2011. Disability and new media. London: Routledge.

S. Faulkner, 2008. “Sucking on WCAG 2.0,” Paciello Group blog (6 June), at, accessed 13 June 2014.

G. Goggin and C. Newell, 2007. “The business of digital disability,” Information Society, volume 23, number 3, pp. 159–168.
doi:, accessed 23 August 2015.

G. Goggin and C. Newell, 2003. Digital disability: The social construction of disability in new media. Lanham, Md.: Rowman & Littlefield.

B. Guo, J.C. Bricout, and J. Huang, 2005. “A common open space or a digital divide? A social model perspective on the online disability community in China,” Disability & Society, volume 20, number 1, pp. 49–66.
doi:, accessed 23 August 2015.

O. Hanseth and K. Lyytinen, 2010. “Design theory for dynamic complexity in information infrastructures: The case of building Internet,” Journal of Information Technology, volume 25, number 1, pp. 1–19.
doi:, accessed 23 August 2015.

O. Hanseth and E. Monteiro, 1998. “Understanding information infrastructure” (27 August), at, accessed 13 June 2014.

O. Hanseth and E. Monteiro, 1997. “Inscribing behaviour in information infrastructure standards,” Accounting, Management and Information Technologies, volume 7, number 4, pp. 183–211.
doi:, accessed 23 August 2015.

M. Holzschlag, 2008. “Web standards 2008: Three circles of hell,” A List Apart (23 September), at, accessed 13 June 2014.

P. Jaeger, 2012. Disability and the Internet: Confronting a digital divide. London: Lynne Rienner.

M. Kliehm, 2006. “To hell with Joe Clark,” Learning the World (31 August), at, accessed 13 June 2014.

D. Kreps and P. Wheeler, 2008. “Combating eDiscrimination in the North West — Final report,” University of Salford and European Social Fund, at, accessed 23 August 2015.

D. Kreps and A. Adam, 2006. “Failing the disabled community? The continuing problem of Web accessibility in human computer interaction research in Web design and evaluation,” In: S. Kurniawan and P. Zaphiris (editors). Advances in universal Web design and evaluation: Research, trends and opportunities. Hershey, Pa.: Ideas Group, pp. 25–42.
doi:, accessed 23 August 2015.

S. Kurniawan, R. Ellis and P. Zaphiris, 2001. “Usability and accessibility comparison of governmental, organizational, educational and commercial aging/health-related Web sites,” WebNet Journal, volume 3, number 3, pp. 45–52.

B. Latour, 1999. “On recalling ANT,” Sociological Review, volume 47, number S1, pp. 15–25.
doi:, accessed 23 August 2015.

B. Latour, 1996. “On actor-network theory — A few clarifications,” Soziale Welt, volume 47, number 4, pp. 369–381.

B. Latour, 1993. We have never been modern. Translated by C. Porter. Cambridge, Mass.: Harvard University Press.

B. Latour, 1990. “Technology is society made durable,” Sociological Review, volume 38, number S1, pp. 103–131.
doi:, accessed 23 August 2015.

B. Latour, 1987. Science in action: how to follow scientists and engineers through society. Cambridge, Mass.: Harvard University Press.

J. Law, 1992. “Notes on the theory of the actor-network: Ordering, strategy, and heterogeneity,” Systems Practice, volume 5, number 4, pp. 379–393.
doi:, accessed 23 August 2015.

H. Lee and S. Oh, 2006. “A standards war waged by a developing country: Understanding international standard setting from the actor-network perspective,” Journal of Strategic Information Systems, volume 15, number 3, pp. 177–195.
doi:, accessed 23 August 2015.

B. Loader (editor), 1998. Cyberspace divide: Equality, agency, and policy in the information society. London: Routledge.

C. Marincu and B. McMullin, 2004. “A comparative assessment of Web accessibility and technical standards conformance in four EU states,” First Monday, volume 9, number 7, at, accessed 23 August 2015.

S. Marshall, W. Taylor and X. Yu (editors), 2003. Closing the digital divide: Transforming regional economies and communities with information technology. Westport, Conn.: Praeger.

B. McMullin, 2002. “Users with disability need not apply? Web accessibility in Ireland, 2002,” First Monday, volume 7, number 12, at, accessed 23 August 2015.

J. Pickard, 2007. “WCAG 2.0: Woeful to wonderful in one easy draft?” (25 May), at, accessed 13 June 2014.

N. Ramiller, 2007. “Constructing safety: System designs, system effects, and the play of heterogeneous interests in a behavioral health care setting,” International Journal of Medical Informatics, volume 76, supplement 1, pp. S196–S204.
doi:, accessed 23 August 2015.

H. Ritchie and P. Blanck, 2003. “The promise of the Internet for disability: a study of on-line services and Web site accessibility at Centers for Independent Living,” Behavioral Sciences & the Law, volume 21, number 1, pp. 5–26.
doi:, accessed 23 August 2015.

N. Selwyn, 2004. “Reconsidering political and popular understandings of the digital divide,” New Media & Society, volume 6, number 3, pp. 341–362.
doi:, accessed 23 August 2015.

L. Servon, 2002. Bridging the digital divide: Technology, community, and public policy. Malden, Mass.: Blackwell.

S. Timmermans and M. Berg, 1997. “Standardization in action: Achieving local universality through medical protocols,” Social Studies of Science, volume 27, number 2, pp. 273–305.
doi:, accessed 23 August 2015.

U.S. Department of Commerce, 2000. “Falling through the net: Toward digital inclusion,” at, accessed 13 June 2014.

P. Wheeler and D. Kreps, 2008. “User testing is not a luxury,” Electronic Markets, volume 18, number 4, pp. 324–332.
doi:, accessed 23 August 2015.

E. Whitley and A. Pouloudi, 2001. “Studying the translations of NHSnet,” Journal of Organizational and End User Computing, volume 13, number 3, pp. 30–40.
doi:, accessed 23 August 2015.

WaSP, 1998. “Web Standards Project,” at, accessed 13 June 2014.

World Wide Web Consortium (W3C), 2005. “World Wide Web Consortium Process Document” (14 October), at, accessed 1 October 2008.

World Wide Web Consortium (W3C), 1999. “Web Content Accessibility Guidelines 1.0” (5 May), at, accessed 23 August 2015.

World Wide Web Consortium (W3C), 1997. “Briefing Package For Project Web Accessibility Initiative (WAI),” at, accessed 13 June 2014.

B. Wentz, P. Jaeger and J. Lazar, 2011. “Retrofitting accessibility: The legal inequality of after-the-fact online access for persons with disabilities in the United States,” First Monday, volume 16, number 11, at, accessed 23 August 2015.

J. Zeldman, 2001. “To hell with bad browsers,” A List Apart (16 February), at, accessed 13 June 2014.


Editorial history

Received 21 August 2015; accepted 22 August 2015.

Copyright © 2015, First Monday.
Copyright © 2015, David Kreps and Mhorag Goff.

Code in action: Closing the black box of WCAG 2.0, A Latourian reading of Web accessibility
by David Kreps and Mhorag Goff.
First Monday, Volume 20, Number 9 - 7 September 2015
