The Internet Watch Foundation is responsible for deciding which material should be blocked from public access on grounds of criminality, but is a private body a desirable form of regulation?

It is the revolutionary tool that has gripped our society with such omnipresence – we can enter cyberspace anywhere and at any given moment – that we forget its youth. Few fully appreciate the impact it has had in demolishing the old world and sculpting the new, but for all that it has done, the internet has been perennially coupled with a crucial problem: how exactly is it to be regulated?

Thinkers in the “age of the internet” have disputed this key issue for years, and although it is important to look at regulation of the internet from a theoretical viewpoint, it is equally important to analyse what is being done in society today and to determine how effective the current mechanisms are. We will study the role of the Internet Watch Foundation as it attempts to regulate the internet in Britain, examine the advantages of having this private organisation perform that role, and outline the concerns raised by such a form of internet regulation.

The role and nature of the IWF

The Internet Watch Foundation (IWF) is a private body funded by a grant from the European Union and by subscription fees paid by those within the internet sector. The Government has effectively delegated to the IWF the task of regulating the internet in Britain, with specific regard to child abuse images online.

After complaints were made to internet service providers (ISPs) about the alarming availability of child abuse images online(1), representatives of the Internet Service Providers’ Association (ISPA) worked with ISPs and the London Internet Exchange to draft a memorandum from which the fledgling Internet Watch Foundation emerged in 1996. Together they formulated its three objectives(2): a system to rate the levels of harmful and offensive content available online; a notice and take-down procedure through which obscene content could be reported to any ISP hosting child abuse images in the UK; and a system responsible for promoting education on the issue of child abuse images.

The role of the IWF became clear: its remit was “to minimise the availability of ‘potentially criminal’ internet content, specifically images of child sexual abuse hosted anywhere, and criminally obscene adult content in the UK”(3). The IWF was initially welcomed by the Home Office, which was no doubt glad that the problem of regulating child abuse images had been taken out of its hands. However, there were growing concerns that the IWF lacked independence from the industry, and in 1998 the Department of Trade & Industry and the Home Office commissioned a review, enlisting the help of KPMG and Denton Hall. From here, it has been argued, the IWF descended into what many see as an organisation shrouded in regulatory secrecy(4).

The IWF’s use of blacklisting, introduced in 2004, may be to blame. This is the method the IWF uses to ensure that its members block all sites providing images that the IWF recognises as child abuse. The URLs of blacklisted sites are encrypted for confidentiality and then sent to ISP members for blocking. Importantly, this secrecy has caused a fundamental problem: users are given no details of which sites the IWF has blacklisted, and the IWF has inevitably aroused suspicion by revealing the pages it blocks to no one.
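In outline, such a confidentiality mechanism can be sketched in code. The following is a minimal, hypothetical illustration – not the IWF’s actual protocol, whose details are unpublished – of how a regulator might distribute a blacklist as one-way hashes rather than readable URLs, so that member ISPs can match requests against the list without the readable list ever circulating:

```python
import hashlib


def hash_url(url: str) -> str:
    """One-way hash of a normalised URL; the original cannot be recovered."""
    return hashlib.sha256(url.strip().lower().encode("utf-8")).hexdigest()


def build_blacklist(urls):
    """Regulator side: distribute only the hashes, never the readable URLs."""
    return {hash_url(u) for u in urls}


def is_blocked(url: str, hashed_blacklist) -> bool:
    """ISP side: a request matches only if its hash appears on the list."""
    return hash_url(url) in hashed_blacklist


# Illustrative placeholder entries -- not real blacklisted addresses.
blacklist = build_blacklist(["http://example.org/banned-page"])
assert is_blocked("http://example.org/banned-page", blacklist)
assert not is_blocked("http://example.org/other-page", blacklist)
```

Hashing is only one plausible design choice here; the article records merely that the list is “encrypted for confidentiality”, and the true scheme may differ.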

We can tell that the IWF has inherently good intentions. What we cannot tell is the nature of the IWF’s relationship with the UK Government and with its subscribing ISPs. Importantly, we cannot know what the IWF targets and takes down. As far as the layperson is concerned, the IWF works closely with their ISP to minimise the risk of widespread child abuse images online. (Indeed, many members of the public will have no knowledge of the IWF’s existence, which raises a deeper question of how “just” the IWF really is.) It takes the inquisitive citizen to start pondering how effective the IWF may be, or whether civil liberties are being infringed.

The IWF within the regulation debate

It is unwise to proceed further without exploring the proposed mechanisms for regulation. One belief is that cyberspace is its own sovereign space in which the laws of the real world are rendered inapplicable. Because cyberspace involves users “travelling” from one jurisdiction to another across a borderless web of hosts and servers, it is argued that it is legally impracticable for government to sanction actions on the internet. (However, UK citizens who visit online paedophilic communities are still at risk of being apprehended under UK law – R v Fellows & Arnold [1997] 2 All ER 548.) Underpinning this “cyberlibertarian” theory is the notion that governments “possess no methods of enforcement we have true reason to fear”(5). Cyberlibertarians believe in regulation effected by “decentralised, emergent law”(6), and that regulating cyberspace requires a system developed organically with the consent of the majority of people in cyberspace(7). It may seem that the IWF sits comfortably with cyberlibertarianism, but doubts arise with the emergence of countering theories.

Not all thinkers believe that cyberspace is impossible to regulate using existing methods of law enforcement. It has been theorised that a regulator can deploy laws, markets, architecture and norms(8), and can use specialised hybrids of the four to achieve whatever outcome it desires. These four modalities constrain users, and the person being regulated has little say in how regulation should occur(9). This theory of “cyberpaternalism” may be flawed, however, in that it does not take into account the complexities of how cyberspace and all its forums work in the modern era. It would be difficult to label the IWF an organisation that regulates us unconditionally today.

People in cyberspace are not simply subject to the IWF’s regulation. The four modalities each derive their legitimacy from community consensus, so the community plays a vital role in its own regulation(10). A regulatory settlement can be challenged by a community that no longer supports it(11), and this can determine whether or not a regulatory decision remains in force.

Despite the ongoing collisions between these regulatory theories, one thing can be ascertained: cyberspace is man-made, and its design can be changed with a few well-placed keystrokes(12). At the very least, then, one of Lessig’s four modalities of regulation has been adopted – architecture has been used to achieve the regulatory demands of many.

Imagine a local authority wishing to prevent vehicle access to a busy town-centre shopping area where cars would pose extreme danger to pedestrians: it may erect concrete bollards where vehicles are otherwise likely to enter. The authority is using its power to alter the “architecture” of that particular area of the town, effectively manipulating the landscape in order to regulate its people.

Now take ISPs and their ability to control the code of the internet: if it becomes apparent that allowing access to a certain website is against public policy, the ISP has the power to amend the code and prevent access to that particular site. An ISP thus holds the same ability to manipulate the architecture of cyberspace as the local authority does in the real world, exemplifying exactly what Lessig was theorising when he described “architecture” as a key modality of regulation. Given the IWF’s close relationship with ISPs, the IWF has authorised access to the code of the internet, allowing it to amend the code of the digital world and to regulate almost at will.

The IWF: successful or not?

Although the IWF has good intentions, it is plagued by criticism of how it implements its regulation. Yet the IWF’s regulation of child abuse images online can be highly effective: we have a system in place free from the constraints of bureaucratic legislation, and private regulatory practices have been implemented and followed by most of the leading ISPs. This is evident when looking at Cleanfeed.

Cleanfeed is a content-blocking technology created in 2003 and implemented by the largest ISP in Britain – British Telecom – in 2004. Although Home Office Minister Vernon Coaker insisted that all ISPs adopt the Cleanfeed system voluntarily by the end of 2007 or face legal compulsion, no legislation has ever been passed to enforce this “threat”. Cleanfeed is much more efficient than older forms of regulation, and filters user requests by comparing them with the IWF blacklist.

Importantly, Cleanfeed does not blacklist entire sites, only specified pages. This may seem effective, but there is one underlying problem: the blocked pages are not listed anywhere the public can see. When a user requests a blocked page, a standard error message appears, leaving the user confused. Because the practice of blocking pages is not understood by the public, the IWF inevitably faces the accusation that it is suspiciously covert, and questions arise about its accountability.
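The page-level behaviour described above can be sketched in a few lines. This is a hypothetical, single-stage illustration – the real Cleanfeed implementation is more elaborate, and the URLs and error text here are invented placeholders – showing how a filter can block one specified page while leaving the rest of the same site reachable, returning only a generic error that tells the user nothing:

```python
# Hypothetical page-level filter: blocks specified pages, not entire sites.
# The URL below is an illustrative placeholder, not a real blacklist entry.
BLOCKED_PAGES = {"http://example.org/gallery/blocked-page"}


def handle_request(url: str) -> str:
    """Return the response an ISP-side filter might give for a request."""
    if url in BLOCKED_PAGES:
        # A bare error, indistinguishable from a dead link, is what
        # leaves the user confused about why access failed.
        return "404 Not Found"
    return "200 OK"


print(handle_request("http://example.org/gallery/blocked-page"))  # 404 Not Found
print(handle_request("http://example.org/gallery/front-page"))    # 200 OK
```

The design point is that the granularity sits at the page, not the domain: a sibling page on the same host passes through untouched, which is precisely what distinguishes Cleanfeed from cruder site-wide blocking.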

The question of accountability leads to a regulatory paradox. Perhaps a private organisation regulating the internet alongside technocrats is more attractive than state control: the state would be deluded if it thought it could know everything about cyberspace, so it is better to have a decentralised team of experts focus on the issue(13). Conversely, governmental control brings accountability and democratic transparency, which the IWF fails to deliver. Hence we are left with a key problem not easily solved; in fact, neither the state nor the IWF may be capable of delivering the perfect solution.

When the IWF reversed its decision to block access to a Wikipedia page it had deemed to be well within its ambit(11), questions arose over the IWF’s regulatory efficacy. If the internet community is capable of combining to reverse the IWF’s decisions, this arguably undermines the IWF’s power. Perhaps cyberlibertarians are correct that people in cyberspace are too difficult to regulate using external law; or perhaps network communitarians are correct that the community has an important role in determining its own regulation.

Despite the Wikipedia controversy, the IWF has returned to business as usual. Indeed, the role of the Cleanfeed system has been expanded to cover peer-to-peer file sharing (Dramatico Entertainment Ltd v British Sky Broadcasting Ltd [2012] EWHC 268 (Ch): the “Pirate Bay” case). This is controversial, as the system can block access not only to illegal material but to perfectly legal content too. We are left speculating: without sight of the blacklist, no one knows whether rightful blocking has taken place.

Has the IWF been a success? Possibly: it straddles awkwardly the boundary between a qualified success and an unquestionable victory. For this reason the IWF cannot be left to carry on without constant close inspection and scrutiny, until it reveals enough of its practices to reassure us that our civil liberties are safe. Admittedly, if the IWF could somehow shed its alarmingly efficient camouflage, we might have the most effective regulator of the internet – but is this likely, or even achievable? The niggling thought remains that the IWF’s secrecy may be necessary for any success at all. Its work may have to fall within the same category as that of our intelligence agencies, whose secrecy is one of the essential elements that actually protects us from harm. Transparency permeating the IWF may have to remain something that is nice to contemplate in theory but fails to work in practice. Grudgingly, the IWF’s operations may be better than watching David Cameron’s mission to defeat child abuse imagery online collapse in plain sight.

So what of the IWF?

For all its successful regulation, however, it is contended that the IWF falls foul of one too many societal problems. The IWF remains an unaccountable panel of censors that vets our domestic internet connections, using a mysterious blacklist overseen by nobody and known neither to ordinary citizens nor to the ISPs themselves. Britain is a representative democracy that demands a form of regulation not akin to that of China or Islamic states, but so long as our internet regulation is carried out by an unaccountable private body, our civil liberties are in danger. Must we all pay so high a price to gain any sort of effective regulation of child abuse online?

It seems all is cloak and dagger in cyberspace.


(1) Perhaps most famously the open letter sent to every ISP in the UK by Metropolitan Police Chief Inspector Stephen French, who called on ISPs to regulate material on the internet sufficiently or else face enforcement action. Quoted in Julian Petley, “Web Control”, Index on Censorship 38(1) (February 2009), 78–90.

(2) Known as the “R3 Agreement”. These “three Rs” remain cornerstones of the IWF’s remit.


(4) C J Davies, “The Hidden Censors of the Internet”, Wired UK (2009), p 2.

(5) J P Barlow, A Declaration of the Independence of Cyberspace (1996).

(6) D Johnson and D Post, “Law and Borders: The Rise of Law in Cyberspace”, 48 Stanford Law Review 1367 (1996).

(7) D Johnson and D Post, “The New ‘Civic Virtue’ of the Internet: A Complex Systems Model for the Governance of Cyberspace”, in C M Firestone (ed), The Emerging Internet (1998).

(8) L Lessig, Code and Other Laws of Cyberspace (Basic Books, 1999).

(9) Ibid, “The Pathetic Dot”.

(10) “Network communitarianism”: A Murray, The Regulation of Cyberspace: Control in the Online Environment (2007).

(11) This is shown in the “Virgin Killer” case. The IWF blacklisted the Wikipedia page hosting the cover image of the Scorpions album Virgin Killer because it found the image contrary to the Protection of Children Act 1978. The decision was reversed after public outcry.

(12) A Murray, Information Technology Law: The Law and Society (2nd ed, Oxford University Press, 2013).

(13) See the “synoptic delusion” in F A Hayek, The Road to Serfdom (Chicago: University of Chicago Press, 1994).
The Author
(Mr) Lee Thomson is chief editor and founder of the Strathclyde Student Law Review