Establishing a duty of care, and with it liability, is far from straightforward when defective software causes injury
Increasingly, software is used in situations where failure may result in death or injury.

In these situations the software is often described as safety-critical. Where such software is used and an accident occurs, it is proper that the law should intervene to afford some form of redress to the injured party or to the relatives of a deceased person. Safety-critical software is used in specialised settings such as flight control in the aviation industry and by the medical profession in carrying out diagnostic tasks.

Nowadays software has an impact on the average citizen's life, whether by choice or otherwise. Yet for most individuals, as the plane leaves the airport, the typical concerns centre on the exchange rate and not on the computer software controlling the flight. Those concerns of course change when the plane falls from the sky without explanation. What can the individual do when faced with such occurrences? In such a dramatic scenario there is unlikely to be a contractual relationship between the individual affected by the defective software and the software developer. In this article I shall attempt to examine how liability may accordingly arise.

Setting the Scene – the Computer as the Villain

The legal concept of liability has traditionally included as a base element the concept of culpa, or fault. Humans are marvellous at attributing blame in any given situation, the converse being that they are equally good at passing the buck. When things go wrong and a computer is involved, more often than not the initial response is to blame the computer. While solving the puzzle following a calamity is never straightforward, the first line of attack is often the technology used in the situation that has gone wrong.

An example of this pattern of behaviour can be seen following the introduction of computerised stock-index arbitrage in the New York financial markets in 1987. On 23 January 1987 the Dow Jones Industrial Average rose 64 points, only to fall 114 points in a period of 70 minutes, causing widespread panic, and on 19 October 1987, Black Monday as it became known, the market crashed outright. It was indeed a black day for investors large and small alike, many of whom sustained heavy financial losses. The response of the authorities in the face of the crisis was to suspend computerised trading immediately.

In considering this event, Stevens argues that all computerised program trading did was increase market efficiency and, perhaps more significantly, get the market to where it was going faster, without necessarily determining its direction. The decision to suspend computerised trading was nonetheless taken without a full investigation of all the relevant facts.

As Stevens himself puts it:

“Every disaster needs a villain. In the securities markets of 1987, program trading played that role. Computerised stock-indexed arbitrage has been singled out as the source of a number of market ills” 1

Of course, in the situation outlined above the losses incurred would be economic in nature. That is not to say that such losses have no real and human consequences for those who suffer them, but, as now appears to be the case in both Britain and America, there can be no recovery where the losses are purely economic unless there has been reliance in accordance with the Hedley Byrne principle2.

Turning from the purely financial implications of software failure, other failures have rightly generated considerable public concern. In particular, the report of the inquiry into the London Ambulance Service3 highlighted the human consequences when software failed to perform as expected.

The situation becomes all the more problematic when it is remembered that nobody expects software to work first time. Software is by its very nature extremely complex, consisting as it does of line upon line of code. It might be thought that the simple solution would be to check all software thoroughly. That, of course, begs the question of what actually constitutes a thorough check. Even where software developers check each line of code, or test the validity of every statement in the code, the reality is that such testing will not ensure that the code is error free. Kaner4 has identified at least 110 tests that could be carried out on a piece of software, none of which would necessarily guarantee that the software is error free. Indeed, the courts in England have explicitly accepted that there is no such thing as error-free software.5
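The limits of exhaustive testing can be shown with a trivial sketch (the function, names and figures below are hypothetical, not drawn from any real system): a test suite can execute every statement in a program, and so report complete statement coverage, while still leaving a fatal input untested.

```python
def dose_per_fraction(total_dose, fractions):
    """Split a total radiation dose across treatment fractions (hypothetical example)."""
    if fractions > 0:
        result = total_dose / fractions
    return result  # latent defect: 'result' is never bound when fractions == 0


# A single test executes every statement above, so statement coverage
# reports 100 per cent -- yet the defective path remains unexercised.
assert dose_per_fraction(60.0, 30) == 2.0

# The untested input still fails at run time:
try:
    dose_per_fraction(60.0, 0)
except UnboundLocalError:
    print("defect survived full statement coverage")
```

Branch coverage would catch this particular defect, but analogous gaps exist under every coverage criterion, which is why no finite battery of tests can guarantee error-free code.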

Furthermore, the hardware on which software runs can also be temperamental: it can be affected by temperature changes or power fluctuations, or may fail simply through wear and tear. Any number of other factors, such as incompatibility between software and hardware, can also play a part, and taken together they justify the assertion that establishing the precise cause of a software failure is no easy task. This is of significance given that the person claiming loss must prove the actual cause of that loss.

Principle, Pragmatism or Reliance

In situations where an individual is killed or injured as a consequence of the failure of a piece of software there is no reason why, in principle, recovery of damages should not be possible. It would, however, be foolhardy for the individual to assume that recovering damages is in any way straightforward.

In order for an individual to establish a case against a software developer at common law, it would be necessary to show that the person making the claim was owed a duty of care by the developer, that there was a breach of that duty, that the loss sustained was a direct result of the breach, and that the loss was of a kind for which recovery of damages would be allowed. In determining whether a duty of care exists between parties, the starting point has traditionally been to consider the factual relationship between them and whether that relationship gives rise to a duty of care.

The neighbour principle, as espoused by Lord Atkin in the seminal case of Donoghue v Stevenson6, requires the individual, in this case the software developer, to have in his contemplation those persons who may be affected by his acts or omissions. By way of example, it is reasonable to assume that the developer of a program used to operate a diagnostic tool or therapeutic device is aware that the ultimate consumer (for want of a better word) will be a member of the public, although the identity of that person may be unknown to the developer.

The case of the ill-fated Therac-25, a machine controlled by computer software and used to provide radiation therapy to cancer patients, highlights the problem. Prior to the development of radiotherapy, radical invasive surgery was the only means of treating various cancers. Not only was this extremely traumatic for patients, but it was often unsuccessful.

With the development of radiotherapy the requirement for surgery has been greatly reduced. However, between 1985 and 1987 six patients were seriously injured or killed as a result of receiving excessive radiation doses attributable to the Therac-25 and its defective software. Commenting on the tragedy, Liversedge stated that:

“Although technology has progressed to the point where many tasks may be handled by our silicon–based friends, too much faith in the infallibility of software will always result in disaster.”7

In considering to whom a duty of care is owed, the law will have to develop with a degree of flexibility as new problems emerge. The question for the courts will be whether this is done on an incremental basis or by the application of principle. If the former approach is adopted, a further question is whether it is just and reasonable to impose a duty where none has existed before. In my view, however, the absence of any direct precedent should not prevent the recovery of damages where there has been negligence. I do acknowledge that the theoretical problems that can arise are far from straightforward.

The broad approach adumbrated in Donoghue is, according to Rowland8, appropriate in cases where there is a direct link between the damage and the negligently designed software, such as software that causes intensive care equipment to fail. In other cases, however, she argues that the manner in which damage results cannot provide the test for establishing a duty of care. She cites as an example the situation where passengers board a train with varying degrees of knowledge as to whether the signalling system is Y2K compliant. While she does not directly answer the questions she poses, the problems highlighted are interesting when one looks at the extremes. For instance, should it make any difference to the outcome of claims by passengers injured in a train accident that one passenger travelled only on the basis that the computer-controlled signalling system was certified as safe, while the other did not apply his mind to the question of safety at all? It is tempting to assume that liability would arise in the former scenario on the basis of reliance, but that begs the question of whether liability arises in the latter scenario at all. If, as she implies, reliance is the key to establishing liability, then it would not, as there has been no reliance. That result would be harsh indeed, avoiding as it does the failure of the developer to produce a signalling system that functioned properly.

More often than not an individual may have little appreciation that his environment is being controlled by computer software. It could be argued that, because of the specialist knowledge of the computer programmer, he or she assumes responsibility for the individual ultimately affected by the software. The idea that reliance could give rise to a duty of care first came to prominence in the Hedley Byrne case. The basis of the concept is that a special relationship exists between someone providing expert information or an expert service and the person relying on it, thereby creating a duty of care. In the context of computer programming the concept, while superficially attractive, ignores the artificiality of such a proposition, given that it is highly unlikely that the individual receiving, for example, radiotherapy treatment will have any idea of the role software plays in the treatment process. Furthermore, in attempting to establish a duty of care based on reliance, the House of Lords have been at pains to stress that the assumption of responsibility to undertake a specific task is not of itself evidence of the existence of a duty of care to a particular class of persons.9

Standard of Care

It might come as a shock to some, and no great surprise to others, that there is no accepted standard as to what constitutes good practice amongst software developers. That is not to say that there are no codes of practice or other guidelines, merely that no one code prevails over the others. The legal consequences of this situation can be illustrated by the following example. Two software houses are given the task of producing a program for the same application; the code produced by each house is different, and one application fails while the other runs as expected. It is tempting to assume that the failed application was negligently designed simply because it did not work, but such an assumption is not merited without further inquiry. In order to establish that the program was produced negligently it would be necessary to demonstrate that no reasonably competent developer would have produced such a program. In the absence of a universal standard, proving such a failing could be something of a tall order.

An increasing emphasis on standards is of considerable importance, given that it is by this route that an assessment of whether a design is reasonable will become possible. Such an assessment should not be an arbitrary judgment but one based on the objective application of principles to established facts. At present, in the absence of a uniform approach, one is faced with the spectre of competing experts endeavouring to justify their preferred approaches. In dealing with what can be an inexact science, it may prove difficult to distinguish between experts who hold differing opinions, and the courts in both England and Scotland have made it clear that it is wrong to side with one expert on the basis of preference alone.10

In America standards have been introduced for accrediting educational programs in computer science technology. The Computer Science Accreditation Commission (CSAC), established by the Computer Science Accreditation Board (CSAB), oversees these standards. Although such a move towards standardisation has benefits, not least that such standards should reflect best practice, it would be naïve to assume that this will make litigation any easier. Perhaps only in those cases where it was clear that no regard whatsoever had been paid to any of the existing standards would there be a real chance of establishing liability.

It should also be borne in mind that in the age of the Internet software will undoubtedly travel and may be produced subject to different standards in many jurisdictions. In determining what standards could be regarded as the best standards the courts could be faced with a multiplicity of choices. That of course is good news for the expert and as one case suggests some of them are more than happy to participate in costly bun fights.11

Causation


Even where it is possible to establish a duty of care and a breach of that duty the individual may not be home and dry. It is necessary to show that the damage sustained was actually caused by the breach of duty. That is not as straightforward as it might sound when one pauses to consider the complexities of any computer system.

The internal complexity of a computer system makes it necessary for an expert to be instructed to confirm that the computer program complained of was the source of the defect giving rise to the damage. A computer program may be incompatible with a particular operating system and therefore fail to work as expected. In those circumstances it would be difficult to establish liability on that basis alone, unless the programmer had given an assurance that compatibility would not be an issue. If ISO 9127, one of the main standards, were to become the accepted standard, the computer programmer would of course be bound to provide information as to the appropriate uses of a particular program. In that event it may be easier to establish liability for failure to give appropriate advice, with the curious consequence that the question of whether the program itself was defective in design would be relegated to one of secondary importance.

A more difficult question arises in relation to the use of machines in areas such as medical electronics. Returning by way of example to the ill-fated Therac-25: while it is clear that the machine caused harm, in those cases where there were fatalities it would be difficult to maintain that the machine caused death, as it is highly probable that the cancers, if left untreated, would have led to death in any event. Equally, where an ambulance was late and a young girl died from a severe asthma attack, it could not be said that the cause of death was the failure of the computer-controlled telephone system, even though, had the system worked, the chances of survival would have been greatly increased.

Let the Developer Defend

As can be seen from the above, establishing liability at common law in the context of defectively designed software is no mean feat. With the passing of the Consumer Protection Act 1987, following the EC Directive (85/374/EEC), the concept of product liability has been part of UK law for over a decade. The effect of the Directive and the Act is to create liability without fault on the part of the producer of a defective product that causes death, personal injury, or loss of or damage to property, including land. Part of the rationale of the Directive is that, as between the consumer and the producer, it is the latter who is better able to bear the costs of accidents. Both the Directive and the Act provide defences to an allegation that a product is defective, so liability is not absolute. However, since under the Directive and the Act an individual does not have to prove fault on the part of the producer, the onus of proof shifts from the consumer to the producer, who must make out one of the available defences.

In relation to computer programs, the immediate and much-debated question that arises in applying the Directive and the Act is whether computer technology can be categorised as a product. Hardware will undoubtedly be covered by the Directive, no doubt providing a modicum of comfort to those working in close proximity to 'killer robots'.

The difficulty arises in relation to software. The arguments against software being classified as a product are essentially threefold. First, software is not moveable and therefore is not a product. Second, software is information as opposed to a product, although some obiter comments on the status of software suggest that information forms an integral part of a product.12 Third, software development is a service, and consequently the legislation does not apply.

Against that it can be argued that software should be treated like electricity, which is itself specifically covered by the Directive in Article 2 and the Act in section 1(2), and that software is essentially compiled from energy that is material in the scientific sense. Ultimately it could be argued that placing an over-legalistic definition on the word "product" ignores the reality that we now live in an information society in which, for social and economic purposes, information is treated as a product, and that the law should recognise this. Furthermore, following the St Albans case it could be argued that the trend is now firmly towards categorising software as a product, and indeed the European Commission has expressed the view that software should be so categorised.13


How the courts will deal with the problems highlighted above remains to be seen, as within the UK there has been little litigation in this area. If, as Rowland suggests, pockets of liability emerge covering specific areas of human activity, such as the computer industry, it is likely that this will happen only over a long period of time. Equally, relying on general principles, which has to a certain extent become unfashionable, gives no greater guarantee that the law will become settled more quickly.

Parliament could intervene to afford consumers greater rights and to clarify once and for all the status of software. It should, however, be borne in mind that any expansion of liability on the part of software producers may have adverse consequences for insurance, making comprehensive liability coverage more difficult to obtain. For smaller companies such coverage may not be an option at all, forcing them out of the market. Whatever the future holds in this brave new world, perhaps the only thing that can be said with any certainty is that it will undoubtedly be exciting.

Maurice Jamieson is an advocate
