The concept of researchers needing to obtain explicit, informed consent from research subjects is a comparatively recent phenomenon of the mid-twentieth century. Resistance to the paternalistic medicine practised up to this point evolved as a result of increasing concern about the rights of the individual and as a reaction to the horrendous, unethical research undertaken in Nazi Germany and Japan during the Second World War.1 As a result of the war crimes trials, the Nuremberg Code was written: this was the first internationally accepted ethical code for human biomedical experimentation. Emphasis on the importance of voluntary informed consent to research was outlined in item 1 of the Code which deemed it “essential” in research on human beings.
The next major ethical code to be developed in this area was the World Medical Association’s Declaration of Helsinki of 1964, subsequently revised several times between 1975 and 1989. It further emphasises the rights of the individual, stating that: “concern for the interests of the subject must always prevail over the interests of science and society.”
The value of the individual is embedded in the ethical principle of respect for autonomy, one of the four principles of biomedical ethics developed as a common morality theory by Beauchamp and Childress.2 For autonomy to be preserved within the context of giving informed consent to research, the following conditions must be met: consent must be given freely (or voluntarily); sufficient information must be provided, including the risks involved, to allow an informed decision to be made; and the person giving consent must be mentally competent to do so.
The legal position concerning consent to medical treatment is based in common law. Legally, for an adult, any unauthorised “touching,” including medical procedures, would constitute battery unless “the …