The urge for ethical AI in healthcare
DukeRem, 23 March 2023
  A new study by Dr Stefan Harrer, published in The Lancet, calls for a comprehensive ethical framework around the use, design, and governance of generative AI applications in healthcare and medicine. The study highlights the potential of Large Language Models (LLMs) to transform healthcare workflows, education, and communication, but also points out the dangers of this technology. It proposes a regulatory framework with 10 principles to mitigate the risks of generative AI in health, including designing AI as an assistive tool that augments human decision-making and auditing AI against data privacy, safety, and performance standards. The study also warns that bad actors are gambling with the well-being of users and the integrity of AI and knowledge databases at scale by productizing and releasing LLM-powered generative AI tools into a fast-growing commercial market. Dr Harrer predicts that the first commercial product offerings in digital health data management will emerge within two years, but only if the research and development community aims for equal levels of ethical and technical integrity. Here is a summary of the ethical guidelines proposed by Dr Harrer:
  • Design and use AI to augment human decision-makers’ abilities, not to supplant them;
  • Design and use AI to produce metrics of performance, usage, and impact that show when and how AI assists decision-making, and monitor potential biases;
  • Research the value systems of target user groups and design and use AI to adhere to them;
  • State the intention of designing and using AI at the start of any conceptual or development work;
  • Reveal all training data sources and data characteristics;
  • Design and use AI to clearly and transparently mark any AI-generated content as such;
  • Audit AI regularly against data privacy, security and performance standards;
  • Maintain databases to record and share the outcomes of AI audits, teach users about model capabilities, limitations, and risks, and enhance the performance and trustworthiness of AI systems by retraining and redeploying updated algorithms;
  • Follow fair work and safe-work standards when hiring human developers;
  • Set legal precedents to determine under what conditions data can be used to train AI, and establish legal frameworks for copyright, liability, and accountability governing the legal dependencies of training data, AI-generated content, and the impact of decisions that humans make using such data.