How Companies Can Build Ethical AI
AI is exciting — but also an ethical minefield.
Artificial intelligence (AI) is becoming omnipresent in our everyday lives.
From voice assistants to personalised recommendations on streaming services and online shopping platforms, AI is being used to analyse vast amounts of data and make predictions about what we might like or need.
Because of its ever-growing utility across so many industries, AI technology has far-reaching implications for nearly every aspect of society and therefore needs to be governed and regulated. Concerns around AI-powered weaponry, fake news, bias, privacy, and equality are just a few of the pressing ethical issues AI has raised.
Companies, organisations, and political institutions are voicing concerns over AI and driving new initiatives to improve the guidelines and principles for its use:
- WHO made a call for safe and ethical AI for health. One of the concerns expressed was that data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness.
- OpenAI launched a program to fund experiments in setting up democratic guidelines for AI systems. Some of the topics they aim to address include medical/financial/legal advice and human and LGBT rights. E.g., What principles…