Humanising Machine Intelligence

Every new technology bears its designers’ stamp. For Machine Intelligence, our values are etched deep in the code.

Machine Intelligence sees the world through the data that we provide and curate. Its choices reflect our priorities. Its unintended consequences betray our indifference. It cannot be morally neutral.

We have a choice: try to design morally defensible machine intelligence that sees the world fairly and chooses justly, or else build the next industrial revolution on immoral machines.

To design morally defensible machine intelligence, we must understand how our perspectives reflect power and prejudice. We must understand our priorities, and how to represent them in terms a machine can act on. And we must break new ground in machine learning and AI research.

For more information, visit the project page.