Enjoy this article?
Most Museums Journal content is only available to members. Join the MA to get full access to the latest thinking and trends from across the sector, case studies and best practice advice.
The quality of the data in an AI system is vital, as it determines the outputs. This means that addressing bias at the data level is crucial if you want to prevent social inequalities from being reflected in your projects. The approach taken by the Metropolitan Museum of Art in New York, where AI-generated metadata tags are verified by humans, is a good example.
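The Met's actual pipeline is not public, but the human-in-the-loop pattern it describes can be sketched in a few lines: the AI only suggests tags, and nothing is written to a collection record until a human reviewer approves it. All names and thresholds below are hypothetical, for illustration only.

```python
from dataclasses import dataclass

@dataclass
class TagSuggestion:
    """A single AI-proposed metadata tag for a collection object (illustrative)."""
    object_id: str
    tag: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

def split_for_review(suggestions, threshold=0.9):
    """Route suggestions: high-confidence ones to a fast-track human check,
    low-confidence ones to a fuller review queue. Nothing is auto-published."""
    fast_track, full_review = [], []
    for s in suggestions:
        (fast_track if s.confidence >= threshold else full_review).append(s)
    return fast_track, full_review

def publish_tags(suggestions, human_approves):
    """Only tags a human reviewer confirms become part of the record."""
    return [s for s in suggestions if human_approves(s)]
```

The key design choice is that confidence scores only prioritise the review queue; they never substitute for the human decision, which is what keeps biased model outputs from flowing straight into published metadata.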
Decolonisation

Data should also be examined carefully to avoid AI perpetuating colonial mindsets. Oonagh Murphy, a senior lecturer in digital culture and society at Goldsmiths, University of London, says: “AI tools recreate existing biases. AI doesn’t see or create new worlds, so existing biases are fed out in new ways.”
Museums stand to benefit greatly from AI integration, but transparency is crucial, given AI’s potential for misinformation. Museums should prioritise ethical considerations and publish a statement on how they use AI. The Smithsonian’s AI values statement is a useful guide for museums that are adopting AI and want to use it ethically.
Museums’ use of AI raises concerns about its impact on jobs, especially in an underpaid sector.
“We need to be conscious of what AI could do to the jobs market in the museum sector, when organisations are often looking for cheaper, low-cost alternatives to paying people benchmarked salaries,” says Sharon Heal, director of the Museums Association (MA).
Evaluating how adopting AI aligns with your museum’s mission is vital. You will need to integrate its use into existing policies and practices.
Livi Adu is an e-curator who sits on the Museums Computer Group committee and the MA’s Code of Ethics review group