Here is the recorded portion of my contribution to Museum Next’s 2023 Summit on Digital Learning. Together with Douglas Hegley (Chief Digital Officer, Metropolitan Museum of Art) and Andrew Lih (Author & Wikimedian Strategist), I considered how AI might be mastered by the cultural sector to better serve the needs of the public AND solve some of the sector’s long-standing challenges. Our 20-minute discussion addressed…
5 common museum goals that AI might improve or enhance:
- Attracting visitors
- Deepening engagement
- Tracking and analyzing experiences
- Inspiring creative thinking and expression
- Accelerating other kinds of work that overwhelm us
As you will learn from the video, I wholeheartedly believe that a productive symbiotic relationship between the museum world and AI is not only possible but greatly desirable. I realize many of you reading this will not agree. Here is my current POV: Thinking very practically, museums cannot stop bad AI actors from doing bad things (that is the job of governments and the technology industry), nor can they influence AI development to be more environmentally responsible or less predatory toward labor markets. However, museums, working collaboratively as a sector, can and should leverage their collection information and related data sources to vastly improve the quality of Large Language Models (LLMs) and their outputs. Why? Ensuring that global histories and cultures are fairly and accurately described and represented in public is foundational to museums’ missions and interests. A sector-wide initiative might spark this idea into action, and if you have ideas on how to start such a movement, please do share them via LinkedIn or through HSI’s website contact form.
In preparing this video, I created this vocabulary sheet as a starting point for our discussions. I hope you also find this helpful in building your understanding of what makes machines “intelligent” and how they might “learn.” According to IBM, a leading AI innovator in the corporate sector with close ties to MIT research labs in AI, it is useful to visualize the hierarchy of these 4 key AI concepts as nesting dolls:
Artificial Intelligence (AI) is a field combining computer science and robust datasets to enable problem-solving. There are three main categories of AI: Narrow (ANI), General (AGI), and Super (ASI). ANI is considered “weak” AI because it is developed to solve a very specific task (e.g., play chess). Chatbots and virtual assistants like Siri and Alexa are examples of ANI. Strong AI is defined by its ability relative to humans: Artificial General Intelligence (AGI) would perform on par with a human, while Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass a human’s intelligence and ability. Neither form of strong AI exists yet, but research in this field is ongoing.
Machine Learning uses data and algorithms to imitate the way humans learn, gradually improving its accuracy at making predictions or classifications based on input data and validation from users.
Neural Networks (or, more specifically, Artificial Neural Networks, ANNs) simulate the behavior of the human brain, allowing a computer to “learn” from large amounts of data using algorithms. At a basic level, a neural network consists of four main components: inputs, weights, a bias or threshold, and an output.
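The four components above can be seen in a single artificial neuron. The sketch below uses pure Python (no ML libraries), and the inputs, weights, and bias values are hand-picked for illustration, not learned:

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a simple threshold to produce an output."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # "fires" only above the threshold

# Hypothetical example: two input signals with hand-picked weights.
output = neuron(inputs=[0.5, 0.8], weights=[0.9, -0.2], bias=-0.1)
print(output)  # the neuron fires: prints 1
```

Training a network means adjusting those weights and the bias so the outputs better match known examples.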
Deep Learning uses a neural network with three or more layers; the additional layers optimize and refine the network’s accuracy. Deep learning drives many AI applications and services that improve automation, performing analytical and physical tasks without human intervention. Deep learning technology lies behind everyday products and services (such as digital assistants, voice-enabled TV remotes, and credit card fraud detection) as well as emerging technologies (such as self-driving cars).
Deep learning is distinguished from machine learning primarily because it does not require human intervention to distinguish or characterize data types (an approach also called “unsupervised learning”). It can ingest unstructured data in its raw form (e.g., text, images), and it can automatically determine the set of features that distinguish “pizza”, “burger”, and “taco” from one another.
Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on and in response to user input.
Large Language Models (also known as foundation models) drive a combination of tasks and functions, all trained on unstructured data via deep learning. They work well in domains with little labeled data. These models typically increase performance and productivity. Their disadvantages include (1) compute costs, since they are very expensive to both train and run, and (2) uneven trustworthiness, because the sources of their training data are often unknown.
Alignment is the shaping of a generative model’s responses so that they better match what we want to see. Reinforcement learning from human feedback (RLHF) is an alignment method popularized by OpenAI that gives models like ChatGPT their uncannily human-like conversational abilities. In RLHF, a generative model outputs a set of candidate responses that humans rate for correctness. Through reinforcement learning, the model is adjusted to output more responses like those highly rated by humans. This style of training results in an AI system that can output what humans deem high-quality conversational text.
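The RLHF loop described above can be caricatured in a few lines: the model proposes candidates, humans rate them, and the model’s preferences are nudged toward the higher-rated responses. The candidates, ratings, learning rate, and iteration count below are all invented for illustration; real RLHF trains a separate reward model and updates billions of parameters.

```python
import math

# Hypothetical candidate responses and human ratings (higher = better).
candidates = ["Hello! How can I help?", "what", "Greetings, user entity."]
human_ratings = [0.9, 0.1, 0.4]

scores = [0.0, 0.0, 0.0]  # start with no preference among candidates

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Reinforcement-style update: raise each score in proportion to how much
# that response's rating exceeds the current expected rating (its "advantage").
for _ in range(50):
    probs = softmax(scores)
    baseline = sum(p * r for p, r in zip(probs, human_ratings))
    scores = [s + 0.5 * (r - baseline) for s, r in zip(scores, human_ratings)]

probs = softmax(scores)
best = candidates[probs.index(max(probs))]
print(best)  # the highest-rated response now dominates
```

After training, the model assigns most of its probability to the response humans rated best, which is the essential effect of RLHF.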
The above definitions are adapted from the following blog posts at IBM.com:
- AI vs Machine Learning vs Deep Learning vs Neural Networks: What’s the difference?
- What is Machine learning?
- What is generative AI?
Marketplace (a non-profit educational media company) continually produces podcasts that do a great job of explaining current AI and its dynamics; see, for example, The ABCs of AI, Algorithms, and Machine Learning. My colleague Robert J. Weisberg also tracks the implications of AI, especially from the Human Resources point of view. See his blog posts about the AI Conundrum at Museum Human.