Artificial intelligence (AI) refers to “computer systems that can absorb information, process it, and respond in ways similar to humans,” according to the Foreign Policy Association.
The tasks AI can be trained to complete range widely: recommending a new TV series based on your viewing history, driving a car, or evaluating a medical X-ray to determine whether a bone is broken.
Generative AI is a subset of AI that can learn to create entirely new images, audio, or text using vast amounts of training data. Examples of generative AI programs that have been in the news include OpenAI’s ChatGPT, which creates text in response to questions and prompts, and DALL-E, which creates new images that correspond to a text-based prompt.
While AI-generated content may resemble art or speech created by humans, AI programs are not conscious and do not learn the way humans do. These programs work like a sophisticated version of the auto-complete feature built into your texting or email apps. They learn patterns from their training data and use those patterns to generate plausible responses to prompts. The more data they are trained on, the better they become at creating content that mimics human-generated content.
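The auto-complete comparison can be made concrete with a toy sketch. The short Python program below is a drastically simplified, hypothetical stand-in for real generative AI (which uses large neural networks, not lookup tables): it "learns" which word tends to follow each word in a tiny training text, then strings together a plausible continuation.

```python
import random
from collections import defaultdict

# A tiny, made-up training text. Real systems are trained on
# billions of words; the principle of learning "what comes next"
# is the same in spirit, though the machinery is far more complex.
training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
)

# Learn patterns: for each word, record every word that followed it.
words = training_text.split()
following = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    following[current].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly picking a word that was seen
    to follow the previous one in the training data."""
    rng = random.Random(seed)
    output = [start]
    for _ in range(length - 1):
        candidates = following.get(output[-1])
        if not candidates:  # no known continuation; stop early
            break
        output.append(rng.choice(candidates))
    return " ".join(output)

print(generate("the"))
```

Every word the program emits is one it has seen follow the previous word before, so its output sounds superficially fluent without the program understanding anything it says.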
Generative AI has been used to create topic overviews, essays, and artwork, but the information it generates is not always correct. Its ability to complete some school assignments has raised questions about how schools should regulate the use of these programs and how curricula might need to change to reflect this new reality.