Teaching and Learning Economics with the AIs

Tyler and I have a new paper, How to Learn and Teach Economics with Large Language Models, Including GPT:

GPTs, such as ChatGPT and Bing Chat, are capable of answering economics questions, solving specific economic models, creating exams, assisting with research, generating ideas, and enhancing writing, among other tasks. This paper highlights how these innovative tools differ from prior software and necessitate novel methods of interaction. By providing examples, tips, and guidance, we aim to optimize the use of GPTs and LLMs for learning and teaching economics effectively.

Most of the paper is about how to use GPTs effectively, but we also make some substantive points that many people are missing:

GPTs are not simply a chatty front end to the internet. Some GPTs, like ChatGPT, have no ability to search the internet. Others, like Bing Chat, can search the internet and might do so to aid in answering a question, but that is not fundamentally how they work. It is possible to ask a GPT questions that no one has ever asked before. For example, we asked how Fred Flintstone was like Hamlet, and ChatGPT responded (in part):

Fred Flintstone and Hamlet are two vastly different characters from different time periods, cultures, and mediums of storytelling. It is difficult to draw direct comparisons between the two.

However, one possible point of similarity is that both characters face existential dilemmas and struggles with their sense of purpose and identity. Hamlet is plagued by doubts about his ability to avenge his father’s murder, and his own worthiness as a human being. Similarly, Fred Flintstone often grapples with his place in society and his ability to provide for his family and live up to his own expectations.

Not a bad answer for a silly question and one that (as far as we can tell) cannot be found on the internet.

GPTs have “read” or “absorbed” a great amount of text, but that text isn’t stored in a database; instead, it was used to set the weights of the billions of parameters in the neural net. It is thus possible to run a GPT on a powerful home computer. Generation would be very slow, since computing each word requires billions of calculations, but unlike storing the internet on your home computer, running a GPT locally is feasible and will (fairly soon) be feasible even on a mobile device.
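To make the point concrete, here is a minimal sketch of running a small open model entirely on a local machine. It assumes the Hugging Face transformers library and the small open `gpt2` model, which stand in for much larger systems like ChatGPT (whose weights are not downloadable); the prompt is purely illustrative.

```python
# Minimal sketch: generate text from a small GPT running entirely on a local machine.
# Assumes the Hugging Face transformers library and the small open "gpt2" model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # downloads ~500 MB of weights once

out = generator(
    "In economics, opportunity cost means",
    max_new_tokens=40,   # generate up to 40 additional tokens
    do_sample=True,      # sample rather than always taking the most likely word
    temperature=0.8,
)
print(out[0]["generated_text"])
```

No internet search happens at generation time; everything comes from the weights sitting on the local disk.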

GPTs work by predicting the next word in a sequence. If you hear the phrase “the Star-Spangled”, for example, you and a GPT might predict that the word “Banner” is likely to come next. This is what GPTs are doing, but it would be a mistake to conclude that GPTs are simply “autocompletes” or even autocompletes on steroids.
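The next-word-prediction idea can be seen directly by asking a small open model for its most likely next tokens after a prompt. Here is a hedged sketch assuming the `gpt2` model via Hugging Face transformers and PyTorch; the exact tokens and probabilities are illustrative and will vary by model.

```python
# Sketch: inspect a small GPT's next-token probabilities for a given prompt.
# Assumes the open gpt2 model via Hugging Face transformers and PyTorch.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The Star-Spangled"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, sequence_length, vocab_size)

next_token_logits = logits[0, -1]        # scores for whatever word comes next
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)

for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>12s}  {p.item():.3f}")
```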

Autocompletes are primarily statistical guesses based on previously asked questions. GPTs, in contrast, have some understanding (recall the as if modifier) of the meaning of words. Thus GPTs understand that Red, Green, and Blue are related concepts, and that King, Queen, Man, and Woman are related in a specific way such that a woman cannot be a King. They also understand that fast and slow are related concepts, such that a car cannot be going fast and slow at the same time but can be fast and red, and so forth. Thus GPTs are able to “autocomplete” sentences which have never been written before, as we described earlier. More generally, it seems likely that GPTs are building internal models to help them predict the next word in a sentence (e.g., Li et al. 2023).
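The word-relationship point can be illustrated with word vectors, a simpler building block than the representations GPTs learn internally but one that captures the same idea. Here is a minimal sketch assuming the gensim library and its downloadable 50-dimensional GloVe vectors; the specific vector set is an illustrative choice.

```python
# Sketch: word vectors encode relationships like king : man :: queen : woman.
# Assumes the gensim library and its downloadable 50-dimensional GloVe vectors;
# GPTs use richer internal representations, but the idea is similar.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # ~70 MB download on first run

# "king" minus "man" plus "woman" should land near "queen".
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Related colors sit near one another in the vector space.
print(vectors.similarity("red", "green"), vectors.similarity("red", "car"))
```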

The paper is a work in progress, so comments are welcome.
