🪄I’ve been thinking and thinking about the possibilities of AI as a magic wand. So many possibilities when we approach LLMs as makers! But also: for those of us who aren’t trained wizards (developers), this particular magic wand seems pretty hard to really make things with. There’s a huge opportunity for non-programmers to make new things, bringing an explosion of true innovation inside and outside of large organizations, but we stand to miss this opportunity until everyone can really wield the tools. What can be done to help us non-programmers become more wizard-y? What can be done to make the tools more accessible to folks with the need, the insight, the creativity, and the energy, but not the knowledge, to be makers?
Computer scientists Molly Q Feldman and Carolyn Jane Anderson are studying non-programmers (they refer to us as code Runners) trying to make things with AI.
Some key takeaways:
* During the experiment, code Runners were patient, willing to try, try, try again, sometimes dozens of times when their prompts failed. I wonder how much that patience will show up in the real world when the Runner is trying to accomplish something that matters, in a timeframe that matters.
* It’s really hard for Runners to communicate with LLMs when trying to make things. “We just don’t work well together!” If you’re not a programmer and you don’t already know what kinds of things machines can do, how can you hold in your mind a sufficiently detailed mental model to communicate the task?
* Unsurprisingly, Runners have no idea what they’re looking at when the LLM shows them code it generated. They can’t use code review to improve the prompt. Instead, Runners want the system to tell them why something didn’t work. But of course the system doesn’t know what you mean by “work” or why something doesn’t work. It’s just chugging along its stochastic little path. How can this desire on the part of Runners be met?
* Runners were surprisingly way better than beginning coders (junior wizards) at one particular task where it turned out the data type was not needed. There’s something really intriguing there. I am mighty curious about what kinds of problems might be better solved by people without programming baggage.
* Runners in the study had no idea how LLMs work. They thought everything was intentionally hard-coded in the background and they just hadn’t been able to say the right magic words to trigger pre-designed behaviors. So we’re going to have to help people understand what’s happening. This is consistent with what I’ve learned from Michele Zilli at TUI (fireside chat coming in November!).
* Runners tend to blame themselves when a prompt “fails”, rather than the system (sounds like anyone non-technical faced with a crap interface of any kind).
* Even when they solve problems successfully, Runners have no idea what gremlins (thanks Heidi Araya for the word) or vulnerabilities might be hiding in the code.
Maybe Runners can’t ever really make things that pass muster for large enterprises. But surely we can make really good prototypes that a master wizard could realize. It’ll be an interesting ride getting there! Soon we’ll use our convening, learning design and innovation strategy superpowers to bring the magic where it needs to go. Let us know what you’ve been hungry for!