While the aim of the research was not to show that AI programs are able to replace people in creative roles, it raises philosophical questions about the traits that are distinctive to humans, says Simone Grassini, an associate professor of psychology at the University of Bergen, Norway, who co-led the research.
“We’ve shown that in the past few years, technology has taken a very big leap forward when we talk about imitating human behavior,” he says. “These models are continuously evolving.”
Proving that machines can perform well in tasks designed to measure creativity in humans doesn't show that they're capable of anything approaching original thought, says Ryan Burnell, a senior research associate at the Alan Turing Institute, who was not involved in the research.
The chatbots that were tested are "black boxes," meaning that we don't know exactly what data they were trained on, or how they generate their responses, he says. "What's very plausibly happening here is that a model wasn't coming up with new creative ideas—it was just drawing on things it's seen in its training data, which could include this exact Alternate Uses Task," he explains. "In that case, we're not measuring creativity. We're measuring the model's past knowledge of this kind of task."
That doesn't mean it's not still useful to study how machines and humans approach certain problems, says Anna Ivanova, an MIT postdoctoral researcher studying language models, who didn't work on the project.
However, we should keep in mind that although chatbots are very good at completing specific requests, slight tweaks like rephrasing a prompt can be enough to stop them from performing as well, she says. Ivanova believes that these kinds of studies should prompt us to examine the link between the task we're asking AI models to complete and the cognitive capacity we're trying to measure. "We shouldn't assume that people and models solve problems in the same way," she says.