OpenAI GPT sorts resume names with racial bias, test shows
Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone. Emily Bender, professor of linguistics at the UW, is quoted.
Why DK Metcalf's use of ASL means more than just talking smack
Seahawks receiver DK Metcalf has been learning American Sign Language and has taken some of this newfound knowledge to the field, signing his celebrations after scoring. What began as a hobby has become a means of self-expression, and as Metcalf has gained attention for signing during games, he has realized how much it means to the deaf community and others who use ASL to communicate. The UW's Dan Mathis, assistant teaching professor of linguistics, and Kristi Winter, associate teaching professor of linguistics, are quoted.
How Microsoft’s hiring of OpenAI’s Altman could reshape AI development
Following the dramatic departure of two key leaders from ChatGPT-maker OpenAI, Microsoft, a major investor in the startup, emerged as a winner on Monday. The Redmond-based tech giant said it was hiring former OpenAI CEO Sam Altman and co-founder and former OpenAI President Greg Brockman, who left after Altman’s ouster on Friday, to lead an in-house AI innovation lab. Emily M. Bender, professor of linguistics at the UW, is quoted.
Ted Chiang and Emily Bender worry about the dark side of AI
What do you get when you put two of Time magazine’s 100 most influential people on artificial intelligence together in the same lecture hall? If the two influencers happen to be science-fiction writer Ted Chiang and Emily M. Bender, professor of linguistics at the UW, you get a lot of skepticism about the future of generative AI tools such as ChatGPT.
Your personal information is probably being used to train generative AI models
Artists and writers are up in arms about generative artificial intelligence systems, and understandably so. These machine learning models are only capable of pumping out images and text because they’ve been trained on mountains of real people’s creative work, much of it copyrighted. Major AI developers, including OpenAI, Meta and Stability AI, now face multiple lawsuits over the practice. Emily M. Bender, professor of linguistics at the UW, is quoted.
A chatbot encouraged him to kill the queen — it’s just the beginning
Companies are designing AI to appear increasingly human. That can mislead users, or worse. Emily M. Bender, professor of linguistics at the UW, is quoted.
ArtSci Roundup: A Conversation with Emily M. Bender, Dubal Memorial Lecture, and more
This week, learn why Emily Bender believes “AI” is a bad term, take part in the Dubal Memorial Lecture on 'Race, Science, and Pregnancy Trials in the Postgenomic Era', attend a screening of the film Tortoise Under the Earth, and more.
Chatbots sometimes make things up: not everyone thinks the AI hallucination problem is fixable
Spend some time with ChatGPT and other artificial intelligence chatbots, and it doesn't take long for them to spout falsehoods. Described as hallucination, confabulation or just plain making things up, it's now a problem for anyone relying on generative AI to get work done. Emily Bender, professor of linguistics at the UW, is quoted.
Does Sam Altman know what he's creating?
Sam Altman has zero regrets about letting ChatGPT loose on the world. On the contrary, he believes it was a great public service. This is the story of the OpenAI CEO's ambitious, ingenious, terrifying quest to create a new form of intelligence. Emily M. Bender, professor of linguistics at the UW, is referenced.
Microsoft partner OpenAI reportedly under FTC investigation
The Federal Trade Commission is reportedly investigating OpenAI, the Microsoft-backed startup that makes the smash hit ChatGPT. Emily M. Bender, professor of linguistics at the UW, is quoted.
Forget about the AI apocalypse: the real dangers are already here
Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public with a much more frightening possibility: an AI apocalypse. Emily M. Bender, professor of linguistics at the UW, is quoted.
Words In Review: AI or 'stochastic parrots'?
You've probably heard chatbots like ChatGPT described as "artificial intelligence." Emily M. Bender, professor of linguistics at the UW, wants you to call them "text synthesis machines" or "stochastic parrots" instead.
The 'AI apocalypse' is just PR
Big Tech's warnings about an AI apocalypse are distracting us from years of actual harm its products have caused. Emily M. Bender, professor of linguistics at the UW, is quoted.
How AI and ChatGPT are full of promise and peril, according to 5 experts
Is AI going to kill us? Or take our jobs? Or is the whole thing overhyped? Depends on who you ask. Emily M. Bender, professor of linguistics at the UW, is quoted.
How ChatGPT and similar AI will disrupt education
A lot of people have been using ChatGPT out of curiosity or for entertainment. But students can also use it to cheat. ChatGPT marks the beginning of a new wave of AI, a wave that's poised to disrupt education. Emily M. Bender, professor of linguistics at the UW, is quoted.