
Chatbots Got Big—and Their Ethical Red Flags Got Bigger



Irene Solaiman, policy director at open source AI startup Hugging Face, believes outside pressure can help hold AI systems like ChatGPT to account. She is working with people in academia and industry to create ways for nonexperts to test text and image generators for bias and other problems. If outsiders can probe AI systems, companies will no longer have an excuse to avoid testing for things like skewed outputs or climate impacts, says Solaiman, who previously worked at OpenAI on reducing the system’s toxicity.

Each evaluation is a window into an AI model, Solaiman says, not a perfect readout of how it will always perform. But she hopes to make it possible to identify and stop harms that AI can cause because alarming cases have already arisen, including players of the game AI Dungeon using GPT-3 to generate text describing sex scenes involving children. “That’s an extreme case of what we can’t afford to let happen,” Solaiman says.

Solaiman’s latest research at Hugging Face found that major tech companies have taken an increasingly closed approach to the generative models they released from 2018 to 2022. That trend accelerated with Alphabet’s AI teams at Google and DeepMind, and more widely across companies working on AI after the staged release of GPT-2. Companies that guard their breakthroughs as trade secrets can also make the forefront of AI less accessible for marginalized researchers with few resources, Solaiman says.

As more money gets shoveled into large language models, closed releases are reversing the trend seen throughout the history of the field of natural language processing. Researchers have traditionally shared details about training data sets, parameter weights, and code to promote reproducibility of results.

“We have increasingly little knowledge about what data these systems were trained on or how they were evaluated, especially for the most powerful systems being released as products,” says Alex Tamkin, a Stanford University PhD student whose work focuses on large language models.

He credits people in the field of AI ethics with raising public consciousness about why it’s dangerous to move fast and break things when technology is deployed to billions of people. Without that work in recent years, things could be a lot worse.

In fall 2020, Tamkin co-led a symposium with OpenAI’s policy director, Miles Brundage, about the societal impact of large language models. The interdisciplinary group emphasized the need for industry leaders to set ethical standards and take steps like running bias evaluations before deployment and avoiding certain use cases.

Tamkin believes external AI auditing services need to grow alongside the companies building on AI because internal evaluations tend to fall short. He believes participatory methods of evaluation that include community members and other stakeholders have great potential to increase democratic participation in the creation of AI models.

Merve Hickok, who is a research director at an AI ethics and policy center at the University of Michigan, says trying to get companies to put aside or puncture AI hype, regulate themselves, and adopt ethics principles isn’t enough. Protecting human rights means moving past conversations about what’s ethical and into conversations about what’s legal, she says.

Hickok and Hanna of DAIR are both watching the European Union finalize its AI Act this year to see how it treats models that generate text and imagery. Hickok says she’s especially interested in seeing how European lawmakers treat liability for harm involving models created by companies like Google, Microsoft, and OpenAI.

“Some things need to be mandated because we have seen over and over again that if not mandated, these companies continue to break things and continue to push for profit over rights, and profit over communities,” Hickok says.

While policy gets hashed out in Brussels, the stakes remain high. A day after the Bard demo mistake, a drop in Alphabet’s stock price shaved about $100 billion off the company’s market value. “It’s the first time I’ve seen this destruction of wealth because of a large language model error on that scale,” says Hanna. She is not optimistic this will convince the company to slow its rush to launch, however. “My guess is that it’s not really going to be a cautionary tale.”

Updated 2-16-2023, 12:15 pm EST: A previous version of this article misspelled Merve Hickok’s name.
