
Generative AI Systems Aren’t Just Open or Closed Source

Recently, a leaked document, allegedly from Google, claimed that open-source AI will outcompete Google and OpenAI. The leak brought to the fore ongoing conversations in the AI community about how an AI system and its many components should be shared with researchers and the public. Even with the slew of recent generative AI system releases, this issue remains unresolved.

Many people think of this as a binary question: Systems can either be open source or closed source. Open development decentralizes power so that many people can collectively work on AI systems to make sure they reflect their needs and values, as seen with BigScience’s BLOOM. While openness allows more people to contribute to AI research and development, the potential for harm and misuse—especially from malicious actors—increases with more access. Closed-source systems, like Google’s original LaMDA release, are protected from actors outside the developer organization but cannot be audited or evaluated by external researchers.

I’ve been leading and researching generative AI system releases, including OpenAI’s GPT-2, since these systems first started to become available for widespread use, and I now focus on ethical openness considerations at Hugging Face. Doing this work, I’ve come to think of open source and closed source as the two ends of a gradient of options for releasing generative AI systems, rather than a simple either/or question.

Chart: generative AI systems plotted along a gradient of openness, with corresponding risk considerations. Illustration: Irene Solaiman

At one extreme end of the gradient are systems that are so closed they are not known to the public. It’s hard to cite any concrete examples of these, for obvious reasons. But just one step over on the gradient, publicly announced closed systems are becoming increasingly common for new modalities, such as video generation. Because video generation is a relatively recent development, there is less research and information about the risks it presents and how best to mitigate them. When Meta announced its Make-a-Video model in September 2022, it cited concerns like the ease with which anyone could make realistic, misleading content as reasons for not sharing the model. Instead, Meta stated that it will gradually allow access to researchers.

In the middle of the gradient are the systems casual users are most familiar with. Both ChatGPT and Midjourney, for instance, are publicly accessible hosted systems: the developer organization (OpenAI and Midjourney, respectively) shares the model through a platform so the public can prompt it and generate outputs. With their broad reach and no-code interfaces, these systems have proved both useful and risky. While they allow for more feedback than a closed system, because people outside the host organization can interact with the model, those outsiders have limited information and cannot robustly research the system by, for example, evaluating the training data or the model itself.
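The audit limitation described above can be sketched in a few lines. This is a purely illustrative mock (the class and method names are invented, not any real API): a hosted system exposes only a prompt-to-output interface, while the components a researcher would need to evaluate stay hidden inside the hosting organization.

```python
class HostedModel:
    """Illustrative mock of a hosted generative AI system.

    Outsiders see only generate(); the underscore-prefixed attributes
    stand in for the weights and training data they cannot inspect.
    """

    def __init__(self):
        self._weights = [0.12, -0.4, 0.7]   # hidden from the public
        self._training_data = ["(proprietary corpus)"]  # also hidden

    def generate(self, prompt: str) -> str:
        # The only capability exposed through the platform.
        return f"response to: {prompt}"


model = HostedModel()
output = model.generate("Explain openness gradients")
# Researchers can collect outputs like this and give feedback,
# but cannot directly audit model._weights or model._training_data,
# which is the research limitation the article describes.
```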

On the other end of the gradient, a system is fully open when all components, from the training data to the code to the model itself, are fully open and accessible to everyone. Generative AI is built on open research and lessons from early systems like Google’s BERT, which was fully open. Today, the most-used fully open systems are pioneered by organizations focused on democratization and transparency. Initiatives hosted by Hugging Face (to which I contribute)—like BigScience and BigCode, co-led with ServiceNow—and by decentralized collectives like EleutherAI are now popular case studies for building open systems to include many languages and peoples worldwide.
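The gradient running through this piece can be summarized as a simple ordering, from fully closed to fully open. The sketch below is a minimal illustration: the level names are informal labels, and the example systems and access descriptions are drawn only from this article, not from any standard taxonomy.

```python
from dataclasses import dataclass


@dataclass
class ReleaseLevel:
    name: str
    examples: list          # systems named in this article
    external_access: str    # what people outside the developer can do


# Ordered from most closed to most open, following the gradient in the text.
GRADIENT = [
    ReleaseLevel("fully closed", [],
                 "none; the system is not publicly known"),
    ReleaseLevel("announced, closed", ["Make-a-Video"],
                 "read the announcement; gradual researcher access"),
    ReleaseLevel("hosted access", ["ChatGPT", "Midjourney"],
                 "prompt the model and view outputs via a platform"),
    ReleaseLevel("fully open", ["BERT", "BLOOM"],
                 "inspect training data, code, and model weights"),
]


def openness_rank(name: str) -> int:
    """Position of a release level on the gradient (higher = more open)."""
    return next(i for i, level in enumerate(GRADIENT) if level.name == name)


assert openness_rank("fully open") > openness_rank("hosted access")
```

Representing release choices as ordered levels rather than a binary flag mirrors the article's core argument: each step on the gradient trades off external scrutiny against potential for misuse.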

There is no definitively safe release method or standardized set of release norms. Neither is there any established body for setting standards. Early generative AI systems like ELMo and BERT were largely open until GPT-2’s staged release in 2019, which sparked new discussions about responsibly deploying increasingly powerful systems, such as what the release or publication obligations ought to be. Since then, systems across modalities, especially from large organizations, have shifted toward closedness, raising concern about the concentration of power in the high-resource organizations capable of developing and deploying these systems.

