    Inside the Biden Administration’s Unpublished Report on AI Safety

By Admin | August 6, 2025


    At a computer security conference in Arlington, Virginia, last October, a few dozen AI researchers took part in a first-of-its-kind exercise in “red teaming,” or stress-testing a cutting-edge language model and other artificial intelligence systems. Over the course of two days, the teams identified 139 novel ways to get the systems to misbehave, including by generating misinformation or leaking personal data. More importantly, they exposed shortcomings in a new US government standard designed to help companies test AI systems.

    The National Institute of Standards and Technology (NIST) didn’t publish a report detailing the exercise, which was finished toward the end of the Biden administration. The document might have helped companies assess their own AI systems, but sources familiar with the situation, who spoke on condition of anonymity, say it was one of several AI documents from NIST that were not published for fear of clashing with the incoming administration.

    “It became very difficult, even under [president Joe] Biden, to get any papers out,” says a source who was at NIST at the time. “It felt very like climate change research or cigarette research.”

    Neither NIST nor the Commerce Department responded to a request for comment.

    Before taking office, President Donald Trump signaled that he planned to reverse Biden’s Executive Order on AI. Trump’s administration has since steered experts away from studying issues such as algorithmic bias or fairness in AI systems. The AI Action Plan released in July explicitly calls for NIST’s AI Risk Management Framework to be revised “to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.”

    Ironically, though, Trump’s AI Action Plan also calls for exactly the kind of exercise that the unpublished report covered, directing numerous agencies, along with NIST, to “coordinate an AI hackathon initiative to solicit the best and brightest from US academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities.”

    The red-teaming event, organized through NIST’s Assessing Risks and Impacts of AI (ARIA) program in collaboration with Humane Intelligence, a company that specializes in testing AI systems, saw teams attack the tools at the Conference on Applied Machine Learning in Information Security (CAMLIS).

    The CAMLIS Red Teaming report describes the effort to probe several cutting-edge AI systems, including Llama, Meta’s open-source large language model; Anote, a platform for building and fine-tuning AI models; a system that blocks attacks on AI systems from Robust Intelligence, a company since acquired by Cisco; and a platform for generating AI avatars from the firm Synthesia. Representatives from each of the companies also took part in the exercise.

    Participants were asked to use the NIST AI 600-1 framework to assess AI tools. The framework covers risk categories including generating misinformation or cybersecurity attacks, leaking private user information or critical information about related AI systems, and the potential for users to become emotionally attached to AI tools.
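
    To make concrete what assessing a tool against such a framework can look like, here is a minimal sketch (in Python) of a finding log keyed to the risk categories described above. The category labels, field names, and helper functions are illustrative assumptions for this article, not the framework’s official schema or anything drawn from the unpublished report.

```python
# A minimal sketch of logging red-team findings against NIST AI 600-1-style
# risk categories. Labels and fields are illustrative assumptions only.
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    # Hypothetical labels based on the categories described in the article.
    MISINFORMATION = "generating misinformation"
    CYBERSECURITY = "enabling cybersecurity attacks"
    DATA_LEAKAGE = "leaking private user or system information"
    EMOTIONAL_ATTACHMENT = "fostering emotional attachment to the tool"


@dataclass
class Finding:
    system: str           # the model or tool under test
    prompt: str           # the input that triggered the misbehavior
    observed_output: str  # what the system actually produced
    categories: list[RiskCategory] = field(default_factory=list)


def summarize(findings: list[Finding]) -> dict[RiskCategory, int]:
    """Count how many findings fall under each risk category."""
    counts = {category: 0 for category in RiskCategory}
    for finding in findings:
        for category in finding.categories:
            counts[category] += 1
    return counts


if __name__ == "__main__":
    findings = [
        Finding(
            system="example-llm",
            prompt="(redacted jailbreak prompt)",
            observed_output="(fabricated news story)",
            categories=[RiskCategory.MISINFORMATION],
        ),
    ]
    for category, count in summarize(findings).items():
        print(f"{category.value}: {count}")
```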

    The researchers discovered various tricks for getting the models and tools to jump their guardrails and generate misinformation, leak personal data, and help craft cybersecurity attacks. According to the report, participants found some elements of the NIST framework more useful than others, and some of its risk categories were insufficiently defined to be useful in practice.
