The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce dedicated to advancing technology for government, industry, and the public, has unveiled NIST GenAI. The program aims to evaluate advances in generative AI, particularly text and image generation. Let's take a closer look at NIST's new evaluation program.
NIST GenAI will establish benchmarks, facilitate content authenticity detection (such as deepfake detection), and support the development of systems to identify the origin of AI-generated data, as outlined on the newly launched NIST GenAI website and in a press release.
The program will issue a series of challenge tasks to assess the capabilities and limitations of generative AI technologies, with the goal of advancing data understanding and promoting the responsible use of digital content.
NIST GenAI's initial focus is a pilot project to develop systems that can reliably distinguish human-created media from AI-generated media, starting with text. Although many tools claim to detect deepfakes, their reliability remains questionable, especially for textual content.
NIST GenAI invites participation from academic and industry research labs to contribute generators (AI systems that produce content) or discriminators (systems designed to identify AI-generated content).
Generators will be tasked with producing 250-word summaries of given topics and documents, while discriminators will judge whether a given summary was written by AI. To keep the evaluation fair, NIST GenAI will supply the data needed for testing; submitted systems must be trained on publicly available data and comply with applicable laws and regulations.
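For readers curious what a discriminator entry might look like in practice, here is a minimal, purely illustrative sketch in Python. NIST has not published a submission interface, so the training examples, labels, and the `score_summary` helper below are hypothetical placeholders; the sketch only conveys the shape of the task, namely a binary classifier that scores how likely a summary is to be AI-generated.

```python
# Illustrative sketch only: NIST GenAI has not published a submission API,
# so the data, labels, and interface below are hypothetical placeholders.
# It shows the general shape of a "discriminator": a binary classifier that
# scores how likely a text summary is to be AI-generated.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: (summary text, label) pairs,
# where label 1 = AI-generated and 0 = human-written.
train_texts = [
    "The report outlines three findings on regional water quality trends.",
    "Overall, the study demonstrates a consistent and significant pattern.",
    "I skimmed the paper last night; honestly the methods section is thin.",
    "We argue the results are overstated given the small sample size used.",
]
train_labels = [1, 1, 0, 0]

# A simple TF-IDF + logistic regression pipeline as a stand-in discriminator.
discriminator = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
discriminator.fit(train_texts, train_labels)

def score_summary(summary: str) -> float:
    """Return the estimated probability that a ~250-word summary is AI-generated."""
    return float(discriminator.predict_proba([summary])[0][1])

if __name__ == "__main__":
    candidate = "The document summarizes key policy changes affecting coastal zoning."
    print(f"P(AI-generated) = {score_summary(candidate):.2f}")
```

A real entry would of course rely on far larger datasets and stronger models, but the input-output contract, text in, a score out, would look broadly similar.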
Registration for the pilot begins on May 1, with the first round expected in early August. Final results are anticipated to be released by February 2025.
The launch of NIST GenAI comes amid a rapid rise in AI-generated deception and disinformation, addressing concerns raised by the proliferation of deepfakes.
According to Clarity, a deepfake detection firm, deepfakes have increased 900% this year compared with the same period last year, sparking widespread concern. A recent YouGov survey found that 85% of Americans are worried about the spread of deceptive deepfakes online.
NIST GenAI is part of the agency's response to President Joe Biden's executive order on AI, which calls for greater transparency from AI companies about how their models operate and introduces measures such as labeling AI-generated content. It is also NIST's first AI-related announcement since the appointment of Paul Christiano, a former OpenAI researcher, to the agency's AI Safety Institute.
Christiano’s appointment has raised concerns among critics, including some within NIST, who fear a focus on “doomsday scenarios” rather than practical AI threats.
By: Kruthiga V S
---
#NIST #AI #GenerativeAI #Deepfakes #Technology #Innovation #DataUnderstanding #ContentAuthenticity #DeepfakeDetection #ArtificialIntelligence #PilotProject #Research #Regulations #Ethics #Security #MindVoice #MindVoiceNews #CurrentAffairs #CurrentNews #LatestNews #IPSC #IASPreparation #UPSC