SAN FRANCISCO: Four US leaders in artificial intelligence (AI) announced on Wednesday the formation of an industry group devoted to addressing risks that cutting-edge versions of the technology may pose.

Anthropic, Google, Microsoft, and ChatGPT-maker OpenAI said the newly created Frontier Model Forum will draw on the expertise of its members to minimise AI risks and support industry standards. The companies pledged to share best practices with each other, lawmakers and researchers.

"Frontier" models refer to nascent, large-scale machine-learning platforms that take AI to new levels of sophistication — and also have capabilities that could be dangerous.

"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," Microsoft president Brad Smith said in a statement. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."

US President Joe Biden evoked AI's "enormous" risks and promises at a White House meeting last week with tech leaders who committed to guarding against everything from cyberattacks to fraud as the sector grows. Standing alongside top representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, Biden said the companies had made commitments to "guide responsible innovation" as AI spreads ever deeper into personal and business life.