Major tech companies urge lawmakers to avoid heavy-handed regulation on AI

Summarised by Centrist

AI startups OpenAI and Anthropic have agreed to give the US government early access to their AI models to ‘mitigate potential issues.’

Under this agreement, the US AI Safety Institute, part of the National Institute of Standards and Technology (NIST), will examine upcoming AI models from both companies.

Elizabeth Kelly, the director of the institute, noted, “Safety is essential to fuelling breakthrough technological innovation…these agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

OpenAI’s Chief Strategy Officer Jason Kwon said, “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence.” 

Meanwhile, Anthropic’s Jack Clark remarked, “Safe, trustworthy AI is crucial for the technology’s positive impact… We’re proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI.”

Read more over at Bloomberg


