Lords of AI – Tech giants and an International Agency

Certified AI watermark
Promethean building blocks

UPDATE May 17, 2023: Proper funding for a new federal AI agency is needed to match the tech industry’s speed and power. The name of the prospective agency and a map of its possible functions are yet to be determined. But does having one agency regulate all AI make sense, versus adding AI oversight to the existing federal regulatory framework?

• Wired > “Spooked by ChatGPT, US Lawmakers Want to Create an AI Regulator” by Khari Johnson (May 17, 2023) – At a Senate Judiciary subcommittee hearing, senators from both parties and OpenAI CEO Sam Altman said a new federal agency was needed to protect people from AI gone bad.

The lords of AI … an International Agency for AI?

• Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “Gary Marcus used to call AI stupid – now he calls it dangerous” (May 5, 2023) – There’s a difference between power and intelligence.

Marcus [1], always loquacious, has an answer: “Yes, I’ve said for years that [LLMs] are actually pretty dumb, and I still believe that. But there’s a difference between power and intelligence. And we are suddenly giving them a lot of power.”

Marcus has an idea for who might do the enforcing. He has lately been insistent that the world needs, immediately, “a global, neutral, nonprofit International Agency for AI,” …

The success of large language models like OpenAI’s ChatGPT, Google’s Bard, and a host of others has been so spectacular that it’s literally scary. This week President Biden summoned the lords of AI to figure out what to do about it. Even some of the people building models, like OpenAI CEO Sam Altman, recommend some form of regulation. And the discussion is a global one; Italy even banned OpenAI’s bot for a while.

Hello Dave

Toward a regulatory framework for AI …

• The Technology 202 > “Biden’s enforcers see antitrust threats in AI rush” by Cristiano Lima with research by David DiMolfetta (May 9, 2023) – Will a small group of large tech companies – with power and resources that rival nation-states – corner the AI market?

Key officials including Justice Department antitrust chief Jonathan Kanter and Federal Trade Commission Chair Lina Khan have issued several warnings against potential anti-competitive abuses by companies as they look to grow their AI businesses.

Khan issued a more pointed warning last week, writing in an op-ed in the New York Times that, “The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms.”

Khan recently touted the agency’s creation of an Office of Technology as crucial to its AI work.

Related articles

• CNET > “Google’s Bard Chatbot Opens to the Public” by Stephen Shankland [2] (May 10, 2023) – Google is trying to balance AI progress with caution.

Google is ready to open the Bard floodgates, at least to English speakers around the world. After two months of testing, access to the AI-powered chatbot no longer is gated by a waitlist.

• Wired > “How ChatGPT and Other LLMs Work – and Where They Could Go Next” by David Nield (Apr 30, 2023) – Large language models like AI chatbots seem to be everywhere. If you understand them better, you can use them better.
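Nield’s article frames LLMs as next-token predictors trained over vast text corpora. As a framing-only illustration – a toy bigram counter, nothing like a transformer, with an invented corpus – the prediction objective looks like this:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then predict the most frequent continuation. Real LLMs learn
# the same next-token objective with transformers over billions of
# documents; this sketch shows only the prediction framing.

corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Most likely next word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' ("the cat" appears twice, "the mat" once)
```

Scaling that counting idea up to probabilities over a whole vocabulary, conditioned on long contexts, is what the article’s “where they could go next” discussion builds on.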


[1] Gary Marcus: one of the “go-to talking heads on this breakout topic” … “53-year-old entrepreneur and NYU professor emeritus who now lives in Vancouver” … TED talk on constraining AI… Substack “The Road to A.I. We Can Trust” … podcast Humans vs. Machines. For his 23 years at NYU, he was in psychology, not computer science. … cofounded an AI company called Geometric Intelligence (sold to Uber in 2016) … cofounded a robotics firm, Robust AI, which he left in 2021.

History: deep learning neural networks vs. old-school AI, based on reasoning and logic … with some references to Geoffrey Hinton, known as the godfather of deep learning, …

[2] Stephen Shankland has been a reporter at CNET since 1998 … covering the technology industry for 24 years and was a science writer for five years before that.


  1. A chatbot with rules that choose the response for the greater good … positronically

    • Wired > “A Radical Plan to Make AI Good, Not Evil” by Will Knight (May 9, 2023) – OpenAI competitor Anthropic says its Claude chatbot has a built-in “constitution” that can instill ethical principles and keep systems from going rogue.

    Jared Kaplan, a cofounder of Anthropic, says the design feature [chatbot Claude’s set of ethical principles] shows how the company is trying to find practical engineering solutions to sometimes fuzzy concerns about the downsides of more powerful AI.

    Anthropic’s approach doesn’t instill an AI with hard rules it cannot break. But Kaplan says it is a more effective way to make a system like a chatbot less likely to produce toxic or unwanted output. He also says it is a small but meaningful step toward building smarter AI programs that are less likely to turn against their creators.

    The principles that Anthropic has given Claude consist of guidelines drawn from the United Nations Universal Declaration of Human Rights and suggested by other AI companies, including Google DeepMind. More surprisingly, the constitution includes principles adapted from Apple’s rules for app developers, which bar “content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy,” among other things.
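Anthropic’s actual method bakes such principles into training, with the model critiquing and revising its own drafts. As a rough illustration only, here is a toy critique-and-revise loop in which a trivial keyword check stands in for the model’s judgment; the principle names, banned-word sets, and helper functions are all invented for the sketch:

```python
# Toy sketch of a "constitutional" critique-and-revise loop. NOT
# Anthropic's pipeline: a keyword check substitutes for the model's
# self-critique, and the principles below are illustrative stand-ins
# (e.g., one inspired by human-rights language, one by app-store rules).

CONSTITUTION = [
    ("avoid_insults", {"idiot", "stupid"}),
    ("avoid_creepy", {"creepy"}),
]

def critique(draft: str) -> list[str]:
    """Return the names of principles the draft appears to violate."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    return [name for name, banned in CONSTITUTION if words & banned]

def revise(draft: str) -> str:
    """Drop sentences that trip any principle; a real system would ask
    the model to rewrite them instead of deleting them."""
    kept = [s for s in draft.split(". ") if not critique(s)]
    return ". ".join(kept)

draft = "Here is your answer. You are an idiot for asking"
print(critique(draft))  # ['avoid_insults']
print(revise(draft))    # 'Here is your answer'
```

The design point the sketch preserves: the principles are data, not hard-coded rules, so they can be audited and amended – which is what makes the “constitution” framing apt.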


    • Wired > “How To Delete Your Data From ChatGPT” by Matt Burgess (May 9, 2023) – OpenAI has new tools that give you more control over your information—although they may not go far enough.

    • Wired > “What Really Made Geoffrey Hinton Into an AI Doomer” by Will Knight (May 8, 2023) – The AI pioneer is alarmed by how clever the technology he helped create has become. And it all started with a joke.

    The good bot

  2. In his latest article, Steven Levy draws a parallel between his assessment of the first iPhone in 2007 (before rise of the app store) and the current state of chatbots. A failure of foresight. And “prompt-and-pronounce” stunts.

    • Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “You’re probably underestimating AI chatbots” (May 12, 2023) – We risk failing to anticipate the potential trajectories of our AI-infused future.

    Typically, prompt-and-pronounce columns involve sitting down with one of these way-early systems and seeing how well it replaces something previously limited to the realm of the human.

    Today’s chatbots are taking baby steps in a journey that will rise to Olympic-level strides.

    The Writers’ Guild understands that GPT-4 can’t crank out an acceptable version of Young Sheldon right now but GPT-19 might actually make that series funny.

    As the tech improves, our new era will be marked by a fuzzy borderline between copilot and autopilot.

    Borderline between copilot and pilot

  3. This article discusses a forecast for the industrial cost of AI services – a massive increase, despite ongoing improvements in hardware performance [1] – as demand for GenAI continues to grow exponentially.

    Will future personal computers and smartphones carry some of the load?

    • Forbes > “Generative AI Breaks The Data Center: Data Center Infrastructure And Operating Costs Projected To Increase To Over $76 Billion By 2028” by Jim McGregor, Contributor; Tirias Research, Contributor Group

    Tirias Research forecasts that on the current course, generative AI data center server infrastructure plus operating costs will exceed $76 billion by 2028, with growth challenging the business models and profitability of emergent services such as search, content creation, and business automation incorporating GenAI.

    For perspective, this cost is more than twice the estimated annual operating cost of Amazon’s cloud service AWS, which today holds one third of the cloud infrastructure services market according to Tirias Research estimates.


    [1] Cf. Hot Chips semiconductor technology conference

    The increasing cost of AI

  4. How to “mitigate the dark side of AI?”

    Reference: Office of Science and Technology Policy > Blueprint for an AI Bill of Rights – Making Automated Systems Work For The American People

    • Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “Everyone wants to regulate AI. No one can agree how” (May 26, 2023) – We blew it when it came to regulating social media, so let’s not mess up with AI.

    … not only government types but also people who build AI systems are suggesting that some new laws might be helpful in stopping the technology from going bad. The idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane.

    Though since the dawn of ChatGPT many in the technology world have suggested that legal guardrails might be a good idea, the most emphatic plea came from AI’s most influential avatar of the moment, OpenAI CEO Sam Altman. “I think if this technology goes wrong, it can go quite wrong,” he said in a much anticipated appearance before a US Senate Judiciary subcommittee earlier this month. “We want to work with the government to prevent that from happening.”

    Choosing and implementing those solutions won’t be easy [re the “enormity of … work … to be done”]. It’s a giant challenge to strike the right balance between industry innovation and protecting rights and citizens.

    … the Blueprint nicely summarizes the goals [“uplifting suggestions”] of possible legislation.

    • You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.

    • You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.

    • You should know that an automated system is being used [watermark] and understand how and why it contributes to outcomes that impact you.

    • You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

    AI Bill of Rights

  5. A vision for how to regulate AI …

    • The Washington Post > “The Technology 202” by Cristiano Lima (May 30, 2023)

    Tim Wu, an architect of President Biden’s antitrust policy, left the White House in January … Wu, now back at Columbia Law School, has been meeting in recent weeks with officials at the White House, the Justice Department and on Capitol Hill … to lay out his vision for how to regulate AI.

    Wu, an influential voice in discussions around tech regulation, outlined what he thinks officials should do to keep AI in check — and what they should avoid.

    AI regulation framework

  6. Legislation and guardrails … scaffolding for AI regulation.

    • Washington Post > “Europe moves ahead on AI regulation, challenging tech giants’ power” by Cat Zakrzewski and Cristiano Lima (June 14, 2023) – European lawmakers voted to approve the E.U. AI Act, putting Brussels a step closer to shaping global standards for artificial intelligence

    The European Parliament adopted its position on legislation known as the E.U. AI Act, which would ban systems that present an “unacceptable level of risk,” such as predictive policing tools, or social scoring systems, like those used in China to classify people on the basis of their behavior and socioeconomic status. The legislation also sets limits on “high-risk AI,” such as systems that could influence voters in elections or harm people’s health.

    Key points

    • Framework for a dialogue with the rest of the world on building “responsible AI.”
    • Stark contrast between E.U.’s progress on AI legislation and the picture in the U.S. Congress
    • Adds to European laws on data privacy, competition in the tech sector and the harms of social media – as de facto global tech regulator
    • Generative AI guardrails – labeled content
    • Published summaries of copyrighted data used for training the technology

    Lords of AI

  7. What’s happening in the US Congress on AI legislation?

    The Lieu Review 6-23-2023

    More and more, you’ve probably heard about artificial intelligence (AI) and how this rapidly developing technology has the potential to change our society. As a recovering computer science major, I’m fascinated by AI and all it can do for us. However, it is a powerful technology that, if left completely unchecked, could do significant harm. We must ensure that AI develops in a way that prioritizes the safety and wellbeing of Americans.

    That is why I introduced the National AI Commission Act this week. This bipartisan, bicameral bill will establish a blue-ribbon commission to focus specifically on our country’s approach to regulating AI in a responsible way. The expert panel will be composed of industry specialists, researchers, academics, and those in the creative community who will evaluate the current landscape and make recommendations for a risk-based framework to ensure that we are harnessing the awesome power of this technology while protecting our personal safety and national security. There’s so much we don’t know about AI. It’s our responsibility as Members of Congress to hear from the experts on this issue and take reasonable, thoughtful action based on their recommendations.

    AI is everywhere, from your smartphone to self-driving cars. But not all AI is created equal. For instance, there’s no need to regulate a smart toaster if it turns out to prefer English muffins over whole wheat bread. On the other hand, AI in moving objects – like self-driving cars going 60 miles per hour – needs to be safe. In April I introduced bipartisan legislation to ensure that AI cannot launch nuclear weapons without human oversight.

    I joined MSNBC’s Morning Joe earlier this week to talk about the status of AI legislation and what we still don’t know.

    • YouTube > Rep. Ted Lieu > “Rep Lieu Discusses Need for Federal Regulation of Artificial Intelligence on Msnbc’s Morning Joe” (June 20, 2023) [1]

    Congressional bills


    [1] Transcript

    US Congressman Ted Lieu discusses the need for federal regulation of artificial intelligence on MSNBC’s Morning Joe, 6-20-2023

    Welcome back to “Morning Joe.”

    President Biden will be in San Francisco later today to meet with artificial intelligence experts to learn more about the growing technology.

    This meeting comes as Politico reports dozens of Democratic strategists gathered recently to discuss the coming election.

    However, their focus wasn’t on President Biden or Donald Trump but, rather, how to combat disinformation spread by artificial intelligence in 2024.

    Currently, there are no restrictions on using A.I. in political ads, and campaigns are not required to disclose when they use the technology. That has led some strategists to sound the alarm on the unregulated new innovation.

    Let’s bring in Democratic Congressman Ted Lieu of California. He’s been calling for regulations on A.I., and he is proposing a bill. We want to get to that in a moment.

    Congressman, explain the danger, just in political campaigns, of the use of unregulated A.I.

    >> Thank you for your question. As a recovering computer science major, I’m fascinated with A.I. and all the good things it is going to do for society. It can also cause harm, and I think that’s why it is important that we have regulations and laws that allow A.I. to innovate but prevent avoidable harms and put in guardrails.

    We also have to be humble and understand there’s a lot we don’t know.

    As members of Congress, we have to acknowledge that we have to have experts sometimes advise us on new technologies. That’s why later this morning, I’m introducing an A.I. commission. That’s a bipartisan bill, and it’ll be carried on the Senate side as well. It’ll look at what A.I. we might want to regulate and how we might want to go about doing so, including A.I. for use in political campaigns.

    >> Congressman, speak to us about the challenges of trying to regulate something that is developing so rapidly. A.I. is expanding seemingly by the day. Technology improves by the day. How hard is it going to be to wrap your arms around something that is evolving so quickly?

    >> That is a great question. I don’t know that we’d even know what we were regulating, because it is moving so quickly. Look at the applications that have come out since ChatGPT debuted. It is hundreds and probably thousands by now. Some of these harms may, in fact, happen, but maybe they don’t. Maybe we see some new harm. I think it is good to have some time pass.

    It is good to have a commission of experts advise us.

    If we make a mistake as members of Congress in writing legislation, you need another act of Congress to correct that.

    >> For Americans who really know nothing about this, can you talk a little bit about your greatest areas of concern? Maybe some examples of ways this technology could run amok, could cause problems? What was it that you heard that made you say, we need to look at this more closely?

    >> Sure. As a legislator, I view this as two bodies of water: a big ocean of A.I. and a small lake. In the big ocean, there’s all the A.I. we don’t care about. A.I. in a smart toaster that has a preference for English muffins over wheat toast – we don’t care about that.

    In the small lake, there’s the A.I. we care about. You ask, why would we want to care about that?

    >> First, there’s A.I. that might cause harm to society, such as facial recognition, which is amazing technology but has a bias when it comes to people with darker skin. If you deploy that nationwide with law enforcement agencies, you’ll have violations because minorities will be misidentified at higher rates.

    I introduced legislation for guardrails on that. That’s an example of harm A.I. can cause.

    >> Thank you. >> Congressman, Claire McCaskill here. I’m concerned about political campaigns. As you well know, the most powerful weapon in a political campaign is video of the candidate speaking in their own words.

    Many people don’t do town halls during Congress because they’re afraid their tracker will get them on film in a moment they say something awkward or misspeak, and it can be used against them later.

    I have a sense of urgency about what is going to happen in the next cycle, when people start airing commercials of candidates speaking words they never said. What would your legislation do about that?

    Is there any urgency to move, at least on whether you have to disclose A.I. being used in advertising?

    >> Thank you for your question. Nothing in the bill precludes Congress from acting in discrete areas of A.I. regulation.

    I also note that there is A.I. that can counter bad A.I. For example, you have some companies working on A.I. that can authenticate videos and original images, so that could be something that campaigns can use.

    In addition, I support legislation that requires disclosure on ads and social media and so on. Next time, for example, if you see a pro-Trump ad, it might say at the bottom, “Paid for by the Kremlin.” That’s a disclosure we’d like to see.

    >> Okay, yeah. That would be good.

  8. International AI watermark

    Will pledges by the most influential AI tech companies – to mitigate the risks of emerging AI platforms – lead to industry standards? What is their track record of keeping safety and security commitments? What could go wrong, eh.

    • Washington Post > “Top tech firms sign White House pledge to identify AI-generated images” by Cat Zakrzewski (July 21, 2023) – Google and ChatGPT-maker OpenAI agreed to the voluntary safety commitments, e.g., watermarking.

    The White House on Friday announced that seven of the most influential companies building artificial intelligence have agreed to a voluntary pledge to mitigate the risks of the emerging technology, escalating the Biden administration’s involvement in the growing debate over AI regulation.

    The companies — which include Google, Amazon, Microsoft, Meta and ChatGPT-maker OpenAI — vowed to allow independent security experts to test their systems before they are released to the public and committed to sharing data about the safety of their systems with the government and academics.

    The firms also pledged to develop systems to alert the public when an image, video or text is created by artificial intelligence, a method known as “watermarking.”
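Text watermarking of this sort is usually proposed as a statistical bias in generation rather than a visible tag. A minimal toy sketch of the idea, assuming a hash-keyed “green list” of tokens (the function names and partition scheme here are illustrative, not any company’s actual method; real proposals bias the model’s token probabilities):

```python
import hashlib

# Toy "green list" watermark for generated text, in the spirit of
# published statistical watermarking proposals for LLM output. The
# generator quietly prefers "green" tokens; the detector measures how
# often that preference shows up. All names here are illustrative.

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign ~half of all tokens to a 'green list'
    keyed on the preceding token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def pick_watermarked(prev_token: str, candidates: list[str]) -> str:
    """Toy generator step: prefer a green candidate when one exists."""
    for c in candidates:
        if is_green(prev_token, c):
            return c
    return candidates[0]

def green_fraction(tokens: list[str]) -> float:
    """Detector: fraction of tokens that are green given their
    predecessor. Unwatermarked text should hover near 0.5; text from
    the biased generator scores far higher."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)

# Generate 30 tokens from a 50-word toy vocabulary, then detect.
vocab = [f"word{i}" for i in range(50)]
tokens = ["start"]
for _ in range(30):
    tokens.append(pick_watermarked(tokens[-1], vocab))
print(green_fraction(tokens))  # well above the ~0.5 chance baseline
```

The detector needs only the hash key, not the model – which is why the pledge pairs watermarking with sharing information with government and academics. The open question the article raises is whether such marks survive paraphrasing and editing.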

  9. AI elephant in the room

    The tenor of the AI regulation debate in Congress: the elephant in the room – Cold War missile gap redux

    • The Washington Post > The Technology 202 (email newsletter) > “China is the elephant in the room in the AI debate” by Cristiano Lima (July 27, 2023)

    As lawmakers ramp up discussions about artificial intelligence legislation, concerns about losing ground to China are looming large in Congress.

    Leaders of the House select committee on China highlighted the issue ahead of a hearing late Wednesday dedicated to “ensuring U.S. leadership” in “emerging technologies of the 21st century” — like AI.

    “We need to make sure that on the one hand we take targeted steps to avoid fueling [China’s] potential advancement of technologies that could harm us or our values,” Rep. Raja Krishnamoorthi (Ill.), the panel’s top Democrat, told The Technology 202 on Wednesday.

    Tech industry leaders for years have warned that sweeping new regulations for emerging tech could put the United States at a disadvantage against China. Now as the AI debate gains steam, it’s clear the argument has found a foothold on Capitol Hill.

    “We don’t want to overregulate our advantage in the AI race out of existence,” Rep. Mike Gallagher (R-Wis.), who chairs the China panel, told reporters after a separate hearing on AI in warfare last week. Gallagher said lawmakers instead should pursue a “targeted” approach to the technology.

    “At this point, we have to do something,” he [Krishnamoorthi] said in a phone interview. “The problem is that while having too much regulation could inhibit innovation and potentially cripple our ability to lead in AI or other areas, having zero regulation could lead to algorithms that are nontransparent to the world, that can be plagued by bias.”

    “There cannot be the same safe harbor that we see … where Big Tech is immune from liability for anything bad that happens as a consequence of misinformation, disinformation or other informational problems on their platforms,” he said.
