ChatGPT – an illusion of understanding?

[Update: February 8, 2023 – see comments as well]

Following its investment in OpenAI, Microsoft has released new, OpenAI-infused versions of Bing and Edge.

• LA Times > “Microsoft unveils Bing and Edge with OpenAI technology” by Dina Bass (Feb 8, 2023) – Tech giant upgraded its search engine and browser in hopes of gaining ground on the Google juggernaut.

“This technology is going to reshape pretty much every software category,” Microsoft Chief Executive Satya Nadella said at an event Tuesday at the company’s Redmond, Wash., headquarters. It’s “high time” innovation was restored to internet search, he said.


[Original post January 30, 2023]

Much media buzz about ChatGPT since December 2022. Lots of $$$ already in play.

In last week’s newsletter, my US Congressman, Rep. Ted Lieu, referenced an op-ed in which he discusses the pros and cons of AI, and a press release about using ChatGPT to draft a resolution.

• House.gov > Media Center > Editorials > “New York Times Op-Ed: I’m a Congressman Who Codes. A.I. Freaks Me Out.” (January 23, 2023) – I am freaked out by A.I., specifically A.I. that is left unchecked and unregulated.

(quote) Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. This dystopian future may sound like science fiction, but the truth is that without proper regulations for the development and deployment of Artificial Intelligence (AI), it could become a reality. The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits.

I didn’t write the above paragraph. It was generated in a few seconds by an A.I. program called ChatGPT, which is available on the internet. I simply logged into the program and entered the following prompt: “Write an attention grabbing first paragraph of an Op-Ed on why artificial intelligence should be regulated.”

… I will be introducing legislation to create a nonpartisan A.I. Commission to provide recommendations on how to structure a federal agency to regulate A.I., what types of A.I. should be regulated and what standards should apply.

• House.gov > Media Center > Press Releases > “Rep Lieu Introduces First Federal Legislation Ever Written by Artificial Intelligence” (Jan 26, 2023)

(quote) WASHINGTON – Today, Congressman Ted W. Lieu (D-Los Angeles County) introduced the first ever piece of federal legislation written by artificial intelligence. Using the artificial language model ChatGPT, Congressman Lieu offered the following prompt: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.” The resulting resolution introduced today is the first in the history of Congress to have been written by AI.
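(An aside: the same sort of prompt can be sent programmatically. Here’s a minimal sketch using OpenAI’s Python library as it existed in early 2023 – the API key is a placeholder, and note that Rep. Lieu used the ChatGPT web interface, not the API.)

```python
# Illustrative sketch only: Rep. Lieu used the ChatGPT web interface, not
# the API. This shows how a similar prompt could be sent programmatically
# with OpenAI's Python library as it existed in early 2023.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Completion.create(
    model="text-davinci-003",  # a contemporaneous GPT-3.5-series text model
    prompt=(
        "You are Congressman Ted Lieu. Write a comprehensive congressional "
        "resolution generally expressing support for Congress to focus on AI."
    ),
    max_tokens=512,
    temperature=0.7,  # some randomness in the wording
)

print(response["choices"][0]["text"])
```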

As yesterday’s Wired newsletter noted:

(quote) Bots are nothing new, but ChatGPT is unusually slick with language thanks to a training process that included digesting billions of words scraped from the web and other sources. Its ability to generate short essays, literary parodies, and even functional computer code made it a social media sensation and the tech industry’s newest obsession.

I remember, from the 1970s, an interactive terminal-based program named ELIZA, which used Rogerian-style scripts (person-centered psychotherapy) to chat. I was at a college computer center that showcased the chatty program. It was amazing how much personal information visitors (“outside the ivory tower”) volunteered in such (“human-ish”) conversations.

(quote from podcast article below) [Will Knight] The funny thing is, going back to the very early days of AI, the first chatbots, people were willing to believe that those were human. There’s famously this one that was made at MIT called ELIZA, where it was a fake psychologist and people would tell it their secrets.

(quote from 2nd article below) [Bindu Reddy, CEO of Abacus.AI] Reddy, the AI startup CEO, knows ChatGPT’s limitations but is still excited about the potential. She foresees a time when tools like it are not just useful, but convincing enough to offer some form of companionship. “It could potentially make for a great therapist,” she says.

Do open-ended use of language and somewhat artful conversation evince intelligence? Imagine if one of your dear pets, like a parrot, started talking like ChatGPT, eh? Or an online avatar.

abracadabra: language, typically in the form of gibberish, used to give the impression of arcane knowledge.
FAQ

What is ChatGPT? The name stands for Chat Generative Pre-trained Transformer. But what’s a generative AI program? A (language) transformer?

What can it do (and not do – shortcomings)? Mimicry without actually understanding how the world works.

What’s the worry? Flaws: nonsense, biases, plagiarism, misinformation, outdated data … fuzzy “guardrails” – data sets and “monsters from the ID” scraped from the web.

Is ChatGPT free? … will it stay free?

Here’s a Wired podcast / transcript which discusses this tech – with an intro that was written by ChatGPT.

• Wired > “How These AI-Powered Chatbots Keep Getting Better” by Wired Staff (Dec 8, 2022) – Gadget Lab discusses the advances in generative AI tools like ChatGPT that make computer-enabled conversations seem more human than ever.

(quote) Will Knight [WIRED senior writer] … the thing that’s really important to remember is that they are just slurping up and regurgitating in a statistically clever way stuff that people have made. … So I think we’re just really, really well designed to use language and conversation as a way to imbue intelligence on something.

Lauren Goode: OpenAI is a super interesting company. It claims its mission is to make AI open and accessible and safe. It started as a nonprofit, but now it has a for-profit arm. … Google owns a company called DeepMind that is working on similar large language models.

Will Knight: … I think there’s a really good argument that these tools should be more available and not just in the hands of these big companies.

Here’s another article about ChatGPT’s quirks / shortcomings.

• Wired > “ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw” by Will Knight (Dec 7, 2022) – “Each time a new one of these models comes out, people get drawn in by the hype,” says Emily Bender, a professor of linguistics at the University of Washington.

(quote) ChatGPT, created by startup OpenAI, has become the darling of the internet since its release last week. Early users have enthusiastically posted screenshots of their experiments, marveling at its ability to generate short essays on just about any theme, craft literary parodies, answer complex coding questions, and much more. It has prompted predictions that the service will make conventional search engines and homework assignments obsolete.

Yet the AI at the core of ChatGPT is not, in fact, very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web.

ChatGPT stands out because it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5.

… the team fed human-written answers to GPT-3.5 as training data, and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.

But because they mimic human-made images and text in a purely statistical way, rather than actually learning how the world works, such programs are also prone to making up facts and regurgitating hateful statements and biases – problems still present in ChatGPT. Early users of the system have found that the service will happily fabricate convincing-looking nonsense on a given subject.
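To make that last training step concrete, here’s a toy, bandit-style sketch of “simulated reward and punishment” – not OpenAI’s actual pipeline (which fine-tunes a large neural network against a learned reward model, using PPO), just the reinforcement idea in miniature, with a made-up reward function.

```python
# A heavily simplified illustration of "simulated reward and punishment" --
# NOT OpenAI's actual RLHF pipeline. The "policy" is just one preference
# weight per canned answer to a single question.
import math
import random

answers = [
    "I don't know.",
    "Paris is the capital of France.",
    "The capital of France is Paris, a city of about two million people.",
]
weights = [1.0, 1.0, 1.0]

def reward(answer: str) -> float:
    """Stand-in for a learned reward model scoring answer quality."""
    return ("Paris" in answer) + len(answer.split()) / 10.0

# Average reward serves as a baseline: above-average answers get reinforced.
baseline = sum(reward(a) for a in answers) / len(answers)

# Generate, score, reinforce: answers scoring above the baseline become
# more likely to be sampled; answers scoring below it become less likely.
for _ in range(200):
    i = random.choices(range(len(weights)), weights=weights)[0]
    weights[i] *= math.exp(0.1 * (reward(answers[i]) - baseline))

total = sum(weights)
for w, a in zip(weights, answers):
    print(f"{w / total:.2f}  {a}")
```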

How we got to ChatGPT

This long article chronicles the roadmap to ChatGPT. With lots of diagrams.

• ars technica > “The generative AI revolution has begun – how did we get here?” by Haomiao Huang [1] (Jan 30, 2023) – A new class of incredibly powerful AI models has made recent breakthroughs possible.

(quote) There’s a holy trinity in machine learning: models, data, and compute. Models are algorithms that take inputs and produce outputs. Data refers to the examples the algorithms are trained on. To learn something, there must be enough data with enough richness that the algorithms can produce useful output. Models must be flexible enough to capture the complexity in the data. And finally, there has to be enough computing power to run the algorithms.

The big breakthrough in language models… was discovering an amazing model for translation and then figuring out how to turn (transform) general language tasks into translation problems.

The “generative” part is obvious—the models are designed to spit out new words in response to inputs of words. And “pre-trained” means they’re trained using this fill-in-the-blank method on massive amounts of text.
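At toy scale, that fill-in-the-blank training looks something like the sketch below – just bigram counts over a one-sentence corpus, whereas GPT-class models learn the same predict-the-next-word trick with billions of parameters over billions of words.

```python
# A tiny illustration of "pre-training" as predict-the-next-word.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# "Training": count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# "Generation": repeatedly emit the most likely next word.
word, output = "the", ["the"]
for _ in range(8):
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # e.g., "the cat sat on the cat sat on the"
```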

[The evolution of computer vision research … to Dall-E] Deep learning started to change all of this. Instead of researchers manually creating and working with image features by hand, the AI models would learn the features themselves – and also how those features combine into objects like faces and cars and animals.

Transformers are general-purpose tools for figuring out the rules in one language and then mapping them to another. So if you can figure out how to represent something in a similar way as to a language, you can train transformer models to translate between them.

OpenAI was able to scrape the Internet to build a massive data set that can be used to translate between the world of images and text.

As long as there’s a way to represent something with a structure that looks a bit like a language [ordered sequences], together with the data sets to train on, transformers can learn the rules and then translate between languages.

Another consideration is that these AI models are fundamentally stochastic. … There’s no explicit concept of a right or wrong answer – just how close it is to being correct.

The basic workflow of these models is this: generate, evaluate, iterate.
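(Both of those quoted points can be sketched in a few lines of Python: sampling from a temperature-scaled softmax – stochastic, with graded scores rather than a hard right answer – plus a generate-evaluate-iterate loop that keeps the best of several candidates. The candidate strings and scores below are made up.)

```python
# Sketch: stochastic sampling plus generate / evaluate / iterate.
import math
import random

def softmax(scores, temperature=1.0):
    """Turn raw scores into sampling probabilities."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["answer A", "answer B", "answer C"]
model_scores = [2.0, 1.5, 0.5]  # hypothetical model scores (logits)

def generate():
    """Stochastic generation: higher-scored outputs are merely more likely."""
    return random.choices(candidates, weights=softmax(model_scores, 0.8))[0]

def evaluate(text):
    """Stand-in for 'how close to correct' -- graded, not right/wrong."""
    return model_scores[candidates.index(text)] + random.gauss(0, 0.1)

# Generate several candidates, evaluate each, keep the best (best-of-n).
best = max((generate() for _ in range(5)), key=evaluate)
print(best)
```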

… many of the capabilities these new models are showing are emergent, so they aren’t necessarily being formally programmed. … to explicitly answer questions … without having to be explicitly designed to answer Q&As.

The future

• Wired > “ChatGPT Has Investors Drooling – but Can It Bring Home the Bacon?” by Will Knight (Jan 13, 2023) – The loquacious bot has Microsoft ready to sink a reported $10 billion into OpenAI. It’s unclear what products can be built on the technology.

• Wired > “How to Stop ChatGPT from Going Off the Rails” by Amit Katwala (Dec 16, 2022) – The viral chatbot wasn’t up to writing a WIRED newsletter. But it’s fluent enough to raise questions about how to keep eloquent AI systems accountable.

• Wired > “ChatGPT Is Coming for Classrooms. Don’t Panic” by Pia Ceres (Jan 26, 2023) – The AI chatbot has stoked fears of an educational apocalypse. Some teachers see it as the reboot education sorely needs.

Notes

[1] Haomiao Huang is an investor at Kleiner Perkins, where he leads early-stage investments in hardtech and enterprise software. Previously, he founded the smart home security startup Kuna, built self-driving cars during his undergraduate years at Caltech and, as part of his Ph.D. research at Stanford, pioneered the aerodynamics and control of multi-rotor UAVs.

12 comments

  1. Concerns about plagiarism quickly arose after ChatGPT was released. Not just in education.

    And there’s a technical challenge as well: as generative AI models increasingly contribute to online content (text, images, etc.), not only will web search engines increasingly index AI-written fabrications, but those AI’s ongoing training data sets themselves will be polluted (as noted in “ChatGPT Has Investors Drooling – but Can It Bring Home the Bacon?” above).

    So, spotting (and flagging) AI-written texts (vs. human-written texts) is important.

    This CNBC article notes that the accuracy of OpenAI’s new classifier tool is relatively low, with false positives as well, and that there are limitations on input size (number of characters). (A toy detection heuristic is sketched after the key points below.)

    • CNBC > TECH > “ChatGPT maker OpenAI comes up with a way to check if text was written by a human” by Jordan Novet (Jan 31, 2023) – identifying synthetic text is no easy task.

    Artificial intelligence research startup OpenAI has introduced a tool that’s designed to figure out if text is human-generated or written by a computer.

    Key points

    ChatGPT maker OpenAI says its latest tool makes mistakes but is more prepared to handle outputs from recent AI systems than a version from 2019.

    The startup, which built ChatGPT, wants feedback on the tool from parents and teachers.

    The release comes two months after OpenAI captured the public’s attention when it introduced ChatGPT.
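    For flavor, here’s one toy heuristic of the kind researchers discuss – not OpenAI’s classifier, which is a fine-tuned neural model – namely “burstiness”: the notion that human prose tends to vary sentence length more than model prose does. The sample reuses the ChatGPT-written paragraph quoted at the top of this post.

    ```python
    # A toy "burstiness" heuristic -- NOT OpenAI's classifier. Low variation
    # in sentence length is (weak) evidence of machine-written text; expect
    # plenty of false positives, as noted about OpenAI's own tool.
    import statistics

    def burstiness(text: str) -> float:
        """Population std. deviation of sentence lengths, in words."""
        sentences = [s for s in
                     text.replace("?", ".").replace("!", ".").split(".")
                     if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

    sample = ("The rapid advancements in AI technology have made it clear "
              "that the time to act is now to ensure that AI is used in ways "
              "that are safe, ethical and beneficial for society. Failure to "
              "do so could lead to a future where the risks of AI far "
              "outweigh its benefits.")
    print(f"burstiness: {burstiness(sample):.1f} (lower = more uniform)")
    ```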

    Flag on the play

  2. “Google’s search interface … [is] bloated with ads and marketers trying to game the system.”

    So, OpenAI started with a noble concept:

    The company wanted to protect against a future in which big tech companies, like Google, mastered AI technology and monopolized its benefits.

    Non-profit no more, eh. Here’s a FAQ.

    • Washington Post > “What to know about OpenAI, the company behind ChatGPT” by Pranshu Verma (Feb 6, 2023)

    WHAT TO KNOW

    • What is OpenAI’s history, and how was Elon Musk involved?
    • What does OpenAI make and who can use it?
    • Why are people excited about ChatGPT, and what does Silicon Valley think?
    • Who are the big players in AI right now?
    • Does Microsoft own OpenAI?

    See also:

    Meta’s chatbot was released before ChatGPT debuted. It was boring. No giddiness ensued.

    In playing catch-up, will Big Tech forgo safety guardrails? (And invite reputational risk or liability if a response is found to be harmful or plagiarized?)

    Moving from providing a range of answers to queries that link directly to their source material, to using a chatbot to give a single, authoritative answer, would be a big shift that makes many inside Google nervous, said one former Google AI researcher.

    • Washington Post > “Big Tech was moving cautiously on AI. Then came ChatGPT” by Nitasha Tiku, Gerrit De Vynck and Will Oremus (Feb 3, 2023) – Google, Facebook and Microsoft helped build the scaffolding of AI. Smaller companies are taking it to the masses, forcing Big Tech to react.

    Some AI ethicists fear that Big Tech’s rush to market could expose billions of people to potential harms — such as sharing inaccurate information, generating fake photos or giving students the ability to cheat on school tests — before trust and safety experts have been able to study the risks. Others in the field share OpenAI’s philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real world harms.

    Microsoft’s chief of communications, Frank Shaw, said his company works with OpenAI to build in extra safety mitigations when it uses AI tools like DALLE-2 in its products. “Microsoft has been working for years to both advance the field of AI and publicly guide how these technologies are created and used on our platforms in responsible and ethical ways,” Shaw said.

    In the past year or so, top AI researchers from Google have left to launch start-ups around large language models, including Character.AI, Cohere, Adept, Inflection.AI and Inworld AI, in addition to search start-ups using similar models to develop a chat interface, such as Neeva, run by former Google executive Sridhar Ramaswamy.

  3. As with other technologies … with eyes (wide) open – how to instill “keen or complete knowledge, awareness, or expectations” about ChatGPT et al.

    Here’s an interesting article about public education and AI chatbot literacy à la media literacy. A story about middle / high school computer science teachers using ChatGPT to create lesson plans for students to assess. A sort of Turing test for educational tools, eh?

    • Cambridge Dictionary > media awareness:

    an understanding of the different methods for presenting information in newspapers, on television, on the internet, etc., and of the possible uses and dangers of these methods:

    • NY Times > “At This School, Computer Science Class Now Includes Critiquing Chatbots” (Feb 6, 2023) [subscription wall]

  4. If you’ve been following the “giddiness” about ChatGPT, then you knew this was coming … using that AI tech to enhance search engines – with a chat interface. How to avoid releasing an unreality engine, eh.

    • Wired > “The Race to Build a ChatGPT-Powered Search Engine” by Will Knight (Feb 6, 2023) – A search bot you converse with could make finding answers easier—if it doesn’t tell fibs. Microsoft, Google, Baidu, and others are working on it.

    But the way the technology works is in some ways fundamentally at odds with the idea of a search engine that reliably retrieves information found online. There’s plenty of inaccurate information on the web already, but ChatGPT readily generates fresh falsehoods. Its underlying algorithms don’t draw directly from a database of facts or links but instead generate strings of words aimed to statistically resemble those seen in its training data, without regard for the truth.

    A regular search might return pages for several Will Knights, but the chatbot conflated them into a single person.

    … it is also unclear how compatible chat interfaces are with the primary revenue model for search engines – advertising.

    The cost … may be 10 times more expensive to run a ChatGPT search than a Google search.

    … it may take a while to figure out how to prevent language models like GPT from making things up.

  5. Google’s “Bard” chatbot will be released this week [1]. Not sure how that’s related to their investment in Anthropic AI, as discussed in this LA Times article.

    • LA Times > “Google invests millions in AI startup rival to ChatGPT” by Davey Alba and Dina Bass (2-6-2023)

    Founded in 2021 by former leaders of OpenAI, including siblings Daniela and Dario Amodei, Anthropic AI in January released a limited test of a new chatbot named Claude to rival OpenAI’s wildly popular ChatGPT.

    Alphabet’s Google has invested almost $400 million in artificial intelligence startup Anthropic, which is testing a rival to OpenAI’s ChatGPT, according to a person familiar with the deal.

    Google and Anthropic declined to comment on the investment, but announced a partnership in which Anthropic will use Google’s cloud computing services. The deal is the latest alliance between a tech giant and an AI startup as the field of generative AI — technology that can generate text and art in seconds — heats up.

    Notes

    [1] What’s in a name, eh?

    • NY Times > “Racing to Catch Up With ChatGPT, Google Plans Release of Its Own Chatbot” by Cade Metz and Nico Grant (Feb. 6, 2023) – Google said on Monday that it would soon release an experimental chatbot called Bard as it races to respond to ChatGPT, …

    Bard — so named because it is a storyteller, the company said — is based on experimental technology called LaMDA, short for Language Model for Dialogue Applications, which Google has been testing inside the company and with a limited number of outsiders for several months.

    Google has plans to release more than 20 A.I. products and features this year, The New York Times has reported. The A.I. search engine features, which the company said would arrive soon, will try to distill complex information and multiple perspectives to give users a more conversational experience.

    Google said Bard would be a “lighter weight” version of LaMDA that would allow the company to serve up the technology at a lower cost.

  6. As predicted, Microsoft released an AI-enhanced version of their search engine.

    • Macworld > “Hands-on: Microsoft’s new AI-powered Bing can write essays and plan vacations” by Mark Hachman, Senior Editor (Feb 8, 2023) – the fresh AI experience already works shockingly well more often than not.

    For now, the experience is entirely free, though you’ll need to be logged into your Microsoft account to see the benefits of the chat and the new Bing experience. Until Microsoft pushes the entire Bing experience live, you’ll be forced to join a waitlist. Even then, you’ll be allowed to ask the new Bing a limited number of queries, we’re told. For now, Microsoft isn’t allowing anonymous queries, though that should be added in the future, Microsoft said.

    Bing’s new AI-powered chatbot is basically ChatGPT with ads … and one that refuses to do your homework for you. Well, sometimes.

    The new Bing experience is basically two parts. There’s the traditional search, with a list of search results and a new contextual interface to the right; and the new “Chat” interface, which can be accessed either by swiping up from the list of links or via its own link.

    The key difference between the left and the right side appears to be that Bing is collating the results listed on the left – saving you a click or two, in other words.

    If you click them — or the related button, “Let’s chat” — the entire interface will scroll upwards, opening up a new space above the search results.

    The key difference between Bing and, say, Google Bard, is that Bing footnotes its responses, visually indicating what portion of its response comes from what site.

    Is Bing better than ChatGPT? … In other ways, Bing limits itself much more.

    And there’s the financial infatuation:

    • Seeking Alpha > “Microsoft rises as analysts praise new AI-powered Bing, Edge” (February 8, 2023) [Subscription wall]

    • CNBC > “Microsoft CEO Nadella calls A.I.-powered search biggest thing for company since cloud 15 years ago” by Ashley Capoot (Updated Feb 8, 2023)

  7. The current obsession of the market with all things labeled AI reminds me of all the predatory business models still in play based on hype about scalability of an app-based service – build the app of dreams and “they will come,” eh.

    • Yahoo Finance > “Investors obsessing over AI is latest symptom of the ‘Amazon disease’” – Morning Brief by Myles Udland, Head of News (February 8, 2023)

    As my colleague Julie Hyman wrote yesterday, the market’s obsession with anything “AI” is starting to feel a little 2017, the year when anyone and everyone began tacking “blockchain technology” onto an idea.

    The speed of the infatuation with AI, chatbots, and all associated “innovations” has been stunning.

    Speaking on Bloomberg’s Odd Lots podcast earlier this week, Steve Eisman of “The Big Short” fame … outlined what he calls the “Amazon disease.”

    “What I mean by the Amazon disease is when Amazon came public, there was a lot of skepticism that this would work, and Amazon has basically conquered the world,” Eisman said. “And so people are always looking for the next Amazon when the sell side writes a research report. And the first sentence is, ‘The TAM is huge,’ which means the total [addressable] market is huge.”

    And we think this offers a great heuristic for understanding the basis for so many of the market’s recent bull cases that overhyped flawed business models. Winning small portions of big markets has been the consensus framework for investing in high growth businesses.

  8. Synthetic content: What could possibly go … ? The dark side: “a sophisticated propaganda campaign from a foreign government.” Sans any watermarks or “poisoned, planted content,” eh.

    • Wired > “How to Detect AI-Generated Text, According to Researchers” by Reece Rogers (Feb 8, 2023) – there’s an underlying capricious quality to our human style of communication …

    While these [recently released] detection tools are helpful for now, Tom Goldstein, a computer science professor at the University of Maryland, sees a future where they become less effective, as natural language processing grows more sophisticated.

    What unique qualities are left to human-composed writing? Noah Smith, a professor at the University of Washington and NLP researcher at the Allen Institute for AI, points out that while the models may appear to be fluent in English, they still lack intentionality. “It really messes with our heads, I think,” Smith says. “Because we’ve never conceived of what it would mean to have fluency without the rest [intentionality, understanding of the world, etc.]. Now we know.”

    What could possibly go ...

  9. Microsoft’s Bing chat and Google’s Bard are much in the chatbot (generative) AI news. And Apple? Here’s a tech recap and possible arc for Apple.

    • Macworld > “Is Apple paying any attention to the ChatGPT AI arms race?” by Jason Cross, Senior Editor (Feb 9, 2023) – Does Apple have something amazing up its sleeve – beyond its current Neural Engine?

    TABLE OF CONTENTS

    AI chat is old, AI creation is new
    ChatGPT, Bard, and Bing
    Stable Diffusion, Midjourney, DALL-E

    The AI that can find all the potatoes in your photo library is a totally different thing from one that can draw a potato from scratch in a wide variety of artistic styles.

    With Siri, Apple was at the forefront of bringing an AI voice assistant to the masses. As that technology evolved, Apple fell way behind, and now Siri is often viewed as a disappointment that can’t compare with Google Assistant or Alexa.

    We might not hear anything at all about generative AI out of Apple, and then at WWDC, BAM! World-class generative AI all over Apple’s products!

    chatbots

  10. A chatbot AI $$$ race?

    • CNET > “Microsoft’s AI-Powered Bing Challenges Google Search” by Stephen Shankland (Feb 8, 2023) – Microsoft will show ads next to the new AI search results, Mehdi [Yusuf Mehdi, chief consumer marketing officer] said.

    Microsoft touted its Responsible Artificial Intelligence policy as an important framework for guiding its work and providing engineering tools to follow them.

    Bing now is an “AI-powered co-pilot [not pilot – there’s a thumbs-down button] for the web,” the tech giant said, delivering search results infused with information from the large language model from Microsoft partner OpenAI.

    Bing is the first step, but Microsoft expects the AI technology to help you everywhere, whether writing documents in Word, crunching data in spreadsheets or creating PowerPoint presentations.

    As of January, Bing had a 3% share of search engine usage, far less than Google’s 92%, according to analytics firm StatCounter. Search is Google’s top revenue source, since the company places ads next to search results.

    Microsoft and OpenAI didn’t detail exactly how Bing is using OpenAI’s technology [in the hybrid answering process]. … OpenAI helps with phrasing answers in readable language, delving deeper into search queries and generating new text for prompts requiring more creativity.

    Trust the ads

    Search, ask for more details, write something, get context for a website, ask a broad question, …

    • CNET > “5 Things to Try With Microsoft’s New AI-Powered Bing” by Laura Hautala (Feb 10, 2023) – There’s a waiting list for the AI-powered Bing service now, and Microsoft says it’ll be broadly available and free to use in the coming months.

    A wizard for all things

  11. Over-the-top claims for GPT-4?

    As Spock often says, “fascinating.”

    So, does the tendency for AI Large Language Models to make things up (“hallucinate”) make them sort of like us? [1]

    • Wired > “Some Glimpse AGI* in ChatGPT. Others Call It a Mirage” by Will Knight (Apr 18, 2023) – Understanding the potential or risks of AI’s new abilities means having a clear grasp of what those abilities are – and are not.

    * AGI == artificial general intelligence [2]

    AI researchers at Microsoft … also suggest that these systems demonstrate an ability to reason, plan, learn from experience, and transfer concepts from one modality to another …

    The fact that Microsoft has invested more than $10 billion in OpenAI suggested to some researchers that the company’s AI experts had an incentive to hype GPT-4’s potential while downplaying its limitations. Others griped that the experiments are impossible to replicate because GPT-4 rarely responds in the same way when a prompt is repeated, and because OpenAI has not shared details of its design. Of course, people also asked why GPT-4 still makes ridiculous mistakes if it is really so smart.

    [A team of cognitive scientists, linguists, neuroscientists, and computer scientists from MIT, UCLA, and the University of Texas, Austin] concluded that while large language models demonstrate impressive linguistic skill—including the ability to coherently generate a complex essay on a given theme—that is not the same as understanding language and how to use it in the world. That disconnect may be why language models have begun to imitate the kind of commonsense reasoning needed to stack objects or solve riddles. But the systems still make strange mistakes when it comes to understanding social relationships, how the physical world works, and how people think.

    Notes

    [1] This question became more relevant after a couple of weeks talking with many customer support agents at a major telecom services provider. All of them “hallucinated” – provided incorrect information – time after time, never just saying that they did not know. The typical agent seemed inexperienced and narrowly trained: generally okay at routine things, but not for more complicated actions. Intentionally so (as corporate policy), or via high turnover?

    As noted in the article: “If GPT-4 succeeds on some commonsense reasoning tasks for which it was explicitly trained and fails on others for which it wasn’t, it’s hard to draw conclusions based on that.”

    [2] Remember the HAL 9000 in the 1968 film 2001: A Space Odyssey? How was HAL trained?

    As noted in the article: “We can’t help but see flickers of intelligence in something that uses language so effortlessly. ‘If the pattern of words is meaning-carrying, then humans are designed to interpret them as intentional, and accommodate that,’ Goodman [Noah Goodman, an associate professor of psychology, computer science, and linguistics at Stanford University] says.”

    Shades of reasoning ... air heads

  12. Twilight Zone obscura

    This Wired article reminded me of the classic Twilight Zone Season 1 Episode 7 “The Lonely” (November 13, 1959). In which a prisoner’s solitude on an asteroid is broken when he is given a relational fembot, to which he eventually bonds. The fembot “develops a personality that mirrors” the prisoner’s.

    • Wired > “What Isaac Asimov’s Robbie Teaches About AI and How Minds ‘Work’” by Samir Chopra (Jul 30, 2023) – Even as our ancient ancestors granted natural elements (like the sun, the ocean) mental qualities, do most people want to know how AI agents “really work” internally? Do they even care?

    In Isaac Asimov’s classic science fiction story “Robbie,” the Weston family owns a robot who serves as a nursemaid and companion for their precocious preteen daughter, Gloria. … After several failed attempts to wean Gloria off Robbie, … Gloria does not learn how Robbie “really works,” and in a plot twist, Gloria and Robbie become even better friends.

    Similarly, once we lose our grasp on the internals of artificial intelligence systems, or grow up with them, not knowing how they work, we might ascribe minds to them too. This is a matter of pragmatic decision, not discovery. For that might be the best way to understand why and what they do.

    This philosophical analysis matters because there is an important balancing act we must engage in when thinking about legal regulation of artificial intelligence research: We want the technical advantages and social benefits of artificial intelligence … But these companies need liability cover … otherwise, the designers of artificial intelligence systems would stay out of such a potentially financially risky arena. But we want society to be protected too from the negative effects of such smart programs …
