[Update: February 8, 2023 – see comments as well]
Following its investment in OpenAI, Microsoft has released new, OpenAI-infused versions of Bing and Edge.
• LA Times > “Microsoft unveils Bing and Edge with OpenAI technology” by Dina Bass (Feb 8, 2023) – Tech giant upgraded its search engine and browser in hopes of gaining ground on the Google juggernaut.
“This technology is going to reshape pretty much every software category,” Microsoft Chief Executive Satya Nadella said at an event Tuesday at the company’s Redmond, Wash., headquarters. It’s “high time” innovation was restored to internet search, he said.
[Original post January 30, 2023]
Much media buzz about ChatGPT since December 2022. Lots of $$$ already in play.
In last week’s newsletter from my US Congressman, Rep. Lieu referenced an op-ed in which he discusses the pros and cons of AI, as well as a press release about using ChatGPT to draft a resolution.
• House.gov > Media Center > Editorials > “New York Times Op-Ed: I’m a Congressman Who Codes. A.I. Freaks Me Out.” (January 23, 2023) – I am freaked out by A.I., specifically A.I. that is left unchecked and unregulated.
(quote) Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. This dystopian future may sound like science fiction, but the truth is that without proper regulations for the development and deployment of Artificial Intelligence (AI), it could become a reality. The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits.
I didn’t write the above paragraph. It was generated in a few seconds by an A.I. program called ChatGPT, which is available on the internet. I simply logged into the program and entered the following prompt: “Write an attention grabbing first paragraph of an Op-Ed on why artificial intelligence should be regulated.”
… I will be introducing legislation to create a nonpartisan A.I. Commission to provide recommendations on how to structure a federal agency to regulate A.I., what types of A.I. should be regulated and what standards should apply.
• House.gov > Media Center > Press Releases > “Rep Lieu Introduces First Federal Legislation Ever Written by Artificial Intelligence” (Jan 26, 2023)
(quote) WASHINGTON – Today, Congressman Ted W. Lieu (D-Los Angeles County) introduced the first ever piece of federal legislation written by artificial intelligence. Using the artificial language model ChatGPT, Congressman Lieu offered the following prompt: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.” The resulting resolution introduced today is the first in the history of Congress to have been written by AI.
As yesterday’s Wired newsletter noted:
(quote) Bots are nothing new, but ChatGPT is unusually slick with language thanks to a training process that included digesting billions of words scraped from the web and other sources. Its ability to generate short essays, literary parodies, and even functional computer code made it a social media sensation and the tech industry’s newest obsession.
I remember an interactive terminal-based program from the 1970s named ELIZA, which used Rogerian-style scripts (person-centered psychotherapy) to chat. I was at a college computer center that showcased the chatty program. It was amazing how much personal information visitors (from “outside the ivory tower”) shared in such (“human-ish”) conversations.
(quote from podcast article below) [Will Knight] The funny thing is, going back to the very early days of AI, the first chatbots, people were willing to believe that those were human. There’s famously this one that was made at MIT called ELIZA, where it was a fake psychologist and people would tell it their secrets.
(quote from 2nd article below) [Bindu Reddy, CEO of Abacus.AI] Reddy, the AI startup CEO, knows ChatGPT’s limitations but is still excited about the potential. She foresees a time when tools like it are not just useful, but convincing enough to offer some form of companionship. “It could potentially make for a great therapist,” she says.
Do open-ended use of language and somewhat artful conversation evince intelligence? Imagine if one of your dear pets, like a parrot, started talking like ChatGPT, eh? Or an online avatar.
What is ChatGPT? Chat Generative Pre-Trained Transformer. What’s a generative AI program? A (language) transformer?
What can it do (and not do – shortcomings)? Mimicry without actually understanding how the world works.
What’s the worry? Flaws: nonsense, biases, plagiarism, misinformation, outdated data … fuzzy “guardrails” – data sets and “monsters from the id” scraped from the web.
Is ChatGPT free? … will it stay free?
Here’s a Wired podcast / transcript which discusses this tech – with an intro that was written by ChatGPT.
• Wired > “How These AI-Powered Chatbots Keep Getting Better” by Wired Staff (Dec 8, 2022) – Gadget Lab discusses the advances in generative AI tools like ChatGPT that make computer-enabled conversations seem more human than ever.
(quote) Will Knight [WIRED senior writer] … the thing that’s really important to remember is that they are just slurping up and regurgitating in a statistically clever way stuff that people have made. … So I think we’re just really, really well designed to use language and conversation as a way to imbue intelligence on something.
Lauren Goode: OpenAI is a super interesting company. It claims its mission is to make AI open and accessible and safe. It started as a nonprofit, but now it has a for-profit arm. … Google owns a company called DeepMind that is working on similar large language models.
Will Knight: … I think there’s a really good argument that these tools should be more available and not just in the hands of these big companies.
Here’s another article about ChatGPT’s quirks / shortcomings.
• Wired > “ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw” by Will Knight (Dec 7, 2022) – “Each time a new one of these models comes out, people get drawn in by the hype,” says Emily Bender, a professor of linguistics at the University of Washington.
(quote) ChatGPT, created by startup OpenAI, has become the darling of the internet since its release last week. Early users have enthusiastically posted screenshots of their experiments, marveling at its ability to generate short essays on just about any theme, craft literary parodies, answer complex coding questions, and much more. It has prompted predictions that the service will make conventional search engines and homework assignments obsolete.
Yet the AI at the core of ChatGPT is not, in fact, very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web.
ChatGPT stands out because it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5.
… the team fed human-written answers to GPT-3.5 as training data, and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.
But because they mimic human-made images and text in a purely statistical way, rather than actually learning how the world works, such programs are also prone to making up facts and regurgitating hateful statements and biases – problems still present in ChatGPT. Early users of the system have found that the service will happily fabricate convincing-looking nonsense on a given subject.
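The training recipe described above – human-written example answers plus a reinforcement-learning reward signal – boils down to “prefer the answer a scorer rates higher.” Here’s a toy sketch of that core idea (this is my own illustration, not OpenAI’s actual pipeline; the `reward` function is a hypothetical stand-in for a learned reward model):

```python
# Toy sketch: pick the candidate answer a scoring function prefers.
# In RLHF, `reward` would be a model trained on human preference rankings;
# here it is a hand-written, hypothetical stand-in.

def reward(answer: str) -> float:
    """Hypothetical scorer: pretends to value explanations and penalize shouting."""
    score = 0.0
    if "because" in answer:   # crude proxy for "gives a reason"
        score += 1.0
    if not answer.isupper():  # crude proxy for "civil tone"
        score += 1.0
    return score

candidates = [
    "NO.",
    "No, because the claim is not supported by the text.",
]

# Keep the higher-reward answer; real RLHF nudges the model's weights
# so that higher-reward answers become more probable over time.
best = max(candidates, key=reward)
print(best)
```

The real system doesn’t pick among finished answers this way – it adjusts the model so that high-reward outputs become more likely – but the reward-comparison step is the conceptual heart of it.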
How we got to ChatGPT
This long article chronicles the roadmap to ChatGPT. With lots of diagrams.
• ars technica > “The generative AI revolution has begun – how did we get here?” by Haomiao Huang (Jan 30, 2023) – A new class of incredibly powerful AI models has made recent breakthroughs possible.
(quote) There’s a holy trinity in machine learning: models, data, and compute. Models are algorithms that take inputs and produce outputs. Data refers to the examples the algorithms are trained on. To learn something, there must be enough data with enough richness that the algorithms can produce useful output. Models must be flexible enough to capture the complexity in the data. And finally, there has to be enough computing power to run the algorithms.
The big breakthrough in language models… was discovering an amazing model for translation and then figuring out how to turn (transform) general language tasks into translation problems.
The “generative” part is obvious—the models are designed to spit out new words in response to inputs of words. And “pre-trained” means they’re trained using this fill-in-the-blank method on massive amounts of text.
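The “fill-in-the-blank” pre-training the article describes is, at its core, learning to predict the next token from context. A bigram count model is about the simplest possible version of that idea (my own toy sketch, obviously nothing like a billion-parameter transformer):

```python
from collections import Counter, defaultdict

# Toy sketch of next-token prediction: count which word follows which
# in a tiny "corpus," then predict the most frequent successor.
corpus = "the cat sat on the mat the dog sat on the rug".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1  # tally each observed (word -> next word) pair

def predict_next(word: str) -> str:
    """Return the token most often seen after `word` in training."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" -- the only word that ever follows "sat"
```

Scale the same predict-the-next-token objective up to billions of words and a transformer architecture, and you get the “pre-trained” in GPT.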
[The evolution of computer vision research … to Dall-E] Deep learning started to change all of this. Instead of researchers manually creating and working with image features by hand, the AI models would learn the features themselves – and also how those features combine into objects like faces and cars and animals.
Transformers are general-purpose tools for figuring out the rules in one language and then mapping them to another. So if you can figure out how to represent something in a similar way as to a language, you can train transformer models to translate between them.
OpenAI was able to scrape the Internet to build a massive data set that can be used to translate between the world of images and text.
As long as there’s a way to represent something with a structure that looks a bit like a language [ordered sequences], together with the data sets to train on, transformers can learn the rules and then translate between languages.
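The point about “ordered sequences” is concrete: anything you can serialize into a sequence of discrete tokens can, in principle, be fed to a sequence model. A toy example (my own sketch; the pixel-to-token vocabulary is hypothetical) flattens a tiny “image” into a token sequence, the way image-generation models treat pictures as a language of patch tokens:

```python
# Toy sketch: represent a 2x2 "image" as an ordered sequence of tokens,
# so it looks language-like to a sequence model.
image = [[0, 255],
         [128, 64]]

# Hypothetical vocabulary mapping pixel values to discrete token ids.
vocab = {0: "<black>", 64: "<dark>", 128: "<gray>", 255: "<white>"}

# Flatten row by row into an ordered token sequence.
tokens = [vocab[px] for row in image for px in row]
print(tokens)  # ['<black>', '<white>', '<gray>', '<dark>']
```

Once images and captions are both token sequences, “generate an image from a caption” becomes a translation problem between two languages.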
Another consideration is that these AI models are fundamentally stochastic. … There’s no explicit concept of a right or wrong answer – just how close it is to being correct.
The basic workflow of these models is this: generate, evaluate, iterate.
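That generate–evaluate–iterate loop, with only a “closer or farther from correct” score rather than a single right answer, can be sketched with a stochastic hill climb (my own minimal illustration, assuming a toy target string and a character-match score in place of a real learned evaluator):

```python
import random

random.seed(0)  # make the stochastic run repeatable

TARGET = "hello"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def generate() -> str:
    """Generate: produce a random candidate string."""
    return "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))

def evaluate(candidate: str) -> int:
    """Evaluate: no right/wrong, just how close -- count matching characters."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(s: str) -> str:
    """Iterate: propose a small random variation of the current best."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

best = generate()
for _ in range(20000):
    cand = mutate(best)                  # generate a variation
    if evaluate(cand) >= evaluate(best): # evaluate: at least as close?
        best = cand                      # iterate from the better candidate

print(best)  # converges to "hello"
```

Real generative models don’t search this crudely, of course, but the shape of the loop – sample, score, refine – is the same.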
… many of the capabilities these new models are showing are emergent, so they aren’t necessarily being formally programmed. … to explicitly answer questions … without having to be explicitly designed to answer Q&As.
• Wired > “ChatGPT Has Investors Drooling – but Can It Bring Home the Bacon?” by Will Knight (Jan 13, 2023) – The loquacious bot has Microsoft ready to sink a reported $10 billion into OpenAI. It’s unclear what products can be built on the technology.
• Wired > “How to Stop ChatGPT from Going Off the Rails” by Amit Katwala (Dec 16, 2022) – The viral chatbot wasn’t up to writing a WIRED newsletter. But it’s fluent enough to raise questions about how to keep eloquent AI systems accountable.
• Wired > “ChatGPT Is Coming for Classrooms. Don’t Panic” by Pia Ceres (Jan 26, 2023) – The AI chatbot has stoked fears of an educational apocalypse. Some teachers see it as the reboot education sorely needs.
Haomiao Huang is an investor at Kleiner Perkins, where he leads early-stage investments in hardtech and enterprise software. Previously, he founded the smart home security startup Kuna, built self-driving cars during his undergraduate years at Caltech and, as part of his Ph.D. research at Stanford, pioneered the aerodynamics and control of multi-rotor UAVs.