Category: Computer

Posts related to computer hardware and software

  • When AI grows up – no longer ‘really cute tiger cub’

    When AI grows up – no longer ‘really cute tiger cub’

    Geoffrey Hinton on the future of AI

    So, after watching the video interview with the ‘Godfather of AI’ (CBS News, below), I was struck by something that was assumed or just left implicit: namely, that AIs (AGIs) will be a monolithic threat (or benefit) – whether globally, or at an international, corporate, or state level. That such super-intelligent machines will share a common purpose or perspective regarding humanity.

    Any hive-like alignment is particularly curious because Hinton discusses the stewardship of corporations and nations and bad actors. And that AIs can reflect on their own reasoning, use deception, and (at some point) resist manipulation – which likely entails different cultural values in the mix. And he notes that “human interests don’t align with each other.” So why would AI interests align – in the long run?

    So, while the interview raises the problem of AI-human misalignment, might AIs have different personalities? Diverge in temperament and virtue? “Evolve” in different ways? Tribes.

    I sketch such possible futures, tales of agency, in my Ditbit’s Guide to Blending in with AIs.

    Here’re some quotes from The Singju Post’s transcript (see below) of the interview.

    … if I had a job in a call center, I’d be very worried. … We know what’s going to happen is the extremely rich are going to get even more extremely rich and the not very well off are going to have to work three jobs.

    [The risk of AI takeover, the existential threat] … these things will get much smarter than us … But let’s just take as a premise that there’s an 80% chance that they don’t take over and wipe us out. … If we just carry on like now, just trying to make profits, it’s going to happen. They’re going to take over. We have to have the public put pressure on governments to do something serious about it. But even if the AIs don’t take over, there’s the issue of bad actors using AI for bad things.

    AI is potentially very dangerous. And there’s two sets of dangers. There’s bad actors using it for bad things, and there’s AI itself taking over.

    For AI taking over, we don’t know what to do about it. We don’t know, for example, if researchers can find any way to prevent that, but we should certainly try very hard. … Things that are more intelligent than you, we have no experience of that. … how many examples do you know of less intelligent things controlling much more intelligent things?

    I think the situation we’re in right now [“A change of a scale we’ve never seen before … hard to absorb … emotionally”], the best way to understand it emotionally is we’re like somebody who has this really cute tiger cub. It’s just such a cute tiger cub. Now, unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.

    And with super intelligences, they’re going to be so much smarter than us, we’ll have no idea what they’re up to.

    We worry about whether there’s a way to build a superintelligence so that it doesn’t want to take control. … The issue is, can we design it in such a way that it never wants to take control, that it’s always benevolent?

    People say, well, we’ll get it to align with human interests, but human interests don’t align with each other. … So if you look at the current AIs, you can see they’re already capable of deliberate deception.

    • The Singju Post (Our mission is to provide the most accurate transcripts of videos and audios online) > “Transcript of Brook Silva-Braga Interviews Geoffrey Hinton on CBS Mornings” (April 28, 2025) by Pangambam S / Technology

    • CBS News > CBS Saturday Morning > Artificial Intelligence > “‘Godfather of AI’ Geoffrey Hinton warns AI could take control from humans: ‘People haven’t understood what’s coming’” by Analisa Novak, Brook Silva-Braga (April 26, 2025) – Video interview (52′) [See The Singju Post’s transcript]

    [CBS’ article contains only a few highlights from the video.]

    (quotes)
    While Hinton believes artificial intelligence will transform education and medicine and potentially solve climate change, he’s increasingly concerned about its rapid development.

    “The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” Hinton explained. “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”

    The AI pioneer estimates a 10% to 20% risk that artificial intelligence will eventually take control from humans.

    “People haven’t got it yet, people haven’t understood what’s coming,” he warned.

    According to Hinton, AI companies should dedicate significantly more resources to safety research — “like a third” of their computing power, compared to the much smaller fraction currently allocated.

    References

    • Wired > “Take a Tour of All the Essential Features in ChatGPT” by Reece Rogers (May 5, 2025) – If you missed WIRED’s live, subscriber-only Q&A focused on the software features of ChatGPT, you can watch the replay here (45′).

    What are some ChatGPT features that I wasn’t able to go deep on during the 45-minute session? Two come to mind: temporary chats and memory.

  • Prompt engineering – becoming an AI whisperer

    Prompt engineering – becoming an AI whisperer

    [Draft 1-17-2025]

    Introduction

    So, prompt engineering [1] is much in the news – the craft of wrangling a generative AI to create desirable results, to “deliver the goods.” And perhaps not just information, but with a chosen style, or tailored to your audience or personal context (like a butler or assistant who knows you really well, eh).

    And, yes, there’re a bunch of books with titles like The AI Whisperer, …

    Whisperer

    b : a person who is unusually skilled at calmly guiding, influencing, or managing other people [or AIs?]

    c : a person considered to possess some extraordinary skill or talent in managing or dealing with something specified.

    Kudos to ZDNET (David Gewirtz) for some excellent articles on becoming an AI whisperer. Outlining the craft: what you need to know, things to avoid, the process, tools, reasonable expectations, decision points (e.g., how to avoid “sour grapes”).

    Table of contents

    • Introduction
    • ZDNET’s overview
    • Tom’s Guide Face-off – ChatGPT vs Grok
    • eWeek’s How to Become a Prompt Engineer
    • Forbes’ 10 Things ChatGPT Can Do
    • Forbes’ Success in an AI-driven World
    • And comments

    ZDNET’s overview

    This ZDNET article “The five biggest mistakes” (below after Tips & Quotes) provides a framework for creating successful prompts (but without examples). A summary of tips on how to avoid GIGO (garbage in, garbage out). A table of the “Biggest Prompting Mistakes.”

    There are links to additional articles which provide some examples: for personal planning (preparing for a marathon, learning a language for a trip, understanding a business technology) and creative writing (excerpted below).

    • “7 ways to write better ChatGPT prompts – and get the results you want faster” by David Gewirtz, Senior Contributing Editor (Dec 16, 2024)

    Tips

    • Talk to the AI like you would a person
    • Set the stage and provide context
    • Tell the AI to assume an identity or profession
    • Keep ChatGPT on track
    • Tell the AI to re-read the prompt
    • Don’t be afraid to play & experiment
    • Refine & build on previous prompts

    Additional tips – quotes

    (quote re level of literacy)

    You can directly specify the complexity level by including it in your prompt. Add “… at a high school level” or “… at a level intended for a Ph.D. to understand” to the end of your question. You can also increase the complexity of output by increasing the richness of your input. The more you provide in your prompt, the more detailed and nuanced ChatGPT’s response will be. You can also include other specific instructions, like “Give me a summary,” “Explain in detail,” or “Provide a technical description.”
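    The advice above is mechanical enough to capture in a tiny helper. A minimal sketch (the function name and phrasing are my own, not from the ZDNET article) that appends level and instruction clauses to a question:

```python
def build_prompt(question, level=None, instruction=None):
    """Compose a ChatGPT-style prompt, optionally pinning reading level
    and response style.

    `level` and `instruction` are free-text phrases, e.g.
    "a high school level" or "Give me a summary".
    """
    prompt = question
    if level:
        # Append a complexity clause, per the tip above.
        prompt += f" Explain this at {level}."
    if instruction:
        # Append an explicit instruction like "Give me a summary".
        prompt += f" {instruction}."
    return prompt

print(build_prompt(
    "What is Fermat's little theorem?",
    level="a high school level",
    instruction="Give me a summary",
))
```

    The richer the composed prompt, the more detailed and nuanced the response – the helper just makes the “add it to the end of your question” habit systematic.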

    (quote re using audience profiles)

    You can also pre-define profiles. For example, you could say “When evaluating something for a manager, assume an individual with a four-year business college education, a lack of detailed technical understanding, and a fairly limited attention span, who likes to get answers that are clear and concise. When evaluating something for a programmer, assume considerable technical knowledge, an enjoyment of geek and science fiction references, and a desire for a complete answer. Accuracy is deeply important to programmers, so double-check your work.”

    If you ask ChatGPT to “explain C++ to a manager” and “explain C++ to a programmer,” you’ll see how the responses differ.
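    Those pre-defined profiles could live in a small lookup table that gets prefixed to each request. A hedged sketch (the profile wording paraphrases the quote above; names are illustrative, not from the article):

```python
# Illustrative audience profiles, paraphrasing the quoted example.
AUDIENCE_PROFILES = {
    "manager": (
        "Assume a four-year business college education, little detailed "
        "technical understanding, and a fairly limited attention span; "
        "be clear and concise."
    ),
    "programmer": (
        "Assume considerable technical knowledge and a desire for a "
        "complete answer; accuracy matters, so double-check your work."
    ),
}

def explain_for(topic, audience):
    """Prefix an 'explain X' request with the matching audience profile."""
    profile = AUDIENCE_PROFILES[audience]
    return f"{profile}\n\nExplain {topic} to a {audience}."

print(explain_for("C++", "manager"))
print(explain_for("C++", "programmer"))
```

    Feeding the two composed prompts to ChatGPT should reproduce the “explain C++ to a manager” versus “explain C++ to a programmer” contrast described above.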

    Excerpt (for creative writing)

    [the prompt]

    Write a short story for me, no more than 500 words [article explains why the limit].

    The story takes place in 2339, in Boston. The entire story takes place inside a Victorian-style bookstore that wouldn’t be out of place in Diagon Alley. Inside the store are the following characters, all human:

    The proprietor: make this person interesting and a bit unusual, give them a name and at least one skill or characteristic that influences their backstory and possibly influences the entire short story.

    The helper: this is a clerk in the store. His name is Todd.

    The customer and his friend: Two customers came into the store together, Jackson and Ophelia. Jackson is dressed as if he’s going to a Steampunk convention, while Ophelia is clearly coming home from her day working in a professional office.

    Another customer is Evangeline, a regular customer in the store, in her mid-40s. Yet another customer is Archibald, a man who could be anywhere from 40 to 70 years old. He has a mysterious air about himself and seems both somewhat grandiose and secretive. There is something about Archibald that makes the others uncomfortable.

    A typical concept in retail sales is that there’s always more inventory “in the back,” where there’s a storeroom for additional goods that might not be shown on the shelves where customers browse. The premise of this story is that there is something very unusual about this store’s “in the back.”

    Put it all together and tell something compelling and fun.

    [end of prompt]

    [author’s commentary]

    You can see how the detail provides more for the AI to work with. First, feed “Write me a story about a bookstore” into ChatGPT and see what it gives you. Then feed in the above prompt, and you’ll see the difference.

    • “7 advanced ChatGPT prompt-writing tips you need to know” by David Gewirtz, Senior Contributing Editor (Oct 5, 2023)

    • Specify output format
    • Tell it to format in HTML
    • Iterate with multiple attempts
    • Don’t be afraid to use long prompts or sets of prompts
    • Provide explicit constraints to a response
    • Tell it number of words, sentences, characters
    • Give the AI the opportunity to evaluate its answers
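    Several of those advanced tips – explicit constraints, word counts, and self-evaluation – can also be composed mechanically, with a cheap local word count deciding whether to ask the model to re-evaluate or shorten its answer. A sketch under those assumptions (function names are hypothetical):

```python
def constrained_prompt(task, max_words=None, output_format=None):
    """Append explicit output constraints to a task prompt."""
    parts = [task]
    if output_format:
        # e.g. "an HTML bulleted list" or "a two-column table"
        parts.append(f"Format the response as {output_format}.")
    if max_words:
        parts.append(f"Keep the response under {max_words} words.")
    return " ".join(parts)

def within_limit(response, max_words):
    """Cheap local check before asking the AI to evaluate its own answer."""
    return len(response.split()) <= max_words

print(constrained_prompt(
    "Summarize the article.",
    max_words=50,
    output_format="an HTML bulleted list",
))
```

    If `within_limit` fails, a follow-up prompt like “That was too long – re-read the prompt and shorten your answer” is the iterate-with-multiple-attempts tip in practice.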

    • ZDNET > “The five biggest mistakes people make when prompting an AI” by David Gewirtz, Senior Contributing Editor, reviewed by Elyse Betters Picaro (Jan 15, 2025) – Ready to transform how you use AI tools? Learn how to refine your prompts, avoid common pitfalls, and maximize the potential of generative AI tools.

    [Table of contents]

    1. Not being specific enough
    2. Not specifying how you want the response formatted
    3. Not remembering to clear or start a new session
    4. Not correcting, clarifying, or guiding the AI after an answer
    5. Not knowing when to give up [sour grapes, eh]

    [Advice]

    • ChatGPT’s advice
    • Copilot’s advice
    • Grok’s grokkings
    • Gemini’s advice
    • Meta AI’s advice

    [More tips]

    How to be successful when writing prompts


    Face-off

    • tom’s guide > face-off > “I put ChatGPT vs Grok to the test with 7 prompts — here’s the winner” by Ryan Morrison (January 8, 2025) – Grok has come a long way in a very short time, going from a glorified “toy” feature in X to something rivaling the likes of ChatGPT, Claude and Google’s Gemini.

    This is the latest in a series of head-to-head challenges [link] between leading AI models, all of which ChatGPT has won so far. I’ve put ChatGPT up against Gemini, then against Claude. I’ve also put Claude up against Google Gemini [link].

    The [seven] prompts follow the same pattern as previous comparisons and include coding, creative writing, problem-solving and advanced planning.

    1. Image Generation

    The prompt: “Create an image of a minimalist home office setup with these specific elements: A 34-inch ultrawide monitor mounted on a white wall, an ergonomic chair in sage green, a light oak standing desk, three hanging potted plants (must be monstera, pothos, and snake plant), and a MacBook Pro in space grey. The room should have large windows letting in natural light from the left side, with sheer white curtains. Include a grey Persian cat sleeping on a round cushion under the desk.”

    4. Creative Writing

    Prompt: “Write a heartwarming story about two people who meet while waiting in line for a new product launch. The story must include: specific details about the product they’re waiting for, at least three interactions between them before the store opens, a surprising connection they discover, and a flash-forward to one year later. Keep it under 500 words.”


    Becoming a prompt engineer

    This article by eWeek’s Liz Ticong is a comprehensive guide to becoming a generative AI whisperer. Useful diagrams, lists, even online AI training courses.

    • eWeek > “How to Become a Prompt Engineer (2025): The Path to Success” by Liz Ticong (September 20, 2024) – Discover what it takes to become a prompt engineer, from understanding the key skills to gaining practical experience and advancing in this growing field.

    DEFINITION (quoted)

    • A prompt engineer shapes artificial intelligence outputs by crafting precise, context-rich prompts to guide the AI model in generating relevant and accurate responses.
    • Prompt engineering is a growing career that bridges human language and AI, requiring a mix of linguistic, technical, and creative skills.
    • As AI technologies become increasingly integrated into diverse enterprise applications – particularly generative AI – the demand for skilled prompt engineers is growing rapidly.
    • Learning how to become a prompt engineer involves developing the right skills, completing a range of training, and gaining hands-on experience.

    KEY TAKEAWAYS (quoted)

    • Prompt engineers work in various sectors, including customer service, healthcare, education, and creative industries. (Jump to Section)
    • After learning the basics, there are certifications you can complete to acquire advanced prompt engineering skills. (Jump to Section)
    • While prompt engineering introduces significant benefits, prompt engineers also encounter some challenges that must be addressed, including complex models, biases, sensitive data, insufficient training data, and collaboration. (Jump to Section)

    TABLE OF CONTENTS

    • What is Prompt Engineering?
    • Understanding the Role of a Prompt Engineer
    • How to Become a Prompt Engineer
    • Career Development in Prompt Engineering
    • 3 Courses for Continuous Learning and Professional Growth
    • Real-World Contributions of Prompt Engineers
    • Overcoming Prompt Engineering Challenges
    • Frequently Asked Questions (FAQs)
    • Bottom Line: Learning How to Become Prompt Engineer Starts With Building AI and Language Skills

    [excerpt]

    Real-World Contributions of Prompt Engineers

    Customer Service Automation: Prompt engineers design interaction flows with AI chatbots and virtual assistants that handle customer queries and give customized solutions. By fine-tuning interactions, AI systems accurately interpret and appropriately respond to user needs, boosting customer satisfaction.

    Healthcare Solutions: In the healthcare sector, prompt engineers refine AI outputs to aid with medical diagnosis support and patient interactions. Their prompts ensure that the AI delivers relevant and precise medical information.

    Content Generation: They compose prompts for AI systems that produce articles, marketing copy, and other content types. With their efforts, the AI-generated content meets the user’s desired style, tone, and context.

    Educational Tools: Prompt engineers write inputs for educational AI applications that facilitate learning new concepts. These prompts make sure that the AI tools provide clear and error-free responses.

    Creative Arts: In the creative field, they design prompts that guide generative AI tools to produce artwork or music. Prompt engineers help shape the AI’s output to meet particular artistic visions and goals.

    Business Analytics: They craft detailed inputs that guide AI tools to analyze business data and generate valuable information. Skilled prompt engineers support deriving actionable insights from complex data sets.


    10 ChatGPT things

    • Forbes > “10 Things You Didn’t Know ChatGPT Could Do” by Jodie Cook, Senior Contributor (Jan 10, 2025) – Team productivity beyond simple questions and simple answers.

    • Create keyboard shortcut guides
    • Review terms and conditions [a shout-out to Jeff!]
    • Build your SEO strategy
    • Write your standard operating procedures [like for a health club?]
    • Find funding opportunities
    • Spot patterns in customer feedback [like re hospitality friction points, eh]
    • Create job descriptions that attract talent
    • Turn complex data into simple visuals
    • Design your lead magnet
    • Write spreadsheet formulas that work

    AI job success

    The future of the creator economy? Will AI ease effort and emphasize creativity? Do forecasts of AI boosts resemble Victorian steam tech hubris …

    • Forbes > “The One Skill That Will Define Success In An AI-Driven World” by Chris Westfall, Contributor (Jan 15, 2025) – Will AI lead to a more flexible workforce?

    By 2034, traditional 9-to-5 jobs will become obsolete, giving way to more flexible and dynamic work structures. That’s one of many bold predictions from LinkedIn co-founder, Reid Hoffman. And Hoffman has a pretty strong track record when it comes to betting on the future.

    TIPS

    • Slow down to go fast (separate signal from static)
    • Two heads are better than one (the power of conversation)
    • Cultivate vital soft skills (e.g., collaboration)

    Notes

    [1] Wiki > Prompt engineering

    A prompt is natural language text describing the task that an AI should perform. A prompt for a text-to-text language model can be a query such as “what is Fermat’s little theorem?”, a command such as “write a poem in the style of Edgar Allan Poe about leaves falling”, or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style, choice of words and grammar, providing relevant context, or assigning a role to the AI such as “act as a native French speaker”.

    [2] Apple Intelligence > Pages > Compose > ChatGPT prompt > …

    • Macworld > “Where is Apple Intelligence on my Mac?” by Roman Loyola, Senior Editor (Jan 20, 2025) – Looking for Apple Intelligence features on your Mac? Here’s how to get Apple’s AI features including ChatGPT and Image Playground on your Mac.

    TABLE OF CONTENTS

    • What you need for Apple Intelligence
    • What countries can run Apple Intelligence?
    • How to turn on Apple Intelligence
    • How to turn on ChatGPT
    • What are the Apple Intelligence features on the Mac?

    [3] Microsoft > Copilot in Word

  • Elevating humanity – OpenAI’s narrative for AGI

    Elevating humanity – OpenAI’s narrative for AGI

    Regarding the timeframe for achieving – and defining – Artificial General Intelligence (AGI): recently (December 4, 2024) on CNBC, Andrew Ross Sorkin interviewed Sam Altman, co-founder and C.E.O. of OpenAI, at the New York Times’ annual DealBook summit at Jazz at Lincoln Center in New York City.


    Altman said that quite capable AI agents (able to choreograph complex processes) will become available for businesses in a few years.

    I wonder how this might reshape individual merit [1] and trust in the workplace. And when AGI (whatever the scope) arrives, …

    • CNBC > “OpenAI’s Sam Altman on launching GPT4” – Sam Altman, OpenAI CEO, discusses the release of ChatGPT (12-4-2024)

    • The Verge > “Sam Altman lowers the bar for AGI” by Alex Heath (Dec 4, 2024) – OpenAI’s charter once said that AGI will be able to “automate the great majority of intellectual labor.”

    Nearly two years ago, OpenAI said that artificial general intelligence — the thing the company was created to build — could “elevate humanity” and “give everyone incredible new capabilities.”

    “My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” he said during an interview with Andrew Ross Sorkin at The New York Times DealBook Summit on Wednesday. “And a lot of the safety concerns that we and others expressed actually don’t come at the AGI moment. AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call super intelligence.”

    We at The Verge have heard OpenAI intends to weave together its large language models and declare that to be AGI.

    Notes

    [1] The Aristocracy of Talent

    … there is one idea that still commands widespread enthusiasm: that an individual’s position in society should depend on his or her combination of ability and effort. Meritocracy, a word invented as recently as 1958 by the British sociologist Michael Young, is the closest thing we have today to a universal ideology. – Wooldridge, Adrian. The Aristocracy of Talent: How Meritocracy Made the Modern World (2021) (p. 1). Skyhorse. Kindle Edition.

  • AI regulation faces a test in California

    AI regulation faces a test in California

    Benefits and costs of the new AI gold rush.

    Move fast and break things … vying for supremacy (“America’s AI edge,” like another Manhattan Project) … but … who’s a developer, and what are their responsibilities in some type of regulatory framework?

    Who does AI safety testing? Are there 3rd party evaluations? Certifications? Kill switches? Incident logging and reporting? AI “meltdowns” and lawsuits? Collateral damage from embedded AI medical devices? Fine-tuning and fines?

    Regulation of the AI “frontier” faces a milestone in California with SB 1047, a softer draft of its original version.

    The original version of SB 1047 was bold and ambitious. Introduced by state Senator Scott Wiener as the California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, …

    It’s a political tightrope.

    • The Verge > “Will California flip the AI industry on its head?” by Kylie Robison, a senior AI reporter working with The Verge’s policy and tech teams, who previously worked at Fortune Magazine and Business Insider (Sep 11, 2024) – SB 1047 aims to regulate AI, and the AI industry is out to stop it.

    SB 1047, which passed the California State Assembly and Senate in late August, is now on the desk of California Governor Gavin Newsom — who will determine the fate of the bill. While the EU and some other governments have been hammering out AI regulation for years now, SB 1047 would be the strictest framework in the US so far.

    Critics have painted a nearly apocalyptic picture of its impact, calling it a threat to startups, open source developers, and academics.

    Supporters call it a necessary guardrail for a potentially dangerous technology — and a corrective to years of under-regulation.

    Related articles

    • LA Times 9-9-2024 > “Overinflated AI bubble is beginning to leak” by Michael Hiltzik – After a huge run-up on Wall Street, users now wonder whether the craze will fall flat.

    Companies that plunged into the AI market for fear of missing out on useful new applications for their businesses have discovered that usefulness is elusive.

    One persistent concern about AI is its potential for misuse for nefarious ends, such as making it easier to shut down an electric grid, melt down the financial system, or produce deepfakes to deceive consumers or voters. That’s the topic of Senate Bill 1047, a California measure awaiting the signature of Gov. Gavin Newsom (who hasn’t said whether he’ll approve it).

    The bill mandates safety testing of advanced AI models and the imposition of “guardrails” to ensure they can’t slip out of the control of their developers or users and can’t be employed to create “biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.” It’s been endorsed by some AI developers but condemned by others who assert that its constraints will drive AI developers out of California.

    That brings us to doubts not about AI risks, but about its real-world utility for business. These have been spreading in industry as more businesses try to use it, and find that it has been oversold.

    That may be true of projected economic gains from AI more broadly. In a recent paper, MIT economist Daron Acemoglu forecast that AI would produce an increase of only about 0.5% in U.S. productivity and an increase of about 1% in gross domestic product over the next 10 years, mere fractions of standard economic projections.

    Related posts

    Lords of AI – Tech giants and an International Agency

  • The dark side of AI – outages and shortages

    The dark side of AI – outages and shortages

    What could possibly go ...

    Another gold rush is underway. Recast as the AI tech boom. Yet, AI is power hungry, a voracious beast consuming vast amounts of electricity. And water [1]. And those gas-fired power plants [2] …

    Generative AI data centers are expensive to build, expensive to operate, and tax the electric grid and our limited water resources (like the Colorado River).

    Companies that invest in AI see profit. Electric utilities see profit from AI-induced demand for power. Local governments see profit from property taxes paid by data centers. Sort of a greed cycle … what could go …

    This LA Times article reminds us that there’s no “free lunch” – the seductive offer by tech and social media to entice AI-sipping customers.

    The “free lunch” in the saying refers to the formerly common practice in American bars of offering a “free lunch” in order to entice drinking customers. – Wiki

    • LA Times > “Power demands of AI data centers raise concerns over cost, blackouts” by Melody Petersen (8-31-2024) – Experts warn construction frenzy could delay state’s transition away from fossil fuels.

    In Santa Clara — the heart of Silicon Valley — electric rates are rising as the municipal utility spends heavily on transmission lines and other infrastructure to accommodate the voracious power demand from more than 50 data centers, which now consume 60% of the city’s electricity.

    While the benefits and risks of AI continue to be debated, one thing is clear: The technology is rapacious for power. Experts warn that the frenzy of data center construction could delay California’s transition away from fossil fuels and raise electric bills for everyone else. The data centers’ insatiable appetite for electricity, they say, also increases the risk of blackouts.

    According to the International Energy Agency, a ChatGPT-powered search consumes 10 times the power of a search on Google without AI.

    And because those new chips generate so much heat, more power and water are required to keep them cool.

    By 2030, data centers could account for as much as 11% of U.S. power demand — up from 3% now, according to analysts at Goldman Sachs.

    Notes

    [1] This article uses infographics to visualize data center cooling footprints (water and electricity loads, which vary by state) for processing ChatGPT prompts.

    Those transactional footprints don’t account for the months of cooling required to first train chatbots. Estimates range from 700,000 to 22,000,000 liters of water to train some AI models.

    Water replenishment rates by tech companies are unclear, while they pledge to make AI less thirsty.

    References on methodology are included.

    • Washington Post > POWER GRAB > “A bottle of water per email: the hidden environmental costs of using AI chatbots” by Pranshu Verma and Shelly Tan (September 18, 2024) – AI bots generate a lot of heat, and keeping their computer servers running exacts a toll.

    While the exact burden is nearly impossible to quantify, The Washington Post worked with researchers at the University of California, Riverside to understand how much water and power OpenAI’s ChatGPT, using the GPT-4 language model released in March 2023, consumes to write the average 100-word email.

    Even in ideal conditions, data centers are often among the heaviest users of water in the towns where they are located, environmental advocates said. But data centers with electrical cooling systems also are raising concerns by driving up residents’ power bills and taxing the electric grid.

    An earlier article uses infographics as well to better understand what goes on in data centers.

    • Washington Post > POWER GRAB > “Our digital lives need massive data centers. What goes on inside them?” by Antonio Olivo and William Neff (September 17, 2024) – We toured a facility in Northern Virginia to see how it works and to understand why water use and energy consumption are such a concern.

    More than half a million people in Northern Virginia live in a neighborhood that’s less than a mile from a data center. That’s more than 1 in 5 residents.

    There are four main types of data centers:

    An “enterprise” data center serves the needs of the company that owns it. Think of a corporation that stores in-house information on its own computers.

    Larger “hyperscale” data centers, owned by companies such as Amazon or Meta, have computer servers that cater solely to the company’s customers.

    “Edge” data centers are smaller buildings in or near major population centers, where digital connectivity becomes almost instantaneous for, say, a passing driverless car.

    Equinix is among the world’s largest owners of “colocation” data centers. Those facilities lease space to other businesses that hook up their servers to cables that belong to the data center company.

    [2] As in plans for global data centers …

    • Data Center Knowledge > “Oracle CloudWorld 2024: Embracing Multi-Cloud, Nuclear Energy, and Other Event Highlights” (September 12, 2024) – Alternative energy sources are crucial to the success of data centers globally.

    Nuclear Power in the AI Age

    Ellison made one of Oracle’s biggest headlines of the week with his Monday earnings call announcement the company would invest in three small nuclear reactors to power a data center with over 1 GW of AI capacity. Although his keynote address at CloudWorld didn’t provide any additional details, Ellison said that while Oracle now runs over 162 data centers globally, they could soon be operating over 1,000 facilities.

    “Alternative energy sources are crucial to their success,” Kevin Sullivan, a principal at PricewaterhouseCoopers, told Data Center Knowledge. “Nuclear, on a personal level, makes me a little nervous. Other alternative energy sources might be better options. But all options need to be available.”

    As reported in this article (and elsewhere), here is yet another portent of the thirst for electric power by the boom in artificial intelligence. A novel idea? More similar deals to come? More move fast and let things break (again), eh.

    • Washington Post > “Microsoft deal would reopen Three Mile Island nuclear plant to power AI” by Evan Halper (September 20, 2024) – Supposedly shuttered for good (decommissioned) in 2019, the Pennsylvania plant would come back online by 2028 if approved by regulators.

    Pennsylvania’s dormant Three Mile Island nuclear plant [site of the 1979 partial reactor meltdown] would be brought back to life to feed the voracious energy needs of Microsoft under an unprecedented deal announced Friday in which the tech giant would buy 100 percent of its power for 20 years. … the energy equivalent it takes to power 800,000 homes, or 835 megawatts.

    The four-year restart plan would cost Constellation [plant owner Constellation Energy] about $1.6 billion, he said, and is dependent on federal [public] subsidies in the form of [undisclosed] tax breaks earmarked for nuclear power in the 2022 Inflation Reduction Act.

    Constellation will also need to clear steep regulatory hurdles, including intensive safety inspections from the federal Nuclear Regulatory Commission, which has never before authorized the reopening of a plant. The deal also raises thorny questions about the federal tax breaks, as the energy from the plant would all be produced for a single private company rather than a utility serving entire communities.

    “It doesn’t address the core issues that are making the current practice of AI unsustainable by definition,” she [Sasha Luccioni, the top climate executive at sustainable AI start-up Hugging Face] said of the deal. “Instead of monopolizing decommissioned nuclear power plants, we should be focusing on integrating sustainability into AI.”

    Related posts

    AI chatbot reality check – the bottom line?