Category: Computer

Posts related to computer hardware and software

  • Generative AI sans horse sense – unreliable narrators by design?

I like the way that I can do a Google search using a photo – drag & drop it into a search box – to identify its content. I like the way I can extract text from a photo (provided the photo has sufficient resolution on that content). And Amazon uses AI to summarize the gist of product reviews (to some degree).

But in the rush to embed AI for directly answering user questions – in a manner like “Ask Mr. Wizard” – this article notes that something is amiss with large language models, perhaps with no way out.

    Not all movies can be saved in post. Not all software can be saved by updates. GIGO.

Rolling the dice is not a good basis for trustworthiness. “Mostly correct” still inflicts harm. A form of quantum uncertainty: horse sense or horse pucky, eh.

    Who vets fact-checking mechanisms? And if it comes to using low-wage human labor to fact-check, …

    • Washington Post > Tech Brief > email news > “Google’s AI search problem may never be fully solved” by Will Oremus (May 29, 2024) – Last week, Google’s new “AI Overviews” stretched factuality.

    “All large language models, by the very nature of their architecture, are inherently and irredeemably unreliable narrators,” said Grady Booch, a renowned computer scientist. At a basic level, they’re designed to generate answers that sound coherent — not answers that are true. “As such, they simply cannot be ‘fixed,’” he said, because making things up is “an inescapable property of how they work.”

    But that [citing and summarizing specific sources] can still go wrong in multiple ways, said Melanie Mitchell, a professor at the Santa Fe Institute who researches complex systems. One is that the system can’t always tell whether a given source provides a reliable answer to the question, perhaps because it fails to understand the context. Another is that even when it finds a good source, it may misinterpret what that source is saying.

    Other AI tools … may not get the same answers wrong that Google does. But they will get others wrong that Google gets right. “The AI to do this in a much more trustworthy way just doesn’t exist yet,” Mitchell said.

  • Spending for AI’s pot of gold – powering profit?

    High hype … trippy chips … gooey guardrails … pending legislation … the electric grid … pay, pay, pay, payday … cloudy cloud forecasts …

“Everyone wants to save the world, they just disagree on how.” – Fallout Season 1, Amazon Prime Video 2024 (Image credit: pixabay.com)

    • Washington Post > “Big Tech keeps spending billions on AI. There’s no end in sight.” by Gerrit De Vynck and Naomi Nix (April 25, 2024) – Much of the money is going to new data centers, which are predicted to place huge demands on the U.S. power grid.

    In quarterly earnings calls this week, Google, Microsoft and Meta all underlined just how big their investments in AI are. On Wednesday, Meta raised its predictions for how much it will spend this year by up to $10 billion. Google plans to spend around $12 billion or more each quarter this year on capital expenditures, much of which will be for new data centers, Chief Financial Officer Ruth Porat said Thursday. Microsoft spent $14 billion in the most recent quarter and expects that to keep increasing “materially,” Chief Financial Officer Amy Hood said.

  • AI chatbot hallucinations – mind those P’s and Q’s

    AI seal of approval
    No promises …

    This is no joke! You’ve heard about this – whether AI chatbots mind their Ps and Qs. So, beware of nonsense.

    Statistically, how often do AI hallucinations happen?

    Yes, there’ll be updates … “We can’t stop hallucinations, but we can manage them.” (Maybe like the Id and Ego?)

    • Wiki > Hallucination (artificial intelligence)

    In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a response generated by AI which contains false or misleading information presented as fact.

    • CNET > “Hallucinations: Why AI Makes Stuff Up, and What’s Being Done About It” by Lisa Lacy (April 1, 2024) – If you’re using generative AI to answer questions, it’s wise to do some external fact-checking to verify responses.

    … the [AI] model is trained to generate data that is “statistically indistinguishable” from the training data, or that has the same type of generic characteristics. There’s no requirement for it to be “true,” Soatto [Stefano Soatto, vice president and distinguished scientist at Amazon Web Services] said.

    “It generalizes or makes an inference based on what it knows about language, what it knows about the occurrence of words in different contexts,” said Swabha Swayamdipta, assistant professor of computer science at the USC Viterbi School of Engineering and leader of the Data, Interpretability, Language and Learning (DILL) lab. “This is why these language models produce facts which kind of seem plausible but are not quite true because they’re not trained to just produce exactly what they have seen before.”
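    Swayamdipta’s point – that a language model samples from “the occurrence of words in different contexts” with no requirement of truth – can be illustrated with a toy bigram sampler. This is a minimal sketch with a made-up four-sentence corpus, not any real model; it only ever follows word transitions it has seen, so its output is “statistically plausible” by construction, yet nothing constrains it to be true.

```python
import random
from collections import defaultdict

# Toy corpus (hypothetical data) -- a handful of plausible sentences.
corpus = [
    "the model generates text that sounds plausible",
    "the model is trained on large amounts of text",
    "language models predict the next word in context",
    "the model is not trained to produce true statements",
]

# Build bigram counts: for each word, record every word observed after it.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def sample(start, length=8, seed=0):
    """Generate a word sequence by always following observed transitions.
    Every step is statistically 'plausible'; truth never enters into it."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:  # dead end: no word ever followed this one in training
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(sample("the"))
```

    Every adjacent word pair in the output occurs somewhere in the training data, which is exactly why the result reads fluently even when the recombined sentence asserts nothing the corpus actually said.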

    Another solution is to embed the model within a larger system — more software — that checks consistency and factuality and traces attribution.

    “Hallucination as a property of an AI model is unavoidable, but as a property of the system that uses the model, it is not only unavoidable, it is very avoidable and manageable,” Soatto said.
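    The “larger system” idea – wrapping the model in software that checks factuality and traces attribution – can be sketched in miniature. Everything below is illustrative: `fake_model` stands in for a generative model, the two-entry `SOURCES` dict stands in for a document store, and the verbatim-substring check stands in for what a real system would do with retrieval and entailment models.

```python
# Hypothetical source store: claim-checking happens against these texts.
SOURCES = {
    "moon": "The Moon orbits the Earth.",
    "sun": "The Sun is a star.",
}

def fake_model(question):
    # Stand-in for a generative model; it may emit unsupported claims.
    return "The Moon orbits the Earth. The Moon is made of cheese."

def answer_with_attribution(question):
    """Split the model's draft into claims; keep only claims that can be
    traced to a source, and record which source supports each one."""
    draft = fake_model(question)
    supported, rejected = [], []
    for claim in [c.strip() for c in draft.split(".") if c.strip()]:
        # Naive factuality check: claim must appear verbatim in a source.
        matches = [key for key, text in SOURCES.items() if claim in text]
        (supported if matches else rejected).append((claim, matches))
    return supported, rejected

supported, rejected = answer_with_attribution("Tell me about the Moon")
```

    The model itself still hallucinates (the cheese claim), but the surrounding system catches and withholds it – which is the sense in which hallucination is “avoidable and manageable” at the system level even though it is unavoidable at the model level.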

  • Apple’s M-Series security – promises ≠ perfection

    “GoFetch” is in the tech news cycle this week.

Hopefully you know whether you have an Apple M-Series Mac. The M-Series (aka Apple silicon) computers delivered better performance and energy efficiency. But what about security? Faster data pipelining is tricky (via so-called optimizations). Obscurity’s no guarantee (like, really, the front door key’s not in a nearby flower pot, eh).

    This article (below) provides an overview of the situation (and references for more technical detail). The author uses a car safety analogy to frame advice for the latest security vulnerability. No need to panic (and there’s no recall, like for a car).

    Hopefully most people understand what encryption is – how it keeps our data and communications safe. Like spy-versus-spy stuff, eh.

    • PC World > “Apple’s unfixable CPU exploit: 3 practical security takeaways” by Alaina Yee (Mar 22, 2024) – After Intel’s and AMD’s past vulnerabilities, Apple’s vulnerability demonstrates that security is a dynamic goal.

    As reported by Ars Technica, this security flaw allowed academic researchers to pull end-to-end encryption keys from Apple’s processors, using an app with normal third-party software permissions in macOS. Called GoFetch, the attack they created works through what’s called a side-channel vulnerability – using sensitive information discovered through watching standard behavior. It’s a bit akin to observing armored-car guards carry bags out of a business, and valuing the contents based on how heavy they seem (e.g., gold vs. paper cash).

    … you should create a multilayered approach to protecting yourself, … Think of it like a car – we know that a car crashes happen, with deadly results. Over time, we’ve mandated seatbelts, upgraded materials to have better force absorption, standardized airbags, switched to anti-lock brakes, devised proximity detectors and audio warnings, and more, all to improve safety.
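    The side-channel idea – inferring secrets from observable behavior rather than from the data itself, like weighing the armored-car bags – can be shown with a classic toy example. This is not GoFetch (which targets a CPU’s data memory-dependent prefetcher); it is a hypothetical PIN check whose early-exit comparison leaks, through the amount of work performed, how many leading digits of a guess are correct. A loop counter stands in for measured execution time.

```python
SECRET = "7341"  # hypothetical secret PIN

def check_pin(guess):
    """Insecure comparison: returns at the first mismatch, so the amount
    of work done reveals how many leading digits were correct."""
    work = 0
    for s, g in zip(SECRET, guess):
        work += 1
        if s != g:
            return False, work
    return len(guess) == len(SECRET), work

# An attacker who can only observe (accepted, work) -- never the secret
# itself -- recovers the PIN one digit at a time.
recovered = ""
for pos in range(len(SECRET)):
    candidates = [recovered + d + "0" * (len(SECRET) - pos - 1)
                  for d in "0123456789"]
    # The guess that does the most work before failing (or succeeds)
    # has the correct digit at this position.
    best = max(candidates, key=lambda g: (check_pin(g)[1], check_pin(g)[0]))
    recovered += best[pos]
```

    The fix in code, as in hardware, is constant-time behavior: do the same amount of work regardless of the secret, so the observable channel carries no information.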

  • Big Tech’s rocking credos on the rocks – a love story

    Some billionaires are bullies. But can they be bullied? Sure, in autocracies. But in the United States? Well … disrupting the so-called disrupters. As the song goes, “traveling twice the speed of sound, it’s easy to get burned” [1].

    Normally, sucking up to power isn’t news in the corporate world, but Silicon Valley was supposed to be different. – Kara Swisher

Kara Swisher’s been on a whirlwind book tour for a week promoting her new book, “Burn Book: A Tech Love Story.” And her tale’s a tall one, an important one, on Big Tech. Something which I followed as well over the decades in magazines, tech columns by Walt Mossberg, online articles, AllThingsD, Recode, books about Silicon Valley, etc. And experientially with all the gadgets.

The setup for her opinion essay (an excerpt from her book) in the Washington Post arrived in my email on February 18: “The Week in Ideas: The day Silicon Valley rode Trump’s escalator to nowhere” by Michael Larabee. He opened with a question, the big question:

    Is Big Tech about inventing the future? Changing the world? ‘Disrupting’ entrenched systems that benefit the few to improve life for the many? Or is it about making money?

    … Swisher shares a deeply illuminating quote from French philosopher Paul Virilio that she said she thinks about a lot: “When you invent the ship, you also invent the shipwreck.” I’m thinking about that a lot now, too.

    • Washington Post > Opinion > “How Trump pushed Silicon Valley off the rails” by Kara Swisher (February 15, 2024) – Tech’s pop culture visions: Star Wars and Star Trek.

    … casual hypocrisy became increasingly common over the decades that I covered Silicon Valley’s elite. Over that time, I watched founders transform from young, idealistic strivers in a scrappy upstart industry into leaders of some of America’s largest and most influential businesses. And while there were exceptions, the richer and more powerful people grew, the more compromised they became — wrapping themselves in expensive cashmere batting until the genuine person fell deep inside a cocoon of comfort and privilege where no unpleasantness intruded.

    The [2016] Trump tech summit was a major turning point for me and how I viewed the industry I’d been covering since the early 1990s. The lack of humanity was overwhelming.

    I love tech, I breathe tech. And I believe in tech. But for tech to fulfill its promise, founders and executives who ran their creations needed to put more safety tools in place. They needed to anticipate consequences more. Or at all. They needed to acknowledge that online rage might extend into the real world in increasingly scary ways.

    Notes

    [1] “Just a Song Before I Go” by Crosby, Stills & Nash (1977).