
Lumian Gen AI Newsletter Issue #55

Anthropic’s Claude 3.5 Sonnet, ChatGPT Search, Perplexity’s Launch

Welcome to the 55th edition of the Lumian Weekly Gen AI Newsletter!

If you read the headlines over the past year, 2024 was supposed to be the year AI redefined American democracy. There were warnings of deepfake attacks, robotic phone banks, and micro-targeted ads drilling into the core of voter psychology. The expectation was that generative AI would bring in a new era of influence and manipulation, upending the game of electoral politics. But here we are, just days from Election Day, and AI hasn’t made a splash—more of a ripple. Why didn’t the revolution arrive?

There’s an eerie quietness in the election tech landscape, almost as if the real AI disruption is lying in wait. It raises the question: could it be that AI’s most profound impact on elections has yet to arrive, or has the technology been overhyped from the start?

The reality is that AI has definitely entered the campaign toolkit, just not in the ways that make for clicky headlines. We’ve seen generative AI used to transcribe canvassing notes, predict voter turnout, and create multilingual ads. These applications are useful, but in the same way having better spreadsheets is useful. The Harris campaign, for instance, limited its AI usage to data analysis and productivity, and the Trump campaign claims not to use AI at all. Both camps have, ironically, leaned into very human-centric campaigns. They’re focused on traditional rallies, debates, and public endorsements—the very strategies that AI was supposed to make obsolete. Perhaps we should take a lesson from history: while technology can make processes faster, it doesn’t necessarily transform the game’s core.

Deepfakes were the tech boogeyman of this election, the one application of AI that was supposed to turn American politics on its head. The fear was that AI-powered impersonations of candidates would mislead voters en masse, casting doubt on even the most trusted appearances. There was a brief surge of AI-generated misinformation—Trump shared an AI-crafted image of Taylor Swift endorsing him, which generated some laughs and eye-rolls but didn’t actually change anyone’s mind.

So why didn’t deepfakes have the anticipated impact? It’s a mix of ethics, cost, and practicality. Many voters and political strategists alike still perceive deepfake use as borderline dystopian. There’s also a threshold of trust that’s simply hard to breach with AI alone. The sophistication of today’s deepfakes may be good enough to fool at first glance, but most people can spot them with a little scrutiny. In a world where authenticity sells, deepfakes are still too jarring to be persuasive.

In the end, it’s the good old “cheap fakes” that still do the trick—basic, edited images, misleading captions, or crafty headlines. These don’t need AI at all, just the right context. And ironically, it seems that the most potent “fake” content is less about technological precision and more about simple manipulation.

AI was supposed to allow campaigns to understand voters at a near-intrusive level, targeting them with specific ads based on psychological nuances. But what we’re learning is that the promise of microtargeting might have been a bit of a mirage.

Take a 2023 MIT study that examined political microtargeting. Surprisingly, the study found that old-fashioned demographic targeting—age, income, region—was as effective as complex AI-driven targeting models. Political preferences are deeply emotional, built over years of experience, culture, and identity.

AI’s biggest enemy in the political sphere might just be…politicians. As eager as tech companies are to deploy generative models, the actual campaigns are reluctant to rely on AI. Much of this hesitation comes from public perception and regulatory risks. The federal government’s recent ruling banning AI-generated robocalls that impersonate politicians highlights the quick and severe backlash that can come with using AI in elections. No one wants to be the first to publicly embrace an untrustworthy or potentially exploitative technology.

Campaigns are also wary of public trust. Even the most AI-friendly voters aren’t necessarily comfortable with AI-generated persuasion. For example, while voters don’t seem to mind AI-generated voiceovers, testing has shown that such voiceovers are no more persuasive than a human voice. The trade-off for campaigns is high: using AI could appear manipulative, techy, or just too clinical for the average voter, especially when authenticity is such a prized political asset.

If we’re honest, the AI election may simply be a few cycles away. The technology has arrived, but the infrastructure—networks of influence, channels of trust, methods of subtle persuasion—is still in its early stages. By 2028, these may be fully built out. AI may become more effective, blending seamlessly into the media landscape rather than sticking out like a sore, synthetic thumb. Imagine a world where it’s harder to distinguish deepfakes, where AI-generated content is tailored with enough subtlety to bypass suspicion.

And that’s the real danger: not that AI hasn’t impacted the 2024 election but that it’s getting better quietly. Today, we can still tell when something’s a bit too polished, too perfect, or too stilted. But as these rough edges are smoothed out, AI’s true electoral influence might arrive without any fanfare at all.

For all the noise around AI’s political influence, the truth is that technology usually advances in steps, not leaps. Campaigns are fundamentally human exercises: they depend on stories, on symbols, on narratives. AI, for now, just doesn’t understand how to wield those tools the way people do.

Perhaps AI will come into its own by 2028, ushering in the feared “AI election.” Or maybe it won’t. But for now, AI remains the dog that didn’t bark—a reminder that technology alone rarely changes anything. It’s the people who use it, and the systems that support it, that truly shape the future.

Happy reading! 📚🤖🎵

In this week’s issue:

  • News Flash: Claude 3.5 Sonnet, ChatGPT Search, Perplexity’s Launch

  • AI Frontier: AI Accounting and Insurance Tools You Can Use Today

  • Fundraising: The biggest deals in AI

  • Nerd Out: Technical and Business Content for Everyone

⏱️ News Flash

The 2-Minute Scoop to Keep You in the Loop

What's the Buzz?
Anthropic has introduced Claude 3.5 Sonnet, a next-gen AI model with groundbreaking capabilities in coding and navigating computer interfaces, designed to help developers automate complex workflows.

Breaking It Down
Claude 3.5 Sonnet is an enhanced AI model with top-tier coding performance and a new feature that enables it to use computers like humans—controlling a mouse, navigating software, and entering data. This feature is currently in beta and has captured the interest of major players like Canva and Replit for automating intricate, multi-step tasks.

Why It Matters
Claude’s computer-use abilities could transform productivity, allowing users to automate tedious tasks, reduce errors, and focus on more critical work, bringing us a step closer to AI-assisted workforces in every industry. I recommend checking out Ethan’s article on this.
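For the developers in the audience, here is a minimal sketch of what driving the computer-use beta looks like through the Anthropic Python SDK. The tool type, beta flag, and model string follow the naming Anthropic announced at launch, but treat the exact identifiers as assumptions; check the official docs before relying on them.

```python
# Sketch: calling Claude 3.5 Sonnet's computer-use beta via the Anthropic
# Python SDK. Identifiers (tool type, beta flag, model name) are based on
# Anthropic's launch announcement and may change -- verify against the docs.
import os

# The tool definition describes the virtual display the model will control.
computer_tool = {
    "type": "computer_20241022",   # beta tool type at launch (assumption)
    "name": "computer",
    "display_width_px": 1280,
    "display_height_px": 800,
}

def build_request(prompt: str) -> dict:
    """Assemble the request payload without sending it."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [computer_tool],
        "messages": [{"role": "user", "content": prompt}],
    }

if os.environ.get("ANTHROPIC_API_KEY"):
    # Only hit the API when a key is configured.
    import anthropic

    client = anthropic.Anthropic()
    response = client.beta.messages.create(
        betas=["computer-use-2024-10-22"],
        **build_request("Open the spreadsheet and total column B."),
    )
    print(response.content)
```

In practice, the model responds with tool-use actions (mouse moves, clicks, keystrokes) that your own harness must execute and screenshot back to the model in a loop; the snippet above only covers the first request.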

What's the Buzz?
OpenAI has launched ChatGPT Search, allowing users to access real-time web search results directly within the ChatGPT interface, integrating up-to-date information like news, sports, and stocks.

Breaking It Down
With ChatGPT Search, Plus and Team users can ask questions naturally and receive answers sourced from current web content, with links to verify and explore further. This feature aims to streamline the search process, combining conversational AI with reliable web sources, and includes partnerships with prominent news providers like Reuters and The Associated Press.

Why It Matters
ChatGPT Search positions OpenAI as a direct competitor to search giants like Google and Bing.

What's the Buzz?
Perplexity, an AI search engine, has launched Internal Knowledge Search and Spaces, empowering users to search both web and internal files and collaborate within customizable AI-powered hubs.

Breaking It Down
Internal Knowledge Search allows Pro and Enterprise users to seamlessly search across public web content and internal files, enhancing productivity for teams in finance, sales, HR, and beyond. Additionally, Perplexity Spaces offers a collaborative environment where teams can securely organize research, connect files, and customize AI interactions.

Why It Matters
By unifying internal and external search capabilities and creating collaborative Spaces, Perplexity positions itself as a key tool for faster, more integrated research. This comes on the heels of Perplexity attempting to raise its next round of funding at a $9B valuation. I think the product is great, but my take is that products like Perplexity might become like Slack: a super cool product whose usage pales in comparison to larger competitors due to the power of distribution.

🚀 AI in Practice

Cutting-Edge AI Accounting and Insurance Tools You Can Use Today
  • Kick - Accounting software that does the work for you

  • Insuresmart - Convert your insurance chaos into clarity

🤑 Fundraising

The (AI) Intelligent Investor

🤖 Nerd Out

Technical and Business Readings

😜 Ctrl-Alt-Delete… Your Job

Tech Support Not Included!

How did you like this week's newsletter?


If you were forwarded this newsletter, you can access more of our content by subscribing here.

Best,
