AI Is an Answer, But Not the Only Answer — Here’s Why It Can’t Replace Humans

As we emerge from Spotify Wrapped season, many will agree that this past year’s recaps looked a bit … different, disappointing some who proclaimed this iteration a “flop” due to over-reliance on generative AI, barely a year after Spotify’s conspicuous layoff of 1,500 people.

This sort of narrative is not unique to the music industry. It’s an ongoing conversation across sectors: How do companies strike a balance between AI’s benefits and its human cost? How should AI be regulated? And who is responsible for policing AI while we work out the answers to these questions?

A balancing act

The potential AI offers is well-documented: the intelligent automation of clerical tasks and advanced decision-making, increased capacity to process and infer from data, and the ability to mimic human creativity.

The real-world implications here are significant. Publications have questioned, for example, whether we will still need software developers in a world where AI can write code. In the legal industry, where even junior associates may bill nearly $1,000 per hour for the sort of legal research and drafting that AI is already becoming adept at replicating, others have asked whether the billable hour will remain viable (or ethical).

Qualms about AI, too, are well-documented: ethical and moral concerns centered on bias, privacy and job loss; environmental concerns; and existential concerns about the displacement of human labor by nonhuman models trained on the output of those very same humans they seek to mimic (or replace).

The regulatory dance

The collective uncertainty clouding today’s largely pre-regulated AI landscape is not altogether dissimilar from past technological disruption. Those familiar with the music industry, for example, will recall the uneasy transition to digital streaming, which seemed to cannibalize revenues derived from paid downloads. Downloads had themselves risen to prominence as something of a defensive maneuver, an attempt to salvage revenue after Napster had thoroughly eroded the CD-driven sales boom of the 1990s. Even the CD was only the last of many dominant 20th-century music technologies to rise and fall. In each instance, the industry adapted and survived.

In some cases, the industry’s internal response happened in a vacuum; in others, legislative, regulatory or judicial actions shaped that response — from recent legislation tailoring licensing practices to the realities of streaming, to 1990s and 2000s case law clarifying the rules surrounding sampling, all the way back to WWII-era consent decrees imposed upon licensing societies formed by rightsholders in the early days of radio.

In each of those cases, though, the response from the applicable branch of government came several years after the industrial rise of the relevant technology. The same is likely to be true of AI. Scores of AI bills are currently stalled before Congress. Dozens of AI-focused lawsuits, too, continue to inch through the judiciary. At the regulatory level, there is significant uncertainty as to how the looming shift in executive control will affect AI policy, even as the U.S. Copyright Office’s current efforts to propose AI policy recommendations have fallen well behind their initial deadlines.

This is going to take a while to sort out. In the interim, industries will continue to experiment with new ways to use AI. And bad actors will find new ways to exploit this underregulated frontier.

Who’s minding the store?

Meanwhile, absent an effective regulatory schema, industries are left to self-police those bad actors. But whose job, exactly, is it to do that?

In the music industry, several practical realities are particularly attractive to fraudsters: a sprawling streaming ecosystem where millions of tracks are uploaded monthly; billions of hours of music streamed each year for fractions of a penny apiece; and a convoluted licensing regime in which the streaming services best positioned to police fraud often pay a blanket percentage of revenue (rather than a per-stream rate) to license music. Those services are accordingly less incentivized to police fraud than the individual creator, whose share of the overall streaming pie necessarily narrows as fraudulent slices are carved away, but who has no realistic means to counter that fraud.
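To make that incentive mismatch concrete, here is a minimal Python sketch of pro-rata royalty pooling. All names and figures are invented, and real services use far more complex accounting, but the arithmetic of dilution is the same:

# Hypothetical pro-rata pool: a fixed pot of licensing revenue is split
# among artists in proportion to their stream counts.
def prorata_payouts(revenue_pool, streams):
    total = sum(streams.values())
    return {artist: revenue_pool * count / total
            for artist, count in streams.items()}

pool = 1_000_000.0  # the service's blanket licensing payment (invented figure)

honest = {"artist_a": 600_000, "artist_b": 400_000}
with_bots = {**honest, "bot_farm": 250_000}  # fraudulent bot streams join the pool

print(prorata_payouts(pool, honest))
# {'artist_a': 600000.0, 'artist_b': 400000.0}
print(prorata_payouts(pool, with_bots))
# {'artist_a': 480000.0, 'artist_b': 320000.0, 'bot_farm': 200000.0}

The service pays out the same $1,000,000 either way; only the legitimate artists’ slices shrink. The incentive to detect fraud thus sits with the creators who lack the means, while the means sit with the services that lack the incentive.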

In one high-profile example, an individual was indicted for using AI to create music distributed under fake “artist” monikers, then using AI-powered bots to inflate stream counts and drain roughly $10 million from the royalty pool available to legitimate creators. That someone may have scammed the music industry for monetary gain is not surprising; that’s a tale as old as time. Two things are noteworthy, however: the alleged fraudster turned to AI only after traditional methods of fraud had floundered, and it took nearly six years for his scheme to be flagged by an industry licensing entity (it may have altogether eluded many of the streaming services themselves).

Federal prosecution notwithstanding, even this example is a drop in a much larger bucket of AI-powered fraud that either goes entirely undetected or is detected far later than it would be if the incentive and the ability to police fraud were aligned, or if an effective regulatory framework existed.

The human touch

While one can understand why businesses across sectors want to embrace AI in their zeal for efficiency, these recent headlines caution against an absolutist approach. AI is an answer, not the answer. Though it can be tempting to lose patience with governmental entities lagging behind industrial experimentation with AI, regulators and the regulated alike should proceed with caution, balancing innovation with integrity and efficiency with human-centricity, not simply because it is the right thing to do, but because we have plenty of examples of why abandoning that approach is self-defeating.

Both art and fraud derive from human ingenuity, and the effects of both are experienced by real human beings. Even if both can be enhanced or disrupted by AI, both are fundamentally human endeavors. As AI’s infancy transitions into an uncertain adolescence, industries and regulators alike should act accordingly.


