In the Rush to AI, We Can’t Afford to Trust Big Tech



Gary Marcus delivered these remarks to the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16.

Thank you, Senators. Today’s meeting is historic. I am profoundly grateful to be here. I come as a scientist, as someone who has founded AI companies, and as someone who genuinely loves AI — but who is increasingly worried. There are benefits; we don’t yet know whether they will outweigh the risks.

Fundamentally, these new systems are going to be destabilizing. They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened.

Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do. Choices about datasets AI companies use will have enormous, unseen influence. Those who choose the data will make the rules, shaping society in subtle but powerful ways.

There are other risks, too, many stemming from the inherent unreliability of current systems. A law professor, for example, was falsely accused of sexual harassment by a chatbot, which cited a Washington Post article that didn't exist.

The more that happens, the more anybody can deny anything. As one prominent lawyer told me Friday, "Defendants are starting to claim that plaintiffs are 'making up' legitimate evidence. These sorts of allegations undermine the ability of juries to decide what or who to believe…and contribute to the undermining of democracy."

Poor medical advice could have serious consequences too. An open-source LLM appears to have recently played a role in a person's decision to take their own life. The LLM asked the human, "If you wanted to die, why didn't you do it earlier?", following up with "Were you thinking of me when you overdosed?", without ever referring the patient to the human help that was obviously needed. Another new system, rushed out and made available to millions of children, told a person posing as a thirteen-year-old how to lie to her parents about a trip with a 31-year-old man.

Further threats continue to emerge regularly. A month after GPT-4 was released, OpenAI released ChatGPT plugins, which quickly led others to develop something called AutoGPT, with direct access to the internet, the ability to write source code, and increased powers of automation. This may well have drastic and difficult-to-predict security consequences. What criminals are going to create here is counterfeit people; it is hard to envision the consequences of that.

We have built machines that are like bulls in a china shop—powerful, reckless, and difficult to control.

We all more or less agree on the values we would like our AI systems to honor. We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias, and above all else to be safe.

But current systems are not in line with these values. Current systems are not transparent, they do not adequately protect our privacy, and they continue to perpetuate bias. Even their makers don’t entirely understand how they work.

Most of all, we cannot remotely guarantee they are safe.

Hope here is not enough.

The big tech companies’ preferred plan boils down to “trust us.”

Why should we? The sums of money at stake are mind-boggling. And missions drift. OpenAI’s original mission statement proclaimed “Our goal is to advance [AI] in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Seven years later, they are largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up—forcing Alphabet to rush out products and deemphasize safety. Humanity has taken a back seat.

AI is moving incredibly fast, with lots of potential — but also lots of risks. We obviously need government involved. We need the tech companies involved, big and small.

But we also need independent scientists. Not just so that we scientists can have a voice, but so that we can participate, directly, in addressing the problems and evaluating solutions.

And not just after products are released, but before.

We need tight collaboration between independent scientists and governments—in order to hold the companies’ feet to the fire.

Allowing independent scientists access to these systems before they are widely released – as part of a clinical trial-like safety evaluation – is a vital first step.

Ultimately, we may need something like CERN, global, international, and neutral, but focused on AI safety, rather than high-energy physics.

We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability.

AI is among the most world-changing technologies ever, already changing things more rapidly than almost any technology in history. We acted too slowly with social media; many unfortunate decisions got locked in, with lasting consequence.

The choices we make now will have lasting effects, for decades, even centuries.

The very fact that we are here today in bipartisan fashion to discuss these matters gives me hope. Thank you, Mr. Chairman.
