Getting ahead of AI: Why we should act now to manage an emerging tech threat | Op-Ed By state Reps. Bob Merski and Chris Pielli

By state Reps. Bob Merski and Chris Pielli

It’s hard to turn on the news without seeing stories about the potential dangers of artificial intelligence. Rapid advances in AI technology have prompted warnings about both real and speculative threats. In fact, earlier this year, a group of AI researchers, developers and experts penned an open letter warning that AI technologies pose “profound risks to society and humanity.”

While the existential threat of an uprising by AI-based robots may seem far-fetched for now, there are other very real dangers – including the misuse of this technology to mislead and defraud Pennsylvanians – that we need to address today.

What is AI? 

Artificial intelligence is the ability of computer-based machines to simulate tasks normally associated with the human brain, including language recognition, visual perception, learning and decision-making.

Although the concept is not new, the technology has grown much more powerful in recent years because of advances like the neural network – a mathematical system patterned after the neurons in the human brain. Large amounts of data are fed into the system’s processing nodes, along with initial parameters and training, and the system learns by finding statistical patterns in that data.

Over the past several years, large language models – neural networks that are trained using huge amounts of text – have led to systems that are capable of writing stories and carrying on natural-sounding conversations.

AI already powers Alexa, Siri, chatbots and thousands of other systems, from text-to-speech tools and facial recognition programs to vehicle accident-avoidance technology. And it’s anticipated that future AI technology could lead to substantial health advances, like earlier detection and diagnosis of cancer.

Deepfakes and other threats 

Despite all its promise, AI in the wrong hands – like any technology – has the potential to cause great harm, and one major area of concern is the spread of disinformation.

Pennsylvanians have the right to know whether the content they are consuming was created by a human or by AI. Knowing that, however, won’t be possible as AI becomes more sophisticated and its interactions more natural sounding. That’s especially problematic as more and more people rely on AI-generated content – with no guarantees of accuracy – for medical advice or other important life decisions.

Beyond the danger of misinformation are more sinister uses, including wide-scale fraud and the intentional spread of disinformation through fabricated content and images.

Especially troubling is the proliferation of “deepfakes” – images, videos or audio recordings that have been manipulated to replace a person’s likeness with that of another. As the technology advances – making it nearly impossible to tell an original from a counterfeit – so does the potential for malicious use.

Consider the chaos created by a call digitally manipulated to sound exactly like it is coming from a company’s CFO requesting a transfer of funds, or by the digitally manipulated image of a political leader or trusted expert offering fake information about a public safety issue. Consider the implications if we lose any sense of shared reality about the world.

Taking control of the situation 

With so much at stake, we need to put safeguards in place now to protect Pennsylvanians and preserve the integrity of information. As a first step, we are introducing legislation to help Pennsylvania regulate and control this technology so it can be used responsibly.

Our bills would:

  • Require a disclosure on all AI-generated content to give people reading or viewing it the information they need to make informed decisions and not be misled.
  • Impose criminal penalties for disseminating AI- or computer-generated impersonations of someone without their consent – and make it a third-degree felony to engage in this conduct with the intent to defraud or injure.
  • Create a task force to study the need for a commonwealth agency to monitor and license AI products used in Pennsylvania to protect residents from fraud.
  • Create policies and guidelines for PA agencies developing or using AI systems to ensure they are used safely in ways that protect and benefit residents.

We also have introduced a resolution that would encourage the commonwealth to establish an advisory committee to study AI and all its potential impacts on Pennsylvanians, from the spread of false information to the potential threat automation could pose to labor and blue-collar jobs. As legislators, we need to be aware of how AI is used so we can ensure that it is used ethically and responsibly.

While not all the risks of AI are yet known, one thing is abundantly clear: as the technology continues to advance rapidly, so do the possible dangers. We need to be proactive and put safeguards in place now, so Pennsylvanians are protected in the future.

Merski represents Pennsylvania’s 2nd Legislative District in Erie County and Pielli represents the 156th Legislative District in Chester County.

Information provided to TVL by:
Liane Leshne
House Democratic Communications Office