Countries in the European Union (EU) are one step closer to introducing regulations for artificial intelligence.
The EU Parliament, which is made up of 705 members from 27 member states, signed off on a suite of AI reforms this week. The reforms were supported by almost 500 members, with the legislation led by members from Italy and Romania.
The draft laws will ban some high-risk AI and place safeguards on systems like ChatGPT.
Background on the EU AI laws
The rapid growth of AI programs, largely led by ChatGPT, has stoked concerns over the misuse of the technology in the EU.
AI programs have remained largely unregulated, which has added to these concerns.
The bans
The EU is considering bans on AI technologies deemed intrusive, discriminatory, or a significant safety risk.
This includes bans on biometric surveillance in public spaces and on predictive AI policing systems, which can be used to profile people based on the location of past criminal behaviour.
The EU has also flagged concern for high-risk AI systems that could influence voter decisions during elections.
ChatGPT crackdown
The proposal also includes new transparency requirements and disclosures for systems like ChatGPT.
This would mean any AI-generated content would carry a transparency disclosure, limiting the danger of misleading material from AI systems, such as deepfake images.
One member of Parliament said the draft laws would harness AI’s “positive potential” but also “fight to protect our position and counter dangers”.
What’s next for the EU AI laws?
The laws will now be negotiated with the Council of the European Union, which is made up of senior government representatives from EU member states.
The Council and Parliament must agree on the draft laws before they can become law. Talks between the two bodies have already begun.
What about Australia?
There have been very few regulatory responses to the rise of AI in Australia.
However, the Federal Government signalled plans this year to regulate AI use, including through potential bans on programs that have raised significant privacy concerns.
AI programs creating deepfakes or misinformation have also been identified as possible areas of regulation.