AI and AI-based tools are here, they are everywhere, and they are here to stay. Since tools like ChatGPT entered public awareness, there have been many calls for regulation, especially given how quickly the technology has spread into everyday life.
Nations are scrambling to regulate AI, but face challenges due to its rapidly evolving nature.
European Parliament lawmakers will vote Wednesday to kickstart talks to approve the world’s first sweeping rules on artificial intelligence systems like ChatGPT, aiming to curb potential harms while still nurturing innovation. Their goal is to reach an agreement on final legislation by the end of the year, although the law would not come into force until 2026 at the earliest.
Many experts and enthusiasts are praising this technology for how it will transform society, including work, healthcare and creative pursuits, while others are terrified by its potential to undermine democracy.
According to Tech Xplore, the law is meant to regulate AI by the level of risk: the higher the risk to individuals’ rights or health, for example, the greater the systems’ obligations. The EU’s proposed high-risk list includes AI in critical infrastructure, education, human resources, public order and migration management.
The parliament has added extra criteria that must be met before a system receives the high-risk classification, including the potential to harm people’s health, safety, rights or the environment.
There are also special requirements for generative AI systems such as ChatGPT and DALL-E, which can produce text, images, code, audio and other media. The intention is to require a disclosure informing users that a machine, not a human, produced the content.
As we integrate AI-based technology and tools ever deeper into our daily lives, there is a clear need for regulation before this technology gets ahead of us and causes real damage through misinformation or privacy breaches.