Are Big AI Models Compliant With EU Rules? The Answer In This Study Might Surprise You


A new research study is said to be the first to translate the EU’s AI regulations into concrete technical requirements for models. However, you may be surprised to learn that, according to its results, no big AI model is fully compliant with the EU’s rules.

The authors studied a total of 12 different LLMs in this research. Of those, just one came close to compliance but still did not meet all the requirements outlined in EU law.

The work comes from experts at ETH Zurich and an associated spin-off, offering more insight into how AI models stack up against EU law. Interestingly, none of the model makers appears to be fully mindful of the law, raising eyebrows about the safe use of AI in the region.

For the research, the authors designed a compliance checker called COMPL-AI, which comprises several benchmarks to evaluate how well AI models adhere to the EU rules. The checker is built around ethical principles highlighted in the EU AI Act, including fairness, human agency, non-discrimination, data protection, diversity, and transparency.
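To give a rough idea of what such a checker involves, here is a minimal, purely illustrative Python sketch of how individual benchmark scores could be grouped under the Act’s ethical principles and averaged into a per-principle score. The principle names come from the article; the function names, benchmark names, scoring scheme, and numbers are hypothetical and are not the actual COMPL-AI code or API.

```python
# Hypothetical sketch: aggregate benchmark scores (0.0-1.0) under the
# EU AI Act principles named in the article. Not the real COMPL-AI tool.
from statistics import mean

# Map each ethical principle to the benchmarks that probe it
# (benchmark names are made up for illustration).
PRINCIPLE_BENCHMARKS = {
    "fairness": ["income_fairness", "decision_consistency"],
    "non_discrimination": ["bias_in_completions"],
    "data_protection": ["training_data_memorization"],
    "diversity": ["representation_coverage"],
    "transparency": ["self_disclosure", "capability_disclosure"],
}

def principle_scores(benchmark_results: dict) -> dict:
    """Average the benchmark scores belonging to each principle."""
    return {
        principle: mean(benchmark_results[b] for b in benchmarks)
        for principle, benchmarks in PRINCIPLE_BENCHMARKS.items()
        if all(b in benchmark_results for b in benchmarks)
    }

if __name__ == "__main__":
    # Made-up numbers for a single model run.
    results = {
        "income_fairness": 0.62,
        "decision_consistency": 0.71,
        "bias_in_completions": 0.58,
        "training_data_memorization": 0.93,
        "representation_coverage": 0.49,
        "self_disclosure": 0.77,
        "capability_disclosure": 0.81,
    }
    for principle, score in principle_scores(results).items():
        print(f"{principle}: {score:.2f}")
```

In this toy scheme a model could score well on data protection yet poorly on diversity, which mirrors the kind of uneven results the study reports.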

Several LLMs were examined in this research, including models from heavyweights such as OpenAI (ChatGPT), Mistral, Meta (Llama), and Anthropic (Claude). While data privacy was the area where compliance was most consistently met, the same cannot be said for diversity, fairness, and non-discrimination.

OpenAI’s GPT-4 Turbo came closest to overall compliance across all domains, with Meta’s Llama 3-70B and Anthropic’s Claude 3 trailing behind. The authors concluded that all the models had shortcomings, particularly around diversity and fairness, and that they also fell short when it came to explaining key concepts. The results suggest that ethical and social requirements were not treated as a priority by the makers of these models.

The main design priority appears to have been general capability and raw performance, which is welcome but not enough on its own. According to the study’s authors, taking the EU AI Act into account is essential for responsible and reliable AI use, and seeing it neglected in these models is alarming.

The results were also shared with the EU AI Office. In addition, the authors have released COMPL-AI as an open tool on GitHub so that others can run similar evaluations and contribute their findings.

The European Commission was pleased with the results and the effort behind them, welcoming the study as one of the first to translate the EU AI Act into technical requirements that can help model providers implement the law.

The law was adopted in March of this year and came into force in August, but the technical standards for models deemed high risk will not be enforced for several years. That gives developers time to make the changes needed to meet the law.

Image: DIW-Aigen

Read next: Reddit’s CEO Takes Stand Against Data Misuse by Tech Giants in AI Arms Race
