Current AI Language Models Fall Short of EU AI Legislation Requirements
Stanford Study Reveals Non-Compliance of AI Tools
Researchers at Stanford University recently found that present-day large language models (LLMs), including OpenAI’s GPT-4 and Google’s Bard, do not conform to the requirements set out in the European Union (EU) Artificial Intelligence (AI) Act.
The EU AI Act: A Groundbreaking Legislation for AI
The Act, a groundbreaking piece of legislation and the first broad attempt to regulate AI at a cross-national scale, was recently approved by the European Parliament. It sets rules for AI across the EU, a population of around 450 million, and stands as a likely model for AI legislation worldwide.
However, the Stanford study suggests that AI companies face a long road to compliance.
Assessment of Compliance with the AI Act
In their research, the Stanford team examined ten leading model providers, scoring each against 12 criteria drawn from the AI Act on a scale of 0 to 4 per criterion, for a maximum of 48 points.
The study found a stark disparity in compliance among providers. Some scored below 25% against the Act’s requirements, and only one provider, Hugging Face/BigScience, scored above 75%.
Evidently, even the highest-scoring providers have substantial room for improvement.
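To make these figures concrete, here is a minimal sketch of the rubric arithmetic in Python. The per-criterion scores and the helper compliance_percent are hypothetical illustrations, not the study’s actual data or code; only the 12-criterion structure and the 0–4 scale come from the article.

    # Sketch of the AI Act scoring rubric described above. The scores
    # below are hypothetical placeholders, not the study's actual data;
    # only the 12-criterion structure and the 0-4 scale come from the text.
    NUM_CRITERIA = 12
    MAX_PER_CRITERION = 4
    MAX_TOTAL = NUM_CRITERIA * MAX_PER_CRITERION  # 48 points

    def compliance_percent(scores):
        """Convert 12 per-criterion scores (each 0-4) into a percentage."""
        assert len(scores) == NUM_CRITERIA
        assert all(0 <= s <= MAX_PER_CRITERION for s in scores)
        return 100 * sum(scores) / MAX_TOTAL

    # A hypothetical provider scoring 37/48 clears the 75% mark (36/48);
    # one scoring 11/48 falls below the 25% mark (12/48).
    print(compliance_percent([4, 3, 4, 2, 3, 4, 3, 2, 4, 3, 2, 3]))  # ~77%

On this 48-point scale, the sub-25% scores reported above correspond to fewer than 12 points, while Hugging Face/BigScience’s 75%-plus result corresponds to more than 36 points.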
Key Non-Compliance Issues and the Need for Improvement
The research highlighted key areas of non-compliance, including a lack of transparency around the copyright status of training data, energy consumption and emissions, and strategies for managing potential risks.
In addition, the study observed a clear contrast between open and closed model releases: open releases disclosed resources more comprehensively, but made deployments harder to monitor and control.
The Stanford team posited that all providers, irrespective of their release strategy, can still improve substantially.
Diminishing Transparency and the Complex Relationship with Regulators
Transparency has declined markedly with recent major model releases. OpenAI, for instance, disclosed no data or compute details for GPT-4, citing competitive pressures and safety considerations.
These observations fit a broader trend. OpenAI has recently lobbied governments over their approach to AI, even suggesting it might leave Europe if regulation became overly restrictive, a statement it later withdrew. Such actions highlight the often contentious relationship between AI providers and regulators.
Recommendations for Regulatory Improvement
The Stanford team proposed several measures to strengthen AI regulation. They suggested that EU legislators ensure the Act holds larger model providers accountable for transparency, and they underscored the need for technical expertise and talent to implement the Act effectively, given the complexity of the AI landscape.
The researchers believe the primary hurdle is the speed at which model providers can adapt their business models to the regulatory requirements. They noted that even without substantial regulatory pressure, many providers could reach high scores (in the 30s or 40s out of 48) through meaningful but feasible changes.
The Future of AI Regulation
The Stanford study offers a glimpse of the future of AI regulation, asserting that the AI Act, if enacted and enforced, would have a significant positive impact on the AI landscape by fostering transparency and accountability.
AI is reshaping society, bringing both unparalleled capabilities and inherent risks. As regulation of this transformative technology takes shape, transparency is coming to the fore not as an optional component but as a fundamental requirement for responsible AI deployment.