Stanford University releases "Transparency Index" for AI foundation models; Llama 2 tops the list but "fails" at 54%
According to an IT House report on October 20, Stanford University recently released a "transparency index" for AI foundation models. The highest-scoring model is Meta's Llama 2, yet its transparency score is only 54%, leading the researchers to conclude that almost all AI models on the market "lack transparency".
The research was led by Rishi Bommasani, head of Stanford HAI's Center for Research on Foundation Models (CRFM), and surveyed the 10 most popular foundation models overseas. Bommasani believes that "lack of transparency" has long been a problem facing the AI industry. As for the specific criteria behind the index, IT House found that the evaluation covers 100 items in total, mainly concerning the copyright status of model training datasets, the computing resources used to train the model, the credibility of model-generated content, the model's own capabilities, the risk of the model being induced to generate harmful content, and the privacy of users of the model.
The final results show Meta's Llama 2 at the top with a 54% transparency score, while OpenAI's GPT-4 scored only 48% and Google's PaLM 2 ranked fifth at 40%.