Stanford’s AI Index report provides key takeaways and trends from the rapidly evolving field of AI. The annual report from the Institute for Human-Centered Artificial Intelligence draws on experts in academia and private industry to offer a broad survey of the field. The 386-page report includes new research on foundation models, the environmental impact of AI systems, K-12 AI education, public opinion trends in AI, and AI policy in more than 100 newly covered countries.
Among the report’s key takeaways: AI development has shifted from academia to industry, the energy footprint of AI training and use is growing, and AI incidents and controversies are on the rise. The report also finds that demand for AI-related skills and job postings is growing, though not as quickly as expected, and that investment has temporarily stalled.
Chapter 3 of the report, Technical AI Ethics, delves into bias and toxicity in AI, both of which are difficult to quantify. According to the report, unfiltered models are much easier to lead into problematic territory, and instruction tuning can help alleviate this problem. Making a model fairer or less biased on one metric, however, may have unanticipated consequences on others.
AI has proven particularly bad at fact-checking: it struggles to evaluate factuality and can itself become a powerful source of misinformation. Because trust in AI is critical to the industry, interest in improving AI fact-checking is growing.
Overall, the report provides a comprehensive and readable overview of the state of artificial intelligence and its challenges, making it a valuable resource for those interested in the field.