Responsible AI

Key Takeaway:

From 2024 onwards, AI integration will propel a significant shift towards skills-based hiring and flexible work arrangements. This evolution redefines traditional life stages and employment norms, emphasizing personal well-being and efficiency. As AI reshapes roles, it presents both challenges and opportunities, requiring a balanced approach to harness its potential for productivity and innovation across industries.

Trend Type: Social & Business

Sub-trends: Life Deconstruction, Working for Balance, From Creators to Curators, Augmented Connected Workforce, AI Ways of Work

In 2023, the concept of responsible AI emerged as a crucial trend in the technology industry, with companies and organizations increasingly recognizing the importance of developing and deploying artificial intelligence (AI) systems in an ethical and trustworthy manner. Although statistical data on the matter is still scarce, a recent survey conducted by Forrester predicts that by the end of 2024, 25% of the largest companies in the Asia-Pacific region will vocalize their commitment to customer trust, although only 5% will effectively codify this commitment through measurement and organizational key performance indicators (KPIs).
The rise of misinformation, deepfakes, and fake news on social media has led to growing public distrust. In fact, 81% of online adults in the United States, the United Kingdom, Spain, and Italy agree that there is a significant amount of fake news and misinformation on social media platforms, according to the same Forrester report; even in metropolitan China, 63% of online adults share this sentiment. As a result, traditional news organizations such as The New York Times and the BBC, along with independent journalists, are expected to experience a resurgence as trusted sources of information. And as AI becomes more deeply ingrained in various sectors of society, acting as a personal assistant, medical advisor, or financial analyst, there is growing concern about its misuse. According to a Ford report, over 80% of people believe that companies should openly disclose their use of AI, which could help build much-needed trust.
Source: Deloitte Tech Trends 2024
AI Regulations
While most individuals anticipate that AI will become an integral part of their lives in the near future, there is a heightened sense of uncertainty and confusion compared to a few years ago, marked by the emergence of AI skepticism. The rapid pace of change has led to concerns about AI's potential impact on others, even more so than on oneself. Despite these concerns, there is a general consensus that AI is here to stay and will play an increasingly significant role in daily life.

To address these concerns, a movement for the responsible and ethical use of AI is emerging, focused on developing clear AI governance frameworks that respect human rights and values. In Europe, the Digital Services Act regulates the behavior of online platforms as they affect human rights, public health and safety, and democratic discourse, with further regulation on generative AI expected to be finalized by the end of 2024. Large platforms are required to conduct annual risk assessments and clearly communicate their policies to users, and independent auditing by nongovernmental groups is emerging as an additional form of oversight. Growing regulatory and privacy demands are also placing increased emphasis on data security: identifying security gaps, taking precautionary measures against potential ransomware threats, and conducting internal threat assessments.
AI Policies
Although organizations must commit to transparency and trustworthiness in the development, use, and outcomes of AI systems, 76% of organizations currently lack comprehensive AI policies, according to a Cisco report. According to Gartner, by 2026 the adoption of AI Trust, Risk, and Security Management (AI TRiSM) controls can help organizations enhance the accuracy of their decision-making by eliminating 80% of faulty and illegitimate information. These controls also enable enterprises to move more AI projects into production, achieve greater business value, and experience improved model precision and consistency.

Use Cases

AI Wary: Upstart is a fintech firm that employs AI for credit scoring. They use machine learning algorithms to evaluate borrower risk based on factors beyond traditional credit history, including education, employment, and income. Upstart’s AI-powered underwriting model considers a wide range of data points to assess creditworthiness and determine loan eligibility.
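To make the idea concrete, here is a minimal, purely illustrative sketch of this style of AI-powered underwriting: a logistic model scoring an applicant on non-traditional features (education, employment, income). The feature names, weights, and threshold are hypothetical assumptions for illustration, not Upstart's actual model.

```python
# Hypothetical sketch of ML-style credit scoring on non-traditional
# features. Weights are hand-set stand-ins for what a trained model
# might learn; they are NOT Upstart's real parameters.
import math

WEIGHTS = {"years_education": 0.15, "years_employed": 0.25, "income_k": 0.01}
BIAS = -2.0  # illustrative intercept

def approval_score(applicant: dict) -> float:
    """Logistic (sigmoid) score in (0, 1); higher means lower estimated risk."""
    z = BIAS + sum(w * applicant.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def eligible(applicant: dict, threshold: float = 0.5) -> bool:
    """Loan eligibility decision: score must clear the (assumed) threshold."""
    return approval_score(applicant) >= threshold
```

In practice the weights would be learned from historical repayment data, and a responsible deployment would pair such a model with fairness audits and explanations of adverse decisions, in line with the governance themes above.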

Growing regulatory & Privacy Demand: The Stanford Internet Observatory, a cross-disciplinary lab for the study of abuse in information technologies, works across trust and safety, information integrity (including misinformation), emerging tech (including AI), and policy.


Sub-Trend Sources
Customer Trust: Forrester Predictions
AI Wary: Ford Trends, Economist Ten Business Trends
Responsible AI: Cisco Trends, Kantar's Media Trends, Future Today Institute
Marketers become privacy champs: Forrester Predictions
GenAI Regulation: Deloitte TMT Predictions, Forrester Tech Predictions Europe
Growing regulatory & privacy demands: MC Blogs IT Trends, MIT Strategy Summit Report, Future Today Institute
AI TRiSM: Gartner Strategic Trends, PwC AI Trends, Future Today Institute
GenAI Governance: BCG The Next Wave, Deloitte Tech Trends
AI Trust: Ford Trends (social), Future Today Institute, PwC AI Trends, Deloitte Tech Trends, Finance & Development