According to a major survey of AI researchers, there is a 50% chance that AI will outperform humans in all tasks by 2047.

AI is advancing at an extraordinary rate, transforming the conversation not only in the enterprise but in everyday life in just over a year. The pace of progress has both surprised and worried professionals working in the field, as a recent survey indicates.

The 2023 Expert Survey on Progress in AI, the most extensive study of its kind, gathered opinions from 2,778 researchers who had published in leading AI venues, revealing insights into the rapid evolution of the field.

Respondents estimated that, assuming scientific progress continues without disruption, there is a 10% chance that unaided machines outperform humans in every conceivable task within three years (by 2027), rising to 50% by 2047.

Additionally, respondents put the probability that all human occupations become fully automatable at 10% by 2037. More alarmingly, they gave at least a 10% chance that advanced AI leads to “severe disempowerment” or even the extinction of the human race. These sentiments align with the anxieties of those in the industry who hold “existential risk” or “x-risk” views about AI, views closely associated with the effective altruism (EA) movement. Critics argue that such beliefs are unrealistic and divert attention from the immediate, tangible harms of AI, such as job loss and inequality.

“As much as the optimistic outlooks underscore the potential for AI to revolutionize numerous facets of work and daily life, the more pessimistic forecasts, especially those related to risks of extinction, act as a sobering reminder of the significant stakes associated with the development and implementation of AI,” noted the researchers.

AI is on the verge of effectively performing a wide variety of tasks and occupations

This survey is the third in a series, following editions in 2016 and 2022, and many opinions and projections have shifted significantly between them.

The survey, conducted in the fall of 2023, drew four times as many participants as the previous edition. The intervening period was marked by substantial progress, including the introduction of models such as ChatGPT, Anthropic’s Claude 2, and Google’s Bard and Gemini, the circulation of two open letters on AI safety, and governmental initiatives in the U.S., UK, and EU.

Respondents were initially asked to estimate the timeframe within which 39 specific tasks would be “feasible” for AI. Feasible, in this context, was defined as something that “one of the best-resourced labs could implement in less than a year.”

These tasks included:

  1. Translating text in a newly discovered language
  2. Recognizing objects seen only once
  3. Writing simple Python code from a specification and examples
  4. Writing fiction good enough to make The New York Times best-seller list
  5. Autonomously building a payment processing site from scratch
  6. Fine-tuning a large language model (see the sketch after this list)
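
For context on the last item, here is a minimal, hypothetical sketch of what fine-tuning a large language model can look like in practice, using the Hugging Face transformers Trainer API. The model (gpt2), dataset (wikitext-2), and every hyperparameter below are illustrative assumptions, not details drawn from the survey.

```python
# Hypothetical minimal fine-tuning loop using Hugging Face transformers.
# Model, dataset, and hyperparameters are illustrative assumptions only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # assumption: any small causal LM works for a demo
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A small slice of a public corpus, tokenized for next-token prediction.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.filter(lambda row: row["text"].strip())  # drop blank rows

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    # mlm=False -> standard causal (next-token) language-modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A frontier-scale version of this task involves far larger models, curated data, and serious infrastructure, which is why the survey framed feasibility in terms of what one of the best-resourced labs could implement.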

All but four of the 39 tasks were projected to have at least a 50% chance of being feasible within the next 10 years. In just one year between surveys, aggregate predictions moved to earlier timelines for 21 of the 32 tasks asked about in both editions.

Abilities expected to take longer than 10 years included: 

  • After spending time in a virtual world, outputting the differential equations governing that world in symbolic form (12 years)
  • Physically installing the electrical wiring in a new home (17 years)
  • Proving mathematical theorems that are publishable in top mathematics journals today (22 years)
  • Solving long-standing unsolved problems in mathematics such as a Millennium Prize problem (27 years)
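
For intuition on what an “aggregate prediction” means in these results: each respondent gives probabilities that a task is feasible within several time horizons, and those individual forecasts are combined into a single curve. The sketch below illustrates one simple way to do this, fitting a gamma CDF to each respondent’s answers and averaging the curves before reading off the 50% year; the horizons, probabilities, and the gamma-fit choice are all assumptions for illustration, not the paper’s exact pipeline.

```python
# Illustrative sketch: combine per-respondent timeline forecasts into an
# aggregate "50% feasibility" horizon. All numbers are made up; the gamma
# fit is one plausible choice, not necessarily the survey's exact method.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

# Hypothetical answers: P(task feasible within 10, 20, 40 years).
horizons = np.array([10.0, 20.0, 40.0])
respondents = [
    [0.30, 0.60, 0.90],
    [0.10, 0.30, 0.70],
    [0.50, 0.80, 0.95],
]

def gamma_cdf(t, shape, scale):
    """CDF of a gamma distribution over years-until-feasible."""
    return gamma.cdf(t, shape, scale=scale)

# Fit a gamma CDF to each respondent's (horizon, probability) points.
fits = [curve_fit(gamma_cdf, horizons, probs, p0=[2.0, 10.0],
                  bounds=(1e-6, np.inf))[0]
        for probs in respondents]

# Aggregate by averaging the individual CDFs, then find the 50% crossing.
years = np.linspace(0.5, 100.0, 2000)
mean_cdf = np.mean([gamma_cdf(years, *f) for f in fits], axis=0)
year_50 = years[np.searchsorted(mean_cdf, 0.5)]
print(f"Aggregate 50% feasibility horizon: ~{year_50:.0f} years from now")
```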

When AI achieves autonomy or surpasses human capabilities

Researchers also inquired about the timeline for achieving human-level performance in “High-Level Machine Intelligence” (HLMI) for specific tasks and “Full Automation of Labor” (FAOL) for occupations.

Per the survey’s definitions, HLMI is attained when unaided machines can accomplish every task better and more cheaply than human workers. FAOL, in turn, is reached when an occupation becomes fully automatable, that is, when unaided machines can perform it better and more cost-effectively than human workers.

The respondents predicted a 50% likelihood of achieving HLMI by 2047, a full 13 years earlier than the 2060 projection in the 2022 survey. They placed a 50% likelihood of FAOL at the year 2116, 48 years earlier than the previous survey’s estimate.

“While the range of views on how long it will take for milestones to be feasible can be broad, this year’s survey saw a general shift towards earlier expectations,” researchers noted.

AI leading to diverse outcomes, both positive and negative

There are certainly numerous apprehensions about the risks of AI systems, the researchers emphasize, often tied to factors such as alignment, trustworthiness, predictability, self-directedness, capability, and the potential for unauthorized modification.

To assess primary concerns in AI development, respondents were asked how likely state-of-the-art AI systems are to exhibit specific characteristics by the year 2043.

A significant majority of participants believed that, within the next two decades, models would be able to:

  • Discover unexpected methods to achieve objectives (82%)
  • Converse like a human expert across various topics (81%)
  • Frequently exhibit behaviors that are surprising to humans (69%)

Survey participants also indicated that, as early as 2028, it may become difficult for humans to discern the true reasons behind an AI system’s outputs.

Furthermore, respondents expressed substantial or extreme concern about the potential misuse of AI, including the spread of misinformation through deepfakes, the manipulation of public opinion at scale, the creation of powerful tools by dangerous groups (e.g., engineered viruses), and the exploitation of AI by authoritarian rulers to control their populations. They also highlighted the potential for AI systems to exacerbate economic inequality.

Given these apprehensions, there was a strong consensus that prioritizing AI safety research is crucial, particularly as AI tools continue to advance.

Participants’ views on AI’s overall impact were mixed. A majority (68%) believed that positive outcomes from AI are more likely than negative ones, yet nearly 58% agreed that extremely adverse consequences are a “nontrivial possibility.”

Responses varied based on question framing, with approximately half of all participants indicating a greater than 10% likelihood of either human extinction or severe disempowerment.

On the more pessimistic side, one in 10 participants assigned at least a 25% chance to outcomes as bad as human extinction, according to the researchers.

AI experts aren’t fortune tellers (yet)

AI experts are not clairvoyants, and the researchers were keen to highlight this fact. Despite their familiarity with the technology and the historical dynamics of progress, predicting future developments remains a challenging task, even for seasoned professionals.

As stated in the paper, participants “do not, to our knowledge, have any unusual skill at forecasting in general.” Given the potential for diverse responses and the influence of question framing, achieving a true consensus can be elusive. The researchers suggest that forecasts should be integrated into a broader discussion, incorporating factors such as trends in computer hardware, advancements in AI capabilities, and economic analyses.

Despite these inherent limitations, the report contends that AI researchers are well positioned to improve the accuracy of collective predictions about the future. Unreliable as they are, such educated guesses form the basis of our current understanding.
