arXiv:2411.06498

Barriers to Complexity-Theoretic Proofs that Achieving AGI Using Machine Learning is Intractable

Published on Nov 10, 2024
Abstract

A recent paper (van Rooij et al. 2024) claims to have proved that achieving human-like intelligence using learning from data is intractable in a complexity-theoretic sense. We identify that the proof relies on an unjustified assumption about the distribution of (input, output) pairs presented to the system. We briefly discuss that assumption in the context of two fundamental barriers to repairing the proof: the need to precisely define "human-like," and the need to account for the fact that a particular machine learning system will have particular inductive biases that are key to the analysis.

AI-generated summary

A critique of a complexity-theoretic proof claiming that learning from data cannot achieve human-like intelligence. The critique identifies an unjustified assumption about the distribution of (input, output) pairs and highlights two barriers to repairing the proof: precisely defining human-like capabilities and accounting for a learning system's inductive biases.
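The abstract's central point, that tractability claims hinge on the assumed distribution of (input, output) pairs relative to the learner's inductive biases, can be illustrated with a toy experiment. The sketch below is not from the paper; all names, dimensions, and the linear-model setup are illustrative assumptions. The same learner generalizes well when the data distribution matches its inductive bias (a linear labeling rule) and performs at chance when labels are drawn independently of the inputs, a stand-in for worst-case pairings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Ridge-regularized least-squares fit; predictions are sign(X @ w).
    # This learner's inductive bias: it can only represent linear rules.
    w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
    return w

def accuracy(w, X, y):
    return float(np.mean(np.sign(X @ w) == y))

n, d = 200, 20  # illustrative sample size and input dimension
X_train = rng.normal(size=(n, d))
X_test = rng.normal(size=(n, d))

# Benign distribution: labels generated by a linear rule, matching the
# learner's inductive bias.
w_true = rng.normal(size=d)
y_train_benign = np.sign(X_train @ w_true)
y_test_benign = np.sign(X_test @ w_true)

# Worst-case-style distribution: labels independent of the inputs, a toy
# proxy for adversarially chosen (input, output) pairs.
y_train_adv = rng.choice([-1.0, 1.0], size=n)
y_test_adv = rng.choice([-1.0, 1.0], size=n)

w_benign = fit_linear(X_train, y_train_benign)
w_adv = fit_linear(X_train, y_train_adv)

print("benign distribution, test accuracy:", accuracy(w_benign, X_test, y_test_benign))
print("label-independent distribution, test accuracy:", accuracy(w_adv, X_test, y_test_adv))
```

Under these assumptions the first accuracy is high and the second hovers near 0.5, illustrating why a hardness argument quantified over all (input, output) distributions need not say anything about a particular learner on the distributions it actually faces.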
