The Measure of Intelligence (notes)
A hypothetical ARC solver may take the form of a program synthesis engine that uses the demonstration examples of a task to generate candidate programs that transform input grids into the corresponding output grids.
・Start by developing a domain-specific language (DSL) capable of expressing all possible solution programs for any ARC task.
Since the exact set of ARC tasks is purposely not formally definable, this may be challenging (the space of tasks is defined as anything expressible in terms of ARC pairs that would only involve Core Knowledge).
It would require hard-coding the Core Knowledge priors from III.1.2 in a sufficiently abstract and combinable program form, to serve as basis functions for a kind of "human-like reasoning DSL".
We believe that solving this specific subproblem is critical to general AI progress.
・Given a task, use the DSL to generate a set of candidate programs that turn the input grids into the corresponding output grids.
This step would reuse and recombine subprograms that previously proved useful in other ARC tasks.
・Select top candidates among these programs based on a criterion such as program simplicity or program likelihood (such a criterion may be trained on solution programs previously generated using the ARC training set).
Note that we do not expect that merely selecting the simplest possible program that works on training pairs will generalize well to test pairs (cf. our definition of generalization difficulty from II.2).
・Use the top three candidates to generate output grids for the test examples.
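The pipeline above (DSL, candidate generation, simplicity-based selection) can be sketched as a brute-force enumerator. This is a minimal illustration under strong assumptions: the four primitives below are hypothetical stand-ins, nowhere near the abstract, combinable Core Knowledge basis functions the paper calls for, and candidates are ranked purely by program length rather than a trained likelihood criterion.

```python
from itertools import product

# A toy grid -> grid DSL (hypothetical primitives, not the paper's).
def identity(g):  return g
def flip_h(g):    return [row[::-1] for row in g]
def flip_v(g):    return g[::-1]
def transpose(g): return [list(r) for r in zip(*g)]

PRIMITIVES = {"identity": identity, "flip_h": flip_h,
              "flip_v": flip_v, "transpose": transpose}

def synthesize(train_pairs, max_depth=2):
    """Enumerate compositions of primitives, keep those consistent with
    every demonstration pair, rank survivors by length (simplicity)."""
    candidates = []
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(g, names=names):
                for n in names:
                    g = PRIMITIVES[n](g)
                return g
            if all(run(inp) == out for inp, out in train_pairs):
                candidates.append((len(names), names, run))
    candidates.sort(key=lambda c: c[0])  # simplest programs first
    return candidates

# One demonstration pair: the task is "mirror the grid left-right".
train = [([[1, 0], [2, 0]], [[0, 1], [0, 2]])]
top = synthesize(train)[:3]  # keep the top three candidates
```

As the notes warn, the simplest consistent program (here a single flip) need not generalize to the test pairs of a harder task; real selection would need a criterion trained on the ARC training set.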
III.1.2 Core Knowledge priors
Any test of intelligence is going to involve prior knowledge. ARC seeks to control for its own assumptions by explicitly listing the priors it assumes, and by avoiding reliance on any information that isn't part of these priors (e.g. acquired knowledge such as language). The ARC priors are designed to be as close as possible to Core Knowledge priors, so as to provide a fair ground for comparing human intelligence and artificial intelligence, as per our recommendations in II.3.1.
The Core Knowledge priors assumed by ARC are as follows:
a. Objectness priors:
Object cohesion: Ability to parse grids into "objects" based on continuity criteria such as color continuity or spatial contiguity (figure 5), and ability to parse grids into "zones" and "partitions".
Object persistence: Objects are assumed to persist despite the presence of noise (figure 6) or occlusion by other objects. In many cases (but not all) objects from the input persist on the output grid, often in a transformed form. Common geometric transformations of objects are covered in category d, "basic geometry and topology priors".
Object influence via contact: Many tasks feature physical contact between objects (e.g. one object being translated until it is in contact with another (figure 7), or a line "growing" until it "rebounds" against another object (figure 8)).
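The object cohesion prior can be made concrete with a connected-components pass. A minimal sketch, assuming one particular notion of cohesion (same-colored cells under 4-neighbor adjacency); actual ARC tasks may also call for diagonal adjacency or multi-color grouping:

```python
from collections import deque

def parse_objects(grid, background=0):
    """Segment a grid into objects: maximal sets of same-colored cells
    connected via 4-neighbor adjacency (flood fill per component)."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    objects = []
    for r in range(h):
        for c in range(w):
            if seen[r][c] or grid[r][c] == background:
                continue
            color, cells, queue = grid[r][c], [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                cells.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx] and grid[ny][nx] == color):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            objects.append((color, cells))
    return objects

grid = [[1, 1, 0],
        [0, 0, 2],
        [2, 0, 2]]
objs = parse_objects(grid)  # one 1-colored object, two 2-colored objects
```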
b. Goal-directedness prior:
While ARC does not feature the concept of time, many of the input/output grids can be effectively modeled by humans as being the starting and end states of a process that involves intentionality (e.g. figure 9). As such, the goal-directedness prior may not be strictly necessary to solve ARC, but it is likely to be useful.
c. Numbers and Counting priors:
Many ARC tasks involve counting or sorting objects (e.g. sorting by size), comparing numbers (e.g. which shape or symbol appears most (e.g. figure 10)? The least? The same number of times? Which is the largest object? The smallest? Which objects are the same size?), or repeating a pattern for a fixed number of times. The notions of addition and subtraction are also featured (as they are part of the Core Knowledge number system). All quantities featured in ARC are smaller than approximately 10.
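The small-quantity comparisons described here (most, least, equal counts) reduce to counting cells or objects. A minimal sketch, assuming a per-color cell count as the quantity of interest (a per-object count via a segmentation step would work the same way):

```python
from collections import Counter

def color_counts(grid, background=0):
    """Count non-background cells per color, supporting comparisons
    like 'which symbol appears most / least / equally often'."""
    return Counter(cell for row in grid for cell in row if cell != background)

grid = [[3, 3, 0],
        [5, 3, 0],
        [5, 0, 0]]
counts = color_counts(grid)
most_common_color = counts.most_common(1)[0][0]  # color 3 appears most often
```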
d. Basic Geometry and Topology priors:
ARC tasks feature a range of elementary geometry and topology concepts, in particular:
・Lines, rectangular shapes (regular shapes are more likely to appear than complex shapes).
・Symmetries (e.g. figure 11), rotations, translations.
・Shape upscaling or downscaling, elastic distortions.
・Containing / being contained / being inside or outside of a perimeter.
・Drawing lines, connecting points, orthogonal projections.
・Copying, repeating objects.
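Several of the geometric transformations listed above are one-liners on a grid-as-nested-lists representation. A minimal sketch of two of them (rotation and integer upscaling; the function names are my own, not the paper's):

```python
def rotate90(grid):
    """Rotate a grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def upscale(grid, k):
    """Integer upscaling: each cell becomes a k-by-k block."""
    return [[cell for cell in row for _ in range(k)]
            for row in grid for _ in range(k)]

g = [[1, 2],
     [3, 4]]
rotated = rotate90(g)      # [[3, 1], [4, 2]]
doubled = upscale(g, 2)    # each cell expanded to a 2x2 block
```

Elastic distortions and symmetry detection would need more machinery, but composable primitives like these are the kind of basis functions a Core Knowledge DSL would build on.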