Using AI for Research

I briefly discussed this on Twitter with Jack Clark and Sam Arbesman, and the idea has been bouncing around my head ever since: are there types of problems, or fields more generally, that we would be better off designing an AI to solve (or to explore the space of) rather than trying to solve everything ‘by hand’?

For a more concrete example: suppose there is a limited knowledge space to explore in the field of random allocation market design.  Suppose further that if academics continue working at the present pace, the entire field will be illuminated within 100 years.  Finally, suppose there is some version of AI that could autonomously explore and illuminate the knowledge space, although that AI has not yet been invented.  The question is this: at what point would it be more productive for us to try building the AI rather than exploring the space directly ourselves?  Clearly, there is some breakeven point: if it takes us X years to develop the AI, and Y years for the AI to fully illuminate the space, then we should develop the AI whenever X + Y < 100.
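To make the breakeven arithmetic explicit, here is a minimal sketch. The function name and the numbers in the example call are hypothetical placeholders, not estimates of anything.

```python
# A minimal sketch of the breakeven condition above. The values in
# the example call are purely illustrative.

def should_build_ai(x_build_years: float,
                    y_explore_years: float,
                    direct_years: float = 100.0) -> bool:
    """True if building the AI first illuminates the field sooner
    than continuing 'direct research' at the present pace."""
    return x_build_years + y_explore_years < direct_years

# Example: 30 years to build the AI, 20 years for it to explore the space.
print(should_build_ai(30, 20))  # True, since 30 + 20 = 50 < 100
```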

This raises two related lines of questioning.  First, what features of a given space (or research field) make it amenable to low values of X and/or Y?  Second, at what point are we confident enough in our estimates of X and Y to actively switch human capital from ‘direct research’ to building a given AI?  Clearly, the social costs of getting this wrong are potentially huge: the AI might take much longer than expected, might never be built at all, or the incremental knowledge that ‘direct research’ would have produced in the meantime might turn out to be critical to welfare.  On that last point, we can easily imagine a nightmare scenario in which researchers are on the cusp of some world-altering discovery, but the decision to switch to AI building delays that discovery by years.  In short, when the knowledge-returns function is nonlinear, mistiming the switch can be massively detrimental.
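One way to frame the “confident enough” question is probabilistically: given distributions over X and Y, how often does the AI path actually finish first? The sketch below estimates this by simulation; the lognormal parameters are entirely made up for illustration and stand in for whatever beliefs we might actually hold.

```python
# A sketch of the decision under uncertainty: estimate P(X + Y < 100)
# by simulation. The lognormal parameters are hypothetical, chosen
# only to illustrate the calculation.
import random

def p_ai_wins(direct_years: float = 100.0, trials: int = 100_000) -> float:
    """Estimate the probability that the AI path finishes first."""
    wins = 0
    for _ in range(trials):
        x = random.lognormvariate(3.4, 0.6)  # build time, median ~30 years
        y = random.lognormvariate(3.0, 0.6)  # exploration time, median ~20 years
        if x + y < direct_years:
            wins += 1
    return wins / trials

print(f"P(AI path finishes first) ~= {p_ai_wins():.2f}")
```

Note that a probability like this ignores the nonlinearity problem raised above: a small chance of delaying a critical near-term discovery could dominate the decision even when the AI path wins on average.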

Given the level of uncertainty described above, I’d find it difficult ever to recommend completely shutting off ‘direct research’ in favor of AI, but the problems outlined above still persist whenever a resource-allocation decision has to be made (even at the individual level, which of course can be influenced by policy).  It may be, therefore, that this field itself could benefit from some ‘direct research,’ and only after building the field the old-fashioned way could we say anything useful about the potential for AI in any other field.

With minor edits, this text was originally produced for a Market Design course at Booth, when I decided to go way off on a tangent…
