Proposal for a Bayesian Conspiracy

May 23, 2021

I think we are in an AI overhang. By AI overhang I mean that we are a small number of theoretical breakthroughs away from superintelligence. By superintelligence, I mean an artificial system significantly superior to human beings at all important problems of general inference, especially in its data efficiency. A superintelligence need not be an agent.

Three things limit the power of our machine learning systems today: data, software, and hardware.

Data is not the bottleneck between today's technology and superintelligence. We can already feed far more data through a computer than through a human brain.
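
A rough back-of-envelope comparison makes the point (the figures are loose order-of-magnitude estimates, not measurements): a human might hear or read on the order of $10^8$ to $10^9$ words in a lifetime, while the training corpus of a 2020-era large language model runs to roughly $10^{11}$ tokens.

$$\frac{\text{machine training corpus}}{\text{human lifetime linguistic input}} \approx \frac{10^{11}\ \text{tokens}}{10^{8}\ \text{words}} \approx 10^{3}$$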

The crux of the issue is whether we are software-limited or hardware-limited. Some people believe we are hardware-limited. If civilization lacks the hardware to create a superintelligence then we are not in an AI overhang. I think we are software-limited.

I think there is at least one major software overhang because the human brain has capabilities our artificial neural networks (ANNs) lack. In particular, ANNs cannot solve small data problems because they validate by minimizing error over historical results.
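
To make that concrete, here is empirical risk minimization in miniature (a toy sketch in NumPy, not a description of any particular system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Five "historical" observations of a noisy underlying function.
x = np.linspace(-1.0, 1.0, 5)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(5)

# A small model: a degree-4 polynomial whose weights we fit by
# gradient descent on the mean squared error over the dataset.
features = np.vander(x, 5)   # columns are x^4, x^3, x^2, x, 1
w = np.zeros(5)

for _ in range(20000):
    residual = features @ w - y                # error on historical data only
    grad = 2.0 * features.T @ residual / len(y)
    w -= 0.05 * grad                           # descend the empirical risk

# The training error shrinks toward zero, but nothing in the loop
# constrains the model's behavior away from the five observed points.
print("train MSE:", np.mean((features @ w - y) ** 2))
```

Run it and the training error becomes tiny, yet the loop never received any signal about inputs outside the five historical points. That is exactly the regime where small-data problems live.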

What this tells me is that the human brain is using a different algorithm. While the human brain might be using some sort of predictive processing (which is equivalent to gradient descent), I think it is doing something else on top of that.
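
To unpack that parenthetical, here is the standard single-level predictive-coding setup (the notation is mine, following the usual Rao–Ballard/Friston formulation, not anything specific to my theory). An internal belief $\mu$ generates a prediction $g(\mu)$ of an observation $x$, and the brain minimizes the precision-weighted prediction error:

$$F(\mu) = \frac{1}{2\sigma^2}\big(x - g(\mu)\big)^2, \qquad \dot{\mu} = -\frac{\partial F}{\partial \mu} = \frac{1}{\sigma^2}\big(x - g(\mu)\big)\,g'(\mu)$$

The belief dynamics are literally gradient descent on a squared-error objective. Hierarchical versions sum such error terms across layers, but the update rule stays the same. If the brain's advantage on small data is real, it has to come from something beyond this loop.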

I have an idea of what the equation for this additional "something else" might look like, as well as some ideas on how to run an architecture search for it.

It is difficult to work on a project like this without colleagues. I am not part of an academic institution or a large company. My preferred method of finding colleagues is to publish my work online. Blogging works great for harmless subjects.

My theory is probably wrong and definitely incomplete. But if it is a major step in the right direction then it would be cosmically irresponsible to publish the details online.

My solution so far has been to post the harmless information on my blog and to discuss the sensitive details one-on-one with individual friends. I think there should be a middle ground. Research laboratories often have weekly research paper clubs. I could use something like that.

Publishing work online lets everyone in the world read it. A small group, by contrast, limits how many people can participate, so you must be very selective.

The most important thing is that participants be good people, easy to collaborate with and trustworthy enough to keep a secret. But that is true of all conspiracies. There is nothing special about this conspiracy's interpersonal dynamics.

What is special is the minimum intelligence required. All members of such a group must meet the following minimum criteria:

  1. Competent enough at math and programming that they can read a research paper describing a theoretical approach in equations and then write the actual code.
  2. Creative enough to be constantly inventing new methods of machine learning. Think "the kind of people Lee Smolin bemoans a lack of in physics".
  3. Ambitious enough they are already trying to build an AGI.

Testing technical competence is straightforward. Asking whether someone can crack three problems of this difficulty in an hour or less might work. Among my friends, most of the programmers and none of the data scientists are smart enough for this work. The quants I know all meet the cut, along with my PhD cryptographer friend and even an exceptionally capable Google engineer.

Testing someone for creativity is harder because good ideas often look like bad ideas. Fortunately, openmindedness is a major component of creativity, and openmindedness is easy to measure just by talking to someone.

Openmindedness must be balanced with coherence, but I don't think we need to worry too much about clear thinking. Math and programming require coherent thought. If someone meets the minimum technical competence then they probably think coherently enough to defy the pits of insanity.

Intelligence correlates with openmindedness. My smartest friends are all sufficiently openminded. The problem is they lack ambition. None of them are trying to invent an AGI.

I say "invent an AGI" and not "solve the problem of AI alignment" because if you are not smart enough to invent an AGI then you are not smart enough to solve the problem of AI alignment. It's like writing secure software. It is easier to write an insecure system than a secure one. It is easier to write an unaligned AGI than an aligned AGI. If you cannot invent an unaligned AGI then aligning an AGI is beyond you. Anyone who is trying to solve the problem of AGI alignment without inventing an AGI shall forever be among those cold and timid souls who neither know victory nor defeat.

How do you tell whether someone is seriously working on an AGI? I think the person must meet three criteria.

  1. They can articulate the core challenge of the problem.
  2. They have identified an overlooked attack vector.
  3. They have novel equations and/or computer code to show for it.

These are table stakes.