This wiki exists to help communicate our research agenda. It is intended to be a continual work in progress.
Agent foundations is both pre-paradigmatic and interdisciplinary. This makes it difficult to present a clear, linear outline of our ideas, motivations, and research intentions. We are therefore presenting the Dovetail research agenda as a wiki, with dense hyperlinks between pages. We hope this helps you find the information that interests you and lets you choose how much background to read. The wiki is not written to persuade anyone of our positions, but to help those familiar with the field of AI risk see where Dovetail fits into the landscape.
At the time of creation, this wiki is being written entirely by Alex Altair. We use “we” throughout, to convey that these ideas are intended to be part of the background culture of Dovetail’s research. Other individuals working under Dovetail will have their own beliefs, theories of change, priorities, et cetera.
Similarly, since the field of agent foundations is pre-paradigmatic, there are very few statements about the field that most researchers would agree on without modification. In this wiki, statements about agents, optimization, and related topics should be taken as our opinion, while statements about mathematics outside agent foundations should be taken as uncontroversial.