Alienation
L: something i’ve been worried about is whether it’s possible to be a mathematician and also have a life? by which i mean, keep up with current events and come out here to things like this and contribute to such communities. at oxford i felt alienated because i didn’t know any mathematicians who were, uh, ‘publicly situationally aware’ — that is, keeping up with what’s going on in ai and so on — or even shared more general rationality interests, like knowledge management systems and augmentative tech and applied rationality and optimization —
X: i think i felt very similar as a graduate student. having a foot in two cults helps you not get captured by either one, right? i always had the sense i could escape from one into the other if i ever needed to. and in a way i think that helped me to do both more fully.
L: do you know others?
X: scott aaronson certainly comes to mind as someone who has situational awareness and a foot in both worlds. also terence tao and timothy gowers; they’re starting to do things with ai for mathematics. lots of stanford professors are going to deepmind —
L: for sure — though i guess that won’t really help when i’m back in my ivory tower. i’d like to populate my mind with examples of people who knew but stayed grounded and kept going anyway —
X: also evan chen, yes, evan’s very savvy.
in my case i decided that whatever happens with the singularity happens, and to live as well as i could independent of that
L: yeah…
if fhi still existed, i could live there and things would all make sense. but it doesn’t, and there’s nothing else like it. there are increasingly many ai safety groups in cs departments, but some of them maybe lack the mathematical depth that i seek
Continuity
X: what academia gave me was continuity. it fulfilled some deep psychological needs. none of the industry jobs felt like they offered that — two years, three years, whatever
L: yeah. academia doesn’t offer me continuity, you know? living here (staying in the san francisco bay area) would give me continuity — i can already feel some momentum / accelerating returns building up.
i think i could be independent and have continuity; i think i’m confident enough in my skills & ability to do good research and build good things.
oxford is constant interruption. i spend eight weeks there, then another six out here — stop-start, stop-start…
academia is where i have to constantly restart context; out here is where i have continuity.
X: something that’s stuck with me is anna salamon’s post saying “the correct response to uncertainty is not half-speed”
Agency
L: i read something from another blogger recently. it said something like “academia is weird, because once you start getting distracted, it no longer makes sense to pursue this high-intellect, low-agency trajectory”
it’s true that if you’ve never thought about doing anything else, you get a lot of momentum / accelerating returns behind you, right?
X: yeah, i think i’d disagree with that frame. you can use ‘agency’ to get stuff done in academia. there are financial problems if you’re in mathematics academia, which you’ll be better able to overcome with high agency. there’s the two-body problem — i don’t know if you’ve heard of that — yeah. even in pure mathematics, it’s not pure, you still have to think about external things, and then applied rationality will help you
L: nice
IC/researcher vs. PI/prof/research lead
X: also, when you make a career in academia as a professor, research is really only a very small part of what you do. for that reason, there are a lot of people who say “grad student days were the best days of my life”
L: yeah, no, the part i really look forward to is being a PI / professor! i like every taste of it i’ve gotten already — and i do wonder / am open to the idea i might be able to get more of what i’m after outside academia —
i might be able to get to the PI stuff sooner — i mean, i currently have research mentees, because ai alignment is a deeply unhealthy field that allows 21yos to become research mentors. i like every part so far; i could keep doing this. i mean, it’s in a relatively shallow / superficial subfield of alignment right now, but i could probably pivot to deeper ones within a year’s effort. and that’s compared to probably ~6-7 years in academia to get to that PI/professor stage, which is what i’m doing all this for
X: yeah. fwiw i had research mentees even as a grad student! so you can squeeze that in at every step if you try
but i don’t know that it makes sense for someone working in ai safety to do academia. all the incentives seem to point against it.
or like, i think the field would benefit from more people who’ve been through this system. but is that a reason for one individual to do it?
L: :/ yeah.