‘Preparation For Future Risks’
This was an excellent but difficult book to read.
At its root, it’s about what happens when something like AI achieves greater-than-human intelligence. The problem is that once such a thing learns how to improve how it learns, its intelligence could increase exponentially. And if it does, what does that mean for humanity? Will it be benevolent, or will it be destructive? And even if it’s not intentionally malevolent, how can we protect ourselves from it destroying us accidentally (turning the entire planet into paperclips, us included, is the canonical example)?
Big questions. There’s more to the book than just questions, though – it does its best to provide directions toward answers, and some steps along the path, even if it can’t provide the actual answers themselves.
The book itself is taxing to read, though. It’s filled with jargon and awkward terms that I had to work through to figure out what was meant. It could be a hard slog at times, but it was worthwhile in the end.
The subject is big, difficult, and still very new, but I’m glad some people are giving it deep thought.