Read: September 2020
Inspiration: What are the important trends in AI and their implications?
Written with the help of ChatGPT, the brief summary below covers what the book addresses.
“Human Compatible”, published in 2019 by computer scientist and AI expert Stuart Russell, discusses the ways in which AI is likely to shape the future and the implications of this for humanity. Russell argues that the traditional approach to designing AI systems, which focuses on creating systems that pursue fixed objectives as intelligently and capably as possible, may be misguided and could ultimately lead to disastrous consequences. Instead, he proposes a new approach to designing AI that is more aligned with human values and goals. The book is divided into several chapters, each of which looks at a different aspect of AI and its potential impact on society. Topics covered include the risks posed by superintelligent AI, the potential for AI to transform the economy and employment, and the ethical considerations surrounding the development and deployment of AI. Throughout, Russell provides an in-depth and thought-provoking look at the future of AI and offers insights into how we can shape its development so that it remains aligned with our values and goals.
Direct from my original book log, below are my unedited notes (abbreviations and misspellings included) to show how I take notes as I read.
AI officially begin 1956 at Dartmouth but run into cold spells until deep learning in 2011, have to balance rationality with uncertainty to achieve an end in best way, turn to expected utility or value bc Bernoulli show large bets lead to irrational exp value (e.g. 100% chance of 10 mm vs 1% chance of 1 bn are same exp value but obviously not same utility), Turing Universality 1936: computing device accept as input description of any other computing device and simulate second’s operation on its input to produce same output second would have, Intelligent Agent: something that perceives and acts (key to modern AI), Propositional/Boolean logic vs First-Order logic (FO key to AI), Bayesian rationality: updating degree of belief with new evidence constantly (posterior prob becomes prior and so on), infopocalypse: catastrophic failure of marketplace of ideas, is it true that no freedom of thought without access to true info but true info decision go up against free speech, regulation for Lethal Autonomous Weapons Systems (could have small flying weapon with explosive and camera to identify specific individual), general trend where initially tech lowers cost and increases demand which increase employment but then tech means less workers required for same job, “mechanical transportation became cheaper than the upkeep cost of a horse so horses became pet food”, no imminent threat from AI is not a reason to not prep/think, mentioning risks doesn’t imply no benefits (can’t enjoy benefits if risks not properly managed), challenging to precisely and completely enter human objectives into AI, key to beneficial AI is initial unawareness of human preferences so learn with humility and can be turned off to learn (not single minded objective from start), standard model sets fixed objective (risky, want deferential AI), make people want to pay their taxes (tie it to some other service/gain??), software biz needs more regulation as manipulate preferences and addictive behaviors (like pharma), propositional (and, if, or) vs first order logic (relate objects generally, more efficient), AI use bayesian updating to constantly track and map location as move “SLAM”
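The Bernoulli point in the notes above can be made concrete with a quick sketch. The two lotteries (100% chance of $10 million vs. 1% chance of $1 billion) are from the book; the logarithmic utility function and the starting-wealth figure are illustrative assumptions, not Russell's exact numbers:

```python
import math

def expected_value(outcomes):
    """Expected monetary value of a lottery given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_log_utility(outcomes, wealth=100_000):
    """Bernoulli's resolution: treat the utility of money as logarithmic,
    so evaluate log(wealth + payoff) rather than the raw payoff.
    (The log form and the $100k starting wealth are illustrative choices.)"""
    return sum(p * math.log(wealth + x) for p, x in outcomes)

sure_thing = [(1.0, 10_000_000)]                  # 100% chance of $10 million
long_shot = [(0.01, 1_000_000_000), (0.99, 0)]    # 1% chance of $1 billion

# Identical expected monetary value...
print(expected_value(sure_thing))  # → 10000000.0
print(expected_value(long_shot))   # → 10000000.0

# ...but the sure thing wins on expected utility, matching intuition.
print(expected_log_utility(sure_thing) > expected_log_utility(long_shot))  # → True
```

Any concave utility function (not just log) produces the same ranking here, which is the general point: expected value alone ignores risk aversion.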
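The "posterior becomes prior" loop from the Bayesian rationality note (and the idea underlying SLAM-style tracking) can be sketched in a few lines. The binary-hypothesis setup and the sensor accuracy numbers are my own illustrative assumptions:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """One step of Bayes' rule for a binary hypothesis H:
    P(H|e) = P(e|H)P(H) / [P(e|H)P(H) + P(e|~H)P(~H)]."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Hypothetical sensor: fires 90% of the time when the target is present,
# 20% of the time when it is absent (a false positive).
belief = 0.5  # initial prior: no idea either way
for _ in range(3):  # three consecutive positive readings
    belief = bayes_update(belief, 0.9, 0.2)  # posterior becomes the new prior
print(round(belief, 4))  # → 0.9891
```

Each reading multiplies the odds by the likelihood ratio (0.9 / 0.2 = 4.5), so repeated consistent evidence drives the belief rapidly toward certainty; a full SLAM system runs the same kind of update over continuous position estimates rather than a single yes/no hypothesis.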