Read: November 2022

Inspiration: Saw on Elon Musk’s list of recommended books; wanted to learn more about the future of AI

Summary

Below is a brief summary, written with the help of ChatGPT, of what the book covers.

“Life 3.0”, published in 2017 by physicist and cosmologist Max Tegmark, explores the future of artificial intelligence and its potential impact on society. The book discusses how AI is already being used across fields such as healthcare, transportation, and entertainment, and how it is likely to keep transforming them in the coming years. Tegmark examines the potential risks and benefits of AI and how society can best prepare for and manage them, then turns to the broader philosophical and ethical implications, including how AI might change the very nature of what it means to be human. The result is a thought-provoking and accessible look at the future of AI.

Unedited Notes

Direct from my original book log, below are my unedited notes (abbreviations and misspellings included) to show how I take notes as I read.

Life 1.0 characterized by life that evolves its hardware and software (biological stage), 2.0 evolves hardware but designs much of its software (cultural stage), 3.0 designs both hardware and software (technological stage), intelligence here defined as ability to accomplish complex goals, camps in the debate: those who think AI will be beneficial, others who expect harm (luddites), others who think it is too far away to be concerned (techno-skeptics), also digital utopians (more optimistic than the beneficial-AI mvmt—beneficial mvmt recognizes steps needed to ensure safety), fear of evil conscious robots is a myth—the concern is AI with misaligned goals above all else, human memory works by auto-association vs computer memory works by address/location (see the first code sketch after these notes)

AI safety discussion revolves around verification, validation, security, and control, verification means ensuring the software fully satisfies all expected requirements to work as intended—AI can automate and improve the verif'n process, validation is about "did I build the right system" vs verif'n is about "did I build the system right", validation concerns invalid assumptions made by machines, control means the ability for a human operator to monitor the system and change its behavior if needed, autonomous cars raise who is liable—could be the manufacturer via an insurance policy on the car itself (part of the cost to buy, better track record means lower premium), cyborg is a cybernetic organism or, broadly, any organism with tech embedded

optimization power and recalcitrance reflect the amount of quality effort to make AI smarter and the difficulty of making progress (together they determine the rate of progress), AGI adds optim. power, if machine intel grows at a rate proportional to its current power then you get an explosion—fast takeoff of AGI (sketched below), takeoff depends on hardware (long) and software (quicker) improvements, history of life is a story of expanding hierarchy (i.e. hierarchies that can reach farther for control/scale), "uploads" are beyond cyborgs—a person exists solely as a virtual/software representation, building an AI human brain may not be best achieved by modeling humans today—evolution is about survival not simplicity, airplanes did not require a mechanical bird as a predecessor, there may be simpler means to goals than modeling what we know today, neoluddites oppose tech if it causes more harm than good (not full technophobes), wide range of scenarios for AGI superintelligence interacting with humans—can view it as utopia or protector or zookeeper or benevolent dictator or enslaved god or destruction, can also envision tech reversion or human gatekeeping via a surveillance state to avoid it, also a descendant scenario where humans see AI as offspring, just different (raise and instill values then pass on)

current energy needs can be met by harvesting sunlight striking an area less than .5% of the Sahara desert, digestion of mass (candy bar) to energy is only .00000001% efficient—if .001% efficient then one meal would cover the rest of life (rough math below), event horizon of a black hole is where escape velocity reaches the speed of light—past that, energy/light cannot escape, could extract rotational energy of black holes (400k suns equivalent), 98% of galaxies cannot be reached even traveling at the speed of light b/c space is expanding at an accelerating rate, traveling to other galaxies constrained by fuel requirements and energy inefficiency, needs new means, perhaps a hydrogen reactor on board that refuels by collecting hydrogen ions as it moves, or "laser sail" tech to leverage energy from sunlight to accelerate a vehicle with a mirror, concept of wormholes to allow communication across galaxies not impossible but requires matter with negative density and poorly understood quantum gravity effects, big chill vs big crunch vs big rip for how the universe ends (cosmocalypse), big chill as universe keeps expanding and dilutes the cosmos to cold/dead, all depends on dark matter/energy—if it dilutes due to neg density or anti-dilutes, dark energy is ~70% of the mass in the Universe, Tegmark thinks we are alone as intelligent life in our Universe given the requirements for habitability and then the flukes needed for intelligence to evolve—low low probability in our observable universe (esp given lots has been observed), all laws of classical physics can be equivalently framed as the past causing the future or as nature optimizing something, heat death is max entropy where all is uniform/boring (maximally messy), dissipation refers to increasing entropy by turning useful energy into heat, usually while doing useful work (dissipation-driven adaptation), teleology is the explanation of things in terms of their purpose not their cause, bounded rationality refers to rationality constrained by info avail/time to think/hardware avail

AI goal alignment/sharing with humans is key—3 issues: AI understanding our goals, adopting our goals, retaining our goals, understanding requires seeing why we act, which goes beyond what we say (implicit goals), value loading is hard with machines b/c there is a short window where the AI is neither too dumb to understand our goals nor too smart to resist adopting/retaining them, self-preservation and resource gathering can easily become emergent subgoals for any AI (not just a human tendency), study of consciousness is key to AI—defined as subjective experience, study brain maps and Neural Correlates of Consciousness—see what parts of the brain react to scenarios, takes 1/4 second to consciously perceive but humans often react unconsciously, consciousness, like molecules/matter, has emergent properties based on arrangement and grouping, consciousness requires info storage capacity, info processing capacity, substantial independence from the rest of the world, but the system cannot consist of nearly independent parts (they must work together), predicting consciousness can be tested, but why anything is conscious at all and predicting qualia cannot be tested for/answered today
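A few of the notes above lend themselves to quick illustrations; the sketches below are my own, not from the book. First, the contrast between address-based computer memory and auto-associative human memory, shown as a toy Python example (the stored strings and the word-overlap scoring are purely illustrative):

```python
# Address-based lookup: retrieval requires the exact key (address).
memories = {
    "slot_1": "the smell of fresh bread in grandma's kitchen",
    "slot_2": "the chorus of a song heard at summer camp",
}
print(memories["slot_2"])  # works only if you know the exact address

# Auto-associative lookup: a partial cue retrieves the best-matching content.
def recall(cue: str) -> str:
    """Return the stored memory sharing the most words with the cue."""
    cue_words = set(cue.lower().split())
    return max(memories.values(),
               key=lambda item: len(cue_words & set(item.lower().split())))

print(recall("summer song"))  # -> "the chorus of a song heard at summer camp"
```

The dict answers only when handed the exact key, while the associative lookup returns whichever stored item best matches a partial cue, which is roughly the distinction the note draws.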
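Second, the fast-takeoff note: if the rate of improvement is proportional to the current intelligence level (i.e. recalcitrance stays constant), growth is exponential. A minimal numerical sketch, with made-up units and an arbitrary growth constant:

```python
import math

# If dI/dt = k * I (improvement proportional to current intelligence),
# then I(t) = I0 * exp(k * t): slow at first, then explosive.
I0, k = 1.0, 0.5              # illustrative starting level and growth constant
for t in range(0, 21, 5):     # arbitrary time units
    print(t, round(I0 * math.exp(k * t), 1))
# 0 -> 1.0, 5 -> 12.2, 10 -> 148.4, 15 -> 1808.0, 20 -> 22026.5
```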
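Finally, a back-of-envelope check of the mass-to-energy note using E = mc². The masses (a 100 g candy bar, a 500 g meal) and the ~10 MJ/day human energy budget are my assumptions, chosen only to show the orders of magnitude:

```python
# E = m * c^2, scaled by the fraction of mass-energy actually captured.
c = 3.0e8                         # speed of light, m/s
candy_bar_kg, meal_kg = 0.1, 0.5  # assumed masses
daily_need_j = 1.0e7              # rough human energy budget (~2,400 kcal/day)

digestion_eff = 1e-10   # 0.00000001% expressed as a fraction
better_eff    = 1e-5    # 0.001% expressed as a fraction

# At digestion efficiency, a candy bar yields ~9e5 J (~200 kcal) -- about right.
print(candy_bar_kg * c**2 * digestion_eff)

# At 0.001% efficiency, one meal would cover on the order of a century of food.
days = meal_kg * c**2 * better_eff / daily_need_j
print(round(days / 365))   # ~123 years
```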



