
The Alignment Problem: Machine Learning and Human Values (original 2020; 2020 edition)

by Brian Christian (Author)

Members: 214 · Reviews: 3 · Popularity: 127,134 · Average rating: 4.25 · Mentions: 2
"A jaw-dropping exploration of everything that goes wrong when we build AI systems, and the movement to fix them. Today's machine-learning systems, trained by data, are so effective that we've invited them to see and hear for us, and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole, and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's first responders, and learn their ambitious plan to solve it before our hands are completely off the wheel."
User: Wayfaring
Title: The Alignment Problem: Machine Learning and Human Values
Authors: Brian Christian (Author)
Info: W. W. Norton & Company (2020), Edition: 1, 496 pages
Collections: Your library
Rating: ****
Tags: None

About the work

The Alignment Problem: Machine Learning and Human Values by Brian Christian (2020)

Showing 3 of 3
There was a lump in my throat when DeepMind's AlphaGo crushed Lee Sedol at Go, the oldest (some 3,000 years old) and arguably most complex strategic board game, because with that, AI not only defeated the greatest player ever but effectively ended any future contest between Go and humans. No human will ever beat AI at Go again, period; that fortress is breached! We have essentially been relegated to a mere factoid in the timeline of this planet.

Capitalism will ensure that humans are inevitably pushed "out of the loop" in every aspect; the question is not if but when. Brian Christian's The Alignment Problem educates the reader about the real pitfalls of depending on algorithms and the inherent drawbacks of machine learning. In my opinion, Christian dwells much deeper on the alignment problem at hand than Nick Bostrom's Superintelligence did; Bostrom set the stage for AI safety and was labelled an alarmist. Well, not anymore.

From dopamine-exploiting social media algorithms to parole sentences to mortgage application approvals, these highly pervasive machine-learning algorithms now control many aspects of human life, while Congress grapples with legislation and red tape.

The book gives an overarching view of how ML algorithms came about, organized around "pillars" such as curiosity, imitation, reinforcement, model bias, and bad data samples, and of why it is crucial to align AI goals with human values.

And, as is often the case, the problems are more philosophical in nature than anything else, which highlights the importance of psychology, social anthropology, neurophysiology, and psychoanalysis playing a quintessential part in the future development of this nascent field. The latter part of the book deals with possibly the tougher questions that AI poses; happy to see the Effective Altruism movement founder Will MacAskill get a page in there too.
  Vik.Ram | Aug 12, 2022 |
An impressive, conversation-based analysis of how AI systems developed through processes of machine learning (ML) might be constrained to be both safe and ethical. I had little idea of how rich and massive the research on this has been. In nine chapters with carefully chosen one-word headings (Representation, Fairness, Transparency, Reinforcement, Shaping, Curiosity, Imitation, Inference, and Uncertainty), the author describes a sequence of diverse and increasingly sophisticated ML concepts, culminating in what is called Cooperative Inverse Reinforcement Learning (CIRL). Whether AI will ever stop being part of what I regard as the wrongness of modern technology, I don't know, but at least there are people in the field who have their hearts in the right place.
  fpagan | Mar 21, 2022 |
There is a great book trapped inside this good book, waiting for a skillful editor to carve it out. The author did vast research in multiple domains, and it seems he could neither build a cohesive narrative connecting all of it nor leave anything out.

This book is probably the best introduction to the machine-learning space for a non-engineer that I've read. It presents the field's history, its challenges, what can be done, and what can't be done (yet). It's both accessible and substantive, presenting complex ideas in a digestible form without dumbing them down. If you want to spark an interest in ML in anyone who hasn't been paying attention to this field, give them this book. It provides a wide background connecting ML to neuroscience, cognitive science, psychology, ethics, and behavioral economics that will blow their mind.

It's also very detailed, screaming at the reader, "I did the research, I went where no one else dared to go!" It will not only present you with an intriguing ML concept but also: trace its roots to a nineteenth-century farming problem or a biology breakthrough, present all the scientists contributing to the research, explain how they met and got along, cite the author's interviews with some of them, and describe their lives after they published their masterpiece, including completely unrelated information about their substance abuse and the dark circumstances of their premature death. It's written quite well, so there may be an audience who enjoys this, but sadly I'm not part of it.

If this book were structured to address the subject of the alignment problem directly, it would be at least three times shorter. That doesn't mean the other two-thirds are bad: most of it is informative, some of it is entertaining, and a lot of it seems like ML material the author found interesting and simply added to the book without any specific connection to its premise. I really liked the first few chapters, where machine-learning algorithms are presented as the first viable benchmark for the human thinking process and the mental models we build. Spoiler alert: it very clearly exposes our flaws, our biases, and the lies we tell ourselves (which are then further embedded in the ML models we create and the technology that uses them).

Overall, I enjoyed most of this book. I just feel a bit cheated by its title and premise, which advertise a different kind of book. This is a machine-learning omnibus, presenting the most interesting scientific concepts of the field and the scientists behind them. If that is what you expect and need, you won't be disappointed!
  sperzdechly | Mar 18, 2021 |
The Alignment Problem does an outstanding job of explaining insights and progress from recent technical AI/ML literature for a general audience. For risk analysts, it provides both a fascinating exploration of foundational issues about how data analysis and algorithms can best be used to serve human needs and goals and also a perceptive examination of how they can fail to do so.
Added by Edward | Risk Analysis, Louis Anthony Cox Jr. (paywalled site) (Mar 3, 2023)

External resources about this book

Wikipedia (English)

Rating

Average: 4.25 (26 votes)
5 stars: 9 · 4.5 stars: 1 · 4 stars: 12 · 3.5 stars: 2 · 3 stars: 2
