
Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Lex Fridman Podcast

Fleeting

  • External reference: Daniel Kahneman Lex Fridman

The Holocaust was not a total surprise: dehumanizing people and not feeling bad about killing them was already known, and in-group versus out-group thinking is something we already do on a daily basis with other animals. The surprise was the large scale of the phenomenon.

Human nature: when people start to have power over other humans, they behave differently (Stanford prison experiment).

a useful wrong image

There are two ways that ideas come to mind: those that come effortlessly and automatically (system 1), and those that need mental effort (system 2).

main characteristic of system 2: limited capacity

System 1 is very fast and often gives good results. Also, system 1 is better at what it does than system 2 is at what it does.

skilled intuitions have to be learned.

deep learning = system 1

A lot of problems are solved with only system 1

is a system 1 architecture enough to solve all problems?

A system that translates very well, or a chess/go algorithm that is very strong, does not need to understand the objects it talks about (see Chinese room argument)

human learning vs AI learning

A child can manipulate the environment and do active learning.

Also, a child does not need many examples to learn, while an artificial intelligence needs a lot of them.

artificial intelligence needs to understand the choreography of humans

For instance, before crossing the street, humans look into the eyes of the driver, then look away as they cross, as if to show they are committed and the driver had better have understood (Newcomb's paradox).

An artificial intelligence MUST have captured such behavior to be useful.

hybrid artificial intelligence for system 1 and human for system 2

The machine could realize when it has no skilled intuition and fall back on asking the human for a finer analysis. This is a pretty big challenge, as the system needs at least a partial understanding of the field to realize when it is likely to be wrong (overconfidence effect).
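
A minimal sketch of that fallback loop, assuming the model exposes a calibrated confidence score; the names, the Prediction type and the 0.9 threshold are all illustrative, not something stated in the podcast:

  # hypothetical system 1 / system 2 hybrid: the fast model answers when
  # it is confident enough, otherwise it defers to a human
  from dataclasses import dataclass
  from typing import Callable

  @dataclass
  class Prediction:
      label: str
      confidence: float  # assumed calibrated; an overconfident model never defers

  def hybrid_answer(question: str,
                    model: Callable[[str], Prediction],
                    ask_human: Callable[[str], str],
                    threshold: float = 0.9) -> str:
      pred = model(question)
      if pred.confidence >= threshold:
          return pred.label  # skilled intuition: fast and effortless
      # the machine "realizes" it has no skilled intuition here
      return ask_human(question)

The whole difficulty hides in the confidence field: calibrating it requires the partial understanding of the field mentioned above, and an overconfident model simply never asks.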

people evaluate the difficulty of a problem by looking at how easy it is for them to solve

Therefore, they underestimate the difficulty of problems that can be solved by skilled intuition and overestimate the difficulty of problems solved by reasoning.

misconception: people thought that reasoning was hard and perception was easy

While in fact, perception is very hard to simulate, and there are now automated theorem provers.

we should prefer algorithms, but people don’t like them

explanation is often an illusion

see belief perseverance

we tend to ask artificial intelligence to explain why it reaches its results, as if that were what we ask of ourselves.

Yet, we are generally convinced by nice and convincing stories, not necessarily true ones (grande Histoire des petites histoires).

Therefore, if we make systems that tell true but unconvincing stories, we are likely to reject them. We should rather look for systems that provide convincing stories.

reason to believe, intuitions come first, strategic reasoning second, argumentative theory of reasoning, explaining AI

also

experiencing self and remembering self

For the remembering self, time does not matter, but events do

Intuitively, the experiencing self should be preferred, but people tend to optimise for their remembering self.

paradox

people tend to change their holiday destination if they assume they will forget it

see experiencing self and remembering self

People behave as if they wanted to create memories, not to have good experiences.
