Source: www.incompleteideas.net
It’s Hard to Build Large AI Systems
• Brittleness
• Unforeseen interactions
• Scaling
• Requires too much manual complexity management
  – people must understand, intervene, patch, and tune – like programming
• Need more autonomy
  – learning, verification
  – internal coherence of knowledge and experience

Marr’s Three Levels of Understanding
• Marr proposed three levels at which any information-processing machine must be understood
  – Computational Theory Level: what is computed and why
  – Representation and Algorithm Level
  – Hardware Implementation Level
• We have little computational theory for intelligence
  – Many methods for knowledge representation, but no theory of knowledge
  – No clear problem definition
  – Logic

Reinforcement Learning Provides a Little Computational Theory
• Policies (controllers) π: States → Pr(Actions)
• Value functions Vπ: States → ℜ

    Vπ(s) = E[ ∑_{t=1}^∞ γ^(t−1) reward_t | s_0 = s, follow π ]

• 1-step models: P(s_{t+1} | s_t, a_t) and E[r_{t+1} | s_t, a_t]

Outline of Talk
• Experience
• Knowledge = Prediction
• Macro-predictions
• Mental simulation
• Together, these offer a coherent candidate computational theory of intelligence

Experience
• An AI agent should be embedded in an ongoing interaction with a world

  [Diagram: the Agent sends actions to the World; the World returns observations to the Agent]

• Experience = these two time series
• Enables a clear definition of the AI problem
  – Let {reward_t} be a function of {observation_t}
  – Choose actions to maximize total reward
• Experience provides something for knowledge to be about (cf. textbook definitions)
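The RL quantities above can be made concrete with a small sketch. The MDP below (two states, a single action, deterministic transitions) is my own illustrative example, not from the talk; it estimates Vπ(s) = E[ ∑ γ^(t−1) reward_t ] by a truncated rollout under a fixed policy π, and tabulates the 1-step model P(s_{t+1} | s_t, a_t), E[r_{t+1} | s_t, a_t].

```python
# Toy deterministic MDP (hypothetical example for illustration):
# from 'A' the action 'go' leads to 'B' with reward 1;
# from 'B' it leads back to 'A' with reward 0.
TRANSITIONS = {('A', 'go'): ('B', 1.0), ('B', 'go'): ('A', 0.0)}

GAMMA = 0.5  # discount factor gamma

def policy(state):
    """pi: States -> actions (deterministic here, Pr(Actions) in general)."""
    return 'go'

def rollout_value(start, horizon=60):
    """Estimate V^pi(start) = sum_{t=1}^inf gamma^{t-1} * reward_t
    by following pi for `horizon` steps (truncation error ~ gamma^horizon)."""
    s, value = start, 0.0
    for t in range(horizon):
        s, r = TRANSITIONS[(s, policy(s))]
        value += GAMMA ** t * r  # gamma^{t-1}, with t counted from 1
    return value

def one_step_model():
    """Tabulate the 1-step model: next state and E[r_{t+1} | s_t, a_t]."""
    return {sa: {'next': s2, 'expected_reward': r}
            for sa, (s2, r) in TRANSITIONS.items()}
```

Because the dynamics are deterministic, the rollout estimates can be checked against the Bellman equations V(A) = 1 + γV(B) and V(B) = 0 + γV(A), which give V(A) = 4/3 and V(B) = 2/3 at γ = 0.5.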