Exam Preparation Insights for Cumulative Material on Neural Networks and Machine Learning

12/10/14
Exam: Wedn., 12/17/14, 2pm-4:30pm,
    Baker Laboratory 200
    (http://registrar.sas.cornell.edu/Sched/EXFA.html)
Material:
    Cumulative. Covers all material.
    Study: slides, complemented by R&N.
    Study: hwk solutions & the midterm.
Allowed: 1 double-sided sheet of notes & a calculator.
A Few Observations on Shimon Edelman’s talk.   11/13/14

Valid criticism of the claim that Deep Learning (multi-layer
neural nets) and Reinforcement Learning (RL) come close to
capturing full cognition/AI. The brain is much more complex!

Need to move beyond the input-output view of machine
learning/AI. (Beyond the “Google query.”) That is the
stimulus-response view of actions, and it captures only the
most basic reflex-agent behaviors.

Need richer internal representations of the world:
knowledge representation, causality. These can explain, e.g.,
learning new behaviors from just one or two examples
(contrast with RL).
 
But there is also some validity behind the excitement for
deep learning and RL using Big Data: the success on vision
tasks (image recognition) and speech recognition, i.e.,
“sensory processing.”

These tasks have frustrated AI researchers for decades.

Famously underestimated: in 1966, solving image recognition
(objects etc.) was assigned as a *part-time* ugrad project at MIT!

http://projects.csail.mit.edu/films/aifilms/AIFilms.html

Finally, 5 decades later, real progress!
 
Can we combine these advances with advances in the
symbolic/probabilistic arena, i.e., knowledge representation,
reasoning, and search? E.g., use deep learning and RL to get
the sensory inputs into a symbolic representation and proceed
from there (see the toy sketch below). This will not model the
brain in any detail, but the overall performance could be
impressive nevertheless.

Systems like Watson and self-driving cars suggest that system
integration of many “weak” and distinct components can be very
powerful overall.
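
A toy, purely illustrative Python sketch of that pipeline idea (not
from the lecture; every name in it, e.g. perceive, decide, RULES, is
made up): a stub stands in for a trained perception model, and its
symbolic output feeds a small rule-based decision layer.

    # Toy sketch: perception stub -> symbolic facts -> rule-based decision.
    def perceive(image_id):
        """Stand-in for a deep-learning classifier: raw input -> symbols.
        A real system would run a trained network; here we just look up."""
        fake_labels = {
            "img_stop_sign": ["StopSign"],
            "img_pedestrian": ["Pedestrian"],
        }
        return fake_labels.get(image_id, [])

    # Symbolic layer: simple condition -> action rules.
    RULES = {
        "StopSign": "Brake",
        "Pedestrian": "Brake",
    }

    def decide(facts):
        """Apply the rules to the perceived symbolic facts."""
        actions = {RULES[f] for f in facts if f in RULES}
        return actions or {"Cruise"}

    for img in ["img_stop_sign", "img_lamp_post"]:
        facts = perceive(img)
        print(img, "->", facts, "->", decide(facts))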
 
 
 
10/22/14
Midterm: Material up to and including Adversarial Search
(game trees, minimax & alpha-beta, expectiminimax; slides #10).
A minimal alpha-beta sketch follows this note.

1) See lecture slides on web.
2) Study sections in R&N.
3) Consider hwk problems and solutions.

Midterm: closed book, but 1 two-sided sheet of notes is allowed
(typed / handwritten / any way you like).
Re: reinforcement learning, focus on the slides; R&N chapter 21
goes into too much depth.
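
As a study aid only, a hedged Python sketch (not from the slides) of
minimax with alpha-beta pruning on an explicit game tree; the
alphabeta helper and the small example tree are made up for
illustration.

    # Minimax with alpha-beta pruning on an explicit game tree.
    # Leaves are utility values; internal nodes are lists of subtrees.
    # The root is treated as a MAX node.
    import math

    def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
        """Return the minimax value of `node`, pruning subtrees that
        cannot change the decision at an ancestor."""
        if not isinstance(node, list):          # leaf: utility value
            return node
        if maximizing:
            value = -math.inf
            for child in node:
                value = max(value, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:               # beta cutoff
                    break
            return value
        else:
            value = math.inf
            for child in node:
                value = min(value, alphabeta(child, True, alpha, beta))
                beta = min(beta, value)
                if alpha >= beta:               # alpha cutoff
                    break
            return value

    # Example: MAX root, three MIN children, numeric leaves.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(alphabeta(tree, maximizing=True))     # -> 3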
How do we search for the “longest path” on a map?   09/26/14

Uniform-cost search with highest cost first? NO (the longest-path
problem lacks the “Markovian” property that shortest paths have).
Other approach? General tree search, with no repeated states down
a single branch (this limits depth to n nodes). What if you reach
the goal? Keep going! (Stop only after the full space is searched,
and return the longest path to the goal found; see the sketch below.)
Can we do better? If so, how? If not, why not?

Not significantly: longest path is NP-hard, so there is no
efficient algorithm unless P = NP.
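
A minimal Python sketch of that exhaustive search, assuming a
hypothetical adjacency-dict graph format {node: {neighbor: edge_cost}};
the function name longest_path and the example map are made up for
illustration.

    def longest_path(graph, start, goal):
        """Exhaustive DFS: never repeat a state on the current branch,
        keep searching after reaching the goal, and return the
        maximum-cost simple path found (as a (cost, path) pair)."""
        best = (float("-inf"), None)

        def dfs(node, visited, cost, path):
            nonlocal best
            if node == goal and cost > best[0]:
                best = (cost, list(path))   # record, but keep searching
            for nbr, w in graph.get(node, {}).items():
                if nbr not in visited:      # no repeated states on this branch
                    visited.add(nbr)
                    path.append(nbr)
                    dfs(nbr, visited, cost + w, path)
                    path.pop()
                    visited.remove(nbr)

        dfs(start, {start}, 0, [start])
        return best                          # (-inf, None) if goal unreachable

    # Example (hypothetical map):
    graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
    print(longest_path(graph, "A", "D"))     # -> (6, ['A', 'B', 'D'])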
Notes Wedn. 09/10/14
Course url: www.cs.cornell.edu/courses/cs4700/2014fa/
Enrollment is finalized. Everyone now in CMS.
This Friday 09/12/14, 11:25am, this room:
    CS 4701 --- Organizational meeting.
News item: CNN --- “When Machines Outsmart Humans”
(link on course page) --- Superintelligence by Nick Bostrom.
Notes Fri. 09/05/14
Course url: www.cs.cornell.edu/courses/cs4700/2014fa/
CMS will be populated by Monday.
Enrollment: how many *not* enrolled?
Hopefully, resolved by today.
Laurie.buck@cornell.edu --- CS ugrad admin