Full Stanford University Courses
Hey everyone, this month I have the latest full Stanford University courses, each with dozens of lectures.
They include: Artificial Intelligence, Databases, Machine Learning, iPhone Application Development and Programming Parallel Processors.
Introduction to Artificial Intelligence (CS221)
Course description:
This course is taught from Artificial Intelligence: A Modern Approach, the textbook by Stuart Russell and Peter Norvig. CS221 is the introductory course in the field of Artificial Intelligence at Stanford University. It covers basic elements of AI such as knowledge representation, inference, machine learning, planning and game playing, information retrieval, and computer vision and robotics. CS221 is a broad course aimed at teaching students the very basics of modern AI, and it is a prerequisite for many other, more specialized AI classes at Stanford.
Course topics:
Overview of AI. Statistics, Uncertainty, Bayes networks. Machine learning. Hidden Markov models. Bayes filters. Markov decision process. Reinforcement learning. Adversarial planning. Belief space planning. Logic and logical problem solving. Image processing and computer vision. Robotics. Natural language processing. Information retrieval.
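For a taste of the probabilistic material (Bayes networks, Bayes filters, hidden Markov models), here is a minimal Python sketch of the discrete Bayes-rule update those topics build on; the three-cell robot world and the sensor numbers are invented for illustration, not taken from the course.

```python
# Hedged illustration: a single discrete Bayes-rule update, the building block of
# the Bayes filters listed above. The prior and sensor model are made-up numbers.

def bayes_update(prior, likelihood):
    """Multiply the prior by the measurement likelihood, then renormalize."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# A robot that may be in one of three cells, initially with no idea where it is.
prior = [1 / 3, 1 / 3, 1 / 3]
# Hypothetical sensor model: probability of the observed reading in each cell.
likelihood = [0.6, 0.3, 0.1]

posterior = bayes_update(prior, likelihood)
print(posterior)  # cell 0 is now the most probable location
```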
Course professors:
Peter Norvig is Director of Research at Google Inc. He is also a Fellow of the American Association for Artificial Intelligence and the Association for Computing Machinery.
Sebastian Thrun is a Research Professor of Computer Science at Stanford University, a Google Fellow, a member of the National Academy of Engineering and the German Academy of Sciences. Thrun is best known for his research in robotics and machine learning.
Introduction to Databases
Course description:
This online course covers database design and the use of database management systems for applications. It includes extensive coverage of the relational model, relational algebra, and SQL, the standard language for creating, querying, and modifying relational databases. It also covers XML data including DTDs and XML Schema for validation, and the query and transformation languages XPath, XQuery, and XSLT. The course includes database design in UML, and relational design principles based on dependencies and normal forms. Many additional key database topics from the design and application-building perspective are also covered: indexes, views, transactions, authorization, integrity constraints, triggers, on-line analytical processing (OLAP), and emerging "NoSQL" systems.
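To make the SQL portion concrete, here is a minimal sketch using Python's built-in sqlite3 module; the student table, its columns, and its rows are invented examples rather than course material.

```python
# Hedged illustration of basic SQL (creation, insertion, querying) run through
# Python's standard-library sqlite3 module; the schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()

cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, gpa REAL)")
cur.executemany(
    "INSERT INTO student (name, gpa) VALUES (?, ?)",
    [("Ada", 3.9), ("Grace", 3.7), ("Alan", 3.4)],
)

# A simple declarative query: which students have a GPA above 3.5?
cur.execute("SELECT name, gpa FROM student WHERE gpa > 3.5 ORDER BY gpa DESC")
print(cur.fetchall())                # [('Ada', 3.9), ('Grace', 3.7)]

conn.close()
```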
Course professor:
Professor Jennifer Widom is the Fletcher Jones Professor and Chair of the Computer Science Department at Stanford University. She received her Bachelor's degree from the Indiana University School of Music in 1982 and her Computer Science Ph.D. from Cornell University in 1987. She was a Research Staff Member at the IBM Almaden Research Center before joining the Stanford faculty in 1993. Her research interests span many aspects of nontraditional data management.
Machine Learning
Course description:
This course provides a broad introduction to machine learning and statistical pattern recognition. Topics include: supervised learning (generative/discriminative learning, parametric/non-parametric learning, neural networks, support vector machines); unsupervised learning (clustering, dimensionality reduction, kernel methods); learning theory (bias/variance tradeoffs; VC theory; large margins); reinforcement learning and adaptive control. The course will also discuss recent applications of machine learning, such as robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing.
Course topics:
Introduction. Basic concepts in ML. Supervised learning. Supervised learning setup. LMS. Logistic regression. Perceptron. Exponential family. Generative learning algorithms. Gaussian discriminant analysis. Naive Bayes. Support vector machines. Model selection and feature selection. Ensemble methods: Bagging, boosting, ECOC.
Evaluating and debugging learning algorithms. Learning theory. Bias/variance tradeoff. Union and Chernoff/Hoeffding bounds. VC dimension. Worst case (online) learning. Practical advice on how to use learning algorithms. Unsupervised learning. Clustering. K-means. EM. Mixture of Gaussians. Factor analysis. PCA. MDS. pPCA. Independent components analysis (ICA). Reinforcement learning and control. MDPs. Bellman equations. Value iteration and policy iteration. Linear quadratic regulation (LQR). LQG. Q-learning. Value function approximation. Policy search. REINFORCE. POMDPs.
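As a small taste of the clustering material, here is a hedged Python sketch of k-means on made-up one-dimensional data; the lectures treat the general d-dimensional case along with the many other algorithms listed above.

```python
# Hedged illustration of k-means (assignment step + update step) on invented 1-D data.

def kmeans_1d(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for x in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its previous centroid).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]          # two obvious groups
centers, groups = kmeans_1d(data, centroids=[0.0, 10.0])
print(centers)  # converges to roughly [1.0, 5.0]
```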
Course professor:
Professor Andrew Ng is Director of the Stanford Artificial Intelligence Lab, the main AI research organization at Stanford, with 20 professors and about 150 students and postdocs. At Stanford he teaches Machine Learning, which, with a typical enrollment of 350 students, is among the most popular classes on campus. His research is primarily on machine learning, artificial intelligence, and robotics, and most universities doing robotics research now do so using a software platform (ROS) from his group.
iPhone Application Development
Course description:
Tools and APIs required to build applications for the iPhone platform using the iPhone SDK. User interface designs for mobile devices and unique user interactions using multitouch technologies. Object-oriented design using the model-view-controller pattern, memory management, and the Objective-C programming language. iPhone APIs and tools including Xcode, Interface Builder, and Instruments on Mac OS X. Other topics include Core Animation, Bonjour networking, mobile device power management, and performance considerations. Prerequisites: C language and programming experience at the level of CS106B or CS106X. Recommended: UNIX, object-oriented programming, graphical toolkits. Offered by Stanford's School of Engineering, the course lasts ten weeks and includes both lecture videos and PDF documents.
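Since the model-view-controller pattern is central to the course, here is a hedged, language-agnostic sketch of that split written in Python; the actual course uses Objective-C and the iPhone SDK, and the class names below are invented.

```python
# Hedged illustration of the model-view-controller split; the names are invented and
# the real course expresses this in Objective-C with the iPhone SDK, not Python.

class CounterModel:            # Model: owns the data, knows nothing about the UI
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

class CounterView:             # View: renders whatever state it is handed
    def render(self, count):
        print(f"Taps so far: {count}")

class CounterController:       # Controller: routes user actions to the model,
    def __init__(self):        # then asks the view to redraw
        self.model = CounterModel()
        self.view = CounterView()

    def button_tapped(self):
        self.model.increment()
        self.view.render(self.model.count)

controller = CounterController()
controller.button_tapped()     # Taps so far: 1
controller.button_tapped()     # Taps so far: 2
```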
Programming Massively Parallel Processors with CUDA
Course description:
Virtually all semiconductor market domains, including PCs, game consoles, mobile handsets, servers, supercomputers, and networks, are converging to concurrent platforms. There are two important reasons for this trend. First, these concurrent processors can potentially offer more effective use of chip space and power than traditional monolithic microprocessors for many demanding applications. Second, an increasing number of applications that traditionally used Application Specific Integrated Circuits (ASICs) are now implemented with concurrent processors in order to improve functionality and reduce engineering cost. The real challenge is to develop applications software that effectively uses these concurrent processors to achieve efficiency and performance goals. The aim of this course is to provide students with knowledge and hands-on experience in developing applications software for processors with massively parallel computing resources. In general, we refer to a processor as massively parallel if it has the ability to complete more than 64 arithmetic operations per clock cycle. Many commercial offerings from NVIDIA, AMD, and Intel already offer such levels of concurrency. Effectively programming these processors will require in-depth knowledge about parallel programming principles, as well as the parallelism models, communication models, and resource limitations of these processors. The target audiences of the course are students who want to develop exciting applications for these processors, as well as those who want to develop programming tools and future implementations for these processors.
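The course itself is taught in CUDA C, but the core idea, one small kernel applied independently to every element of a large array, can be sketched in plain Python; the SAXPY kernel below and the process pool standing in for GPU threads are illustrative assumptions, not course code.

```python
# Hedged illustration of the data-parallel style the course teaches: each element is
# processed independently, so the work can be spread across many workers. A real CUDA
# program would launch thousands of GPU threads instead of a CPU process pool.
from multiprocessing import Pool

def saxpy_element(args):
    """Per-element kernel: computes a * x[i] + y[i] (the classic SAXPY operation)."""
    a, x_i, y_i = args
    return a * x_i + y_i

if __name__ == "__main__":
    a = 2.0
    x = list(range(100_000))
    y = [1.0] * len(x)

    with Pool() as pool:
        # One logical "thread" of work per element, just as a GPU kernel launch
        # assigns one thread to each array index.
        result = pool.map(saxpy_element, [(a, xi, yi) for xi, yi in zip(x, y)])

    print(result[:5])  # [1.0, 3.0, 5.0, 7.0, 9.0]
```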
Enjoy these lectures and see you next time!
This article is brought to you by Assignment Expert, a homework help service that presents free science lectures on YouTube: https://www.youtube.com/user/AssignmentExpert.
Related Posts
- Free Computer Science Video Lecture Courses (Courses include web application development, lisp/scheme programming, data structures, algorithms, machine structures, programming languages, principles of software engineering, object oriented programming in java, systems, computer system engineering, computer architecture, operating systems, database management systems, performance analysis, cryptography, artificial intelligence)
- Programming Lectures and Tutorials (Lectures include topics such as software engineering, javascript programming, overview of firefox's firebug extension, document object model, python programming, design patterns in python, java programming, delphi programming, vim editor and sqlite database design)
- Programming, Networking Free Video Lectures and Other Interesting Ones (Includes lectures on Python programming language, Common Lisp, Debugging, HTML and Web, BGP networking, Building scalable systems, and as a bonus lecture History of Google)
- More Mathematics and Theoretical Computer Science Video Lectures (Includes algebra, elementary statistics, applied probability, finite mathematics, trigonometry with calculus, mathematical computation, pre-calculus, analytic geometry, first year calculus, business calculus, mathematical writing (by Knuth), computer science problem seminar (by Knuth), dynamic systems and chaos, computer musings (by Knuth) and other Donald E. Knuth lectures)
- Pure Computer Science (Includes basics of computation theory, intro to computer science, data structures, compiler optimization, intro to computers and internet, intro to clojure, and some videos from EECS colloquium at Case Western Reserve University.)
- Programming video lectures (Includes Steven Skiena's programming challenges with solutions. IOI 2009 problems and solutions. Haskell programming. Scheme. SICP. Scheme Macros. Machine Learning. Natural Language Processing. Programming Methodology. Programming Abstractions. Programming Paradigms. Evolution of Lisp. Programming Effective ML.)