Free Science and Video Lectures Online!
Free education online is possible.

Tuesday, July 31, 2007

Machine Learning and Artificial Intelligence Video Lectures

I found some great Machine Learning and Artificial Intelligence (AI) video lecture courses recently and I will share them with you this month.

Here are lectures from Machine Learning Summer School 2003, 2005 and 2006.

Machine Learning is a foundational discipline of the Information Sciences. It combines deep theory from areas as diverse as Statistics, Mathematics, Engineering, and Information Technology with many practical and relevant real life applications.


Some Mathematical Tools for Machine Learning

Lectures contain:

1. Lagrange multipliers: an indirect approach can be easier; Multiple Equality Constraints; Multiple Inequality Constraints; Two points on a d-sphere; The Largest Parallelogram; Resource allocation; A convex combination of numbers is maximized by choosing the largest; The Isoperimetric problem; For fixed mean and variance, which univariate distribution has maximum entropy? An exact solution for an SVM living on a simplex
2. Notes on some Basic Statistics: Probabilities can be Counter-Intuitive (Simpson's paradox; the Monty Hall puzzle); IID-ness: Measurement Error decreases as 1/sqrt(n); Correlation versus Independence; The Ubiquitous Gaussian: Product of Gaussians is a Gaussian; Convolution of two Gaussians is a Gaussian; Projection of a Gaussian is a Gaussian; Sum of Gaussian random variables is a Gaussian random variable; Uncorrelated Gaussian variables are also independent; Maximum Likelihood Estimates for mean and covariance (proving the required matrix identities); Aside: For the 1-dim Laplacian, maximum likelihood gives the median; Using cumulative distributions to derive densities
3. Principal Component Analysis and Generalizations: Ordering by Variance; Does Grouping Change Things? PCA Decorrelates the Samples (see the sketch after this list); PCA gives Reconstruction with Minimal Mean Squared Error; PCA preserves Mutual Information on Gaussian data; PCA directions lie in the span of the data; PCA: second order moments only; The Generalized Rayleigh Quotient; Non-orthogonal principal directions; OPCA; Fisher Linear Discriminant; Multiple Discriminant Analysis
4. Elements of Functional Analysis: High Dimensional Spaces; Is Winning Transitive?; Most of the Volume is Near the Surface: Cubes; Spheres in n-dimensions; Banach Spaces, Hilbert Spaces, Compactness; Norms; Useful Inequalities (Minkowski and Holder); Vector Norms; Matrix Norms; The Hamming Norm; L1, L2, L_infty norms - is L0 a norm?
Example: Using a Norm as a Constraint in Kernel Algorithms
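
To make the PCA item above concrete, here is a minimal NumPy sketch (my own illustration, not code from the lectures) showing that projecting centered data onto the principal directions decorrelates the samples:

    import numpy as np

    rng = np.random.default_rng(0)
    # Correlated 2-D Gaussian samples (rows are observations).
    X = rng.multivariate_normal([0, 0], [[3.0, 1.5], [1.5, 1.0]], size=1000)

    Xc = X - X.mean(axis=0)               # center the data
    C = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance
    eigvals, eigvecs = np.linalg.eigh(C)  # principal directions (ascending order)
    Z = Xc @ eigvecs                      # project onto the principal directions

    # The covariance of the projected data is (numerically) diagonal:
    # PCA decorrelates the samples.
    print(np.round(np.cov(Z.T), 3))

The off-diagonal entries come out at essentially zero, which is exactly the "PCA Decorrelates the Samples" claim from item 3.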

These are lectures on some fundamental mathematics underlying many approaches and algorithms in machine learning. They are not about particular learning algorithms; they are about the basic concepts and tools upon which such algorithms are built. Students often feel intimidated by such material: there is a vast amount of "classical mathematics", and it can be hard to see the wood for the trees. The main topics of these lectures are Lagrange multipliers, functional analysis, some notes on matrix analysis, and convex optimization. I've concentrated on things that are often not dwelt on in typical CS coursework. Lots of examples are given; if it's green, it's a puzzle for the student to think about. These lectures are far from complete: perhaps the most significant omissions are probability theory, statistics for learning, information theory, and graph theory. I hope eventually to turn all this into a series of short tutorials.



Introduction to Learning Theory
  • Video Lectures (by Olivier Bousquet from Max Planck Institute for Biological Cybernetics)

Description:
The goal of this course is to introduce the key concepts of learning theory. It will not be restricted to Statistical Learning Theory but will mainly focus on statistical aspects. Instead of giving detailed proofs and precise statements, this course will aim at providing useful conceptual tools and ideas for practitioners as well as for theoretically-driven people.



An Introduction to Pattern Classification

Lectures contain:
Pattern classification algorithms, classification procedures, supervised learning, unsupervised learning, classifier and preprocessing algorithms, errors, classifier and computational complexity, dimensionality reduction, approaches for dimensionality reduction: feature reduction, feature selection; genetic programming, whitening transform, nearest neighbor editing algorithm, voronoi diagram, clusters, clustering techniques: agglomerative, partitional, minimum spanning tree, aghc, kohonen maps, k-means, fuzzy k-means, competitive learning; bayes rule, heuristic algorithms, tree based algorithms, optimization algorithms, neural networks, training methods, perceptrons, radial-basis function networks, support-vector machines, error estimation methods.
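
Several of the listed topics are easiest to grasp in code. Here is a minimal k-means sketch (my own toy illustration, not material from the lectures; it assumes no cluster goes empty):

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        """Plain k-means: alternate nearest-center assignment and mean update."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]   # random init
        for _ in range(iters):
            # Assign each point to its nearest center.
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Recompute each center as the mean of its assigned points
            # (assumes no cluster ends up empty).
            new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return centers, labels

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 3.0)])
    centers, labels = kmeans(X, k=2)
    print(np.round(centers, 2))   # roughly (0, 0) and (3, 3)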



Statistical Learning Theory
  • Video Lectures (by Olivier Bousquet from Max Planck Institute for Biological Cybernetics)

Description:

This course will give a detailed introduction to learning theory with a focus on the classification problem. It will be shown how to obtain (probabilistic) bounds on the generalization error for certain types of algorithms; a standard example of such a bound is written out after the list. The main themes will be:
  • probabilistic inequalities and concentration inequalities
  • union bounds and chaining
  • measuring the size of a function class
  • Vapnik Chervonenkis dimension
  • shattering dimension and Rademacher averages
  • classification with real-valued functions
Some knowledge of probability theory would be helpful but not required since the main tools will be introduced.
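
The promised example, in standard textbook form (my rendering, not the course slides): Hoeffding's concentration inequality combined with a union bound over a finite hypothesis class gives a generalization bound.

    % Hoeffding for a single fixed hypothesis h, with true risk R(h) and
    % empirical risk \hat{R}(h) on n i.i.d. samples:
    %   P( |\hat{R}(h) - R(h)| > \epsilon ) \le 2 e^{-2 n \epsilon^2}
    % A union bound over a finite class \mathcal{H} then gives,
    % with probability at least 1 - \delta,
    \forall h \in \mathcal{H}: \quad
      R(h) \;\le\; \hat{R}(h) + \sqrt{ \frac{\ln|\mathcal{H}| + \ln(2/\delta)}{2n} }

The VC dimension and Rademacher averages in the list above are what replace ln|H| when the class is infinite.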


Stochastic Learning

These lectures contain:

Early learning systems, recursive adaptive algorithms, risks, batch gradient descent, stochastic gradient descent, non-differentiable loss functions, rosenblatt's perceptrons, k-means, vector quantization, stochastic noise, multilayer networks
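
To contrast the two gradient methods in the list: batch gradient descent averages the gradient over all examples before each step, while stochastic gradient descent updates after every single example. A minimal sketch for least squares (made-up data, my own code, not from the lectures):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + rng.normal(scale=0.1, size=200)

    w, lr = np.zeros(3), 0.01
    for epoch in range(20):
        for i in rng.permutation(len(X)):      # visit examples in random order
            grad = (X[i] @ w - y[i]) * X[i]    # gradient of 0.5 * (x.w - y)^2
            w -= lr * grad                     # one noisy step per example
    print(np.round(w, 2))                      # close to w_true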



Bayesian Learning

Description of video course:
Bayes Rule provides a simple and powerful framework for machine learning. This tutorial will be organised as follows:
1. The lecturer will give motivation for the Bayesian framework from the point of view of rational coherent inference, and highlight the important role of the marginal likelihood in Bayesian Occam's Razor (the identities involved are written out after this list).
2. He will discuss the question of how one should choose a sensible prior. When Bayesian methods fail it is often because no thought has gone into choosing a reasonable prior.
3. Bayesian inference usually involves solving high dimensional integrals and sums. He will give an overview of numerical approximation techniques (e.g. Laplace, BIC, variational bounds, MCMC, EP...).
4. Mr. Ghahramani will talk about more recent work in non-parametric Bayesian inference such as Gaussian processes (i.e. Bayesian kernel "machines"), Dirichlet process mixtures, etc.
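
For reference, the identities behind points 1 and 3, in standard notation (my rendering): Bayes' rule for parameters theta under a model m, and the marginal likelihood obtained by integrating the parameters out.

    p(\theta \mid D, m) = \frac{ p(D \mid \theta, m)\, p(\theta \mid m) }{ p(D \mid m) },
    \qquad
    p(D \mid m) = \int p(D \mid \theta, m)\, p(\theta \mid m)\, d\theta

Comparing models by p(D|m) automatically penalizes models that spread their prior probability over too many possible datasets; that is the Occam's Razor effect of point 1, and the integral is the high dimensional quantity targeted by the approximations of point 3 (Laplace, BIC, variational bounds, MCMC, EP).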



Learning on Structured Data

Lectures description:
The discriminative learning framework is one of the most successful fields of machine learning. The methods of this paradigm, such as Boosting and Support Vector Machines, have significantly advanced the state of the art for classification by improving accuracy and by increasing the applicability of machine learning methods. One of the key benefits of these methods is their ability to learn efficiently in high dimensional feature spaces, either by the use of implicit data representations via kernels or by explicit feature induction. However, traditionally these methods do not exploit dependencies between class labels where more than one label is predicted. Many real-world classification problems involve sequential, temporal or structural dependencies between multiple labels. We will investigate recent research on generalizing discriminative methods to learning in structured domains. These techniques combine the efficiency of dynamic programming methods with the advantages of state-of-the-art learning methods.
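
The dynamic-programming half of that combination is easy to show concretely. Here is a minimal Viterbi decoder for a chain-structured model (an illustrative sketch with made-up scores, not code from the lectures):

    import numpy as np

    def viterbi(emit, trans):
        """Best label sequence under chain-structured scores.
        emit[t, s]: score of label s at position t;
        trans[s, s2]: score of label s followed by label s2."""
        T, S = emit.shape
        score = emit[0].copy()
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            cand = score[:, None] + trans + emit[t][None, :]  # all (prev, cur) pairs
            back[t] = cand.argmax(axis=0)
            score = cand.max(axis=0)
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    emit = np.log(np.array([[0.7, 0.3], [0.4, 0.6], [0.8, 0.2]]))
    trans = np.log(np.array([[0.9, 0.1], [0.5, 0.5]]))
    print(viterbi(emit, trans))

The same max-and-backpointer recursion is what makes exact prediction tractable for the sequence labeling problems these lectures describe.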



Information Retrieval and Text Mining

Description:
This four hour course will provide an overview of applications of machine learning and statistics to problems in information retrieval and text mining. More specifically, it will cover tasks like document categorization, concept-based information retrieval, question-answering, topic detection and document clustering, information extraction, and recommender systems. The emphasis is on showing how machine learning techniques can help to automatically organize content and to provide efficient access to information in textual form.
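
As one tiny taste of how statistics helps organize text, here is a bag-of-words tf-idf ranking sketch (toy documents and scoring of my own invention, not course material):

    import math

    docs = ["the cat sat on the mat",
            "machine learning for text mining",
            "text categorization with machine learning"]
    query = "machine learning text"

    def tfidf(term, doc_tokens, all_docs):
        tf = doc_tokens.count(term)                       # term frequency
        df = sum(term in d.split() for d in all_docs)     # document frequency
        return tf * math.log(len(all_docs) / df) if df else 0.0

    # Score each document by summing tf-idf over the query terms.
    for doc in docs:
        score = sum(tfidf(t, doc.split(), docs) for t in query.split())
        print(round(score, 2), doc)

Rare terms get high idf weight, so documents sharing the query's distinctive vocabulary rank first; this simple weighting underlies much of classic document retrieval and categorization.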



Foundations of Learning


An introduction to grammars and parsing
  • Video Lecture (by Mark Johnson from Brown Laboratory for Linguistic Information Processing)

Video Lecture contains:
computational linguistics, syntactic and semantic structure, context free grammars and their derivations, probabilistic cfg's (pcfgs), dynamic programming, expectation maximization, the em algorithm for pcfg's, top-down parsing, bottom-up parsing, left-corner parsing.
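
Bottom-up parsing is compact enough to sketch. Below is a minimal CYK recognizer for a context free grammar in Chomsky normal form (the toy grammar and names are mine, not the lecture's):

    from itertools import product

    # Toy CNF grammar: S -> NP VP, VP -> V NP; NP -> 'she' | 'him', V -> 'sees'
    binary = {("NP", "VP"): "S", ("V", "NP"): "VP"}
    lexicon = {"she": {"NP"}, "him": {"NP"}, "sees": {"V"}}

    def cyk(words):
        n = len(words)
        table = [[set() for _ in range(n + 1)] for _ in range(n)]
        for i, w in enumerate(words):
            table[i][i + 1] = set(lexicon[w])             # seed with word categories
        for span in range(2, n + 1):                      # widen spans bottom-up
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):                 # every split point
                    for a, b in product(table[i][k], table[k][j]):
                        if (a, b) in binary:
                            table[i][j].add(binary[(a, b)])
        return "S" in table[0][n]

    print(cyk("she sees him".split()))   # True

A PCFG version keeps the best probability per nonterminal in each cell instead of a set, which is exactly the dynamic programming idea the lecture pairs with EM training.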




Information Geometry

Description:
This tutorial will focus on entropy, exponential families, and information projection. We'll start by seeing the sense in which entropy is the only reasonable definition of randomness. We will then use entropy to motivate exponential families of distributions — which include the ubiquitous Gaussian, Poisson, and Binomial distributions, but also very general graphical models. The task of fitting such a distribution to data is a convex optimization problem with a geometric interpretation as an "information projection": the projection of a prior distribution onto a linear subspace (defined by the data) so as to minimize a particular information-theoretic distance measure. This projection operation, which is more familiar in other guises, is a core optimization task in machine learning and statistics. We'll study the geometry of this problem and discuss two popular iterative algorithms for it.
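
The entropy-to-exponential-family step can be stated in one line (a standard result, in my notation): maximizing entropy subject to fixed expectations of feature functions T(x) forces the density into the exponential family

    p_\lambda(x) = h(x) \exp\big( \lambda^\top T(x) - A(\lambda) \big),
    \qquad
    A(\lambda) = \log \int h(x) \exp\big( \lambda^\top T(x) \big)\, dx,

and fitting lambda to data by maximum likelihood is the convex information projection described above: it minimizes the KL divergence from the empirical distribution to the family.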



Tutorial on Machine Learning Reductions

Tutorial description:

There are several different classification problems commonly encountered in real world applications, such as 'importance weighted classification', 'cost sensitive classification', 'reinforcement learning', 'regression' and others. Many of these problems can be related to each other by simple machines (reductions) that transform problems of one type into problems of another type. Finding a reduction from your problem to a more common problem allows the reuse of simple learning algorithms to solve relatively complex problems. It also induces an organization on learning problems — problems that can be easily reduced to each other are 'nearby' and problems which cannot be so reduced are not close.
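
One classic example of such a reduction (my sketch of the well-known rejection-sampling idea, not the tutorial's code) turns importance-weighted classification into ordinary binary classification: keep each example with probability proportional to its weight, then train any unweighted learner on what survives.

    import numpy as np

    def reduce_weighted_to_binary(X, y, w, seed=0):
        """Rejection sampling: example i survives with probability w[i] / max(w).
        Minimizing plain error on the resampled set minimizes weighted error
        on the original problem, in expectation."""
        rng = np.random.default_rng(seed)
        keep = rng.random(len(w)) < (w / w.max())
        return X[keep], y[keep]

    X = np.arange(10, dtype=float).reshape(-1, 1)
    y = np.array([0, 1] * 5)
    w = np.linspace(0.1, 1.0, 10)          # made-up importance weights
    Xb, yb = reduce_weighted_to_binary(X, y, w)
    print(len(Xb), "of", len(X), "examples kept")
    # Any off-the-shelf binary classifier can now be trained on (Xb, yb).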


Online Learning and Game Theory

Description:
We consider online learning and its relationship to game theory. In an online decision-making problem, as in Singer's lecture, one typically makes a sequence of decisions and receives feedback immediately after making each decision. As far back as the 1950's, game theorists gave algorithms for these problems with strong regret guarantees. Without making statistical assumptions, these algorithms were guaranteed to perform nearly as well as the best single decision, where the best is chosen with the benefit of hindsight. We discuss applications of these algorithms to complex learning problems where one receives very little feedback. Examples include online routing, online portfolio selection, online advertising, and online data structures. We also discuss applications to learning Nash equilibria in zero-sum games and learning correlated equilibria in general two-player games.
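
The algorithms with those regret guarantees are in the multiplicative-weights family. A minimal sketch of Hedge (randomized weighted majority) over K "experts", with made-up losses (my illustration, not from the talk):

    import numpy as np

    rng = np.random.default_rng(0)
    K, T, eta = 5, 1000, 0.05
    weights = np.ones(K)
    losses = rng.random((T, K))              # made-up per-expert losses in [0, 1]

    total, best = 0.0, losses.sum(axis=0).min()
    for t in range(T):
        p = weights / weights.sum()          # play each expert with probability p
        total += p @ losses[t]               # expected loss this round
        weights *= np.exp(-eta * losses[t])  # punish experts that did badly

    print(round(total - best, 1), "regret vs. best single expert in hindsight")

With a suitable step size eta, the regret grows only like sqrt(T ln K) — that is the "nearly as well as the best single decision" guarantee mentioned above.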



On the Borders of Statistics and Computer Science

Description:

Machine learning in computer science and prediction and classification in statistics are essentially equivalent fields. I will try to illustrate the relation between theory and practice in this huge area by a few examples and results. In particular I will try to address an apparent puzzle: Worst case analyses, using empirical process theory, seem to suggest that even for moderate data dimension and reasonable sample sizes good prediction (supervised learning) should be very difficult. On the other hand, practice seems to indicate that even when the number of dimensions is very much higher than the number of observations, we can often do very well. We also discuss a new method of dimension estimation and some features of cross validation.


Decision Maps
  • Video Lecture (by Boaz Nadler from Toyota Technological Institute)


Measures of Statistical Dependence
  • Video Lectures (by Arthur Gretton from Max Planck Institute for Biological Cybernetics)

Description:
A number of important problems in signal processing depend on measures of statistical dependence. For instance, this dependence is minimised in the context of instantaneous ICA, in which linearly mixed signals are separated using their (assumed) pairwise independence from each other. A number of methods have been proposed to measure this dependence; however, they generally assume a particular parametric model for the densities generating the observations. Recent work suggests that kernel methods may be used to find estimates that adapt according to the signals they compare. These methods are currently being refined, both to yield greater accuracy, and to permit the use of the signal properties over time in improving signal separability. In addition, these methods can be applied in cases where the statistical dependence between observations must be maximised, which is true for certain classes of clustering algorithms.
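
One kernel dependence measure from this literature is HSIC, the Hilbert-Schmidt Independence Criterion. A minimal sketch of the standard biased estimator (my own illustration with Gaussian kernels and made-up data, not the lecturer's code):

    import numpy as np

    def gaussian_gram(x, sigma=1.0):
        d2 = (x[:, None] - x[None, :]) ** 2
        return np.exp(-d2 / (2 * sigma ** 2))

    def hsic(x, y):
        """Biased HSIC estimate: trace(K H L H) / n^2. Near zero suggests
        independence; larger values indicate dependence."""
        n = len(x)
        H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
        K, L = gaussian_gram(x), gaussian_gram(y)
        return np.trace(K @ H @ L @ H) / n ** 2

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    print(round(hsic(x, rng.normal(size=200)), 4))             # independent: small
    print(round(hsic(x, x + 0.1 * rng.normal(size=200)), 4))   # dependent: larger

Because the kernels are fixed rather than fitted, no parametric model of the generating densities is needed — which is exactly the appeal described above.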



Anti-Learning

Description:
The biological domain poses new challenges for statistical learning. In the talk we shall analyze and theoretically explain the counter-intuitive experimental finding that systematic reversal of classifier decisions can occur when switching from training to independent test data (the phenomenon of anti-learning). We demonstrate this on both natural and synthetic data and show that it is distinct from overfitting. The natural datasets discussed will include: prediction of response to chemo-radio-therapy for esophageal cancer from gene expression (measured by cDNA-microarrays); and prediction of genes affecting the aryl hydrocarbon receptor pathway in yeast. The main synthetic classification problem will be the approximation of samples drawn from high dimensional distributions, for which a theoretical explanation will be outlined.



Brain Computer Interfaces

Description:
Brain Computer Interfacing (BCI) aims at making use of brain signals for, e.g., the control of objects, spelling, gaming and so on. This tutorial will first provide a brief overview of current BCI research activities and details of recent developments in both invasive and non-invasive BCI systems. The second part -- taking a physiologist's point of view -- provides the necessary neurological/neurophysical background and discusses medical applications. The third part -- now from a machine learning and signal processing perspective -- shows the wealth, the complexity and the difficulties of the data available: a truly enormous challenge. A multivariate, heavily noise-contaminated data stream must be processed and classified in real time. The main emphasis of this part of the tutorial is placed on feature extraction/selection and preprocessing, which includes, among other techniques, CSP and also ICA methods. Finally, I report in more detail on the Berlin Brain Computer Interface (BBCI), which is based on EEG signals, and take the audience all the way from the measured signal through preprocessing, filtering and classification to the respective application. BCI communication is discussed in a clinical setting and for gaming.
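
To give one concrete taste of the feature-extraction part: Common Spatial Patterns (CSP) finds spatial filters whose output variance is large for one condition and small for the other, via a generalized eigenvalue problem on the two class covariances. A hedged sketch with random stand-in data (not real EEG, and not the tutorial's code):

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    # Stand-in "EEG": trials x channels x samples for two conditions,
    # with opposite per-channel variance profiles.
    A = rng.normal(size=(30, 8, 100)) * np.linspace(1, 2, 8)[None, :, None]
    B = rng.normal(size=(30, 8, 100)) * np.linspace(2, 1, 8)[None, :, None]

    def mean_cov(trials):
        return np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)

    Ca, Cb = mean_cov(A), mean_cov(B)
    # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w.
    vals, W = eigh(Ca, Ca + Cb)
    # Columns of W with the most extreme eigenvalues are the most
    # discriminative spatial filters; log-variance of the filtered
    # signals is the classic CSP feature.
    print(np.round(vals, 2))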



Introduction to Kernel Methods

Lecture contains:
Kernel-based algorithms, regression / classification, regularization, rkhs, representer theorem, rls algorithm, svms, feature map.
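
The "rls algorithm" entry is short enough to write down: by the representer theorem, regularized least squares in an RKHS reduces to a kernel expansion over the training points, fitted by one linear solve. A minimal sketch (Gaussian kernel and toy data of my choosing, not the lecture's code):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 6, 40))
    y = np.sin(x) + rng.normal(scale=0.2, size=40)

    def k(a, b, sigma=0.5):                   # Gaussian kernel
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

    lam = 0.1
    # Representer theorem: f(t) = sum_i c_i k(t, x_i); solve (K + lam I) c = y.
    c = np.linalg.solve(k(x, x) + lam * np.eye(len(x)), y)
    f = lambda t: k(t, x) @ c
    print(np.round(f(np.array([1.0, 3.0])), 2))   # roughly sin(1), sin(3)

The regularizer lam is the "using a norm as a constraint" idea from the mathematical-tools lectures above: it bounds the RKHS norm of f.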



Related Posts
  • Free Computer Science Video Lecture Courses
    (Courses include web application development, lisp/scheme programming, data structures, algorithms, machine structures, programming languages, principles of software engineering, object oriented programming in java, systems, computer system engineering, computer architecture, operating systems, database management systems, performance analysis, cryptography, artificial intelligence)

  • Programming Lectures and Tutorials
    (Lectures include topics such as software engineering, javascript programming, overview of firefox's firebug extension, document object model, python programming, design patterns in python, java programming, delphi programming, vim editor and sqlite database design)

  • Programming, Networking Free Video Lectures and Other Interesting Ones
    (Includes lectures on Python programming language, Common Lisp, Debugging, HTML and Web, BGP networking, Building scalable systems, and as a bonus lecture History of Google)

  • More Mathematics and Theoretical Computer Science Video Lectures
    (Includes algebra, elementary statistics, applied probability, finite mathematics, trigonometry with calculus, mathematical computation, pre-calculus, analytic geometry, first year calculus, business calculus, mathematical writing (by Knuth), computer science problem seminar (by Knuth), dynamic systems and chaos, computer musings (by Knuth) and other Donald E. Knuth lectures)

  • Computer Science Lectures
    (Courses include higher computing (intro to theory of computation), intro to computer science, data structures, compiler optimization, computers and internet, intro to clojure, the akamai story, cryptography, EECS colloquium videos at Case Western Reserve University)

  • Computer Science Courses
    (Includes introduction to computer science and computing systems, computational complexity and quantum computing, the c programming language, multicore programming, statistics and data mining, combinatorics, software testing, evolutionary computation, deep learning, data structures and algorithms and computational origami.)


Monday, April 30, 2007

Programming, Networking Free Video Lectures and Other Interesting Ones

Ok, I am back to real science video lectures and this month I present to you the best (in my opinion) computer science video lectures I could find on Google Tech Talk.


OSS Speaker Series: Python for Programmers
Python is a popular very-high-level programming language, with a clean and spare syntax, simple and regular semantics, a large standard library and a wealth of third-party extensions, libraries and tools. With several production-quality open-source implementations available, many excellent books, and growing acceptance in both industry and academia, Python can play some useful role within a huge variety of software development projects.

Moreover, Python is really easy to learn, particularly (though not exclusively) for programmers who are skilled at such languages as Java, C++ and C. This talk addresses software developers who are experienced in other languages but have had limited or no exposure to Python yet, and offers a rapid overview of the main characteristics of the language, plus a brief synopsis of its main implementations, its standard library, and third-party extension packages.
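
For readers coming from Java or C++, a few lines show the "clean and spare syntax" and the reach of the standard library (my toy example, not from the talk):

    from collections import Counter

    # Word frequencies in one expression: batteries included, no boilerplate.
    text = "the quick brown fox jumps over the lazy dog the end"
    print(Counter(text.split()).most_common(2))   # [('the', 3), ...]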



Python 3000
The next major version of Python, nicknamed Python 3000 (or more prosaically Python 3.0), has been anticipated for a long time. For years the author of Python has been collecting and exploring ideas that were too radical for Python 2.x, and it's time to stop dreaming and start coding. In this talk he will present the community process that will be used to complete the specification for Python 3000, as well as some of the major changes to the language and the remaining challenges.
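
Two of the widely discussed changes that eventually shipped in Python 3.0 fit in a few lines (well-known differences, shown here for context rather than taken from the talk):

    # Python 3.x:
    print("hello")   # print is now a function, not a statement
    print(1 / 2)     # 0.5 -- the / operator is true division in Python 3
    print(1 // 2)    # 0   -- floor division gets its own operator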


Practical Common Lisp
In the late 1920's linguists Edward Sapir and Benjamin Whorf hypothesized that the thoughts we can think are largely determined by the language we speak. In his essay "Beating the Averages" Paul Graham echoed this notion and invented a hypothetical language, Blub, to explain why it is so hard for programmers to appreciate programming language features that aren't present in their own favorite language. Does the Sapir-Whorf hypothesis hold for computer languages? Can you be a great software architect if you only speak Blub? Doesn't Turing equivalence imply that language choice is just another implementation detail? Yes, no, and no says Peter Seibel, language lawyer (admitted, at various times, to the Perl, Java, and Common Lisp bars) and author of the award-winning book Practical Common Lisp. In his talk, Peter will discuss how our choice of programming language influences and shapes our pattern languages and the architectures we can, or are likely to, invent. He will also discuss whether it's sufficient to merely broaden your horizons by learning different programming languages or whether you must actually use them.


Debugging Backwards in Time
What if a debugger could allow you to simply step BACKWARDS? Instead of all that hassle with guessing where to put breakpoints and the fear of typing "continue" one too many times... What if you could simply go backwards to see what went wrong?

This is the essence of the "Omniscient Debugger" -- it remembers everything that happened during the run of a program, and allows the programmer to "step backwards in time" to see what happened at any point of the program. All variable values, all objects, all method calls, all exceptions are recorded and the programmer can now look at anything that happened at any time.
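
The recording idea is easy to prototype in Python with sys.settrace (a toy sketch of the concept only; the actual Omniscient Debugger is a Java tool):

    import sys

    history = []   # (function, line, locals) snapshots

    def tracer(frame, event, arg):
        if event == "line":
            history.append((frame.f_code.co_name, frame.f_lineno,
                            dict(frame.f_locals)))
        return tracer

    def demo():
        x = 1
        x = x + 2
        return x

    sys.settrace(tracer)
    demo()
    sys.settrace(None)

    # "Step backwards": inspect any recorded moment, newest first.
    for name, line, loc in reversed(history):
        print(name, line, loc)

Recording everything is expensive, of course; the talk discusses how far this approach can be pushed in practice.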



Learning to Analyze Sequences
Sequential data - speech, text, genomic sequences - floods our storage servers. Much useful information in these data is carried by implicit structure: phonemes and prosody in speech, syntactic structure in text, genes and regulatory elements in genomic sequences. Over the last six years, several of us have been investigating structured linear models, a unified discriminative learning approach to sequence analysis problems. The lecturer will review the approach and illustrate it with applications to information extraction and gene finding. Then he will conclude with a summary of other applications and current research questions.
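
The core of a structured linear model fits in a few lines: score each candidate structure y with a linear function w . Phi(x, y), predict the argmax, and (in the perceptron variant of training) push the weights toward the gold structure. A toy sketch of my own, with an output space small enough to enumerate exhaustively:

    import numpy as np
    from itertools import product

    LABELS = ["NAME", "NUM", "OTHER"]

    def phi(x, y):
        """Toy joint feature map for a token sequence x and tag sequence y."""
        f = np.zeros(4)
        for xi, yi in zip(x, y):
            f[0] += xi.istitle() and yi == "NAME"   # capitalized token tagged NAME
            f[1] += xi.isdigit() and yi == "NUM"    # digit token tagged NUM
            f[2] += yi == "OTHER"
        f[3] = sum(a == b for a, b in zip(y, y[1:]))  # adjacent-label cohesion
        return f

    def predict(x, w):
        # Exhaustive argmax over structures; real systems use Viterbi instead.
        return max(product(LABELS, repeat=len(x)), key=lambda y: w @ phi(x, y))

    # A few structured-perceptron updates on one made-up example.
    w = np.zeros(4)
    x, gold = ["Alice", "42"], ("NAME", "NUM")
    for _ in range(5):
        guess = predict(x, w)
        if guess == gold:
            break
        w += phi(x, gold) - phi(x, guess)   # move weights toward the gold structure
    print(predict(x, w))                     # ('NAME', 'NUM')

Replacing the exhaustive argmax with dynamic programming over the chain is what makes this scale to real sequences.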


The XHTML video Element Tag
Video is becoming an increasingly important content type, and it's time to make video a first-class citizen on the web. The video element, along with JavaScript bindings, is proposed as a simple solution to encourage browsers to support video natively. Equally important is the choice of video format to be used with it. The lecturer will argue that the success of the web is based on using open standards, and that video should be no exception. He will then demo Opera showing Ogg Theora video clips natively.
A demonstration is available here:
http://people.opera.com/howcome/2007/video



Pipes: A Tool For Remixing the Web
Pipes is a service platform for processing well-structured data such as RSS, Atom and RDF feeds in a Web-based visual programming environment. Developers can use Pipes to combine data sources and user input into mashups without having to write code. These mashups, analogous in some ways to Unix pipes, can power badges on personal publishing sites, provide core functionality for Web applications, or serve as reusable components within the Pipes platform itself.

Here's what Tim O'Reilly says about pipes: "Yahoo!'s new Pipes service is a milestone in the history of the internet. It's a service that generalizes the idea of the mashup, providing a drag and drop editor that allows you to connect internet data sources, process them, and redirect the output."

You can play with Yahoo! Pipes here: Yahoo! Pipes



BGP (Border Gateway Protocol) at 18: Lessons in Protocol Design
BGP turns 18 this year. In this talk we examine the evolution of BGP over these 18 years, and look at the lessons we can learn from it.

Dr. Yakov Rekhter joined Juniper Networks in Dec 2000, where he is a Distinguished Engineer. Prior to joining Juniper, Yakov worked at Cisco Systems, where he was a Cisco Fellow. Prior to joining Cisco in 1995, he worked at IBM T.J. Watson Research Center.

Yakov Rekhter was one of the leading architects and a major software developer of the NSFNET Backbone Phase II. He co-designed the Border Gateway Protocol (BGP). He was also one of the lead designers of Tag Switching, BGP/MPLS based VPNs, and MPLS Traffic Engineering. Among his most recent activities is the work on Generalized Multi-Protocol Label Switching (GMPLS). His other contributions to contemporary Internet technology include: Classless Inter-Domain Routing (CIDR) and IP address allocation for private Internets.

He is the author or co-author of over 40 IETF RFCs, and numerous papers and articles on TCP/IP and the Internet. His recent books include: "MPLS: Technology and Applications" (Morgan Kaufmann, 2000) and "Switching in IP Networks: IP Switching, Tag Switching and Related Technologies" (Morgan Kaufmann, 1998).



A New Way to Look at Networking
Today's research community congratulates itself on the success of the internet and passionately argues whether circuits or datagrams are the One True Way. Meanwhile the list of unsolved problems grows.

Security, mobility, ubiquitous computing, wireless, autonomous sensors, content distribution, digital divide, third world infrastructure, etc., are all poorly served by what's available from either the research community or the marketplace. The lecturer will use various strained analogies and contrived examples to argue that network research is moribund because the only thing it knows how to do is fill in the details of a conversation between two applications. Today as in the 60s problems go unsolved due to our tunnel vision and not because of their intrinsic difficulty. And now, like then, simply changing our point of view may make many hard things easy.



Building Large Scale Systems at Google
Google deals with large amounts of data and millions of users. We'll take a behind-the-scenes look at some of the distributed systems and the computing platform that power Google's various products and make them scalable and reliable.


Authors@Google: Steve Wozniak
Apple co-founder Steve Wozniak discusses his new book iWoz as part of the Authors@Google speaker series. The book chronicles his experiences founding Apple and taking part in Silicon Valley's boom period.


Computer Versus Common Sense
It's way past 2001 now, so where the heck is HAL? For several decades now we've had high hopes for computers amplifying our mental abilities, not just giving us access to relevant stored information, but answering our complex, contextual questions.

Even applications like human-level unrestricted speech understanding continue to dangle close but just out of reach. What's been holding AI up? The short answer is that while computers make fine idiot savants, they lack common sense: the millions of pieces of general knowledge we all share, and fall back on as needed, to cope with the rough edges of the real world. The presenter will talk about how that situation is changing, finally, and what the timetable -- and the path -- realistically are on achieving Artificial Intelligence.



Dasher: Information Efficient Text Entry
Keyboards are inefficient for two reasons: they do not exploit the redundancy in normal language, and they waste the fine analogue capabilities of the user's motor system (fingers and eyes, for example). I describe a system intended to rectify both these inefficiencies. Dasher is a text-entry system in which a language model plays an integral role, and it's driven by continuous gestures. Users can achieve single-finger writing speeds of 35 words per minute and hands-free writing speeds of 25 words per minute. Dasher is free software; it works in all languages and on many platforms. Dasher is part of Debian, and there's even a little Java version for your web browser.
More on Dasher: http://www.dasher.org.uk/
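
The heart of Dasher is arithmetic-coding-style navigation: the display is carved up in proportion to the language model's probability for each next letter, and steering into a region commits that letter. A toy sketch of the interval arithmetic (the three-letter "model" and all names are made up by me):

    def carve(low, high, probs):
        """Split [low, high) proportionally to next-letter probabilities."""
        regions, start = {}, low
        for letter, p in sorted(probs.items()):
            end = start + (high - low) * p
            regions[letter] = (start, end)
            start = end
        return regions

    # A very dumb language model: 'e' is likeliest, the rest share what's left.
    probs = {"e": 0.4, "a": 0.3, "t": 0.3}
    for letter, (lo, hi) in carve(0.0, 1.0, probs).items():
        print(letter, round(lo, 2), round(hi, 2))
    # Steering the pointer into a letter's region selects it; Dasher then
    # re-carves that region using the model's probabilities for the next letter.

Likely letters get big, easy-to-hit regions, which is where the efficiency over a keyboard comes from.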



Winning The DARPA Grand Challenge
The DARPA Grand Challenge technical details, explained by Sebastian Thrun, whose team won, plus an introduction to the next phase, called "The Urban Grand Challenge".

More on the DARPA Grand Challenge
Wikipedia link to DARPA Grand Challenge


The Google Story
Here is what the author of the book has to say about the lecture/talk:

Not since Gutenberg invented the modern printing press more than 500 years ago, making books and scientific tomes affordable and widely available to the masses, has any new invention empowered individuals or transformed access to information as profoundly as Google. I first became aware of this while covering Google as a beat reporter for The Washington Post. What galvanized my deep interest in the company was its unconventional initial public offering in August 2004 when the firm thumbed its nose at Wall Street by doing the first and only multi-billion dollar IPO using computers, rather than Wall Street bankers, to allocate its hot shares of stock.

A few months later, in the fall of 2004, I decided to write the first biography of Google, tracing its short history from the time founders Sergey Brin and Larry Page met at Stanford in 1995 until the present. In my view, this is the hottest business, media and technology success of our time, with a stock market value of $110 billion, more than the combined value of Disney, The Washington Post, The New York Times, The Wall Street Journal, Amazon.com, Ford and General Motors.



"The Search" (Google Search)
John Battelle, co-founding editor of Wired and founder of The Industry Standard, visits the Google New York office to speak about his book The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture.


Related Posts
  • Free Computer Science Video Lecture Courses
    (Courses include web application development, lisp/scheme programming, data structures, algorithms, machine structures, programming languages, principles of software engineering, object oriented programming in java, systems, computer system engineering, computer architecture, operating systems, database management systems, performance analysis, cryptography, artificial intelligence)

  • Programming Lectures and Tutorials
    (Lectures include topics such as software engineering, javascript programming, overview of firefox's firebug extension, document object model, python programming, design patterns in python, java programming, delphi programming, vim editor and sqlite database design)

  • More Mathematics and Theoretical Computer Science Video Lectures
    (Includes algebra, elementary statistics, applied probability, finite mathematics, trigonometry with calculus, mathematical computation, pre-calculus, analytic geometry, first year calculus, business calculus, mathematical writing (by Knuth), computer science problem seminar (by Knuth), dynamic systems and chaos, computer musings (by Knuth) and other Donald E. Knuth lectures)

  • Computer Science Lectures
    (Courses include higher computing (intro to theory of computation), intro to computer science, data structures, compiler optimization, computers and internet, intro to clojure, the akamai story, cryptography, EECS colloquium videos at Case Western Reserve University)

  • Computer Science Courses
    (Includes introduction to computer science and computing systems, computational complexity and quantum computing, the c programming language, multicore programming, statistics and data mining, combinatorics, software testing, evolutionary computation, deep learning, data structures and algorithms and computational origami.)
