Sl.No | Chapter Name | MP4 Download |
---|---|---|
1 | 1.1 Paradigms of Machine Learning | Download |
2 | 1.2 Few more examples | Download |
3 | 1.3 Types of Learning | Download |
4 | 1.4 Types of supervised learning | Download |
5 | 1.5 Introduction to Regression | Download |
6 | 1.6 Linear regression | Download |
7 | 1.7 Geometrical Interpretation | Download |
8 | 1.8 Iterative solution: Gradient descent | Download |
9 | 1.9 Gradient Descent | Download |
10 | 1.10 Choosing Step size | Download |
11 | 1.11 Taylor Series | Download |
12 | 1.12 Stochastic Gradient Descent and basis functions | Download |
13 | 1.13 Regularization Techniques | Download |
14 | 1.14 Visual Guide to Orthogonal Projection | Download |
15 | 2.1 Binary Classification | Download |
16 | 2.2 K-Nearest Neighbour Classification | Download |
17 | 2.3 Distance metric and Cross-Validation | Download |
18 | 2.4 Computational efficiency of KNN | Download |
19 | 2.5 Introduction to Decision Trees | Download |
20 | 2.6 Level splitting | Download |
21 | 2.7 Measure of Impurity | Download |
22 | 2.8 Entropy and Information Gain | Download |
23 | 2.9 Generative vs Discriminative models | Download |
24 | 2.10 Naive Bayes classifier | Download |
25 | 2.11 Conditional Independence | Download |
26 | 2.12 Classifying the test point and summary | Download |
27 | 3.1 Discriminative models | Download |
28 | 3.2 Logistic Regression | Download |
29 | 3.3 Summary and big picture | Download |
30 | 3.4 Maximum likelihood estimation | Download |
31 | 3.5 Linear separability | Download |
32 | 3.6 Perceptron and its learning algorithm | Download |
33 | 3.7 Perceptron: A thing of the past | Download |
34 | 3.8 Perceptron: A thing of the past | Download |
35 | 3.9 Optimizing weights | Download |
36 | 3.10 Handling Outliers | Download |
37 | 3.11 Dual Formulation | Download |
38 | 3.12 Kernel formulation | Download |
39 | 4.1 Artificial Neural Networks | Download |
40 | 4.2 Unsupervised learning | Download |
41 | 4.3 K-means Clustering | Download |
42 | 4.4 Lloyd's Algorithm | Download |
43 | 4.5 Convergence and Initialization | Download |
44 | 4.6 Representation Learning | Download |
45 | 4.7 Orthogonal Projection | Download |
46 | 4.8 Covariance Matrix and Eigendirections | Download |
47 | 4.9 PCA and mean centering | Download |
48 | 4.10 Concluding remarks | Download |
Sl.No | Chapter Name | English Transcript |
---|---|---|
1 | 1.1 Paradigms of Machine Learning | PDF unavailable |
2 | 1.2 Few more examples | PDF unavailable |
3 | 1.3 Types of Learning | PDF unavailable |
4 | 1.4 Types of supervised learning | PDF unavailable |
5 | 1.5 Introduction to Regression | PDF unavailable |
6 | 1.6 Linear regression | PDF unavailable |
7 | 1.7 Geometrical Interpretation | PDF unavailable |
8 | 1.8 Iterative solution: Gradient descent | PDF unavailable |
9 | 1.9 Gradient Descent | PDF unavailable |
10 | 1.10 Choosing Step size | PDF unavailable |
11 | 1.11 Taylor Series | PDF unavailable |
12 | 1.12 Stochastic Gradient Descent and basis functions | PDF unavailable |
13 | 1.13 Regularization Techniques | PDF unavailable |
14 | 1.14 Visual Guide to Orthogonal Projection | PDF unavailable |
15 | 2.1 Binary Classification | PDF unavailable |
16 | 2.2 K-Nearest Neighbour Classification | PDF unavailable |
17 | 2.3 Distance metric and Cross-Validation | PDF unavailable |
18 | 2.4 Computational efficiency of KNN | PDF unavailable |
19 | 2.5 Introduction to Decision Trees | PDF unavailable |
20 | 2.6 Level splitting | PDF unavailable |
21 | 2.7 Measure of Impurity | PDF unavailable |
22 | 2.8 Entropy and Information Gain | PDF unavailable |
23 | 2.9 Generative vs Discriminative models | PDF unavailable |
24 | 2.10 Naive Bayes classifier | PDF unavailable |
25 | 2.11 Conditional Independence | PDF unavailable |
26 | 2.12 Classifying the test point and summary | PDF unavailable |
27 | 3.1 Discriminative models | PDF unavailable |
28 | 3.2 Logistic Regression | PDF unavailable |
29 | 3.3 Summary and big picture | PDF unavailable |
30 | 3.4 Maximum likelihood estimation | PDF unavailable |
31 | 3.5 Linear separability | PDF unavailable |
32 | 3.6 Perceptron and its learning algorithm | PDF unavailable |
33 | 3.7 Perceptron: A thing of the past | PDF unavailable |
34 | 3.8 Perceptron: A thing of the past | PDF unavailable |
35 | 3.9 Optimizing weights | PDF unavailable |
36 | 3.10 Handling Outliers | PDF unavailable |
37 | 3.11 Dual Formulation | PDF unavailable |
38 | 3.12 Kernel formulation | PDF unavailable |
39 | 4.1 Artificial Neural Networks | PDF unavailable |
40 | 4.2 Unsupervised learning | PDF unavailable |
41 | 4.3 K-means Clustering | PDF unavailable |
42 | 4.4 Lloyd's Algorithm | PDF unavailable |
43 | 4.5 Convergence and Initialization | PDF unavailable |
44 | 4.6 Representation Learning | PDF unavailable |
45 | 4.7 Orthogonal Projection | PDF unavailable |
46 | 4.8 Covariance Matrix and Eigendirections | PDF unavailable |
47 | 4.9 PCA and mean centering | PDF unavailable |
48 | 4.10 Concluding remarks | PDF unavailable |
Sl.No | Language | Book link |
---|---|---|
1 | English | Not Available |
2 | Bengali | Not Available |
3 | Gujarati | Not Available |
4 | Hindi | Not Available |
5 | Kannada | Not Available |
6 | Malayalam | Not Available |
7 | Marathi | Not Available |
8 | Tamil | Not Available |
9 | Telugu | Not Available |