Traditionals
Some traditional machine learning algorithms.
Survey Papers / Repos
Top 10 algorithms in data mining [ICDM'06]
josephmisiti/awesome-machine-learning
Resources
Coursera Machine Learning, by Andrew Ng
Tasks
Supervised
Linear Regression
y=ax+b \\ L(y,\hat{y}) = (y-\hat{y})^2
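As a quick illustration (not from the original notes), a and b can be fit in closed form by least squares; a minimal NumPy sketch on toy 1-D data:

```python
import numpy as np

# Toy 1-D data from y = 2x + 1 plus noise (illustrative values only)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + 0.1 * rng.standard_normal(100)

# Closed-form least squares: stack [x, 1] and solve for [a, b]
X = np.stack([x, np.ones_like(x)], axis=1)
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = a * x + b
print(a, b, np.mean((y - y_hat) ** 2))  # squared loss L(y, y_hat), averaged
```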
Logistic Regression
y=\frac{1}{1+e^{-(ax+b)}} \\ L(y,\hat{y}) = -\hat{y}\log y - (1 - \hat{y}) \log (1-y)
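A minimal sketch of the sigmoid and this cross-entropy loss, plus one gradient step; here `y_pred` plays the role of y (the model output) and `y_true` the role of ŷ in the formula above, and all names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(y_true, y_pred, eps=1e-12):
    # -y_true*log(y_pred) - (1 - y_true)*log(1 - y_pred), averaged over samples
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def gd_step(a, b, x, y_true, lr=0.1):
    # One gradient-descent step on (a, b); dL/dz = y_pred - y_true for this loss
    y_pred = sigmoid(a * x + b)
    grad = y_pred - y_true
    return a - lr * np.mean(grad * x), b - lr * np.mean(grad)
```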
Naive Bayes
P(A|B) = \frac{P(B|A)P(A)}{P(B)}
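Naive Bayes applies this rule per class, with the extra assumption that features are conditionally independent given the class. A minimal categorical version as a sketch (the function names and the Laplace-smoothing choice are illustrative):

```python
import numpy as np

def fit_nb(X, y, n_classes, n_values, alpha=1.0):
    # X holds integer-coded features; alpha is Laplace smoothing
    priors = np.array([(y == c).mean() for c in range(n_classes)])
    likelihoods = np.zeros((n_classes, X.shape[1], n_values))
    for c in range(n_classes):
        Xc = X[y == c]
        for j in range(X.shape[1]):
            counts = np.bincount(Xc[:, j], minlength=n_values) + alpha
            likelihoods[c, j] = counts / counts.sum()
    return priors, likelihoods

def predict_nb(x, priors, likelihoods):
    # Posterior is proportional to P(c) * prod_j P(x_j | c); use logs to avoid underflow
    log_post = np.log(priors) + np.array([
        np.log(likelihoods[c, np.arange(len(x)), x]).sum()
        for c in range(len(priors))])
    return int(np.argmax(log_post))
```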
Support Vector Machine (SVM)
Training process: Lagrangian -> dual problem -> SMO
\min \frac{1}{2} \|w\|^2 \\ \text{s.t.}~y^{(i)}(w^{T}x^{(i)}+b) \geq 1,\ i=1,\dots,m
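In practice the dual is usually solved by a library; scikit-learn's SVC uses an SMO-type solver (via libsvm) internally. A minimal usage sketch on toy separable data, where a large C approximates the hard-margin objective above:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two separable blobs (illustrative data)
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

clf = SVC(kernel="linear", C=1e3)  # large C ~ hard margin
clf.fit(X, y)

# w and b of the learned hyperplane w^T x + b, plus the support vectors
w, b = clf.coef_[0], clf.intercept_[0]
print(w, b, len(clf.support_vectors_))
```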
K Nearest Neighbor (kNN)
Expectation-Maximization (EM)
Linear Discriminant Analysis (LDA)
Decision Tree
Random Forest
Gradient Boosting Tree (GBDT)
Semi-supervised
Weakly-supervised
Unsupervised
Clustering
K-means
Mean-shift
DBSCAN
Principal Component Analysis (PCA)
Latent Dirichlet Allocation (LDA) for topic modeling
Others
Ensemble
K-Fold Cross Validation
Bagging
Boosting
Metrics
|               | True Samples                   | False Samples                  |
| ------------- | ------------------------------ | ------------------------------ |
| Predict True  | True Positive                  | False Positive [Type I Error]  |
| Predict False | False Negative [Type II Error] | True Negative                  |
Precision and Recall
\text{Precision} = \frac{\text{TP}}{\text{TP}+\text{FP}}
\text{Recall} = \frac{\text{TP}}{\text{TP}+\text{FN}}
F1 Score
\text{F1 score} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision}+\text{Recall}}
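All three metrics follow directly from the confusion-matrix counts above; a small sketch, assuming 0/1 NumPy label arrays:

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    # Confusion-matrix entries counted from 0/1 arrays
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1(np.array([1, 0, 1, 1]), np.array([1, 1, 0, 1])))
```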
Receiver Operating Characteristic (ROC)
\text{TPR} = \frac{\text{TP}}{\text{TP}+\text{FN}}
\text{FPR} = \frac{\text{FP}}{\text{FP}+\text{TN}}
Area Under ROC (AUC)
Confusion Matrix
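The ROC curve sweeps a classification threshold over continuous scores and plots (FPR, TPR) at each threshold; AUC is the area under that curve. A quick scikit-learn sketch with toy values:

```python
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]   # continuous model outputs

fpr, tpr, thresholds = roc_curve(y_true, scores)
print(roc_auc_score(y_true, scores))  # area under the (FPR, TPR) curve
```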
Reference
SVM: https://www.cnblogs.com/jerrylead/archive/2011/03/13/1982639.html