Traditionals

Notes on some traditional machine learning algorithms.


Survey Papers / Repos

  • Top 10 algorithms in data mining [ICDM'06]

  • josephmisiti/awesome-machine-learning

Resources

  • Coursera Machine Learning by Andrew Ng

Tasks

Supervised

  • Linear Regression (see the gradient-descent sketch after this list)

$$
\hat{y} = ax + b \\
L(y, \hat{y}) = (y - \hat{y})^2
$$

  • Logistic Regression

$$
\hat{y} = \frac{1}{1+e^{-(ax+b)}} \\
L(y, \hat{y}) = -y \log \hat{y} - (1 - y) \log (1 - \hat{y})
$$
  • Naive Bayes
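
$$
P(A|B) = \frac{P(B|A)P(A)}{P(B)}
$$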

  • Support Vector Machine (SVM)
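
$$
\min_{w,b} \frac{1}{2} ||w||^2 \\
\text{s.t.}~y^{(i)}(w^{T}x^{(i)}+b) \geq 1, \quad i=1,\dots,m
$$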

    • Training process: Lagrangian -> dual problem -> SMO (see the dual form below)
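
For reference, the standard hard-margin dual that the Lagrangian step produces (a textbook result, not spelled out in the original notes); SMO then optimizes the multipliers $\alpha_i$ two at a time:

$$
\max_{\alpha} \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{m} \alpha_i \alpha_j y^{(i)} y^{(j)} (x^{(i)})^{T} x^{(j)} \\
\text{s.t.}~\alpha_i \geq 0, \quad \sum_{i=1}^{m} \alpha_i y^{(i)} = 0
$$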

  • K Nearest Neighbor (kNN)

  • Expectation-Maximization (EM)

  • Linear Discriminant Analysis (LDA)

  • Decision Tree

  • Random Forest

  • Gradient Boosting Decision Tree (GBDT)
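
To make the two regression losses above concrete, here is a minimal NumPy sketch of batch gradient descent for both models. It is an illustrative addition, not part of the original notes; the learning rates and step counts are arbitrary choices.

```python
import numpy as np

def fit_linear(x, y, lr=0.01, steps=1000):
    """Batch gradient descent on the squared loss L = (y - y_hat)^2."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        y_hat = a * x + b
        grad = y_hat - y                      # dL/dy_hat (up to a factor of 2)
        a -= lr * np.mean(grad * x)           # chain rule: dy_hat/da = x
        b -= lr * np.mean(grad)               # dy_hat/db = 1
    return a, b

def fit_logistic(x, y, lr=0.1, steps=1000):
    """Batch gradient descent on the cross-entropy loss."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        y_hat = 1.0 / (1.0 + np.exp(-(a * x + b)))
        grad = y_hat - y                      # exact gradient w.r.t. the logit ax+b
        a -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)
    return a, b

x = np.linspace(-2, 2, 100)
print(fit_linear(x, 3 * x + 1))               # converges to roughly (3, 1)
print(fit_logistic(x, (x > 0).astype(float))) # steep positive slope
```

Note that both updates share the same form `grad = y_hat - y`; only the link function producing `y_hat` differs.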

Semi-supervised

Weakly-supervised

Unsupervised

  • Clustering

    • K-means (see the sketch after this list)

    • Mean-shift

    • DBSCAN

  • Principal Component Analysis (PCA)

  • Latent Dirichlet allocation (LDA) Topic Modeling
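
As a companion to the clustering bullets above, a minimal k-means (Lloyd's algorithm) sketch; the function and random initialization are illustrative assumptions, not from the original notes, and empty clusters are not handled.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # random initial centers
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centers = kmeans(X, k=2)
print(centers)   # roughly (0, 0) and (5, 5)
```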

Others

Ensemble

  • K-Fold Cross Validation (see the sketch after this list)

  • Bagging

  • Boosting
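
A minimal sketch of the K-fold split named above; the `evaluate(train_idx, test_idx)` callback is a hypothetical stand-in for whatever model training and scoring the reader plugs in.

```python
import numpy as np

def k_fold_scores(n_samples, k, evaluate, seed=0):
    """Shuffle indices, split into k folds, hold each fold out once, average scores."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(evaluate(train_idx, test_idx))
    return np.mean(scores)

# Toy usage: the "score" is just the fraction of even indices in the test fold.
print(k_fold_scores(100, 5, lambda tr, te: np.mean(te % 2 == 0)))
```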

Metrics

|               | True Samples                   | False Samples                 |
| ------------- | ------------------------------ | ----------------------------- |
| Predict True  | True Positive                  | False Positive [Type I Error] |
| Predict False | False Negative [Type II Error] | True Negative                 |

  • Precision and Recall
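
$$
\text{Precision} = \frac{\text{TP}}{\text{TP}+\text{FP}} \qquad \text{Recall} = \frac{\text{TP}}{\text{TP}+\text{FN}}
$$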

  • F1 Score
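
$$
\text{F1 score} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
$$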

  • Receiver Operating Characteristic (ROC)
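
The ROC curve plots TPR against FPR as the classification threshold varies:

$$
\text{TPR} = \frac{\text{TP}}{\text{TP}+\text{FN}} \qquad \text{FPR} = \frac{\text{FP}}{\text{FP}+\text{TN}}
$$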

  • Area Under ROC (AUC)

  • Confusion Matrix (see the table above and the sketch below)
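
A minimal sketch tying the table and formulas above together: count the four confusion-matrix cells from binary predictions, then derive precision, recall, F1, and FPR (the function and variable names are illustrative).

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Confusion-matrix counts and the metrics derived from them."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))   # Type I error
    fn = np.sum((y_pred == 0) & (y_true == 1))   # Type II error
    tn = np.sum((y_pred == 0) & (y_true == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # a.k.a. TPR
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)
    return dict(tp=tp, fp=fp, fn=fn, tn=tn,
                precision=precision, recall=recall, f1=f1, fpr=fpr)

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])
print(binary_metrics(y_true, y_pred))
# tp=2, fp=1, fn=1, tn=2 -> precision = recall = f1 = 2/3, fpr = 1/3
```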

Reference

  • SVM: https://www.cnblogs.com/jerrylead/archive/2011/03/13/1982639.html