[ML Study] Chapter 1. Model Training & Cost Function - 2

Main Reference:
- Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow - Aurélien Géron
- MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning

🚌 Model Training & Cost Function

[ML Study] Model Training & Cost Function - Part 1

  Covered in Part 1:

  • Least Squares
  • Pseudo-inverse (SVD)
  • Gradient Descent & Convexity
  • Newton's Method vs Gradient Descent
  • Regression
  • Regularization - Ridge, Lasso

  Covered in this post (Part 2):

  • Stochastic Gradient Descent vs Batch Gradient Descent
  • Classification
  • Logistic & Softmax Functions
  • Cross-Entropy


🚌 Lecture Notes

Stochastic Gradient Descent vs Batch Gradient Descent

(Slides: ML_Week1-23 to ML_Week1-24)
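
Since the slides themselves are not reproduced here, below is a minimal NumPy sketch contrasting the two update rules: batch gradient descent computes the MSE gradient over the full training set at every step, while stochastic gradient descent updates on one randomly drawn sample at a time with a decaying learning rate. The synthetic data and hyperparameters are illustrative assumptions, not values taken from the slides.

```python
import numpy as np

# Synthetic linear data y = 4 + 3x + noise (illustrative values, not from the slides)
rng = np.random.default_rng(42)
m = 100
X = 2 * rng.random((m, 1))
y = 4 + 3 * X + rng.normal(size=(m, 1))
X_b = np.c_[np.ones((m, 1)), X]                     # prepend a bias column

# Batch Gradient Descent: every step uses all m samples
theta_bgd = rng.normal(size=(2, 1))
eta = 0.1
for _ in range(1000):
    grad = 2 / m * X_b.T @ (X_b @ theta_bgd - y)    # gradient of MSE over the full set
    theta_bgd -= eta * grad

# Stochastic Gradient Descent: each step uses a single random sample
theta_sgd = rng.normal(size=(2, 1))
t0, t1 = 5, 50                                      # simple learning-rate schedule
for epoch in range(50):
    for i in range(m):
        idx = rng.integers(m)
        xi, yi = X_b[idx:idx + 1], y[idx:idx + 1]
        grad = 2 * xi.T @ (xi @ theta_sgd - yi)     # noisy single-sample gradient
        eta_t = t0 / (epoch * m + i + t1)           # decay eta so the iterates settle
        theta_sgd -= eta_t * grad

print(theta_bgd.ravel(), theta_sgd.ravel())         # both should land near [4, 3]
```

Each SGD step is far cheaper than a full batch step and its noise can help escape shallow local minima, but the iterates never settle exactly at the minimum unless the learning rate is decayed.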


Classification - Logistic & Cross-Entropy

(Slides: ML_Week1-25 to ML_Week1-30)
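
As a companion to the slides, here is a small sketch of binary logistic regression trained by gradient descent on the cross-entropy (log) loss. The toy data, learning rate, and iteration count are assumptions chosen only for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary data (illustrative only, not from the slides)
rng = np.random.default_rng(0)
m = 200
X = rng.normal(size=(m, 2))
true_w = np.array([2.0, -3.0])
y = (rng.random(m) < sigmoid(X @ true_w + 0.5)).astype(float)

X_b = np.c_[np.ones(m), X]              # prepend a bias column
theta = np.zeros(3)
eta, n_iters = 0.1, 2000

for _ in range(n_iters):
    p = sigmoid(X_b @ theta)            # predicted P(y = 1 | x)
    # Gradient of the mean cross-entropy (log loss)
    #   J(theta) = -1/m * sum[ y*log(p) + (1 - y)*log(1 - p) ]
    grad = X_b.T @ (p - y) / m
    theta -= eta * grad

log_loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
print(theta, log_loss)
```

Note that pairing the sigmoid with the cross-entropy loss gives a gradient of the same X^T(p - y) form as in linear regression with MSE.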

Classification - Softmax & Cross-Entropy

(Slides: ML_Week1-31 to ML_Week1-36)
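
Again as a sketch rather than the slide content: softmax regression generalizes the logistic case to K classes by turning logits into class probabilities with the softmax function and minimizing the categorical cross-entropy. The three-blob toy data and hyperparameters below are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # subtract row max for stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum(axis=1, keepdims=True)

# Three Gaussian blobs, one per class (illustrative toy data, not from the slides)
rng = np.random.default_rng(0)
m_per_class, n_classes = 100, 3
centers = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 4.0]])
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(m_per_class, 2)) for c in centers])
y = np.repeat(np.arange(n_classes), m_per_class)

m = n_classes * m_per_class
X_b = np.c_[np.ones(m), X]                           # prepend a bias column
Y_onehot = np.eye(n_classes)[y]                      # one-hot targets
Theta = np.zeros((X_b.shape[1], n_classes))
eta, n_iters = 0.1, 2000

for _ in range(n_iters):
    P = softmax(X_b @ Theta)                         # class probabilities, shape (m, K)
    # Gradient of the mean categorical cross-entropy -1/m * sum_k y_k * log(p_k)
    grad = X_b.T @ (P - Y_onehot) / m
    Theta -= eta * grad

loss = -np.mean(np.sum(Y_onehot * np.log(P + 1e-12), axis=1))
accuracy = np.mean(P.argmax(axis=1) == y)
print(loss, accuracy)
```

With K = 2 the softmax/cross-entropy pair reduces to the logistic/log-loss case above, and the gradient again takes the X^T(P - Y) form.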


