[ML Study] Chapter 1. Model Training & Cost Function - 1


Main Reference:
- Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow - Aurelien Geron
- 혁펜하임의 딥러닝 (deep learning lecture series by 혁펜하임)

🚌 Model Training & Cost Function

  • Least Squares
  • Pseudo-inverse (SVD)
  • Gradient Descent & Convexity
  • Newton's Method vs Gradient Descent
  • Regression
  • Regularization - Ridge, Lasso

Continued in [ML Study] Model Training & Cost Function - 2

  • Stochastic Gradient Descent vs Batch Gradient Descent
  • Classification
  • Logistic & Softmax Function
  • Cross-Entropy


🚌 Lecture Note

Least Squares & Pseudo-inverse

See the Linear Algebra notes for background.


(Lecture slides ML_Week1-01 ~ ML_Week1-06)
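As a rough companion to the slides, here is a minimal NumPy sketch of the two closed-form routes to the linear-regression weights: the normal equation and the SVD-based pseudo-inverse. The toy data (y ≈ 4 + 3x plus noise), the seed, and the variable names are illustrative, not taken from the slides.

```python
import numpy as np

# Toy data for y ≈ 4 + 3x + noise (illustrative values, not from the lecture)
rng = np.random.default_rng(42)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X[:, 0] + rng.normal(size=100)

X_b = np.c_[np.ones((100, 1)), X]   # prepend a bias column of 1s

# Normal equation: theta = (X^T X)^{-1} X^T y
theta_normal = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y

# Pseudo-inverse, computed internally via the SVD of X_b
theta_pinv = np.linalg.pinv(X_b) @ y

print(theta_normal)  # roughly [4, 3]
print(theta_pinv)    # same solution, but well-defined even when X^T X is singular
```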


Gradient Descent & Convexity

(Lecture slides ML_Week1-07 ~ ML_Week1-10)
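Below is a minimal batch gradient descent sketch on the same kind of toy linear model. Because the MSE cost is convex, the iterates converge to the single global minimum, i.e. the least-squares solution above; the learning rate and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X[:, 0] + rng.normal(size=100)
X_b = np.c_[np.ones((100, 1)), X]

eta = 0.1            # learning rate (illustrative)
n_iterations = 1000
m = len(X_b)

theta = rng.normal(size=2)   # random initialization
for _ in range(n_iterations):
    # Gradient of the MSE cost: (2/m) * X^T (X theta - y)
    gradients = (2 / m) * X_b.T @ (X_b @ theta - y)
    theta -= eta * gradients

print(theta)  # converges to the least-squares solution, roughly [4, 3]
```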


Newton's Method vs Gradient Descent

(Lecture slides ML_Week1-11 ~ ML_Week1-13)
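A small sketch contrasting the two update rules on a one-dimensional convex quadratic; the function and step size are illustrative. Newton's method divides the gradient by the second derivative, so on a quadratic it reaches the minimum in a single step, while gradient descent with a fixed learning rate only inches toward it.

```python
# f(x) = (x - 3)^2 + 1: a convex quadratic (illustrative example)
def f_prime(x):
    return 2.0 * (x - 3.0)

def f_double_prime(x):
    return 2.0

x_gd = x_newton = 10.0
eta = 0.1  # gradient descent step size (illustrative)

for step in range(5):
    x_gd -= eta * f_prime(x_gd)                                # fixed-size step along -gradient
    x_newton -= f_prime(x_newton) / f_double_prime(x_newton)   # step rescaled by the curvature
    print(step, round(x_gd, 4), round(x_newton, 4))
# Newton's method lands on the minimum x = 3 after one step on a quadratic;
# gradient descent only moves 20% of the remaining distance per iteration here.
```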


Regression

(Lecture slides ML_Week1-14 ~ ML_Week1-15)
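A brief scikit-learn sketch contrasting plain linear regression with polynomial regression on quadratic toy data; the degree-2 setting, the coefficients (0.5, 1, 2), and the seed are illustrative, not from the slides.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Quadratic toy data: y ≈ 0.5 x^2 + x + 2 + noise (illustrative)
rng = np.random.default_rng(1)
X = 6 * rng.random((100, 1)) - 3
y = 0.5 * X[:, 0] ** 2 + X[:, 0] + 2 + rng.normal(size=100)

# A straight line underfits this data
lin_reg = LinearRegression().fit(X, y)

# Polynomial regression: expand features to [x, x^2], then fit a linear model on them
poly_reg = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                         LinearRegression()).fit(X, y)

print(lin_reg.coef_, lin_reg.intercept_)
print(poly_reg.named_steps["linearregression"].coef_)  # roughly [1.0, 0.5]
```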


Regularization - Ridge, Lasso

(Lecture slides ML_Week1-16 ~ ML_Week1-22)
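A short scikit-learn sketch of the two penalized regressors; the alpha values and toy data are illustrative. Ridge adds an L2 penalty that shrinks the weights, while Lasso adds an L1 penalty that can push some weights exactly to zero.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Toy data: y ≈ 4 + 3x + noise (illustrative)
rng = np.random.default_rng(2)
X = 2 * rng.random((50, 1))
y = 4 + 3 * X[:, 0] + rng.normal(size=50)

# Ridge: adds an L2 penalty alpha * ||w||^2, shrinking weights toward zero
ridge = Ridge(alpha=1.0).fit(X, y)

# Lasso: adds an L1 penalty alpha * ||w||_1, which can zero out weights entirely
lasso = Lasso(alpha=0.1).fit(X, y)

print(ridge.coef_, ridge.intercept_)
print(lasso.coef_, lasso.intercept_)
```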


