[Deep learning study notes] Annotating yusugomori's LR code --- LogisticRegression.h

Continuing with yusugomori's code, this time the logistic regression part. In a DBN (Deep Belief Network), the lower layers are RBMs and the top layer is the LR. For background on regression, binary classification, and logistic regression, see the articles reposted earlier. The routine is the usual one: define the objective function (the softmax cross-entropy loss), take partial derivatives with respect to the parameters, and derive the weight update formulas.
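To make that routine concrete, here is the standard derivation in my own notation (the original post does not spell it out). With input vector \(x\), one-hot target \(y\), weights \(W\), and biases \(b\):

```latex
p_i = \frac{\exp(W_i \cdot x + b_i)}{\sum_k \exp(W_k \cdot x + b_k)}, \qquad
L = -\sum_i y_i \log p_i
```

Differentiating the loss through the softmax gives the well-known compact gradient,

```latex
\frac{\partial L}{\partial W_{ij}} = (p_i - y_i)\, x_j, \qquad
\frac{\partial L}{\partial b_i} = p_i - y_i ,
```

so one gradient-descent step with learning rate \(\eta\) is

```latex
W_{ij} \leftarrow W_{ij} + \eta\,(y_i - p_i)\, x_j, \qquad
b_i \leftarrow b_i + \eta\,(y_i - p_i).
```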
The annotated LogisticRegression.h is as follows:
class LogisticRegression {
public:
    int N;          // number of input samples
    int n_in;       // number of input nodes
    int n_out;      // number of output nodes
    double **W;     // weights connecting the input nodes and the output nodes
    double *b;      // biases of the output nodes

    // allocate memory and initialize the parameters
    LogisticRegression(
        int,        // N
        int,        // n_in
        int         // n_out
    );
    ~LogisticRegression();

public:
    // train the logistic regression model, updating the values of W and b
    void train(
        int*,       // the input from the input nodes in the training set
        int*,       // the target output of the output nodes in the training set
        double      // the learning rate
    );

    // calculate the numerically stable softmax of an input vector:
    // softmax(d)_i = exp(d_i - max) / sum_j( exp(d_j - max) )
    void softmax(
        double*     // the calculated softmax probability -- both input and output
    );

    // do prediction by computing the softmax probabilities from the input
    void predict(
        int*,       // the input from the input nodes in the test set
        double*     // the calculated softmax probabilities
    );
};
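The header alone does not show the implementation. As a sketch of what the matching .cpp presumably does (the free functions, names, and parameter layout here are my own, not taken from yusugomori's file): the max is subtracted before exponentiating so that exp() cannot overflow, and one training step applies the update W += lr * (y - p) * x derived from the softmax cross-entropy gradient.

```cpp
#include <cmath>

// Numerically stable in-place softmax over an n-element vector.
// Subtracting the max before exponentiating avoids overflow, matching
// the formula in the comment on LogisticRegression::softmax.
void softmax(double *x, int n) {
    double max = x[0];
    for (int i = 1; i < n; i++) if (x[i] > max) max = x[i];
    double sum = 0.0;
    for (int i = 0; i < n; i++) { x[i] = std::exp(x[i] - max); sum += x[i]; }
    for (int i = 0; i < n; i++) x[i] /= sum;
}

// One gradient step of softmax regression on a single sample (x, y):
// compute p = softmax(W x + b), then W += lr * (y - p) * x^T and
// b += lr * (y - p).  W is n_out x n_in; x and y are integer vectors
// as in the header; p is caller-provided scratch of length n_out.
void train_step(double **W, double *b, const int *x, const int *y,
                int n_in, int n_out, double lr, double *p) {
    for (int i = 0; i < n_out; i++) {
        p[i] = b[i];
        for (int j = 0; j < n_in; j++) p[i] += W[i][j] * x[j];
    }
    softmax(p, n_out);
    for (int i = 0; i < n_out; i++) {
        double d = y[i] - p[i];  // negative gradient of the cross-entropy loss
        for (int j = 0; j < n_in; j++) W[i][j] += lr * d * x[j];
        b[i] += lr * d;
    }
}
```

Note that the member function in the header normalizes in place and takes no length argument (it uses n_out internally); the free-function form above just makes the sketch self-contained.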
One aside: the earlier RBM annotation was written at home in VS2008; this one was done in C-Free 5.0, which is lightweight and has thoughtful editor features. A thumbs-up for it!