PLS Series 004: Nonlinear PLS with Multiple Dependent Variables


Nonlinear PLS with Multiple Dependent Variables

  • 1 Nonlinear PLS with Multiple Dependent Variables [not a quasi-linear method]
    • 1.1 Derivation
    • 1.2 Simplified Algorithm
    • 1.3 Properties
  • Reference

1 Nonlinear PLS with Multiple Dependent Variables [not a quasi-linear method]

1.1 Derivation

Before running PLS, a preliminary analysis is performed. Its purpose is to judge whether the independent variables (and the dependent variables) exhibit multicollinearity, and whether the dependent variables are correlated with the independent variables, which in turn decides whether a PLS model is needed. Concretely: form the matrix $Z=(X,Y)$ and compute the simple correlation coefficients between the columns of $Z$. Then decide whether to adopt PLS; if so, proceed as follows:
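As a sketch of this preliminary check (with hypothetical synthetic data standing in for a real sample), one can form $Z=(X,Y)$ and inspect its column correlations:

```python
import numpy as np

# Hypothetical synthetic sample: n observations, p predictors X, q responses Y.
rng = np.random.default_rng(0)
n, p, q = 50, 4, 2
X = rng.normal(size=(n, p))
Y = X[:, :q] * 2.0 + rng.normal(scale=0.5, size=(n, q))  # Y correlated with X

# Z = (X, Y); simple correlation coefficients between all columns of Z.
Z = np.hstack([X, Y])
R = np.corrcoef(Z, rowvar=False)        # (p+q) x (p+q) symmetric matrix

# Large off-diagonal entries inside the X block suggest multicollinearity;
# large entries in the X-Y block suggest X and Y are related, favoring PLS.
xy_block = R[:p, p:]
```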
① Standardize the sample data $X$ and $Y$.
② Let $t_1$ be the first component of $X$, $t_1 = Xw_1$, where $w_1$ is the first axis of $X$ (a unit column vector, i.e. $\|w_1\| = 1$).
Let $u_1$ be the first component of $Y$, $u_1 = Yv_1$, where $v_1$ is the first axis of $Y$ (a unit column vector, i.e. $\|v_1\| = 1$).
$t_1$ and $u_1$ are column vectors with $n$ rows, the number of samples.
$w_1$ is a column vector with $p$ rows, the number of independent variables.
$v_1$ is a column vector with $q$ rows, the number of dependent variables.
$t_1$ and $u_1$ are required to satisfy the two conditions in (1), namely:
Maximal variation: $Var(t_1) \to \max$, $Var(u_1) \to \max$
Maximal correlation: $r(t_1, u_1) \to \max$
Combining the two, maximal covariance: $Cov(t_1, u_1) = r(t_1, u_1)\sqrt{Var(t_1)\,Var(u_1)} \to \max$
Since $\frac{1}{n}\langle Xw_1, Yv_1 \rangle = Cov(t_1, u_1)$ and $n$ is a constant, the problem becomes:
$$\max\; \langle Xw_1, Yv_1 \rangle = (Xw_1)^T Y v_1 = w_1^T X^T Y v_1 \quad \text{s.t.}\; \begin{cases} w_1^T w_1 = \|w_1\|^2 = 1 \\ v_1^T v_1 = \|v_1\|^2 = 1 \end{cases}$$
By the method of Lagrange multipliers:
$$f = w_1^T X^T Y v_1 - \lambda\left(w_1^T w_1 - 1\right) - \mu\left(v_1^T v_1 - 1\right)$$
Taking the partial derivatives of $f$ with respect to $w_1$, $v_1$, $\lambda$, $\mu$ and setting them to zero gives:
$$\begin{cases} \frac{\partial f}{\partial w_1} = X^T Y v_1 - 2\lambda w_1 = 0 \\ \frac{\partial f}{\partial v_1} = Y^T X w_1 - 2\mu v_1 = 0 \\ \frac{\partial f}{\partial \lambda} = -(w_1^T w_1 - 1) = 0 \\ \frac{\partial f}{\partial \mu} = -(v_1^T v_1 - 1) = 0 \end{cases}$$
From these equations we can deduce:
$$2\lambda = 2\mu = w_1^T X^T Y v_1 = (Xw_1)^T Y v_1 = \langle Xw_1, Yv_1 \rangle$$
Write $\theta_1 = 2\lambda = 2\mu = w_1^T X^T Y v_1$; then $\theta_1$ is exactly the objective function of the optimization problem, and for $\theta_1$ to attain its maximum we must have:
$$\begin{cases} X^T Y v_1 = \theta_1 w_1 \\ Y^T X w_1 = \theta_1 v_1 \end{cases}$$
Substituting one equation into the other:
$$X^T Y\left(\frac{1}{\theta_1} Y^T X w_1\right) = \theta_1 w_1 \;\Rightarrow\; X^T Y Y^T X w_1 = \theta_1^2 w_1$$
and similarly:
$$Y^T X X^T Y v_1 = \theta_1^2 v_1$$
Hence $w_1$ is an eigenvector of the matrix $X^T Y Y^T X$ with eigenvalue $\theta_1^2$, and $\theta_1$ is the objective value to be maximized. Therefore $w_1$ is the unit eigenvector (a column vector) of $X^T Y Y^T X$ associated with its largest eigenvalue $\theta_1^2$. Similarly, $v_1$ is the unit eigenvector (a column vector) of $Y^T X X^T Y$ associated with the same largest eigenvalue $\theta_1^2$.
Having computed $w_1$ and $v_1$, the first components follow:
$$\begin{cases} t_1 = Xw_1 \\ u_1 = Yv_1 \end{cases}$$
From (1) we can further derive:
$$\theta_1 = \langle t_1, u_1 \rangle = w_1^T X^T Y v_1$$
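A minimal numerical sketch of this step (assumed synthetic, standardized data): the leading singular triple of $X^TY$ yields $w_1$, $v_1$, and $\theta_1$, because the left and right singular vectors of $X^TY$ are precisely the leading eigenvectors of $X^TYY^TX$ and $Y^TXX^TY$:

```python
import numpy as np

# Assumed synthetic data; X and Y standardized column-wise as in step (1).
rng = np.random.default_rng(1)
n, p, q = 30, 5, 3
X = rng.normal(size=(n, p)); X = (X - X.mean(0)) / X.std(0)
Y = rng.normal(size=(n, q)); Y = (Y - Y.mean(0)) / Y.std(0)

M = X.T @ Y                       # p x q cross-product matrix X^T Y
U, s, Vt = np.linalg.svd(M)       # M M^T = X^T Y Y^T X,  M^T M = Y^T X X^T Y
w1, v1, theta1 = U[:, 0], Vt[0], s[0]   # unit axes; theta_1 = largest s.v.

t1, u1 = X @ w1, Y @ v1           # first components
# theta_1 = <t1, u1> = w1^T X^T Y v1, the maximized covariance objective
```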
Next, regress $X$ and $Y$ on $t_1$ (and $Y$ on $u_1$), here nonlinearly:
$$\begin{cases} X = g(t_1) + X_1 \\ Y = f(t_1) + Y_1 \\ Y = \psi(u_1) + Y_1^* \end{cases}$$
Here $X_1$ and $Y_1$ are the residual information matrices of $X$ and $Y$. (The regression coefficient vectors can presumably be derived from the properties of PLS regression.)
In PLS terminology, $w$ is called the Model Effect Weights, $v$ the Dependent Variable Weights, and $p$ the Model Effect Loadings. "Model effect" simply refers to $X$, the independent variables.
Score vector $t$, loading vector $p$, weight vector $w$.
[Note] Of the three nonlinear regression equations above, the regression of $Y$ on $u$ is never needed in later computations, so it will not be solved.
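To illustrate one possible choice of $f$ (an assumption made for illustration only; the text leaves the nonlinear form open), here is a quadratic-polynomial fit of each column of $Y$ on $t_1$, leaving the residual matrix $Y_1$:

```python
import numpy as np

# Hypothetical data: a known quadratic relation between Y and the score t1.
rng = np.random.default_rng(2)
n, q = 40, 2
t1 = rng.normal(size=n)
Y = np.column_stack([1.5 * t1**2 - t1,
                     0.5 * t1**2 + 2.0 * t1]) + rng.normal(scale=0.1, size=(n, q))

# Fit f column-by-column as a degree-2 polynomial in t1.
coefs = [np.polyfit(t1, Y[:, j], 2) for j in range(q)]
fitted = np.column_stack([np.polyval(c, t1) for c in coefs])  # f(t1)
Y1 = Y - fitted                                               # residual matrix Y_1
```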

③ Replace $X$ and $Y$ by the residual information matrices $X_1$ and $Y_1$, and compute the second components $t_2$, $u_2$ and the second axes $w_2$, $v_2$:
$$\begin{cases} t_2 = X_1 w_2 \\ u_2 = Y_1 v_2 \end{cases}$$
$$\theta_2 = \langle t_2, u_2 \rangle = w_2^T X_1^T Y_1 v_2$$
$w_2$ is the unit eigenvector (column vector) of the matrix $X_1^T Y_1 Y_1^T X_1$ corresponding to its largest eigenvalue $\theta_2^2$, and $v_2$ is the unit eigenvector (column vector) of $Y_1^T X_1 X_1^T Y_1$ for the same largest eigenvalue. The regression equations are then:
$$\begin{cases} X_1 = g(t_2) + X_2 \\ Y_1 = f(t_2) + Y_2 \end{cases}$$
④ Continuing to iterate on the remaining residual information matrices, and supposing the rank of $X$ is $m$ (so that $m$ components can be extracted):
$$\begin{cases} X = g(t_1) + g(t_2) + \cdots + g(t_m) + X_m \\ Y = f(t_1) + f(t_2) + \cdots + f(t_m) + Y_m \end{cases}$$
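The whole extraction loop of steps ② to ④ can be sketched as follows (a simplified sketch using linear deflation of both blocks; the function name and synthetic data are illustrative assumptions, not the book's code):

```python
import numpy as np

def pls_components(X, Y, m):
    """Extract m PLS components by repeated SVD of the deflated cross-product."""
    Xh, Yh = X.copy(), Y.copy()
    T, W, V = [], [], []
    for _ in range(m):
        U, s, Vt = np.linalg.svd(Xh.T @ Yh)
        w, v = U[:, 0], Vt[0]            # axes w_h, v_h
        t = Xh @ w                       # score t_h = X_{h-1} w_h
        p_load = Xh.T @ t / (t @ t)      # loading p_h
        c = Yh.T @ t / (t @ t)           # linear fit of Y_{h-1} on t_h
        Xh = Xh - np.outer(t, p_load)    # X_h = X_{h-1} - t_h p_h^T
        Yh = Yh - np.outer(t, c)         # Y_h = Y_{h-1} - t_h c_h^T
        T.append(t); W.append(w); V.append(v)
    return np.array(T).T, np.array(W).T, np.array(V).T

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 4)); Y = rng.normal(size=(20, 2))
T, W, V = pls_components(X, Y, 2)
```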
1.2 Simplified Algorithm

In the nonlinear PLS computation above, each extracted component requires a nonlinear regression of the predictor data on that component; e.g. after extracting the first component $t_1$ we compute $X = g(t_1) + X_1$. But $t_1$ is extracted from $X$ linearly, i.e. $t_1$ is a linear combination of the predictor columns, and conversely the predictor data can be regressed on $t_1$ linearly as well. The relation $X = g(t_1) + X_1$ is therefore in essence a linear regression, so in subsequent computations the regression of $X$ on the components can be simplified to a linear one, $X = t_1 p_1^T + X_1$. Carrying the derivation through again:
$$\begin{cases} X = g(t_1) + X_1 \;\leftrightarrow\; X = t_1 p_1^T + X_1 \\ Y = f(t_1) + Y_1 \end{cases}$$
where
$$p_1 = \frac{X^T t_1}{\|t_1\|^2}$$
Iterating as before gives:

$$\begin{cases} X = t_1 p_1^T + t_2 p_2^T + \cdots + t_m p_m^T + X_m \\ Y = f(t_1) + f(t_2) + \cdots + f(t_m) + Y_m \end{cases}$$

Since $w_h^* = \prod_{k=1}^{h-1}\left(E - w_k p_k^T\right) w_h$ and $t_h = X w_h^*$ (a property proved below), this is equivalent to:

$$Y = f(t_1) + f(t_2) + \cdots + f(t_m) + Y_m = f(Xw_1^*) + f(Xw_2^*) + \cdots + f(Xw_m^*) + Y_m$$
[Note]
On choosing the number of components: one does not normally extract all $A$ components ($A$ being the rank of the original predictor data matrix). In general, the extraction stops once the first $m$ components carry the great majority of the information in the original predictors, or, equivalently, once the residual information is small; the two criteria mean the same thing. In applied case studies, a common rule is to stop extracting components once more than 80% of the information in the original predictor data has been captured.
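A sketch of this stopping rule (assuming linear deflation of $X$, so that the explained share of $\|X\|^2$ is easy to track; the function name and data are illustrative):

```python
import numpy as np

def n_components_80(X, Y, ratio=0.8):
    """Extract components until they explain more than `ratio` of ||X||^2."""
    total = np.linalg.norm(X) ** 2
    Xh, explained, m = X.copy(), 0.0, 0
    while explained < ratio and m < min(X.shape):
        U, _, _ = np.linalg.svd(Xh.T @ Y)
        t = Xh @ U[:, 0]                     # next score t_h
        p_load = Xh.T @ t / (t @ t)          # loading p_h
        Xh = Xh - np.outer(t, p_load)        # deflate X
        explained = 1.0 - np.linalg.norm(Xh) ** 2 / total
        m += 1
    return m, explained

rng = np.random.default_rng(4)
X = rng.normal(size=(25, 5)); Y = rng.normal(size=(25, 2))
m, expl = n_components_80(X, Y)
```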
1.3 Properties

From
$$\begin{cases} X^T Y v_1 = \theta_1 w_1 \\ Y^T X w_1 = \theta_1 v_1 \end{cases}$$

$$\begin{cases} t_1 = Xw_1 \\ u_1 = Yv_1 \end{cases}$$
we can obtain:
$$\begin{cases} t_h = X_{h-1} w_h, \quad u_h = Y_{h-1} v_h \\ w_h = \frac{1}{\theta_h} X_{h-1}^T Y_{h-1} v_h = \frac{1}{\theta_h} X_{h-1}^T u_h \\ v_h = \frac{1}{\theta_h} Y_{h-1}^T X_{h-1} w_h = \frac{1}{\theta_h} Y_{h-1}^T t_h \end{cases}$$
① The axes $w_1, w_2, \cdots, w_m$ are mutually orthogonal.
② The components $t_1, t_2, \cdots, t_m$ are mutually orthogonal.
③ $t_h^T X_l = 0$ for $l \ge h$.
④ $p_h^T w_h = \left(\frac{t_h^T X_{h-1}}{\|t_h\|^2}\right) w_h = \frac{t_h^T (X_{h-1} w_h)}{\|t_h\|^2} = \frac{t_h^T t_h}{\|t_h\|^2} = 1$
⑤ Each axis $w_h$ is orthogonal to all later loading vectors: for $l > h$, $w_h^T p_l = w_h^T \frac{X_{l-1}^T t_l}{\|t_l\|^2} = 0$.
⑥ (Important) For every $h \ge 1$, the relation between $X_h$ and $X$ is:
$$X_h = X \prod_{k=1}^{h}\left(E - w_k p_k^T\right)$$
where $E$ is the identity matrix.
Proof (by induction):
For $h = 1$: $X_1 = X - t_1 p_1^T = X - Xw_1 p_1^T = X(E - w_1 p_1^T)$.
Assume the claim holds for $h = k$; we show it also holds for $h = k+1$:
$$X_{k+1} = X_k - t_{k+1} p_{k+1}^T = X_k - (X_k w_{k+1}) p_{k+1}^T = X_k\left(E - w_{k+1} p_{k+1}^T\right) = \left[X \prod_{h=1}^{k}\left(E - w_h p_h^T\right)\right]\left(E - w_{k+1} p_{k+1}^T\right)$$
which completes the proof.
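These properties can be checked numerically (a sketch under the linear-deflation simplification of section 1.2, with assumed synthetic data):

```python
import numpy as np

rng = np.random.default_rng(5)
X0 = rng.normal(size=(15, 4)); Y0 = rng.normal(size=(15, 2))
E = np.eye(4)

Xh = X0.copy()
prod = E.copy()                  # accumulates prod_k (E - w_k p_k^T)
W, T = [], []
for _ in range(3):
    U, _, _ = np.linalg.svd(Xh.T @ Y0)
    w = U[:, 0]
    t = Xh @ w                   # t_h = X_{h-1} w_h
    p_load = Xh.T @ t / (t @ t)  # p_h
    Xh = Xh - np.outer(t, p_load)            # X_h = X_{h-1}(E - w_h p_h^T)
    prod = prod @ (E - np.outer(w, p_load))  # running product for property 6
    W.append(w); T.append(t)

# property 6: X_h = X * prod_{k<=h}(E - w_k p_k^T)
# properties 1 and 2: axes and scores mutually orthogonal
```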
⑦ Each component $t_h$ is a linear combination of the original predictors $X$:
$$t_h = X_{h-1} w_h = X \prod_{k=1}^{h-1}\left(E - w_k p_k^T\right) w_h = X w_h^*$$
where
$$w_h^* = \prod_{k=1}^{h-1}\left(E - w_k p_k^T\right) w_h = \left(E - w_1 p_1^T\right)\left(E - w_2 p_2^T\right)\cdots\left(E - w_{h-1} p_{h-1}^T\right) w_h$$
and $E$ is the identity matrix.
[Computational note]
Initialize $chg = E$.
For $h = 1$: $w_1^* = chg\, w_1 = E\, w_1 = w_1$.
For $h = 2$: update $chg \leftarrow chg\left(E - w_1 p_1^T\right)$, then $w_2^* = chg\, w_2$.
For $h = 3$: update $chg \leftarrow chg\left(E - w_2 p_2^T\right)$, then $w_3^* = chg\, w_3$.
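A sketch of this incremental computation (the variable names follow the note; `W` and `P` hold the $w_h$ and $p_h$ as columns and are assumed given):

```python
import numpy as np

def w_stars(W, P):
    """Compute all w_h^* = prod_{k<h}(E - w_k p_k^T) w_h via the chg matrix."""
    p_dim, m = W.shape
    E = np.eye(p_dim)
    chg = E.copy()                      # initialize chg = E
    out = []
    for h in range(m):
        out.append(chg @ W[:, h])       # w_h^* = chg @ w_h
        chg = chg @ (E - np.outer(W[:, h], P[:, h]))  # update for next h
    return np.column_stack(out)

# cross-check against the direct product formula for h = 3
rng = np.random.default_rng(6)
W = rng.normal(size=(4, 3)); P = rng.normal(size=(4, 3))
E = np.eye(4)
direct = (E - np.outer(W[:, 0], P[:, 0])) @ (E - np.outer(W[:, 1], P[:, 1])) @ W[:, 2]
```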
(The full proof above is given in Wang Huiwen's book.)
Reference
Wang Huiwen (王惠文). 偏最小二乘方法原理及其应用 [Principles and Applications of the Partial Least-Squares Method].
Guo Jianxiao (郭建校). 改进的高维非线性PLS回归方法及应用研究 [Research on Improved High-Dimensional Nonlinear PLS Regression Methods and Their Applications] [D]. Tianjin University, 2010.
