tensorflow2.0 - ValueError: A target array with shape was passed for an output of shape... (problem and solution)
I hit an error while training a multi-class classification model in a Jupyter notebook.
Problem:
The data fed in is the fashion_mnist dataset; both the training and test images have been normalized, and the labels have been converted to one-hot encoding.
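For reference, a minimal data-preparation sketch that matches this setup. The variable names train_image and train_label_onehot are the ones used in the fit call below; the exact preprocessing in the original notebook is not shown, so this is an assumption:

import tensorflow as tf

# Load Fashion-MNIST (60000 training images of shape 28x28, integer labels 0-9)
(train_image, train_label), (test_image, test_label) = tf.keras.datasets.fashion_mnist.load_data()

# Normalize pixel values from [0, 255] to [0, 1]
train_image = train_image / 255.0
test_image = test_image / 255.0

# Convert integer labels, shape (60000,), to one-hot vectors, shape (60000, 10)
train_label_onehot = tf.keras.utils.to_categorical(train_label)
test_label_onehot = tf.keras.utils.to_categorical(test_label)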
The model is built with the following code:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape = (28, 28)))
model.add(tf.keras.layers.Dense(128, activation = 'relu'))
model.add(tf.keras.layers.Dense(10, activation = 'softmax'))
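At this point the last layer is Dense(10, softmax), which can be confirmed by printing the model summary (a quick check, not part of the original notebook):

# The last row of the summary should show an output shape of (None, 10)
model.summary()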
This ran without any error at first.
Later I wanted to add a layer to see whether the network's fitting capacity would improve,
so I crudely appended one more line below the code above:
model.add(tf.keras.layers.Dense(128, activation = 'relu'))
Then I compiled the model, specifying categorical cross-entropy as the loss:
model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate = 0.01),
              loss = 'categorical_crossentropy',
              metrics = ['acc'])
When I passed the data in for training, the error appeared:
model.fit(train_image, train_label_onehot, epochs = 5)
The error message:
...
ValueError: A target array with shape (60000, 10) was passed for an output of shape (None, 128)
while using as loss `categorical_crossentropy`.
This loss expects targets to have the same shape as the output.
Cause:
When the extra layer was appended, the number of output neurons was not adjusted: because the new Dense(128) line was added after the Dense(10, softmax) line, it became the model's last layer, so the model now outputs shape (None, 128).
After converting the labels to one-hot encoding, label.shape changes from (60000,) to (60000, 10),
so the corresponding output layer must have 10 neurons.
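A quick way to see the mismatch is to compare the shape of the targets with the model's output shape (a diagnostic sketch; train_label_onehot is the one-hot label array described above):

print(train_label_onehot.shape)   # (60000, 10) -> what the loss expects
print(model.output_shape)         # (None, 128) -> what the broken model produces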
Solution:
Change the 128 in the appended line to 10:
model.add(tf.keras.layers.Dense(10, activation = 'relu'))
Summary:
Strictly speaking, the network should not be patched this way: the softmax layer should stay as the last layer, so a new hidden layer belongs before the Dense(10, softmax) output (see the sketch after this summary).
When changing the network structure, pay attention to the number of output neurons.
From the second layer onward, the input size is inferred automatically and does not need to be specified,
but the output size of each layer still has to be set by you.
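Following that advice, a cleaner way to add capacity is to insert the extra hidden layer before the Dense(10, softmax) output and rebuild the model from scratch. This is a sketch of one reasonable layout, not necessarily what the original notebook settled on:

model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape = (28, 28)))
model.add(tf.keras.layers.Dense(128, activation = 'relu'))
# Extra hidden layer goes here, before the output layer
model.add(tf.keras.layers.Dense(128, activation = 'relu'))
# Output layer: 10 units to match the one-hot labels, softmax stays last
model.add(tf.keras.layers.Dense(10, activation = 'softmax'))

model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate = 0.01),
              loss = 'categorical_crossentropy',
              metrics = ['acc'])
model.fit(train_image, train_label_onehot, epochs = 5)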