In AdamOptimizer, how are gradients computed for different loss functions in order to perform gradient descent?
And in the snippet below, how is backpropagation actually carried out to update the parameters?
with tf.Session() as sess:
    sess.run(init)
    for i in range(train_steps):
        batch_data, batch_labels = train_data.next_batch(batch_size)
        loss_val, acc_val, _ = sess.run([loss, accuracy, train_op],
                                        feed_dict={x: batch_data, y: batch_labels})
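For context on what the question is asking: in TF 1.x, `train_op` is presumably built by something like `tf.train.AdamOptimizer(learning_rate).minimize(loss)` (an assumption, since its definition is not shown). `minimize` uses TensorFlow's automatic differentiation to build gradient ops for whatever `loss` is, so each `sess.run(train_op)` runs one backprop pass and applies the Adam update to every trainable variable. The Adam update rule itself, stripped of TensorFlow, can be sketched in plain Python:

```python
import math

def adam_step(theta, grad, m, v, t,
              lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter theta.

    Adam keeps exponential moving averages of the gradient (m)
    and of its elementwise square (v), corrects their startup
    bias, and scales the step by the running gradient magnitude.
    """
    m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad    # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy loss f(theta) = theta**2, whose gradient is 2*theta;
# in TensorFlow this gradient would come from autodiff instead.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):
    grad = 2.0 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
# theta has moved from 5.0 toward the minimum at 0
```

In the real session loop, the only difference is that `grad` is produced by backprop through the graph rather than written by hand, and the update runs once per `sess.run(train_op)` call.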