tensorflow apply_gradients
`tf.train.Optimizer.apply_gradients()` is a method in TensorFlow that applies gradients to variables. It takes a list of `(gradient, variable)` pairs, typically produced by `compute_gradients()`, and updates each variable according to the optimization algorithm implemented by the optimizer. Calling `compute_gradients()` and `apply_gradients()` separately is equivalent to calling `minimize()`, but it lets you inspect or modify the gradients in between.
Here's an example of how it can be used:
```python
import tensorflow as tf
# Create a simple linear model
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
W = tf.Variable(tf.zeros([1]))
b = tf.Variable(tf.zeros([1]))
y_pred = tf.add(tf.multiply(x, W), b)
# Define a loss function
loss = tf.reduce_mean(tf.square(y - y_pred))
# Define an optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
# Compute gradients and apply them
grads_and_vars = optimizer.compute_gradients(loss, [W, b])
train_op = optimizer.apply_gradients(grads_and_vars)
# Train the model
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        _, loss_val = sess.run([train_op, loss], feed_dict={x: [1, 2, 3], y: [2, 4, 6]})
        if i % 100 == 0:
            print('Step {}: Loss = {}'.format(i, loss_val))
```
In this example, we first define a simple linear model and a squared-error loss. We then create a `GradientDescentOptimizer` with a learning rate of 0.01, compute the gradients of the loss with respect to `W` and `b` using `compute_gradients()`, and apply them with `apply_gradients()`, which returns the training op. Finally, we train the model for 1000 steps and print the loss every 100 steps.
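The usual reason to split `compute_gradients()` and `apply_gradients()` instead of calling `optimizer.minimize(loss)` is to transform the gradients before they are applied. Below is a minimal sketch, assuming the same TF 1.x graph-mode setup as above, that clips each gradient by norm before passing it to `apply_gradients()`; the clipping threshold of 5.0 is an arbitrary illustrative value.
```python
import tensorflow as tf

# Same linear model as above (TF 1.x graph mode)
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
W = tf.Variable(tf.zeros([1]))
b = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(y - (x * W + b)))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

# compute_gradients() returns a list of (gradient, variable) pairs
grads_and_vars = optimizer.compute_gradients(loss, [W, b])

# Transform the gradients before applying them, e.g. clip each one by norm
# (the threshold 5.0 is just an example value)
clipped = [(tf.clip_by_norm(g, 5.0), v) for g, v in grads_and_vars]

# apply_gradients() builds the op that performs the variable updates
train_op = optimizer.apply_gradients(clipped)
```
Note that `tf.placeholder` and `tf.train.GradientDescentOptimizer` belong to the TensorFlow 1.x API. In TensorFlow 2.x the same pattern computes gradients inside a `tf.GradientTape` and passes `zip(grads, variables)` to a Keras optimizer's `apply_gradients()`.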