TensorFlow traceback: feed placeholder

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 5])  # inputs x_1 - x_5

W = tf.Variable(tf.zeros([5,1]))
b = tf.Variable(tf.zeros([1]))

y = tf.nn.softmax(tf.matmul(x, W) + b)  # computed value for y

#Training
y_tensor = tf.placeholder(tf.float32, [None, 1])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_tensor * tf.log(y), reduction_indices=[1]))  # cross-entropy here instead of the least-squares method

optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)


#Start
session = tf.Session()  # Google -> what is this?
init = tf.global_variables_initializer()
session.run(init)


for i in range(10):
  batch_xs = [[dataA[i], dataB[i], driving_time[i], dataC[i],
               dataD[i]]]
  batch_ys = [[realized_time[i]]]
  session.run(optimizer, feed_dict={x: batch_xs, y_tensor: batch_ys})

print(session.run(y_tensor))  # the InvalidArgumentError is raised here

Hi, I am working on my first neural net and wrote the Python code above for it. The problem is that the optimizer does not work, and printing the placeholder tensors shows that they are empty, and I wonder why, because they are fed in the for loop at the end.

Does anyone have advice on how to fix this problem?

The error message is 'tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,1]', and it is reported for the line where I declare the tensor x and for the last line.
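
From what I can tell, the last print runs the placeholder y_tensor itself without feeding it, so my guess (I am not sure this is the right approach) is that I should evaluate the model output y instead and feed x, roughly like this (test_xs is just a made-up name for one sample row taken from the same lists as in the training loop):

# Guess: evaluate the prediction y instead of the placeholder y_tensor;
# y only depends on x, W and b, so only x should need to be fed here
test_xs = [[dataA[0], dataB[0], driving_time[0], dataC[0], dataD[0]]]
print(session.run(y, feed_dict={x: test_xs}))

Is that the right way to inspect the output, or am I misunderstanding how placeholders are fed?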