When feeding symbolic tensors to a model, we expect the tensors to have a static batch size

I have a CNN autoencoder in Python (Keras) for classification. When I call the fit() function I get this error:

“When feeding symbolic tensors to a model, we expect the tensors to have a static batch size. Got tensor with shape: (None, 1280, 1)”

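For context, x_train is a 2-D NumPy array; as far as I can tell, the 1280 in the error message comes from its second dimension:

    print(x_train.shape)   # something like (num_samples, 1280)

Here is the model code:
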
    from keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D, BatchNormalization
    from keras.models import Model
    from keras.optimizers import Adam
    from keras.callbacks import TensorBoard

    # reshape the data to (samples, length, 1) for the Conv1D layers
    input_img1 = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
    #print(input_img1)
    input_img = Input(input_img1.shape[1:])

    x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))
    x_test = Input(x_test.shape[1:])

    '''
    |----------------| encoder |----------------|
    '''
    x = Conv1D(16,3, activation='relu', padding='same', name='conv1')(input_img)
    x = BatchNormalization()(x)
    x = MaxPooling1D(2, padding='same', name='max1')(x)
    x = Conv1D(32, 3, activation='relu', padding='same', name='conv2')(x)
    x = BatchNormalization()(x)
    x = MaxPooling1D(2, padding='same', name='max2')(x)
    x = Conv1D(64, 3, activation='relu', padding='same', name='conv3')(x)
    x = BatchNormalization()(x)
    encoded = MaxPooling1D(2, padding='same', name='max3')(x)



    print(encoded)
    '''
    |----------------| decoder |----------------|
    '''
    x = Conv1D(64, 3, activation='relu', padding='same', name='conv4')(encoded)
    x = BatchNormalization()(x)
    x = UpSampling1D(2, name='up1')(x)
    x = Conv1D(32, 3, activation='relu', padding='same', name='conv5')(x)
    x = BatchNormalization()(x)
    x = UpSampling1D(2, name='up2')(x)
    x = Conv1D(16, 3, activation='relu', padding='same', name='conv6')(x)
    x = BatchNormalization()(x)
    x = UpSampling1D(2, name='up3')(x)
    decoded = Conv1D(1, 3, activation='sigmoid', padding='same', name='conv7')(x)

    print(decoded)

    autoencoder = Model(input_img, decoded)
    autoencoder.compile(Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False), loss='binary_crossentropy', metrics=['accuracy'])

    print('3')
    #autoencoder.summary()

    '''
    |----------------| training model |----------------|
    '''

    #x_test = input_img
    print(input_img)
    history = autoencoder.fit(input_img1,
                              epochs=200,
                              batch_size=10,
                              shuffle=True,
                              validation_data=(x_test, x_test),
                              callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
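
I thought fit() might need the plain NumPy arrays rather than the Input tensors, with the input also used as the target since this is an autoencoder. This is only my guess, and x_test_arr below is a hypothetical name for the reshaped test array (the value x_test had before I overwrote it with Input()):

    # my guess, not verified: pass NumPy arrays as both input and target,
    # and use the reshaped test array (here called x_test_arr) for validation
    history = autoencoder.fit(input_img1, input_img1,
                              epochs=200,
                              batch_size=10,
                              shuffle=True,
                              validation_data=(x_test_arr, x_test_arr),
                              callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])

Is that the right way to call fit() here, or is the actual problem somewhere else?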