Hi, I am sitting for the Google TensorFlow certification exam, and there is a model-training exercise for NLP to determine the sentiment of Twitter messages. The code is by Laurence Moroney, who hosted it in the Coursera TensorFlow in Practice Specialization.

Since it requires deep learning and I am using a Surface Book (i5-6300U, no GPU), I decided to create a virtual machine on Google Cloud. The exam requires me to use the PyCharm IDE, so I train my model in PyCharm.

When I train the model on my PC, it is fine: training is slow, but there is progress. However, when I train on Google Cloud, with 16 vCPUs, 60 GB of RAM, and a Tesla T4 GPU, training gets stuck. I think it is because of the pretrained weight file (Stanford GloVe 100d) I am required to use in my embedding layer.

Below are the relevant code lines:

```
!wget --no-check-certificate \
    https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.6B.100d.txt \
    -O /tmp/glove.6B.100d.txt
```

```
embeddings_index = {}

with open('/tmp/glove.6B.100d.txt') as f:
    for line in f:
        values = line.split()
        word = values[0]
        coefs = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = coefs
```
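As a sanity check that the parsing loop itself works, here is the same logic run on two made-up GloVe-style lines (toy 3-dimensional vectors instead of the real 100d file):

```python
import numpy as np

# Made-up stand-ins for two lines of glove.6B.100d.txt (real vectors have 100 dims).
fake_glove_lines = [
    "hello 0.1 0.2 0.3",
    "world 0.4 0.5 0.6",
]

embeddings_index = {}
for line in fake_glove_lines:
    values = line.split()
    word = values[0]                                 # first token is the word
    coefs = np.asarray(values[1:], dtype='float32')  # remaining tokens are the vector
    embeddings_index[word] = coefs

print(embeddings_index["hello"])  # [0.1 0.2 0.3]
```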

```
embeddings_matrix = np.zeros((vocab_size + 1, embedding_dim))

for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embeddings_matrix[i] = embedding_vector
```
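To check my understanding of this step, here is a minimal self-contained sketch with a made-up two-word GloVe index and a three-word tokenizer vocabulary: words missing from GloVe keep an all-zero row in the matrix.

```python
import numpy as np

# Made-up stand-ins for the real GloVe index and tokenizer word_index.
embeddings_index = {
    "hello": np.asarray([0.1, 0.2, 0.3], dtype="float32"),
    "world": np.asarray([0.4, 0.5, 0.6], dtype="float32"),
}
word_index = {"hello": 1, "world": 2, "qwerty": 3}  # "qwerty" is not in GloVe
vocab_size = len(word_index)
embedding_dim = 3

# Row 0 is reserved for padding, hence vocab_size + 1 rows.
embeddings_matrix = np.zeros((vocab_size + 1, embedding_dim))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:  # out-of-vocabulary words keep an all-zero row
        embeddings_matrix[i] = embedding_vector

print(embeddings_matrix)
```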

```
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size + 1, embedding_dim, input_length=max_length,
                              weights=[embeddings_matrix], trainable=False),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Conv1D(64, 5, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=4),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

model.summary()

num_epochs = 50

history = model.fit(training_sequences, training_labels, epochs=num_epochs,
                    validation_data=(test_sequences, test_labels), verbose=1)

print("Training Complete")
```
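In case it matters, training_sequences and test_sequences are padded to max_length before fitting. Here is a small NumPy-only sketch (toy sequences, hypothetical helper name) of what I believe the padding step does, mimicking keras pad_sequences with padding='post' and truncating='post':

```python
import numpy as np

def pad_post(sequences, max_length):
    """Pad/truncate integer sequences to max_length with zeros at the end
    (mimics pad_sequences(..., padding='post', truncating='post'))."""
    out = np.zeros((len(sequences), max_length), dtype="int32")
    for row, seq in enumerate(sequences):
        trimmed = seq[:max_length]          # truncate long sequences at the end
        out[row, :len(trimmed)] = trimmed   # short sequences keep trailing zeros
    return out

toy_sequences = [[4, 7, 2], [9, 1, 5, 8, 3, 6]]
print(pad_post(toy_sequences, 5))  # every row becomes length 5
```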

The full code can be accessed at:

Sorry if I wasn't able to place the code lines in a suitable format; I'm pretty new to GitHub and machine learning in general.