I'm testing TensorFlow on my new M2 Mac. For experimentation purposes, I have a very simple script:
```python
import tensorflow as tf
from tensorflow import keras as k

# Print the list of available devices
print("The following devices are available:")
for device in tf.config.list_physical_devices():
    print(device)

# Load a demo dataset
(x_train, y_train), (x_test, y_test) = k.datasets.mnist.load_data()

# Normalize the data
x_train = x_train / 255.0
x_test = x_test / 255.0

# Define the model
model = k.models.Sequential([
    k.layers.Flatten(input_shape=(28, 28)),
    k.layers.Dense(128, activation=tf.nn.relu),
    k.layers.Dropout(0.2),
    k.layers.Dense(10, activation=tf.nn.softmax)
])
optimizer = k.optimizers.legacy.Adam(learning_rate=0.0001)

# Compile the model
model.compile(optimizer=optimizer,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=50,
          callbacks=[k.callbacks.TensorBoard(log_dir='./logs')])

# Evaluate the model
evaluation = model.evaluate(x_test, y_test,
                            callbacks=[k.callbacks.TensorBoard(log_dir='./logs')])

# Save the model
model.save('model.keras')
```
For some reason, installing tensorflow-metal changes the learning curve of my tutorial model compared to running plain tensorflow (in which case the GPU isn't used at all). The script was exactly the same in both cases, so I don't understand what could cause the difference. Can anyone enlighten me?
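For reference, here is a minimal sketch of what I plan to try next to make the two runs more comparable: seeding all RNGs and hiding the GPU so that training falls back to the CPU even with tensorflow-metal installed. The seed value and the device-hiding step are my own additions, not part of the original tutorial, and I haven't verified that this makes the curves match:

```python
import tensorflow as tf
from tensorflow import keras as k

# Make weight initialization, shuffling, and dropout masks reproducible.
# set_random_seed seeds Python's random, NumPy, and TensorFlow at once.
k.utils.set_random_seed(42)  # 42 is an arbitrary choice of seed

# Hide the GPU from the runtime so training runs on the CPU even with
# tensorflow-metal installed (for an apples-to-apples comparison).
tf.config.set_visible_devices([], 'GPU')

# Confirm which devices the runtime will actually use
print(tf.config.get_visible_devices())
```

My understanding is that even with identical seeds, the CPU and Metal GPU kernels may not be bit-for-bit deterministic with respect to each other, so I'd mainly like to know whether the difference I'm seeing is expected run-to-run/backend variance or a sign of a misconfiguration.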