There's a lot of hype behind the new Apple M1 chip. So far, it's proven to be superior to anything Intel has offered. But what does this mean for deep learning? That's what you'll find out today.

On the MacBook Pro, the chip consists of an 8-core CPU, an 8-core GPU, and a 16-core Neural Engine, among other things. Both the processor and the GPU are far superior to the previous-generation Intel configurations. I've already demonstrated how fast the M1 chip is for regular data science tasks, but what about deep learning?

Short answer - yes, there are some improvements in this department, but are Macs now better than, let's say, Google Colab? Keep in mind, Colab is an entirely free option.

Not all data science libraries are compatible with the new M1 chip yet. Getting TensorFlow (version 2.4) to work properly is easier said than done. You can refer to this link to download the .whl files for TensorFlow and its dependencies. This works only on macOS 11.0 and above, so keep that in mind.

The tests you'll see aren't "scientific" in any way, shape, or form. They only compare the average training time per epoch.

Let's start with the basic CPU and GPU benchmarks first. The comparison is made between the new MacBook Pro with the M1 chip and the base model (Intel) from 2019. Geekbench 5 was used for the tests, and you can see the results below:

Image 1 - Geekbench 5 results (Intel MBP vs. M1 MBP)

The M1 chip demolished the Intel chip in my 2019 Mac.

Performance test - MNIST

The MNIST dataset is something like a "hello world" of deep learning. It comes built-in with TensorFlow, making it that much easier to test. The following script trains a neural network classifier for ten epochs on the MNIST dataset. If you're on an M1 Mac, uncomment the mlcompute lines, as these will make things run a bit faster:

import tensorflow as tf
from tensorflow.keras import datasets, layers, models

# Uncomment these two lines on an M1 Mac:
# from tensorflow.python.compiler.mlcompute import mlcompute
# mlcompute.set_mlc_device(device_name='gpu')

(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# ... model definition and compile step ...

model.fit(
    train_images,
    train_labels,
    epochs=10,
    validation_data=(test_images, test_labels)
)

The above script was executed on an M1 MBP and Google Colab (both CPU and GPU). You can see the runtime comparisons below:

Image 2 - MNIST model average training times (image by author)

The results are somewhat disappointing for a new Mac. Colab outperformed it in both CPU and GPU runtimes. Keep in mind that results may vary, as there's no guarantee of the runtime environment in Colab.

Performance test - Fashion MNIST

This dataset is quite similar to the regular MNIST, but it contains pieces of clothing instead of handwritten digits. Because of that, you can use the identical neural network architecture for the training:

import tensorflow as tf
from tensorflow.keras import datasets, layers, models

(train_images, train_labels), (test_images, test_labels) = datasets.fashion_mnist.load_data()

# ... the rest is identical to the MNIST script ...

As you can see, the only thing that's changed here is the function used to load the dataset. The runtime results for the same environments are shown below:

Image 3 - Fashion MNIST model average training times (image by author)

It's expected, as this dataset is quite similar to MNIST. But what will happen if we introduce a more complex dataset and neural network architecture?

Performance test - CIFAR-10

CIFAR-10 also falls into the category of "hello world" deep learning datasets.
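A CIFAR-10 test in the same style could look like the following. Note that the small convolutional network below is my own illustrative assumption, not necessarily the exact model used for the benchmarks; CIFAR-10 images are 32x32 RGB, so a CNN is the natural step up from the dense MNIST classifier:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small CNN for CIFAR-10 (32x32 RGB images, 10 classes).
# Illustrative architecture only - deeper than the MNIST model,
# but still modest.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10),  # logits; softmax applied via the loss below
])

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)

# Loading and training would mirror the MNIST script:
# (train_images, train_labels), (test_images, test_labels) = \
#     tf.keras.datasets.cifar10.load_data()
# train_images, test_images = train_images / 255.0, test_images / 255.0
# model.fit(train_images, train_labels, epochs=10,
#           validation_data=(test_images, test_labels))
```

The training loop itself is unchanged; only the dataset and the model grow, which is exactly what should stress the M1's GPU more than the MNIST runs did.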
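All of these benchmarks compare average training time per epoch. A minimal, framework-agnostic sketch of that bookkeeping is below; with Keras, the same two hooks would live in a `tf.keras.callbacks.Callback` subclass (`on_epoch_begin` / `on_epoch_end`) passed to `model.fit(callbacks=[...])`. The `EpochTimer` name and the simulated loop are my own, not from the article:

```python
import time

class EpochTimer:
    """Records how long each epoch takes and reports the average."""

    def __init__(self):
        self.durations = []
        self._start = None

    def on_epoch_begin(self):
        # Mark the start of an epoch with a monotonic, high-resolution clock
        self._start = time.perf_counter()

    def on_epoch_end(self):
        # Store the elapsed wall-clock time for this epoch
        self.durations.append(time.perf_counter() - self._start)

    @property
    def average(self):
        return sum(self.durations) / len(self.durations)


# Simulate three "epochs" of work to show the bookkeeping
timer = EpochTimer()
for _ in range(3):
    timer.on_epoch_begin()
    time.sleep(0.01)  # stand-in for one epoch of training
    timer.on_epoch_end()

print(f"average epoch time: {timer.average:.4f}s")
```

Averaging over epochs (rather than timing the whole run) keeps one-off costs such as dataset download and graph compilation from skewing the comparison.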