# random
p
Hi Folks, How is the MacBook Pro M2 chip with 8-core CPU and 10-core GPU, 16GB RAM, 256GB SSD for Machine Learning? Any issues that anyone has had in the past with MacBooks?
f
FYI the 256 GB version of the M2 Air has a very slow SSD, and upgrading your config to a 512 GB SSD (the proper, faster SSD that Macs should ship with) would cost you around 1.7L. Not to mention it would still not have a fan, so any workload that pushes the NPU and GPU together would be subject to thermal throttling. At that point, you might as well directly buy the 14-inch MBP with M1 Pro, which comes with 16GB RAM and a 512 GB SSD as standard (along with a ton of other features compared to the Air) for 1.95L, and since those models are almost a year old, you may get decent discounts on them.
💯 2
m
do you want to train models on the M2 chip?
you would probably end up using colab and all
Nvidia GPUs are much better at this point
a
I don't think many libraries have adopted Apple's Metal API yet, so it might just end up CPU-heavy. Do double-check this. I have an M1 Pro and was trying to train a model for the fast.ai course, but it was very slow and didn't really work at all. I ended up just using Kaggle.
a
FYI new macs expected next month.
p
Thank you all for your responses. Appreciate it. Most of the time I would use the cloud, but for smaller/exploration projects I would prefer to work on the laptop. So the consensus here is that the MacBook Pro M2 with 256 GB is probably not that great for ML workloads. Any suggestions for a Dell or Lenovo laptop?
f
Lenovo Legion 5 Pro is quite good!
🙏 1
m
yup @acceptable-summer-68562 we added support for training models using our frameworks in the last release https://pytorch-lightning.readthedocs.io/en/latest/accelerators/mps_basic.html
Also, a lot of operations aren't supported on M1 yet, so those fall back to the CPU for computation. I lead TPU dev on the team; TPUs have the same issue, but not as much.
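As a rough sketch of what the linked docs describe (in plain PyTorch rather than Lightning, and guarding for older versions that lack the MPS backend), you can prefer the Apple-silicon GPU when it's available and fall back otherwise; the layer sizes here are just illustrative:

```python
import torch

# Prefer Apple-silicon GPU (MPS) when available, otherwise CUDA, otherwise CPU.
# getattr guard: torch < 1.12 has no torch.backends.mps at all.
if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

# Tiny example: a linear layer running on whichever device was picked.
model = torch.nn.Linear(8, 2).to(device)
x = torch.randn(4, 8, device=device)
out = model(x)
print(out.shape)  # torch.Size([4, 2])
```

For the unsupported-ops issue mentioned above, setting the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` before launch lets those ops run on the CPU instead of raising an error.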
a
@many-intern-37109 awesome! Btw, do you know what the biggest bottleneck in adopting the Metal API is? I haven't seen many libraries, or even games, using it. Is it the API itself or something else?
p
@many-intern-37109 @acceptable-summer-68562 Any idea if this issue is addressed on M2?
f
For the longest time, PyTorch and other similar libraries used instruction sets from Intel's MKL (Math Kernel Library) that were only available on Intel CPUs, and even though AMD had equally performant (but generic x86) code paths available, libraries didn't use them. That has been changing now that Ryzen is far more popular in the consumer space, but keep it in mind while using any non-Intel hardware, be it AMD or ARM.