rpastuszak 17 hours ago

Out of sheer curiosity, how much time did you spend on it on average? How much of this knowledge are you using now?

hzay 17 hours ago

Not the poster you responded to, but I learned quite a bit from Kaggle too.

I started from scratch, spent 2-4 hrs per day for 6 months, & won a silver in a Kaggle NLP competition. I use some of it now, but not all of it. More than that, I'm quite comfortable with models and understand the costs/benefits/implications, etc. I started with Andrew Ng's intro courses, did a bit of fastai, did Karpathy's Zero to Hero fully, all of Kaggle's courses, & a few other such things. Kagglers share excellent notebooks and I found them very helpful. Overall I highly recommend this route of learning.

wyclif 6 hours ago

Thanks; this is a very helpful and informative reply. Are you referring to DeepLearning.AI?

hzay 5 hours ago

I started with this 3-part course - https://www.coursera.org/specializations/machine-learning-in.... I think the same course is available at deeplearning.ai as well (I'm not sure), but I found Coursera's format of ~5 min videos on the phone app very helpful (with speed-up options). I was a new mother and didn't have continuous hours of time back then; I could watch these videos while brushing my teeth, etc., and that helped me not quit. After a point I was hooked, and as the baby grew up a bit I gradually acquired more time and energy for learning ML. :)

fastai is also amazing, but it's made of 1.5-hour videos and is more free-flowing. By the time I even figured out where we'd stopped last time, my time would sometimes be up, which was very discouraging. Later, once I had a little more time & some basic understanding from Andrew Ng, I was able to attempt fastai.

Foobar8568 16 hours ago

I was also playing on Kaggle a few years back; similar feedback.

solardev 16 hours ago

Thanks for the detailed reply!

swyx 14 hours ago

I mean yes, but also: how much does the kaggling/traditional-ML path actually prepare you for the age of closed model labs and LLM APIs?

I'm not even convinced kaggling helps you interview at an OpenAI/Anthropic (it's not a negative, sure, but I don't know if it'd be what they'd look for in a research scientist role).

hzay 13 hours ago

I learned ML only to satisfy my curiosity, so I don't know if it's useful for interviewing. :)

Now when I read a paper on something unrelated to AI (say, progesterone supplements) and they mention a random forest, I know what they're talking about. I understand regression, PCA, clustering, etc., and the usual way of building recommender systems, embeddings, and so on.

When I trained a few transformer models (not pretrained) on texts in my native language, I was shocked by how rapidly they learn connotations. I find transformer-based LLMs very useful, yes, but not unsettlingly AGI-like, as I did before learning about them. Image models like U-Nets and GANs were very cool too, and when your own code produces that magical result, you see the power of pretraining + specialization.

So yeah, I don't know what they do in interviews nowadays, but I found my education very fruitful. It was how I felt when I first picked up programming.
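(If anyone wants the gist of pretraining + specialization in code, here's a minimal sketch, not my actual code; the model name, toy texts, and labels are all placeholders. It takes a model pretrained on general English and fine-tunes it for a tiny two-class task:)

    # Minimal "pretraining + specialization" sketch; everything here is a placeholder.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Start from weights pretrained on general text.
    name = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    # Toy labeled examples standing in for a real task-specific dataset.
    texts = ["this was wonderful", "utterly disappointing"]
    labels = torch.tensor([1, 0])
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    # Specialization: a few gradient steps on the task data.
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    for _ in range(3):  # a real run would loop over a full dataset
        loss = model(**batch, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

The specialization step is tiny; almost all of the "understanding" comes free with the pretrained weights, which is exactly the magic I mean.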

Re the age of LLMs: it's precisely because LLMs will be ubiquitous that I wanted to know how they work. I felt uncomfortable treating them as black boxes I didn't understand technically. Think of people who don't know simple things about a web browser, like opening the dev tools and printing the auth token or something. It's not great to be in that place.