13 Comments

The podcast with Yann LeCun was amazing. Waiting for more self-supervised articles or interviews on the same topic.

I'm excited to be a part of this community in its developing days. I will try my best to contribute as much as possible.

vkmavani3@gmail.com


Hello VK! Hugh here, another editor at the Gradient. We are glad you liked the articles. Are there any other people you would like to see on the podcast on the topic of self-supervised learning?


Hello Hugh, thank you for considering my request.

There are two people: Dr. Ishan Misra from FAIR and Simon Kornblith from the SimCLR paper. The questions I have are, first, how to use self-supervised learning effectively for medical imaging, and second, why self-supervised models work.
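For context on the SimCLR objective mentioned above: its core is the NT-Xent contrastive loss, which pulls two augmented views of the same image together in embedding space while pushing apart all other images in the batch. A minimal NumPy sketch, written purely for illustration (this is not the paper's actual code):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, d) arrays of embeddings for two augmented views of
    the same N images. Returns the scalar contrastive loss.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit norm -> cosine sim
    sim = z @ z.T / tau                                # (2N, 2N) similarity logits
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive for row i is the other view of the same image:
    # row i+N for i < N, and row i-N for i >= N.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views of each image map to the same embedding, the loss is low; for unrelated pairs it is high, which is exactly the pressure that makes the learned representations useful downstream.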


I'd be interested in a podcast with Melanie Mitchell. I read her book (Artificial Intelligence: A Guide for Thinking Humans) and it was a very nice introduction to the field of AI.


Thanks for the suggestion! Professor Mitchell would be a great guest on the podcast, we'll try to make it happen!


I haven't heard any of the podcast episodes yet, but I love the insightful and thorough articles you write. Looking forward to more – thanks!


I am very glad to have found this podcast. Just finished the one with LeCun and looking forward to catching up to the latest episode.

Transformers and GANs are two of the most popular topics nowadays, so getting some guests who are working in those fields might be very interesting.

mohdig9@gmail.com


What are your thoughts on the next step (say, after deep learning) on the road to AGI?


Hello Joe! Hugh here, another editor at the Gradient. In my opinion, we have a long way to go before we achieve AGI. Nevertheless, I think one fruitful direction along that path is combining classical methods with deep learning. Two specific areas that I think are promising: search and control theory.

Search: Deep RL wasn't the only secret sauce behind successes like AlphaGo/Zero. Even with today's improved methods, no pure deep RL approach has achieved superhuman performance in Go without search. The MCTS methods in AlphaGo/Zero/MuZero are still somewhat limited in scope; if we can get search to work in the general setting, there's no telling what new fields AI could conquer.
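As a rough illustration of the selection/expansion/rollout/backpropagation loop that MCTS runs (a toy sketch on the game of Nim, not the AlphaGo implementation, which replaces the random rollout with a learned value network):

```python
import math
import random

# Toy game: Nim. Players alternate removing 1-3 stones from a pile;
# whoever takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None):
        self.stones = stones      # stones left (the state)
        self.parent = parent
        self.children = {}        # move -> child Node
        self.visits = 0
        self.wins = 0.0           # wins for the player who moved INTO this node

    def ucb1(self, c=1.4):
        # Upper-confidence bound: exploit (win rate) + explore (rarely tried)
        if self.visits == 0:
            return float("inf")
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(stones, rng):
    """Random playout; returns 1 if the player to move wins."""
    player = 0
    while stones > 0:
        stones -= rng.choice(legal_moves(stones))
        player ^= 1
    return 1 if player == 1 else 0  # the last mover (the winner) was player 0

def mcts(stones, iterations=3000, seed=1):
    rng = random.Random(seed)
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded
        while node.children and len(node.children) == len(legal_moves(node.stones)):
            node = max(node.children.values(), key=Node.ucb1)
        # 2. Expansion: add one untried move, if any remain
        untried = [m for m in legal_moves(node.stones) if m not in node.children]
        if untried:
            m = rng.choice(untried)
            node.children[m] = Node(node.stones - m, parent=node)
            node = node.children[m]
        # 3. Simulation: random playout from the new state
        result = rollout(node.stones, rng)  # 1 if the player to move at `node` wins
        # 4. Backpropagation: credit alternates between players up the tree
        while node is not None:
            node.visits += 1
            node.wins += 1 - result  # the mover into `node` wins iff to-move loses
            result = 1 - result
            node = node.parent
    # The most-visited root move is the recommendation
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

With 5 stones the only winning move is to take 1 (leaving the opponent a multiple of 4), and the search converges on it quickly; in AlphaGo-style systems the same loop is guided by policy/value networks instead of uniform-random choices.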

Control Theory: Deep learning in robotics doesn't *quite* work yet in the real world (in simulation it seems to work quite well). On the flip side, control theory (a la Boston Dynamics) produces very, very nice demos but isn't scalable to the general setting. I think if someone can get deep learning to work in robotics (possibly by combining it with classical methods), we could see another big revolution in what AI can do.


Hi everyone! Andrey here, one of the founders and a lead editor at The Gradient.

We've been wanting to experiment with more community interaction for a long time, but haven't been sure of the best way to go about it, so we're excited to try out this format. We look forward to hearing from you!

We've also considered having a subreddit, a Discord, or a community Slack, and would welcome your thoughts on what sort of forum you'd be interested in for interacting with fellow Gradient readers.


Hi Andrey,

Personally, I think a subreddit would be a great forum. Having it in a place people already visit regularly is often more beneficial than having it on a separate site like this. Unlike Discord or Slack, Reddit has the advantage that it is accessible without an account and that topics there are easily searchable.


Thanks for your feedback! Those are indeed good points, we'll be sure to discuss this option.


seppo.keronen@gmail.com

Inspired by Walid Saba’s NLP =/= NLU article, I wrote this

https://link.medium.com/4PTCFZIxNib

Working towards systems capable of grounded referential language…
