Feel free to comment with any thoughts on our latest articles, newsletters, or podcast interviews, OR suggestions for things to cover or people to interview, or any questions for The Gradient editors!
To commemorate the launch of Gradient community threads, we will be giving away an exclusive Gradient hoodie to one lucky respondent chosen by our editors! If you’d like to be considered, just include your email in your first reply to this thread; more substantive comments will be more likely to win! Comments that are clearly low effort / just an email will not be considered.
Here are links to the latest from The Gradient if you missed any of it:
The podcast with Yann LeCun was amazing. Looking forward to more articles or interviews on self-supervised learning.
I'm excited to be a part of this community in its early days. I'll try my best to contribute as much as possible.
vkmavani3@gmail.com
Hello VK! Hugh here, another editor at The Gradient. We're glad you liked the articles. Are there any other people you'd like to see on the podcast on the topic of self-supervised learning?
Hello Hugh, thank you for considering my request.
There are two people I'd suggest: Dr. Ishan Misra from FAIR and Simon Kornblith from the SimCLR paper. The questions I have are, first, how to use self-supervised learning effectively for medical imaging, and second, why self-supervised models work.
I'd be interested in a podcast with Melanie Mitchell. I read her book (Artificial Intelligence: A Guide for Thinking Humans) and it was a very nice introduction to the field of AI.
Thanks for the suggestion! Professor Mitchell would be a great guest on the podcast; we'll try to make it happen!
I haven't heard any of the podcast episodes yet, but I love the insightful and thorough articles you write. Looking forward to more – thanks!
I am very glad to have found this podcast. Just finished the one with LeCun and looking forward to getting to the latest episode.
Transformers and GANs are two of the most popular topics nowadays, so getting some guests who work in those fields could be very interesting.
mohdig9@gmail.com
What are your thoughts on the next step (say, after deep learning) on the road to AGI?
Hello Joe! Hugh here, another editor at The Gradient. In my opinion, we have a long way to go before we achieve AGI. Nevertheless, I think one fruitful direction along that path is combining classical methods with deep learning. Two specific areas I think are promising: search and control theory.
Search: Deep RL wasn't the only secret sauce behind successes like AlphaGo/Zero. Even with today's improved methods, no pure deep RL approach has achieved superhuman performance in Go without search. The MCTS methods in AlphaGo/Zero/MuZero are still somewhat limited in scope; if we can get search to work in the general setting, there's no telling what new fields AI could conquer.
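To make the search half concrete, here's a minimal Python sketch of PUCT-style MCTS, the selection rule used by the AlphaZero family. The `evaluate` and `apply_action` functions are hypothetical stand-ins for the policy/value network and the game dynamics; this illustrates the idea, not DeepMind's actual implementation.

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a): policy network's prior for this move
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # action -> Node

    def value(self):              # Q(s, a): mean value over visits
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT rule: maximize Q(s,a) + U(s,a), where U favors actions the
    # policy likes (high prior) but the search has visited rarely.
    total = sum(c.visits for c in node.children.values())
    def score(item):
        _, child = item
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.value() + u
    return max(node.children.items(), key=score)

def mcts(root_state, evaluate, apply_action, n_sims=100):
    # evaluate(state) -> (dict of action->prior, value) wraps the network;
    # apply_action(state, action) -> next state. Both are assumptions here.
    root = Node(prior=1.0)
    for _ in range(n_sims):
        node, state, path = root, root_state, [root]
        # 1. Selection: descend via PUCT until reaching a leaf.
        while node.children:
            action, node = select_child(node)
            state = apply_action(state, action)
            path.append(node)
        # 2. Expansion/evaluation: one network call replaces a random rollout.
        priors, value = evaluate(state)   # value from the leaf player's view
        for action, p in priors.items():
            node.children[action] = Node(prior=p)
        # 3. Backup: propagate the value toward the root, flipping the
        #    sign each ply (two-player zero-sum convention).
        for n in reversed(path):
            n.visits += 1
            n.value_sum += value
            value = -value
    # Play the most-visited move, as AlphaZero does at low temperature.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

The key point is in step 2: the learned network guides and prunes the search, while the search in turn produces better decisions than the raw network. Neither component alone reached superhuman Go play.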
Control Theory: Deep learning for robotics doesn't *quite* work yet in the real world (in simulation it seems to work fine). On the flip side, control theory (à la Boston Dynamics) produces very impressive demos but doesn't scale to the general setting. I think if someone can get deep learning to work in robotics (possibly by combining it with classical methods), we could see another big revolution in what AI can do.
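And for the control half, here's a toy sketch of one popular way to combine the two, often called a residual policy: a classical PD controller supplies a stable baseline, and a small learned policy adds a correction on top. Everything here (the linear "network", the gains, the 7-joint arm) is a hypothetical placeholder, not any particular robotics stack.

```python
import numpy as np

def pd_controller(q, q_dot, q_target, kp=50.0, kd=5.0):
    # Classical proportional-derivative control on joint positions:
    # push toward the target, damp the velocity.
    return kp * (q_target - q) - kd * q_dot

class ResidualPolicy:
    # Stand-in for a trained network; in practice this would be learned
    # with RL on top of the frozen classical baseline.
    def __init__(self, n_joints, seed=0):
        rng = np.random.default_rng(seed)
        self.w = 0.01 * rng.standard_normal((n_joints, 2 * n_joints))

    def __call__(self, q, q_dot):
        return self.w @ np.concatenate([q, q_dot])

def act(q, q_dot, q_target, residual):
    # Final command = stable classical baseline + learned correction.
    # The controller guarantees sane behavior; learning only has to
    # capture what the classical model misses (friction, contact, etc.).
    return pd_controller(q, q_dot, q_target) + residual(q, q_dot)

# Toy usage: a hypothetical 7-joint arm driven toward a target posture.
n = 7
policy = ResidualPolicy(n)
torque = act(np.zeros(n), np.zeros(n), np.ones(n), policy)
```

The appeal of this decomposition is exactly the complementarity above: the classical controller provides the scalable-to-reality part, and the learned residual provides the adaptability that hand-tuned control lacks.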
Hi everyone! Andrey here, one of the founders and a lead editor at The Gradient.
We've been wanting to experiment with more community interaction for a long time, but haven't been sure of the best way to go about it and are excited to try out this format. Excited to hear from you!
We've also considered a subreddit, a Discord, or a community Slack, and would welcome your thoughts on what sort of forum you'd be interested in for interacting with fellow Gradient readers.
Hi Andrey,
Personally, I think a subreddit would be a great forum. Having it somewhere people already visit regularly is often more beneficial than a separate site like this. Unlike Discord or Slack, Reddit has the advantage that it's accessible without an account and that topics are easily searchable.
Thanks for your feedback! Those are indeed good points; we'll be sure to discuss this option.
seppo.keronen@gmail.com
Inspired by Walid Saba's NLP =/= NLU article, I wrote this:
https://link.medium.com/4PTCFZIxNib
Working towards systems capable of grounded referential language…