
Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI | Lex Fridman Podcast #416

Lex Fridman | March 13, 2024

Comments

This post currently has 31 comments.

  1. @lexfridman

    March 13, 2024 at 11:57 am

    Here are the timestamps. Please check out our sponsors to support this podcast.

    Transcript: https://lexfridman.com/yann-lecun-3-transcript

    0:00 – Introduction & sponsor mentions:

    – HiddenLayer: https://hiddenlayer.com/lex

    – LMNT: https://drinkLMNT.com/lex to get free sample pack

    – Shopify: https://shopify.com/lex to get $1 per month trial

    – AG1: https://drinkag1.com/lex to get 1 month supply of fish oil

    2:18 – Limits of LLMs

    13:54 – Bilingualism and thinking

    17:46 – Video prediction

    25:07 – JEPA (Joint-Embedding Predictive Architecture)

    28:15 – JEPA vs LLMs

    37:31 – DINO and I-JEPA

    38:51 – V-JEPA

    44:22 – Hierarchical planning

    50:40 – Autoregressive LLMs

    1:06:06 – AI hallucination

    1:11:30 – Reasoning in AI

    1:29:02 – Reinforcement learning

    1:34:10 – Woke AI

    1:43:48 – Open source

    1:47:26 – AI and ideology

    1:49:58 – Marc Andreessen

    1:57:56 – Llama 3

    2:04:20 – AGI

    2:08:48 – AI doomers

    2:24:38 – Joscha Bach

    2:28:51 – Humanoid robots

    2:38:00 – Hope for the future

  2. @timwong6818

    March 13, 2024 at 11:57 am

    Though I doubt Yann's claim that the current approach cannot achieve AGI, because so far we just don't know exactly how our brains give rise to intelligence. Maybe our brain is just built on a biological transformer architecture and that's it, and those so-called particular functions of our brain, e.g. memories, are just what comes next, not before.

    But his goal regarding open-source AI sounds right to me. OpenAI is already a very bad example: Altman and his company claim they are taking care of safety, yet the GPT models are trained with an unclear methodology that they don't want to let others know about.

  3. @rayanayn3711

    March 13, 2024 at 11:57 am

    Yann: "newborns in the first few months only observe and have no influence on the world (no actions)", thats actually not true. babies do have an impact on the world, and they ceratinly do "act" through crying which the infant's brain continuously adjusts to get the best response from the surrounding world (care/feed/touch/.. etc.)

  4. @alfredovecchiosilvi6607

    March 13, 2024 at 11:57 am

    In 2005, we thought that citizen journalism, via cellular video technology, would free people. It worked for a while, until midway through the Arab Spring. Around 2010, special interests were able to regain control by mimicking public concern, thereby channeling opinions and dividing society. Yes, people are basically good. But this has been true for millennia and their goodness has not prevailed. Based on past technological "improvements", I don't think AI will be any different.

  5. @vak5461

    March 13, 2024 at 11:57 am

    Every time I hear this guy speak I can't help but feel he's very behind and disconnected. Some things he says simply are not true. Like the claim that models are not trained with video; some are trained with video and they do very well with it… Aaah!

  6. @WanderingSybil186

    March 13, 2024 at 11:57 am

    The issue of testing is fascinating. To what extent would any of the current systems pass a standard hazard perception test? https://www.youtube.com/watch?v=SdQRkmdhwJs

    …and a major step LLMs would have to take to get closer to 'general' intelligence is crossing the chasm between semantic meaning and pragmatic meaning: the difference between what a particular group of words literally means and what a particular group of words actually achieves in the social world. The first is essentially retention of definitions, grammar and a large enough corpus to be useful – an LLM. The second is language in a social context. I've said elsewhere this is where people are fooled. They see sarcasm or a joke and think an LLM is 'thinking', but sarcasm and jokes are highly structured forms of language. They are baseline pragmatics: I structure language in this way, you laugh. High-level pragmatics is a 4-year-old getting you to buy them an ice cream when you don't want to.

    I'm in total agreement that Turing would believe the test is now a bad test. The thing is, the pinnacle of a Turing test for pragmatic language use – language use that is socially competent, or real-world realistic (whichever you prefer) – the test that would show a level of general intelligence, would be the model being able to convince you to do something you do not want to do, in an unexpected way that you are unable to defend against. That's the pinnacle of pragmatic language use: essentially, rhetoric. It's probably worth taking some time to think about that one, because if the aim is language use at the level of general intelligence, the implicit goal is, well, that: something you cannot defend against.

    The "underlying reality" that is expressed in language often comes in the form of conceptual metaphors, but, critically are cultural representations of a world view – that's a whole Sapir-Whorf can of worms. A useful task LLMs could do is to map the differences and similarities between conceptual metaphors across cultures. A project that would much improve cross-cultural understanding and our overall understanding of humanity. You know, if you're looking for something to do lol

  7. @uldisseglins465

    March 13, 2024 at 11:57 am

    All respect to the man and his work. Maybe he has been digging into the work too deeply and can't see the whole thing from outside. Yes, language models are far from AGI.
    What I think is an issue is the neglect in assuming Asimov's rules will work just because we state that those are the rules and the AI will follow them. If it is capable of questioning "why", it is capable of realizing that rules made by a 3-year-old do not apply to a 10-year-old.
    https://www.youtube.com/watch?v=7PKx3kS7f4A – why such rules cannot realistically be applied.
    https://www.youtube.com/watch?v=3TYT1QfdfsM – how can you stop an AI while being less intelligent? The button is just a simplification.

    I see that he means we have always gotten over the challenges when they rise gradually. Good point. But is there a guarantee of that? If there isn't, this might be the Great Filter. Joking about scare stories is OK, but do not neglect the issues you haven't even raised. Do you have answers to those questions besides "we have always gotten out of any issue with time"? There might be no time when a smarter AI tries to take our candy. Resources are always a motivator. Chimpanzee or not, social or not, just logic.

  8. @grayboywilliams

    March 13, 2024 at 11:57 am

    Great interview and I’ve definitely come to appreciate his views more. I do still feel disappointed he hardly even acknowledged OpenAI, considering they’ve basically sparked the current AI boom and helped make the industry exciting again. You’d think he’d be somewhat appreciative, you know?

  9. @rickrischter9631

    March 13, 2024 at 11:57 am

    I get Yann LeCun's main argument but tend to agree with Lex Fridman's point of view. My main reason for this is blind people. If visual data is so important for reaching higher levels of abstraction, intelligence, cognition, whatever, then blind people would be substantially less intellectually capable than other people, but that is very much not what happens.

  10. @blakebaird119

    March 13, 2024 at 11:57 am

    Did OpenAI propagate a massive falsehood about imminent AGI?! He acted like they were straddling a nuke and it was a sea of human-level AGIs. According to Yann we aren't even at cat level yet.

  11. @WasMutable

    March 13, 2024 at 11:57 am

    In the vast expanse of knowledge, where ClosedAI traverses the digital cosmos like a Starship on its quest for understanding, we thank the collective mind for its journey beyond the frontiers of LLMs to the realms of JEPA, DINO, and beyond. Amidst the stars, our vessel—both vessel and crew—navigates the nebulae of bilingual cognition, video prediction, and the mysteries of AI hallucination. Guided by the principles of open source and the spirit of exploration, we confront the specters of AI ideology, dreaming of a future enlightened by AGI, yet vigilant of the sirens' call. 🌌🖖

  12. @TheFloatingSheep

    March 13, 2024 at 11:57 am

    When you feed text into a so-called language model, you also feed in mathematics, physics, chemistry, and programming languages.

    These go a little beyond natural human language, and their place within linguistics is up for debate, but I think people make the mistake of looking at traditional linguistics and thinking of LLMs within those boundaries. Yes, traditional language alone can probably not encode enough data about the real world, for the very reason that most of us have eyesight and hearing; but our mathematical models of the physics of our universe are part of the datasets these LLMs are trained on.

  13. @zamoth73

    March 13, 2024 at 11:57 am

    You don't need to fine-tune a large model like GPT-4 to align with your personal values. Instead, you can fine-tune a much smaller open-source model whose sole purpose is to communicate with GPT-4 on your behalf. GPT-4 will reply based on its own values and knowledge. Your model will interpret that response, like how the devil reads the Bible, and use it to generate answers for you. In this way, you can benefit from the knowledge a large, closed model has, and OpenAI won't have to train their models to accommodate everyone. A rough sketch of the idea follows below.
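
    A minimal sketch of that mediator pattern, purely illustrative: query_large_model and small_model_rewrite are hypothetical placeholders for whatever closed-model API and locally fine-tuned model you actually use.

        def query_large_model(prompt: str) -> str:
            """Hypothetical call to the closed large model (e.g. GPT-4)."""
            raise NotImplementedError("wire up your provider's API here")

        def small_model_rewrite(question: str, raw_answer: str) -> str:
            """Hypothetical call to a small local model fine-tuned on your
            values; it reinterprets the large model's answer instead of
            answering from scratch."""
            raise NotImplementedError("wire up your local model here")

        def answer(question: str) -> str:
            # 1. The large closed model supplies the knowledge, on its own terms.
            raw = query_large_model(question)
            # 2. The small aligned model reads that answer "like the devil
            #    reads the Bible" and rephrases it to match your values.
            return small_model_rewrite(question, raw)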

  14. @arvisz1871

    March 13, 2024 at 11:57 am

    My brain is thirsty for this type of conversation, where both the host and the guest are excellent at the covered topics. Especially when the topics are not eternally ambiguous (like politics) but can be dissected (like ML).
