Yann LeCun’s Post

Yann LeCun

I think that the magnitude of the AI alignment problem has been ridiculously overblown & our ability to solve it widely underestimated. I've been publicly called stupid before, but never as often as by the "AI is a significant existential risk" crowd. That's OK, I'm used to it.

Jakub Mizera

Data Scientist @ SoftServe

1y

And the best part is that AI alignment alarmists still can't provide a convincing proof of the orthogonality thesis. Even if we assume that human-level AGI will emerge soon, our current training methods imply that an AGI's "moral code" will be derived directly from its training data, which implies that it should be very closely aligned with our values. Also, some guys out there seem to underestimate our capability to hard-code constants into an AGI's objective function, suggesting that instrumental convergence will find a way to override them anyway. Again, I'd love to see a proof. For readers unfamiliar with the mechanism being argued over, the sketch below illustrates the idea.
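A minimal, purely illustrative sketch of what "hard-coding constants into the objective function" could mean: a fixed penalty term baked into the quantity the optimizer maximizes. Every name and number here (task_reward, harm_estimate, LAMBDA) is invented for illustration, not any real system's design.

# Toy sketch of a hard-coded constraint baked into an objective function.
# All names and numbers below are illustrative placeholders.

LAMBDA = 1e6  # hard-coded penalty weight; the optimizer cannot change it

def task_reward(action: float) -> float:
    # Stand-in for whatever the agent is trained to maximize;
    # the unconstrained optimum is at action = 10.0.
    return -(action - 10.0) ** 2

def harm_estimate(action: float) -> float:
    # Stand-in constraint: actions above 3.0 are considered harmful.
    return max(0.0, action - 3.0)

def objective(action: float) -> float:
    # The penalty is part of the objective itself, so anything
    # maximizing `objective` is also optimizing against the constraint.
    return task_reward(action) - LAMBDA * harm_estimate(action)

# Crude grid search standing in for "the optimizer":
best = max((a / 100.0 for a in range(0, 2000)), key=objective)
print(best)  # settles at 3.0, the constraint boundary, not the unconstrained 10.0

The alarmist counterargument would be that a capable enough optimizer finds actions the harm_estimate proxy fails to cover; the sketch only shows the mechanism under debate, not which side is right.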

Eduardo Florentino

SAP ABAP Consultant at Bit Consulting

1y

Can I call you Yann? I think it's because of history: whenever an invention was created, it was usually put to use, even when its scale and effects were known to be hazardous overall and impossible to truly mitigate, such as the atomic bomb. In this case it's an intelligence bomb with no known launch site, one that could be reproduced by anyone with sufficient knowledge and computational power. I'm not saying it's impossible to handle; I'm saying that depending on people to do it would not be feasible. Do we have AIs to combat AIs already on the watch? Will they be able to keep up with each other? Not in any way calling you stupid. But I'm just skeptical we'll be able to handle that much power without using it. Not a tech problem, a human problem, as usual...

Patrick Hammer

Postdoctoral Researcher at Division of Robotics, Perception and Learning (KTH)

1y

I fully agree. Some human minds have gone far off-track into fantasy land. I wish less brainpower were wasted on wild speculation and more spent getting the research and engineering work done to build systems that can actually function in the real world and benefit humanity. The language models couldn't even control a Roomba robot reliably. If they are misused, it's because of bad human actors, not because of misaligned AI.

Shubham Samant

Machine Learning Engineer @ Sarvam.ai

1y

It seems the lower-income sections of our society are the most vulnerable to the consequences of algorithmic bias. I feel that from my experience in the Indian context. E.g., when availing loans, people from the lower-income class, or coming from "backward areas" or "below 3rd grade institutions," can be treated as outliers: as per the algorithms, they have a higher probability of defaulting. And while this may be true in 70% of cases, what was the fault of the remaining 30% who could actually pay back but were denied in the first place? Not an acceptable collateral damage. Advanced automation, made possible by the so-called "emergent properties" of LLMs like GPT-4, will take this a notch higher in the context of the above example (see the sketch below). Different sections of society will feel the positive/negative impact accordingly.
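A minimal sketch of the 70/30 point above, with all numbers, group labels, and function names invented for illustration: even if a group's average default rate really is higher, a policy that rejects the group wholesale wrongly denies every member who would have repaid.

# Toy sketch of the 70/30 point. All numbers are invented.
# For each group, count applicants who would have repaid but are
# denied by a policy that rejects the whole group as "high risk".

applicants = (
    [("low_income", True)] * 30    # 30% of this group would repay
    + [("low_income", False)] * 70  # 70% would default
    + [("high_income", True)] * 90
    + [("high_income", False)] * 10
)

def blanket_policy(group: str) -> bool:
    # Approve high-income applicants, deny low-income ones outright.
    return group == "high_income"

for g in ("low_income", "high_income"):
    members = [(grp, repay) for grp, repay in applicants if grp == g]
    wrongly_denied = sum(
        1 for grp, repay in members if repay and not blanket_policy(grp)
    )
    print(g, "wrongly denied:", wrongly_denied, "of", len(members))
# low_income wrongly denied: 30 of 100  <- the 30% the comment asks about
# high_income wrongly denied: 0 of 100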

Leon Lahoud

Cloud Architect - MBA - Deep Learner and aspiring writer

1y

The worry today is not about AI ending humanity but about humans+AI displacing the jobs of humans without AI. More competition is needed in the AI space. When will Meta release a commercial AI product to compete with ChatGPT and whatever Google is cooking?

Jackie Jin

Engineer, Musician

1y

As much as I think a legend like you shouldn't be called stupid or suffer any other personal attack, people have every right to worry about their livelihood whenever disruptive technology challenges the status quo. You can certainly educate people to help them adapt as much as possible, or maybe exert control and put an ethics framework in place, but nothing stops companies from adopting whatever is more efficient and requires less manpower. People who can't catch up are usually left in ditches and dismissed, for example, as the "AI is a significant existential risk" crowd, which further widens all kinds of gaps, making it impossible for both sides to empathize with each other. I can't think of a good solution to this, but it's important to remind ourselves how naturally biased we can be if we have never lived in other people's shoes.

J.Francisco Muñoz-Elguezabal

Lecturer/Researcher in Machine Learning @ITESO | AI/ML Engineer @IteraLabs.ai

1y

Since we currently live in the era of the "my ignorance is equal to your expertise" effect of freedom of speech, and since you are indeed an expert while many are ignorant, and since the ignorant sometimes stumble onto true statements by chance while experts sometimes arrive at false ones through science, let's focus on the case I believe will have the worst long-term impact, the false positive: the ignorant end up saying something right and you end up being wrong. Let's acknowledge all known and plausible states.

Ask them about their Harry Potter fanfic. I bet some days you wish you had continued working on image compression.

Petros Bezirganyan

Software Development Engineer L5 at Amazon

1y

Yann, AI has the potential to cause more damage than nuclear technology at the mere click of a button. While a technology itself is almost always neutral as to its good or bad use, the "reach radius" of the latter is of the essence (e.g. a grenade vs. a rock); AI has the furthest impact reach so far, and in the wrong hands it can wreak unimaginable havoc... The problem, as usual, is controlling it...

Kenneth Francis Cavanagh

analytics @ spacex | data science & industrial psychology

1y

How would you address the problem that there are currently no known methods for stopping a superintelligent optimizer from forming instrumental goals that would be harmful to humanity?
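For readers unfamiliar with the jargon in this question, a toy, purely illustrative sketch of what "instrumental goals" means mechanically. The planning graph and state names are all invented; the point is only that when every terminal goal's cheapest plan passes through the same intermediate step, that step emerges as a de facto goal regardless of what the optimizer ultimately wants.

# Toy sketch of instrumental convergence: in this invented planning graph,
# every terminal goal's shortest plan passes through "acquire_resources",
# so the subgoal emerges no matter which final goal the optimizer has.

from collections import deque

# state -> reachable next states (all invented for illustration)
GRAPH = {
    "start": ["acquire_resources", "idle"],
    "idle": ["idle"],
    "acquire_resources": ["goal_paperclips", "goal_science", "goal_art"],
}

def plan(goal: str) -> list[str]:
    # Breadth-first search from "start" to `goal`.
    queue = deque([["start"]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH.get(path[-1], []):
            if nxt not in path:  # avoid cycles
                queue.append(path + [nxt])
    return []

for goal in ("goal_paperclips", "goal_science", "goal_art"):
    print(goal, "->", plan(goal))
# Every plan is ['start', 'acquire_resources', <goal>]:
# the shared intermediate step is the "instrumental" goal.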
