incognito124 a day ago

Watched it a while ago. Made me seriously think about AI and what we should use it for. I feel like all the entertainment use cases (image and video gen) are a complete waste.

  • mattlondon a day ago

    The chatbots and image editors are just a side-show. The real value is coming in e.g. chemistry (AlphaFold et al.), fusion research, weather prediction, etc.

    • poszlem a day ago

      The real value is coming in warfare.

      • awaythrow999 a day ago

        Right. More accurate predictions for metadata-based killings, as championed by the US in its war on terror.

        • walletdrainer a day ago

          Metadata-based killings are most likely a huge improvement over the prior state of affairs

          • modeless a day ago

            Yeah. Let the leaders assassinate each other with drone strikes instead of indiscriminately bombing whole cities as they used to.

            • dylan604 20 hours ago

              What modern-day government would fall because its leader was assassinated? The next in line would just step up, and now they'd have a pissed-off population in favor of ratcheting up beyond assassinations.

              • mattlondon 11 hours ago

                Any autocratic state would have quite a high likelihood of that, I would expect.

                I am sure you can think of a few prominent examples.

    • echelon a day ago

      None of that has reached the market yet. If it were up to the sciences alone, AI couldn't bear the weight of its own costs.

      It also needs to be vertically integrated to make money; otherwise it's a handout to the materials science company. I can't see any of the AI companies stretching themselves that thin. So they give it away for goodwill or good PR.

      • tim333 7 hours ago

        Science in general tends to be subsidised and given away, because basic understanding of the world is hard to monopolise. I'm not sure how Einstein would have done a general relativity startup.

        That said, DeepMind are doing a spin-off making drugs: https://www.isomorphiclabs.com/

      • bayindirh 7 hours ago

        > None of that has reached the market yet.

        AI for science is not "marketed". It silently evolves under wraps and changes our lives step by step.

        There are many AI systems already monitoring our ecosystem and predicting things as you read this comment.

      • incognito124 a day ago

        That's not really true. Commercial weather prediction has reached the market, and a drug (sorry, can't find the news link) that was found by AI-accelerated drug discovery is now in clinical testing.

        • aoeusnth1 14 hours ago

          The reason vertical integration is important for AI investment is that if AI is commoditized, then that AI acceleration will cost pennies for drugs that are worth billions.

          I don't see how OpenAI or Google can profit from drug discovery. It's nearly pure consumer surplus (where the drug companies and patients are the consumers).

    • epolanski 19 hours ago

      ML has been used in weather prediction since the 80s and has been the backbone of it for almost a decade.

      Not sure what LLMs are supposed to do there.

      • danpalmer 18 hours ago

        No one is suggesting using LLMs for weather. DeepMind is making significant progress on weather prediction with new AI models.

        • neumann 14 hours ago

          Oh god - please tell the BoM in Australia. Either ML is not keeping up with climate change unpredictability, or the SOTA is worse than what we had 10 years ago.

      • Glemkloksdjf 6 hours ago

        LLMs in general are ML-based and need a lot of data and compute - the same infrastructure as any other ML-based system.

        The AI/AGI hype, in my opinion, would be better described as 'ML with data and compute' hype (I don't like the word hype, as it doesn't fit very well).

  • cultofmetatron 17 hours ago

    Unfortunately all this work on Sora has a very real military use case. I personally think all this investment in Sora by OpenAI is largely to create a digital fog of war. Now when a rocket splatters a 6-year-old Palestinian girl's head across the pavement like a Jackson Pollock painting, they will be able to claim it's AI generated by state-sponsored actors in order to prevent disruption to the manufactured-consent apparatus.

  • modeless a day ago

    You might have said the same thing about GPUs for 20 years when they were mostly for games, before they turned out to be essential for AI. All the entertainment use cases were directly funding development of the next generation of computing all along.

  • threethirtytwo a day ago

    Why are images and video a complete waste? This makes no sense to me.

    Right now the generators aren’t effective but they are definitely stepping stones to something better in the future.

    If that future thing produces video, movies and pictures better than anything humanity can produce at a rate faster than we can produce things… how is that a waste?

    It can arguably be bad for society but definitely not a waste.

    • lm28469 7 hours ago

      It might be shocking to you but some people believe there is more to life than producing and consuming "content" faster and faster.

      Most of it is used to fool people for engagement, scams, politics, or propaganda. It definitely is a huge waste of resources, time, brainpower, and compute. You have to be completely brainwashed by consumerism and tech-solutionism not to see it.

      • threethirtytwo 2 hours ago

        I see it. But you’re lacking imagination about what I’m referring to. It’s also fucking obvious. Like I’m obviously not referring to TikTok videos and ads and that kind of bullshit everyone on earth knows about and obviously hates. You’re going on as if it’s “shocking” to me when what you’re talking about is as obvious as night and day. What’s shocking to me is that you’re not getting my point when I’m obviously talking about something less well known.

        Take your favorite works of art, music and cinema. Imagine if content on that level can be generated by AI in seconds. I wouldn’t classify that as a “waste” at all. You’re obviously referring to bullshit content, I’m referring to content that is meaningful to you and most people. That is where the trendline is pointing. And my point, again is this:

        We don’t know the consequence of such a future. But I wouldn’t call such content created by AI a waste if it is objectively superior to content created by humans.

      • Glemkloksdjf 6 hours ago

        I actually had a counter thought a few years ago.

        We consume A LOT of entertainment every day. Our brains like that a lot.

        It doesn't have to be just video; even normal people who don't watch TV at all entertain themselves through books or events, etc.

        Life would otherwise be quite boring.

    • incognito124 a day ago

      Let me phrase it a bit differently, then: AI-generated cats in Ghibli style are a waste; we should definitely do less of that. I did not hold that opinion before the documentary.

      Education-style infographics and videos are OK.

      • danielbln a day ago

        I'm glad you're not the sole arbiter for what is wasteful and what isn't.

        • dylan604 20 hours ago

          Just because you disagree does not make them wrong though

      • threethirtytwo a day ago

        I’m not even talking about this. Those cat videos are just stepping stones toward Academy Award-winning masterpieces of cinema like Dune, all generated by AI with a click, in one second.

        • lm28469 7 hours ago

          Homoconsomator brain be like ^

    • QuantumGood a day ago

      Parent said "entertainment use cases" are a complete waste, not all uses of images and video. I don't agree, but do particularly find educational use cases of AI video are becoming compelling.

      I help people turn wire rolling shelf racks into the base of their home studio, and AI can now create a "how to attach something to a wire shelf rack" video without me having to do all the space, rack, equipment, lighting, and video setup; I can just use a prompt. It's not close to perfect yet, but it's becoming useful.

      • dylan604 20 hours ago

        > particularly find educational use cases of AI video are becoming compelling.

        Compelling graphics take a long time to create. For education content creators, this can be too expensive as well. My high school physics teacher would hand-draw figures on transparencies on an overhead projector. If he could have produced his drawings as animations cheaply and quickly using AI, it would have really brought his teaching style (he really tried to make it humorous) to another level. I think it would have been effective for his audience.

        Imagine the stylized animations for things like the rebooted Cosmos, NOVA, or even 3Blue1Brown on YT. There is potential for small teams to punch above their weight class with genAI graphics.

      • threethirtytwo a day ago

        If AI can produce movies, video and art better, aka “more entertaining”, than humans, then how is it a waste?

        • youngNed 19 hours ago

          Because vast amounts of people find Coldplay entertaining. That doesn't mean it's a good thing.

          • threethirtytwo 14 hours ago

            You lack imagination. When ChatGPT first came out, people were saying it could never code. Now, if you aren’t using AI in your coding, you’re biting the dust.

            Stop talking about the status quo… we are talking about the projected trendline. What will AI be when it matures?

            Second, you’re just another demographic: smaller than the fans of Coldplay but equally generic, and thus an equal target for generated art.

            Here’s a prompt that will one day target you: “ChatGPT, create musical art that will target counter-culture posers who think they’re better than everyone just because they like something that isn’t mainstream. Make it so different that they will worship that garbage like they worship Pearl Jam. Pretend that the art is by a human so that when they finally figure out they fell for it hook, line, and sinker, they’ll realize their counter-culture tendencies are just another form of generic trash fandom, no different than people who love Coldplay or, dare I say it, Taylor Swift.”

            What do you do then when this future comes to pass and all content even for posers is replicated in ways that are superior?

            • plastic3169 12 hours ago

              ”What a way to show them. You rock! Unfortunately I can’t create the musical art you requested as you reference multiple existing musical acts by name. How about rephrasing your request in a way that is truly original and unique to you”

              • threethirtytwo 2 hours ago

                Again I’m referring to the future. When ChatGPT came out nobody thought it was good enough to be an assistant coding agent. That future came to pass.

                Nobody gives a fuck about what ChatGPT can currently do. It’s not interesting to talk about because it’s obvious. I don’t even understand why you’re just rehashing the obvious response. I’m talking about the future. The progression of LLMs is leading to a future where my prompt leads to a response that is superior to the same prompt given to a human.

        • wasmainiac 21 hours ago

          But it’s not. I think most can agree that there really has not been any real entertainment from genAI beyond novelty crap like seeing Lincoln pulling a nice trick at a skate park. No one wants to watch genAI slop video, no one wants to listen to genAI video essays, and most people do not want to read genAI blog posts. Music is a maybe, based on leaderboards, but it is not like we ever had a lack of music to listen to.

          • threethirtytwo 14 hours ago

            Bro. You and your cohorts said the exact same thing about LLMs and coding when ChatGPT first came out. The status quo is obvious, so no one is talking about that.

            Draw the trendline into the future. What will happen when the content is indistinguishable and AI is so good it produces something that moves people to tears?

          • CamperBob2 16 hours ago

            Eventually it will be good enough that you won't know the difference.

            I have a feeling that's already happened to me.

  • tim333 a day ago

    Practical things are probably treating diseases and a greater abundance of physical goods. More speculative/sci-fi is merging in some form with AI, and maybe immortality, which I think is the more interesting bit.

  • jeffbee a day ago

    DeepMind's new [edit: apparently now old] weather forecast model is similar in architecture to the toys that generate videos of horses addressing Congress or cats wearing sombreros. The technology moves forward and while some of the new applications are not important, other applications of the same technology may be important.

    • incognito124 a day ago

      Is it really similar? I was under the impression it's a GNN over a (really dense) polyhedral mesh, not a diffusion model.
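
      For what it's worth, a GNN over a mesh and a diffusion model are different architectures, which is the distinction being drawn here. Below is a minimal, purely illustrative sketch of one round of message passing over a tiny mesh graph; it is a toy in plain numpy, not DeepMind's weather model, and every name in it is made up.

          import numpy as np

          # Toy mesh: 4 vertices (think of a very coarse polyhedron) joined by edges.
          edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
          h = np.random.randn(4, 8)             # per-vertex features (toy "weather state")
          W_msg = 0.1 * np.random.randn(8, 8)   # stand-in for a learned message function
          W_upd = 0.1 * np.random.randn(16, 8)  # stand-in for a learned update function

          def message_passing_step(h, edges):
              """One GNN round: each vertex aggregates its neighbours' messages, then updates."""
              agg = np.zeros_like(h)
              for i, j in edges:                # undirected edges: send messages both ways
                  agg[i] += np.tanh(h[j] @ W_msg)
                  agg[j] += np.tanh(h[i] @ W_msg)
              return np.tanh(np.concatenate([h, agg], axis=1) @ W_upd)

          for _ in range(3):                    # a few rounds spread information across the mesh
              h = message_passing_step(h, edges)
          print(h.shape)                        # (4, 8)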

someguy101010 a day ago

Reposting this from a YouTube comment:

From 1:14:55-1:15:20, within the span of 25 seconds, the way Demis spoke about releasing all known sequences without a shred of doubt was so amazing to see. There wasn't a single second where he worried about the business side of it (profits, earnings, shareholders, investors); he just knew it had to be open source for the betterment of the world. Gave me goosebumps. I watched that on repeat more than 10 times.

  • dekhn a day ago

    Another way to interpret this (and I don't mean it pejoratively at all): Demis has been optimizing his chances of winning a Nobel Prize for quite some time now. Releasing the data increased that chance. He also would have been fairly certain that the commercial value of the predictions was fairly low (simply predicting structures accurately was never the rate-limiting step for downstream things like drug discovery), and that he and his team would retain a commercial advantage by developing better proprietary models and using them to make discoveries.

    • tim333 a day ago

      Also, since selling DeepMind to Google, it's Google's shareholders' money really.

    • sgt101 20 hours ago

      I think that's a rather conspiratorial way of framing it.

      I think it's more about someone trying to do the most good that was possible at that time.

      I doubt he cares much about prizes or money at this point.

      • dekhn 20 hours ago

        It's hardly a conspiracy to use strategy and intelligence to maximize the probability of achieving the outcome you desire.

        He doesn't have to care much about prizes or money at this point: he won his prize and he gets all the hardware and talent he needs.

  • jpecar 10 hours ago

    A DB of known proteins is not where the money can be made; designing new proteins is. This is why AlphaFold 3 (which can aid in this) is now wrapped in layers of legalese preventing you from actually using it the way you want. At least that's what my life-science users tell me. Big Pharma is now paying Big Money to DeepMind to make use of AF3 ...

  • mNovak 21 hours ago

    My interpretation of that moment was that they had already decided to give away the protein sequences as charity; it was just a decision between releasing them all as a bundle vs. fielding individual requests (a 'service').

    Still great of them to do, and as can be seen it's worth it as a marketing move.

    • dekhn 20 hours ago

      (As an aside, this is a common thing that comes up when you have a good model: do you make a server that allows people to do one-off or small-scale predictions, or do you take a whole query set, run it in batch, and save the results in a database?)
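
      A minimal sketch of the two serving patterns being contrasted here (all names are hypothetical and the "model" is a stub): an on-demand endpoint that predicts per request, versus precomputing the whole query set once and serving lookups from the stored results.

          from typing import Callable, Dict, Iterable

          def serve_one(predict: Callable[[str], dict], query: str) -> dict:
              """On-demand pattern: run the model for each incoming request."""
              return predict(query)

          def precompute_all(predict: Callable[[str], dict], queries: Iterable[str]) -> Dict[str, dict]:
              """Batch pattern: run the whole query set once, store results for later lookup."""
              return {q: predict(q) for q in queries}  # in practice this would go to a database

          def fake_predict(seq: str) -> dict:
              """Stub standing in for an expensive structure-prediction model."""
              return {"sequence": seq, "score": len(seq) % 7}

          db = precompute_all(fake_predict, ["MKT", "GAVL", "PQRS"])
          print(serve_one(fake_predict, "MKT"), db["GAVL"])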

  • potsandpans 21 hours ago

    I noticed this as well. Actually went back and watched it several times. It's an incredible moment. I keep thinking, "if this moment is real, this is truly a special person."

ilaksh a day ago

Greg Kohs and his team are brilliant. For example, the way the film captured the emotional triumph of the AlphaFold achievement. And a lot of other things.

One of the smart choices was that it omitted a whole potential discussion about LLMs (VLMs) etc. and the fact that that part of the AI revolution was not invented in that group, and just showed them using/testing it.

One takeaway could be that you could be one of the world's most renowned AI geniuses and not invent the biggest breakthrough (like transformers). But also somewhat interesting is that even though he had been thinking about this for most of his life, the key technology (transformer-type architecture) was not invented until 2017. And they picked it up and adapted it within 3 years of it being invented.

Also, I am wondering if John Jumper and/or other members of the team should get a little bit more credit for adapting transformers into the Evoformer.

nightski a day ago

In my experience all DeepMind content ends up being a puff piece for Dennis Hassabis. It's like his personal marketing engine lol.

  • ainch 20 hours ago

    Perhaps they need more advertising around the correct spelling of his name.

    • nightski 10 hours ago

      Good catch but it was just an honest typo.

  • tim333 7 hours ago

    All content about organisations tends to go that way. I guess it's just easier to talk about the leader than the thousands of others involved.

  • stevenjgarner a day ago

    Is that a good thing or a bad thing? Demis is after all a co-founder and CEO.

    • Hacker_Yogi a day ago

      Makes it seem that AI is a one-man show while also feeding the hype cycle

  • Glemkloksdjf 6 hours ago

    Our society is leader-based. Otherwise the garbage Trump wouldn't matter, but he does. The same thing with garbage Musk: Musk gets what he wants from Tesla because the shareholders believe that Musk is critical to Tesla.

    Both are fundamental to their followers.

    So it's quite clear that you can't just say "it's DeepMind"; you have to have a figure in the middle of it, like Demis.

    They trust him to lead DeepMind.

  • ipnon 16 hours ago

    He's the leading AI researcher at the 3rd largest company in the world in the middle of an AI boom. He's naturally going to have quite the marketing budget behind him!

quirino a day ago

Watched it this week. Pretty good.

There are a couple parts at the start and the end where a lady points her phone camera at stuff and asks an AI about what it sees. Must have been mind-blowing stuff when this section was recorded (2023), but now it's just the bare minimum people expect of their phones.

Crazy times we're living in.

  • HarHarVeryFunny 19 hours ago

    I was ok with that as "fledgling AI" at the start of the movie/documentary, but thought that going back to it and having the chatbot suggest a chess book opening to Hassabis at the end was cheesy and misleading.

    They should have ended the movie on the success of AlphaFold.

dwroberts a day ago

I want to watch it, but at the same time, it’s basically going to be an advert for Google. I’m not sure if I can put up with the uncritical fluff.

I would love to see a real (i.e. outsider) filmmaker do this - e.g. an updated ‘Lo and Behold’ by Werner Herzog.

  • ilaksh a day ago

    It was directed by Greg Kohs, who is a real filmmaker and does not work for Google.

    • dwroberts a day ago

      Yeah, I don’t mean to say they’re not a real filmmaker or untalented etc., I mean more the context in which they’re doing it: that they’ve chosen to cover this topic themselves, and that they would show critical angles of it and not just promo + hagiography.

    • lysace a day ago

      Are you saying this movie production wasn't paid for by Google? If it was, surely he effectively did work for Google?

  • dist-epoch a day ago

    It's an advert for Demis Hassabis, not Google.

jnwatson a day ago

I caught it on the airplane a few days ago. I would have loved a little more technical depth, but I guess that's pretty much standard for a puff piece.

It is interesting that Hassabis has had the same goal for almost 20 years now. He has a decent chance of hitting it too.

redbell a day ago

Just watched it yesterday and enjoyed every second of it. The director put more focus on Demis Hassabis, who turns out to be a true superhero, and I have to confess that I probably admire him more than any other human in the tech industry.

dwarfpagent a day ago

I find it funny that the YouTube link takes you to the film, but like an hour into it.

  • vmilner a day ago

    Yes, it made me think I'd already watched it and had forgotten about it...

dwa3592 a day ago

Loved this documentary. People complaining - WTFV first.

mskogly 9 hours ago

Two thoughts: 1. The field of AI research moves so fast that any attempt to make a full documentary would be obsolete long before it was released. 2. All I want AI to do right now is remove generic «dramatic» music from YouTube clips.

beginnings a day ago

i tried to watch it but like AI in general, it was extraordinarily boring. neural nets are really cool technically, but the whole AI thing is just getting old and I couldnt care less where its going

we can guarantee that whether its the birth of superintelligence or just a very powerful but fundamentally limited algorithm, it will not be used for the betterment of mankind, it will be exploited by the few at the top at the expense of the masses

because thats apparently who we are as a species

  • hbarka a day ago

    Hi, I’m genuinely curious about your writing style. I’m seeing this trend of no proper casing and no punctuation becoming vogue-ish. Is there a particular reason you prefer to write this way or is this writing style typical for a generation? Sincere question, not snark, coming from an older generation guy.

    • mystifyingpoi a day ago

      This is the writing style of this generation. I've just scrolled through 6 months of my conversation with a friend in his twenties. Not a single comma or period to be seen. I mean on his side.

    • beginnings 11 hours ago

      it signals high status and nonconformity. the reader intuits that a sigma male is speaking and he doesnt play by the rules. hes not bound by the constraints and regulations of classical reality. hes dangerous

      but seriously, its just more comfortable to type. apostrophes and capitals are generally superfluous, we'll and well the only edge case, theyve, theyll, wont, dont etc its just not necessary. theres no ambiguity

      i only recently started using full stops for breaks. for years, I was only using commas, but full stops are trending among the right people. but only for breaks, not for closing

    • aswegs8 a day ago

      If you grew up in the internet of early 2000s, that's how we wrote online.

      • querez a day ago

        I grew up in the Internet at that time, and it's certainly not how I type. So you might want to be more specific about which sites or subcultures you think this style is representative of?

        • luma a day ago

          I’m certainly no authority but i tend to write the same way for casual communication, came from the 90s era BBS days. It was (and still is) common on irc nets too. Autocorrect fixes up some of it, but sometimes i just have ideas i’m trying to dump out of my head and the shift key isn’t helping that go faster. Emails at work get more attention, but bullshittin with friends on the PC? No need.

          I’ll code switch depending on the venue, on HN i mostly Serious Post so my post history might demonstrate more care for the language than somewhere i consider more casual.

  • tim333 a day ago

    If you watch on, there's a bit where they decide to give away all the protein folding results for free when they could have charged (https://youtu.be/d95J8yzvjbQ?t=4497). Not everything is exploitation rather than the betterment of mankind.

    • beginnings 10 hours ago

      that sort of mentality is typical in researchers, but the powers that be will be thinking about profit and control, mass layoffs and AI governance in conjunction with digital id, carbon credits etc

      every technological advancement that made people more productive and should have led to them having to do less work, only led to people needing to do more work to survive. i just dont see AI being any different

  • Glemkloksdjf 6 hours ago

    It's so disappointing to read this.

    Do you know how long it took us to get to this point? Massive compute, knowledge, algorithms, etc.

    Why are you even on HN if the most modern and most impactful technology leads you to say "I couldnt care less where its going"?

    Just a few years ago there was no way to solve image generation, music generation, or chatbots that could actually respond reasonably to you, and in different languages at that.

    AlphaFold already helps society today btw.

  • AndrewKemendo a day ago

    Correct! I’m glad people are finally starting to get it

    • verisimi a day ago

      weekends are always better on hn

circadian 18 hours ago

There are some funny comments going on in this thread. Understandably so. What could be more divisive an issue than AI on a Silicon Valley forum!?

As a Brit, I found it to be a really great documentary about the fact that you can be idealistic and still make it. There are, for sure, numerous reasons to give DeepMind shit: Alphabet, potential arms usage, "we're doing research, we're not responsible". The Oppenheimer aspect is not to be lost: we all have to take responsibility for wielding technology.

I was more anti-DeepMind than pro before this, but the truth is, as I get older, it's nicer to see someone embodying the aspiration of wanton benevolence (for whatever reason) based on scientific reasoning than not to. Keeping it away from the US and acknowledging the benefits of spreading the proverbial "love" to the benefit of all (US included) shows a level of consideration that should not be under-acknowledged.

I like this documentary. Does AGI and the search for it scare me? Hell yes. So do killer mutant spiders descending on Earth post nuclear holocaust. It's all about probabilities. To be honest, disease X freaks me out more than a superintelligence built by an organisation willing to donate the research to solve the problems of disease X. Google are assbiscuits, but DeepMind point in the right direction (I know more about their weather and climate forecasting efforts). This at least gave me reason to think some heart is involved...

ChrisArchitect a day ago

It's hard to discount the impact of AlphaFold on scientific work, but submitting this to a number of film festivals like Tribeca seems a bit like AI-washing.

  • llbbdd 21 hours ago

    What is AI-washing?

DrierCycle a day ago

AlphaFold is optimization, not thinking. Propaganda 'r us.

  • fredoliveira a day ago

    Did you watch the documentary? You'd probably fare better if you did, because it'd give you the context for the film title.

    • DrierCycle a day ago

      I'm an hour into it, unconvinced.

      The illusion that agency 'emerges' from rules, like games, is fundamentally absurd.

      This is the foundational illusion of mechanics. It's UFOlogy, not science.

      • fredoliveira a day ago

        Well, two things: it's the last sentence of the film, and being one hour into something you're calling propaganda is brave.

        Anyways. I thought the documentary was inspiring. Deepmind are the only lab that has historically prioritized science over consumer-facing product (that's changing now, however). I think their work with AlphaFold is commendable.

        • DrierCycle a day ago

          It's science under the creative boundary of binary/symbols. And as analog thinkers, we should be developing far greater tools than these glass ceilings. And yes, having finished the film, it's far more propagandic than it began as.

          Science is exceeding the envelope of paradox, and what I see here is obeying the envelope in order to justify the binary as a path to AGI. It's not a path. The symbol is a bottleneck.

          • Zigurd a day ago

            Everything between your ears is an electrochemical process. It's all math and there is no "creative boundary." There's plenty to criticize in the AI hype claiming that we're going to get to machine intelligence very soon. I suspect a lot of the hype is oriented towards getting favorable treatment from the government, if not outright subsidies. But claiming that there are fundamental barriers is a losing bet.

            • DrierCycle a day ago

              It doesn't happen "btwn ears" and math is an illusion of imprecision. The fundamental barrier is frameworks and computers will not be involved. There will be software obviously. But it will never be computed.

        • amitport a day ago

          Plenty of *commercial* labs have frequently prioritized pure science over *immediate* consumer products, but none have done so out of charity. DeepMind included.

      • Zigurd a day ago

        Your mind emerges from a network of neurons. Machine models are probably far from enabling that kind of emergence, but if what's going on between our ears isn't computation, it's magic.

        • DrierCycle a day ago

          It's not magic. It's neural syntax. And nothing trapped by computation is occurring. It's not a model, it is the world as actions.

          The computer is a hand-me-down tool under evolution's glass ceiling. This should be obvious: binary, symbols, metaphors. These are toys (ie they are models), and humans are in our adolescent stage using these toys.

          Only analog correlation gets us to agency and thought.

      • MattRix a day ago

        Is there a fundamental difference between it and true agency/thought? I’m not so sure.

        • DrierCycle a day ago

          Agency will emerge from exceeding the bottleneck of evolution's hand-me-down tools: binary, symbols, metaphors. As long as these unconscious sportscasters for thought "explain" to us what thought "is", we are trapped. DeepMind is simply another circular hamster wheel of evolution. Just look at the status-propaganda the film heightens in order to justify the magic.

      • dboreham a day ago

        Why is it absurd? Because believing that would break some deep delusion humans have about themselves?

        • youngNed 18 hours ago

          Quite honestly, it's about time the penny dropped.

          Look around you, look at the absolute shit people are believing, the hope that we have any more agency than machines... to use the language of the kids, is cope.

          I have never considered myself particularly intelligent, which, I feel, puts me at odds with much of the HN readership, but I do always try to surround myself with the smartest people I can.

          The number of them that have fallen down the stupidest rabbit holes I have ever seen really makes me think: as a species, we have no agency.

  • HarHarVeryFunny 20 hours ago

    Sure, but AlphaFold is still probably the most impactful and positive thing to have come out of "Deep Learning" so far.

    • theturtletalks 19 hours ago

      Didn’t the transformer model come from AlphaFold? I feel like we wouldn’t have had the LLMs we use today if it wasn’t for AlphaFold.

      • HarHarVeryFunny 17 hours ago

        The Transformer was invented at Google, but by a different team. AFAIK the original AlphaFold didn't use a transformer, but AlphaFold 2.0 and 3.0 do.

  • Rochus a day ago

    Not sure why this is downvoted. The comment cuts to the core of the "Intelligence vs. Curve-Fitting" debate. From my humble perspective as a PhD in the molecular biology/biophysics field, you are fundamentally correct: AlphaFold is optimization (curve-fitting), not thinking. But calling it "propaganda" might be a slight oversimplification of why that optimization is useful. If you ask AlphaFold to predict a protein that violates the laws of physics (e.g. a designed sequence with impossible steric clashes), it will sometimes still confidently predict a folded structure, because it is optimizing for "looking like a protein", not for "obeying physics". The "propaganda" label likely comes from DeepMind's marketing, which uses words like "solved"; in reality, DeepMind found a way to bypass the protein folding problem rather than solve it.
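
    To make the "obeying physics" point concrete, here is a toy sanity check of the kind one might run on a predicted structure: flag atom pairs that sit impossibly close together. It is only a sketch with made-up coordinates and a single crude distance cutoff; real clash detection excludes bonded neighbours and uses per-element radii.

        import numpy as np

        def clash_pairs(coords: np.ndarray, min_dist: float = 2.0):
            """Return index pairs of atoms closer than `min_dist` (toy steric-clash check)."""
            dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            i, j = np.triu_indices(len(coords), k=1)   # unique unordered pairs
            mask = dists[i, j] < min_dist
            return list(zip(i[mask].tolist(), j[mask].tolist()))

        # Made-up coordinates (in Angstroms): atoms 0 and 1 are impossibly close.
        coords = np.array([[0.0, 0.0, 0.0],
                           [0.5, 0.0, 0.0],
                           [5.0, 5.0, 5.0]])
        print(clash_pairs(coords))                     # [(0, 1)]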

    • dekhn a day ago

      If there's one thing I wish DeepMind did less of, it's conflating the protein folding problem with static structure prediction. The former is a grand challenge problem that remains 'unsolved', while the latter is an impressive achievement that really is optimization using a huge collection of prior knowledge. I've told John Moult, the organizer of CASP, this (I used to "compete" in these things), and I think most people know he's overstating the significance of static structure prediction.

      Also, solving the protein folding problem (or getting to 100% accuracy on structure prediction) would not really move the needle in terms of curing diseases. These sorts of simplifications are great if you're trying to inspire students into a field of science, but get in the way when you are actually trying to rationally allocate a research budget for drug discovery.

      • smj-edison 18 hours ago

        I'm really curious about this space: what types of simulation/prediction (if any) do you see as being the most useful?

        Edit to clarify my question: What useful techniques 1. Exist and are used now, and 2. Theoretically exist but have insurmountable engineering issues?

        • dekhn 18 hours ago

          Right now, the techniques that exist and are in use are mostly around target discovery (identifying proteins in humans that can be targeted by a drug), protein structure prediction, and function prediction. Identifying sites on the protein that can be bound by a drug is also pretty common. I worked on a project recently where our goal was to identify useful mutations to make to an engineered antibody so that it bound to a specific protein in the body that is linked to cancer.

          If your goal is to bring a drug to market, the most useful thing is predicting the outcome of the FDA drug approval process before you run all the clinical trials. Nobody has a foolproof method to do this, so failure rates at the clinical stage remain high (and it's unlikely you could create a useful predictive model for this).

          Getting even more out there, you could in principle imagine an extremely high fidelity simulation model of humans that gave you detailed explanations of why a drug works but has side effects, and which patients would respond positively to the drug due to their genome or other factors. In principle, if you had that technology, you could iterate over large drug-like molecule libraries and just pick successful drugs (effective, few side effects, works for a large portion of the population). I would describe this as an insurmountable engineering issue because the space and time complexity is very high and we don't really know what level of fidelity is required to make useful predictions.

          "Solving the protein folding problem" is really more of an academic exercise to answer a fundamental question; personally, I believe you could create successful drugs without knowing the structure of the target at all.

          • smj-edison 17 hours ago

            Thank you for the detailed answer! I'm just about to start college, and I've been wanting to research molecular dynamics, as well as building a quantitative pathway database. My hope is to speed up the research pipeline, so it's heartening to know that it's not a complete dead end!

    • HarHarVeryFunny 20 hours ago

      It seems that solving the protein folding problem in a fundamental way would require solving chemistry, yet the big lie (or false hope) of reductionism is that discovering the fundamental laws of the universe, such as quantum theory, gets you everything above them for free; in fact it doesn't help that much with figuring out the laws/dynamics at higher levels of abstraction, such as chemistry.

      So, in the meantime (or perhaps for ever), we look for patterns rather than laws, with neural nets being one of the best tools we have available to do this.

      Of course, ANNs need massive amounts of data to "generalize" well, while protein folding only had a small amount available, due to the months of effort needed to experimentally determine how any one protein is folded. So DeepMind threw the kitchen sink at the problem, apparently using a diffusion-like process in AlphaFold 3 to first determine large-scale structure and then refine it, and using co-evolution of proteins as another source of data to address the paucity.

      So, OK, they found a way around our lack of knowledge of chemistry and managed to get an extremely useful result all the same. The movie, propaganda or not, never suggested anything different, and "at least 90% correct" was always the level at which it was understood the result would be useful, even if 100% based on having solved chemistry / molecular geometry would be better.
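
      For readers unfamiliar with the "diffusion-like" idea mentioned above, here is a toy coarse-to-fine denoising loop over 3D coordinates. It is only a cartoon of the general technique under a made-up denoiser; it is not AlphaFold 3's actual architecture, which uses a learned network conditioned on the sequence and related inputs.

          import numpy as np

          rng = np.random.default_rng(0)
          target = rng.normal(size=(10, 3))            # stand-in "true" 3D coordinates

          def toy_denoiser(x, noise_level):
              """Cartoon denoiser: nudge coordinates toward the target.
              A real model would be a learned network, not an oracle that knows the answer."""
              return x + 0.5 * (target - x) / max(noise_level, 0.5)

          # Start from pure noise (no structure) and refine through a decreasing noise schedule.
          x = rng.normal(scale=5.0, size=(10, 3))
          for noise_level in np.linspace(3.0, 0.1, 20):
              x = toy_denoiser(x, noise_level)
              x += rng.normal(scale=0.05 * noise_level, size=x.shape)  # small stochastic kick

          print(float(np.abs(x - target).mean()))      # residual error is small after refinement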

      • dekhn 17 hours ago

        We have seen some suggestion that the classical molecular dynamics force fields are sufficient to predict protein folding (in the case of stable, soluble, globular proteins), in the sense that we don't need to solve chemistry but only need to know a coarse approximation of it.

    • DrierCycle a day ago

      I'm concerned that coders and the general public will confuse optimization with intelligence. That's the nature of propaganda, substituting sleight of hand to create a false narrative.

      btw an excellent explanation, thank you.

      • autonomousErwin 20 hours ago

        What's the difference between optimisation and intelligence?

        • HarHarVeryFunny 19 hours ago

          For a start optimization is a process, and intelligence is a capability.

    • tim333 a day ago

      I think if you watch the actual film you'd find they don't claim AlphaFold is thinking.

      • BanditDefender 19 hours ago

        There is quite a bit of bait-and-switch in AI, isn't there?

        "Oh, machine learning certainly is not real learning! It is a purely statistical process, but perhaps you need to take some linear algebra. Okay... Now watch this machine learn some theoretical physics!"

        "Of course chain-of-thought is not analogous to real thought. Goodness me, it was a metaphor! Okay... now let's see what ChatGPT is really thinking!"

        "Nobody is claiming that LLMs are provably intelligent. We are Serious Scientists. We have a responsibility. Okay... now let's prove this LLM is intelligent by having it take a Putnam exam!"

        One day AI researchers will be as honest as other researchers. Until then, Demis Hassabis will continue to tell people that MuZero improves via self-play. (MuZero is not capable of play and never will be)

        • tim333 7 hours ago

          Maybe but the film is about Hassabis thinking about thinking and working towards general intelligence that can think. It doesn't really make claims about their existing software regarding that.

  • dwa3592 a day ago

    what is thinking?

    • DrierCycle a day ago

      Sharp wave ripples, nested oscillations, cohering at action-syntax. The brain is "about actions" and lacks representations.

    • __patchbit__ a day ago

      Creatively peeling the hyperdimensional space in the scope of symplectic geometry, Markov blankets and Helmholtz invariance????