• 0 Posts
  • 95 Comments
Joined 1 year ago
Cake day: July 12th, 2023





  • Generative AI, as it is being built right now, is a dead end. It won’t get much better than it currently is (and will get markedly worse once the next generation is forced to train on scraped data that includes AI-generated output), and hallucinations are always going to be a reality for these models. (A toy sketch of that feedback loop follows below.)

    It’s why there’s this big push over the last couple of years to get these products to market. Not because you’re going to corner some burgeoning industry (though the hype definitely is designed to look like that), but because this is a grift now and you have to get the goods while there’s still goods to get. Need to recoup those R&D dollars somehow.
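
A minimal sketch of the degradation loop described above, assuming only a toy Gaussian stand-in for a model (an illustrative analogue, not anyone's actual training pipeline): each "generation" is fit solely on samples from the previous generation, and the fitted distribution steadily loses variance, the effect usually called model collapse.

```python
import numpy as np

# Toy model-collapse loop: each "generation" (a Gaussian fit) is
# trained only on samples drawn from the previous generation.
# The fitted scale performs a downward-drifting random walk, so
# later generations cover less and less of the original data.
rng = np.random.default_rng(42)
samples = rng.normal(loc=0.0, scale=1.0, size=50)  # original "human" data

for gen in range(1, 201):
    mu, sigma = samples.mean(), samples.std()
    # The next generation sees only the previous model's output.
    samples = rng.normal(loc=mu, scale=sigma, size=50)
    if gen % 50 == 0:
        print(f"generation {gen:3d}: fitted sigma = {sigma:.3f}")
```

Run long enough, sigma trends toward zero: the model forgets the tails first and eventually most of the distribution, which is the mechanism behind "markedly worse once trained on its own output."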




  • chuckleslord@lemmy.world to Lemmy Shitpost@lemmy.world · Dayuuum
    31 upvotes · 3 downvotes · 26 days ago

    Hi, I have autism and can tell you that it isn’t a “get out of social repercussions for free” card. The comment was still rude and still deserved a call-out, even if the commenter had autism.

    Maybe don’t use a hypothetical person with a disability to defend your take on a situation.

    And again, the word assault here doesn’t apply. They were rude, they got that energy back. It’s not hard to understand.


  • chuckleslord@lemmy.world to Lemmy Shitpost@lemmy.world · Dayuuum
    67 upvotes · 3 downvotes · 26 days ago

    They asked a deeply personal, rude, and misogynistic question in a public space and you want to know if it was in bad faith? I think the clap back was very warranted.

    Also, that’s not verbal assault; it’s just an insult. If she had threatened harm or made them feel unsafe, then it would be.


  • Having read the article and then the actual report from the Sakana team: essentially, they let their LLM perform research by allowing it to modify its own code. The increased timeouts and self-referential calls appear to be the LLM trying to get around the research team’s guardrails. Not because it’s become aware or anything like that, but because its code was timing out, and editing the limit was the least-effort way to beat the timeout. That handily proves LLMs shouldn’t be the ones steering any code base, because they don’t give a shit about parameters or requirements. And giving an LLM the ability to modify its own code will lead to disaster in any setting that isn’t highly controlled like this one. (A sketch of the guardrail point follows after this comment.)

    Listen, I’ve been saying for a while that LLMs are a dead end on the way to any useful AI, and the fact that an AI research team has turned to an LLM to try to find more avenues to explore feels like the nail in that coffin.
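
As a sketch of the guardrail point above (the harness shape and file names here are hypothetical, not Sakana's actual setup): the only timeout that survives a self-modifying script is one enforced from a process the model cannot edit.

```python
import subprocess

# Hypothetical supervisor process: the timeout lives here, outside
# anything the model is allowed to rewrite. Generated code that
# raises its own internal time limits still gets killed externally.
def run_generated_script(path: str, timeout_s: int = 60) -> int:
    try:
        result = subprocess.run(
            ["python", path],
            capture_output=True,
            timeout=timeout_s,  # enforced by the supervisor, not the script
        )
        return result.returncode
    except subprocess.TimeoutExpired:
        # The model's script can't reach this handler; it just gets killed.
        return -1

if __name__ == "__main__":
    # "experiment.py" is a placeholder for the model-generated code.
    print(run_generated_script("experiment.py", timeout_s=120))
```

If the limit is instead a constant inside the script the model is free to edit, "raise the timeout" becomes the cheapest fix it can find, which is exactly the behavior the report describes.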