• Rentlar@lemmy.ca · 8 months ago

    “Replacing Talent” is not what AI is meant for, yet it seems to be every penny-pinching, bean-counting studio’s long-term goal for it.

    • gravitas_deficiency@sh.itjust.works · 8 months ago
      sed "s/studio’s/tech industry c-suite’s/"
      

      As an engineer, the number of non-engineering idiots in tech corporate leadership trying to apply an inappropriate technical solution to a problem just because it became a buzzword is absurdly high.

      • 9488fcea02a9@sh.itjust.works · 8 months ago

        I’m not a developer, but I use AI tools at work (mostly LLMs).

        You need to treat AI like a junior intern… You give it a task, but you still need to check the output and use critical thinking. You can’t just take some work from an intern, blindly incorporate it into your presentation, and then blame the intern if the work is shoddy…

        AI should be a time saver for certain tasks. It cannot (currently) replace a good worker.
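        As a rough sketch of that “check the intern’s work” step (hypothetical example: it assumes the OpenAI Python client, and the task, JSON schema, and model name are all made up):

        ```python
        # Hypothetical sketch: ask the model for structured output, then review it
        # before using it, the same way you'd review an intern's work.
        import json
        from openai import OpenAI  # assumes the official OpenAI Python client

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def summarize_meeting(notes: str) -> dict:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # model name is an assumption
                messages=[
                    {"role": "system",
                     "content": 'Reply only with JSON of the form {"decisions": [...], "action_items": [...]}'},
                    {"role": "user", "content": notes},
                ],
            )
            draft = json.loads(response.choices[0].message.content)

            # The critical-thinking step: never incorporate the output blindly.
            for key in ("decisions", "action_items"):
                if key not in draft or not isinstance(draft[key], list):
                    raise ValueError(f"Model output failed review: missing or malformed {key!r}")
            return draft
        ```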

        • Rickety Thudds@lemmy.ca · 8 months ago

          It’s clutch for boring emails and tedious document summaries. Sometimes I get a day’s work done in 4 hours.

          Automation can be great when it comes from the bottom up.
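          A rough sketch of that kind of bottom-up time-saver, e.g. turning a folder of reports into one email-ready digest (hypothetical: the folder layout, model name, and use of the OpenAI Python client are all assumptions):

          ```python
          # Hypothetical sketch: summarize every .txt report in a folder into one
          # digest that can be pasted into an email after a human read-through.
          from pathlib import Path
          from openai import OpenAI  # assumes the official OpenAI Python client

          client = OpenAI()

          def digest(folder: str) -> str:
              parts = []
              for doc in sorted(Path(folder).glob("*.txt")):
                  response = client.chat.completions.create(
                      model="gpt-4o-mini",  # model name is an assumption
                      messages=[{
                          "role": "user",
                          "content": f"Summarize in 3 bullet points:\n\n{doc.read_text()}",
                      }],
                  )
                  parts.append(f"{doc.name}:\n{response.choices[0].message.content}")
              return "\n\n".join(parts)

          print(digest("weekly_reports"))  # hypothetical folder name
          ```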

          • isles@lemmy.world · 8 months ago

            Honestly, that’s been my favorite - bringing in automation tech to help me in low-tech industries (almost all corporate-type office jobs). When I started my current role, I was working consistently 50 hours a week. I slowly automated almost all the processes and now usually work about 2-3 hours a day with the same outputs. The trick is to not increase outputs or that becomes the new baseline expectation.

        • Lmaydev@programming.dev · 8 months ago

          As a developer I use it mainly for learning.

          What used to be a Google search followed by skimming a few articles or docs pages is now a single question.

          It pulls out the specific info I need, cites its sources, and allows follow-up questions.

          I’ve noticed the new juniors can get up to speed on new tech very quickly nowadays.

          As for code, I don’t trust it beyond snippets I can use as a base.
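          The follow-up questions are really just a growing message list that gets re-sent each turn (minimal hypothetical sketch, assuming the OpenAI Python client; the model name and questions are made up):

          ```python
          # Hypothetical sketch of the ask-then-follow-up loop: the whole
          # conversation is re-sent each turn so the model keeps the context.
          from openai import OpenAI  # assumes the official OpenAI Python client

          client = OpenAI()
          history = [{"role": "system", "content": "Answer concisely and cite your sources."}]

          def ask(question: str) -> str:
              history.append({"role": "user", "content": question})
              response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
              answer = response.choices[0].message.content
              history.append({"role": "assistant", "content": answer})
              return answer

          print(ask("What does Rust's ? operator do?"))
          print(ask("How does that interact with Option?"))  # follow-up keeps the earlier context
          ```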

      • Thorny_Insight@lemm.ee · 8 months ago

        Current AI*

        I don’t see any reason to expect this to be the case indefinitely. It has been getting better all the time, and lately it has been doing so at quite a rapid pace. In my view it’s just a matter of time until it surpasses human capabilities; it can already do so in specific narrow fields. Once we reach AGI, all bets are off.

        • thundermoose@lemmy.world · 8 months ago

          Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead-end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there’s a ton of hype around them. They’re also easy to build tools around, so everyone in tech is trying to get their piece of AI now.

          However, LLMs are chat interfaces to searching a large dataset, and that’s about it. Even the image generators are doing this, the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes it seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it’s pattern-matching.

          A lot of people will say that’s intelligence, but it’s different; the LLM isn’t capable of understanding anything new, it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results, they don’t make the LLM smarter.
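          As a toy illustration of that “best fit from the training set” idea (a crude bigram counter, nowhere near a real transformer, but it shows prediction-by-pattern without any understanding):

          ```python
          # Toy "language model": picks the most likely next word purely from
          # counts over its training text. Pattern matching, no understanding.
          from collections import Counter, defaultdict

          training_text = "the cat sat on the mat the dog sat on the rug"
          words = training_text.split()

          following = defaultdict(Counter)
          for current, nxt in zip(words, words[1:]):
              following[current][nxt] += 1

          def generate(start: str, length: int = 6) -> str:
              out = [start]
              for _ in range(length):
                  options = following.get(out[-1])
                  if not options:
                      break
                  out.append(options.most_common(1)[0][0])  # best-fit continuation
              return " ".join(out)

          print(generate("the"))  # "the cat sat on the cat sat": fluent-looking, understands nothing
          ```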

          AGI needs something new; we aren’t going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.

          • KeenFlame@feddit.nu · 8 months ago

            How does this amazing prediction-engine discovery, which basically works the way our brain does, not fit into a larger solution?

            The way emergent world simulation can be found in the larger models definitely points to this being a cornerstone, as it provides functional value in both image and text recall.

            Never mind that tools like memgpt don’t properly satisfy long-term memory and that context windows don’t properly satisfy attention; I need a much harder sell on LLM technology not proving to be an important piece of AGI.

            • thundermoose@lemmy.world · 8 months ago

              I didn’t say it wasn’t amazing, nor that it couldn’t be a component in a larger solution, but I don’t think LLMs work like our brains, and I think the current trend of adding more tokens/parameters/training to LLMs is a dead end. They’re simulating the language area of human brains, sure, but there’s no reasoning or understanding in an LLM.

              In most cases, the responses from well-trained models are great, but you can pretty easily see the cracks when you spend extended time with them on a topic. You’ll start to get oddly inconsistent answers the longer the conversation goes and the more branches you take. The best-fit line (it’s a crude metaphor, but I don’t think it’s wrong) starts fitting less and less well until the conversation completely falls apart. That’s generally called “hallucination”, but I’m not a fan of that term because it implies a lot about the model that isn’t really true.

              You may have already read this, but if you haven’t: Stephen Wolfram wrote a great overview of how GPT works that isn’t too technical. There’s also a great sci-fi novel from 2006 called Blindsight that explores how facsimiles of intelligence can be had without consciousness or even understanding, and I’ve found it to be a really interesting way to think about LLMs.

              It’s possible to build a really good Chinese room that can pass the Turing test, and I think LLMs are exactly that. More tokens/parameters/training aren’t going to change that, they’ll just make them better Chinese rooms.

              • KeenFlame@feddit.nu · 8 months ago

                Thanks, I’ll check those out.

                The entire point of your comment was that LLMs are a dead end. The branching, as you call it, is just more parameters, which in lower-token models approaches a collapse; that’s why more tokens and a larger context do improve accuracy, and why it does make sense to increase them. LLMs have also, in some cases, demonstrated what you call reason (and what many call reason, though that isn’t a great word for it).

                Larger models give us a way to simulate the world, which in turn gives us access to the sensing mechanism of our brains: simulate, then attend to the disparities between the simulation and the actual. That in turn gives access to action, which unfortunately is not very well understood. Simulation, or prediction, is what our brains constantly do in order to react and adapt to the world without massive timing failures or massive energy costs. Consider driving: you focus on unusual sensations and let action be an extension of purpose, with constant prediction letting your muscles prepare even precise movements ahead of time, because you have enough practice with your “model” of how wheel and foot apply to the vehicle.
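                A bare-bones version of that simulate-then-compare loop (a generic predict/correct sketch, purely illustrative, not taken from any real system):

                ```python
                # Minimal predict -> compare -> adapt loop: the "simulation" is just a
                # running guess of the next reading, corrected by the observed surprise.
                def run(observations, learning_rate=0.5):
                    prediction = observations[0]
                    for actual in observations[1:]:
                        surprise = actual - prediction  # disparity between simulation and world
                        prediction += learning_rate * surprise  # attend to it and adapt
                        print(f"saw {actual:.2f}, surprise {surprise:+.2f}, next guess {prediction:.2f}")

                run([0.0, 1.0, 2.1, 2.9, 4.2])
                ```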

        • FiniteBanjo@lemmy.today · 8 months ago

          Not really, no. All of the current models built at their intended scale are being sold as a product, especially by OpenAI, Microsoft, and Google. It was built with a purpose, and that purpose was to potentially replace expensive human assets.

          • KeenFlame@feddit.nu · 8 months ago

            Yes, it was. As with all scientific discoveries, several corporations started building proprietary products around it. But you are wrong that it was built with that purpose.