US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) to a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most strikingly, 76 percent of experts believe these technologies will benefit them personally, versus just 15 percent who expect to be harmed.

The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. In contrast to the expert majority, just 24 percent of the public thinks AI will benefit them personally, while nearly half anticipate it will harm them.

  • dylanmorgan@slrpnk.net

    It’s not really a matter of opinion at this point. What is available has little if any benefit to anyone who isn’t trying to justify rock bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.

    • doodledup@lemmy.world

      Everyone gains from progress. We’ve had the same discussion over and over again: when the first sewing machines came along, when the steam engine was invented, when the internet became a thing. Some people lose their jobs every time progress is made. But being against progress for that reason is just stupid.

  • TommySoda@lemmy.world

    If it were marketed and used for what it’s actually good at, this wouldn’t be an issue. We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people’s jobs easier and achieve better results. I understand its uses and that it’s not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.

    • faltryka@lemmy.world

      The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.

      • ferb@sh.itjust.works

        This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour-workweek utopia. These Silicon Valley AI fanatics are the same ones saying that basic social welfare programs are naive and unimplementable, so why would they suddenly change their entire perspective on life?

      • Pennomi@lemmy.world

        Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.

        For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.

        And if you run the AI locally, you can also be free of paying a subscription to a big AI company.
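To make the “run it locally” point concrete: here is a minimal sketch of querying a local model with nothing but the standard library. It assumes you have an Ollama server running at its default address (`localhost:11434`) with a model such as `llama3` already pulled; the model name and prompt are illustrative.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model; no subscription, no cloud."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (with the local server running):
#   print(ask_local_model("llama3", "Write a one-line tagline for a bakery."))
```

Everything stays on your own machine: the only cost is local compute, which is the commenter’s point about escaping subscription fees.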

        • einkorn@feddit.org

          Except no employer will allow you to use your own AI model. Just like you can’t bring your own work equipment (which in many regards is even a good thing), companies will force you to use their specific type of AI for your work.

          • Pennomi@lemmy.world

            Presumably “small business” means self-employed people or employee-owned companies, not the bureaucratic nightmare that most companies are.

    • count_dongulus@lemmy.world

      Maybe pedantic, but:

      Everyone seems to think CEOs are the problem. They are not. They report to and get broad instruction from the board, and the board can fire the CEO. If you got rid of a CEO, the board would just hire a replacement.

      • Zorque@lemmy.world

        And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.

        The problems are systemic, not individual.

    • WhatAmLemmy@lemmy.world

      More like asking the slaves about productivity advances in slavery: “Nothing good will come of this.”

  • moonlight@fedia.io

    Depends on what we mean by “AI”.

    Machine learning? It’s already had a huge effect, drug discovery alone is transformative.

    LLMs and the like? Yeah I’m not sure how positive these are. I don’t think they’ve actually been all that impactful so far.

    Once we have true machine intelligence, then we have the potential for great improvements in daily life and society, but that entirely depends on how it will be used.

    It could be a bridge to post-scarcity, but under capitalism it’s much more likely it will erode the working class further and exacerbate inequality.

    • Pennomi@lemmy.world

      As long as open source AI keeps up (it has so far), it’ll enable technocommunism as much as it enables rampant capitalism.

      • moonlight@fedia.io

        I considered this, and I think it depends mostly on ownership and means of production.

        Even in the scenario where everyone has access to superhuman models, labor would still be devalued. Combined with robotics and other forms of automation, the capitalist class would no longer need workers, and large parts of the economy would disappear. That would create a two-tiered society, where those with resources become incredibly wealthy and powerful, while those without have no ability to do much of anything and would likely revert to an agricultural society (assuming access to land) or just be propped up with something like UBI.

        Basically, I don’t see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don’t think it will do much to get us there.

  • Sibshops@lemm.ee

    No surprise there. We just went through the same cycle with blockchain, which was going to drastically improve our lives at some unspecified point in the future.

  • snooggums@lemmy.world

    Experts are working from their own perspective, which involves being employed to know the details of how AI works and its potential benefits. They are also invested in its success, since they spent the time gaining that expertise. I would guess a number of them work in fields that are not easily visible to the public, and use AI systems in ways the public never will because they are focused on things like pattern recognition on viruses or identifying locations to excavate for archaeology, workflows that always end with a human verifying the results. They use AI as a tool and see the indirect benefits.

    The general public’s experience is being told AI is a magic box that will be smarter than the average person, has made some flashy images, and sounds more like a person than previous automated voice systems. They see it spit out a bunch of incorrect or incoherent answers, because they are using it the way it was promoted: as actually intelligent. They also see this unreliable tech being jammed into things that worked fine previously, and the negative outcome of the hype not meeting its promises. They reject it because the way it is being pushed onto the public does not meet the expectations set by the advertising.

    That is before considering that the public is being told AI will drive people out of their jobs, which is doubly insulting when it does a shitty job of replacing them. It is a tool, not a replacement.

  • carrion0409@lemm.ee

    Because it won’t. So far it’s only been used to replace people and cut costs. If it were used for what it was actually intended for, it’d be a different story.

    • doodledup@lemmy.world

      Replacing people is a good thing. It means fewer people do more work. It means progress. It means products and services will get cheaper and more available. The fact that people are being replaced means that AI actually has tremendous value for our society.

  • artificialfish@programming.dev

    Lol, they get a capable chatbot that blows everything out of the water and suddenly they’re like “yeah, this will be the last big thing.”

  • CosmoNova@lemmy.world

    AI is mainly a tool for the powerful to oppress the less blessed. I mean, cutting actual professionals out of the process to let CEOs’ wildest dreams go unchecked already has devastating consequences, if rumors are to be believed that some kids using ChatGPT cooked up those massive tariffs that have already erased trillions.

    • applemao@lemmy.world

      Yet my libertarian centrist friend INSISTS that AI is great for humanity. I keep telling him the billionaires don’t give a fuck about you and he keeps licking boots. How many others are like this??

    • Pennomi@lemmy.world

      I would agree with that if the cost of the tool was prohibitively expensive for the average person, but it’s really not.

    • pinball_wizard@lemmy.zip

      Every technology shift creates winners and losers.

      There’s already documented harm from algorithms making callous biased decisions that ruin people’s lives - an example is automated insurance claim rejections.

      We know that AI is going to bring algorithmic decisions into many new places where it can do harm. AI adoption is currently on track to get to those places well before the most important harm reduction solutions are mature.

      We should take care that we do not gaslight people who will be harmed by this trend, by telling them they are better off.

      • Womble@lemmy.world

        Translation apps would be the main one for LLM tech; LLMs largely came out of Google’s research into machine translation.

  • PunkRockSportsFan@fanaticus.social

    The number of failed efforts the ruling class has made to corner AI shows me that it is a democratizing force.

    I reap benefits from it already.

    I can create local models with zero involvement from billionaires.

    It scares them more than us.

    And it should. It shows how evil they are. It’s objectively true. AI knows it.

    • nadram@lemmy.world

      But you’re using these billionaires’ AI models, are you not? Even if you use the free models, they still benefit from your profile and query data.

        • mesa@lemmy.world

          Yep, you can run models without giving $$ to tech billionaires!

          Now we’re giving it to the power billionaires instead! Unless you own your own power sources.

            • mesa@lemmy.world

              Meh, I like some of the others on Hugging Face a bit more for coding and such. But it’s all the same at the end of the day. I do like what you are saying, though!

              Models + moderate power should be what we strive for. I’m hoping for a Star Trek ending where we live in a post-scarcity world. I’m planning on a post-apocalypse, haha.

              Once ASIC chips come out (essentially a specific model baked into a chip), the amount of power we use will be dramatically lower.

                • mesa@lemmy.world

                  It’s an interesting field! I think the reason we haven’t gone there yet is that today’s LLMs all have very different architectures, tokenizers, etc., so the algorithms that create and run them need flexibility, and GPUs are very flexible in what they can do with multiprocessing.

                  But in 5 years (or less), I can see a black-box kind of system that runs 1000x+ faster and makes GPU-based LLMs obsolete. All the new GPU farms that are popping up will have a rude awakening lol.

        • einkorn@feddit.org

          Uhm, I guess you missed the news when it was revealed that DeepSeek had a little more backing than they claimed.

    • SeeMarkFly@lemmy.ml

      There is a BIG difference between what you can do and what you should do.

      We have ZERO understanding of the long-term effects this new technology will have on our civilization.

      Why is everybody so eager to go “all in”?