US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).

The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.

  • nadram@lemmy.world · 7 hours ago

    But you’re using these billionaires’ AI models, are you not? Even if you use the free models, they still benefit from your profile and query data.

      • mesa@lemmy.world · 7 hours ago

        Yep, you can run models without giving $$ to the tech billionaires!

        Now we’re giving it to the power billionaires instead! Unless you own your own power sources.
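        The electricity cost of local inference can be put in rough numbers. A minimal back-of-envelope sketch, where the GPU wattage, daily usage hours, and electricity rate are all illustrative assumptions rather than measurements:

```python
# Back-of-envelope: electricity cost of running a local model.
# All three inputs are assumptions for illustration only.

GPU_DRAW_WATTS = 350   # assumed draw of a consumer GPU under inference load
HOURS_PER_DAY = 2      # assumed daily inference time
PRICE_PER_KWH = 0.15   # assumed residential electricity rate, $/kWh

daily_kwh = GPU_DRAW_WATTS / 1000 * HOURS_PER_DAY
monthly_cost = daily_kwh * 30 * PRICE_PER_KWH

print(f"{daily_kwh:.2f} kWh/day, ~${monthly_cost:.2f}/month")
# → 0.70 kWh/day, ~$3.15/month
```

        Under those assumptions the "power billionaires" get a few dollars a month per hobbyist, not a fortune; the numbers scale linearly if you run heavier workloads or pay higher rates.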

          • mesa@lemmy.world · 7 hours ago

            Meh, I like some of the others on Hugging Face a bit more for coding and such. But it’s all the same at the end of the day. I do like what you’re saying though!

            Models + moderate power should be what we strive for. I’m hoping for a Star Trek ending where we live in a post-scarcity world. I’m planning on a post-apocalypse haha.

            Once ASIC chips come out (essentially a specific model on a chip), the amount of power we use will be dramatically less.

              • mesa@lemmy.world · edited, 60 minutes ago

                It’s an interesting field! I think the reason we haven’t gone there is that LLM-specific models all have very different architectures/languages/etc. right now, so the algorithms that create and run them need flexibility. GPUs are very flexible in what they can do with multiprocessing.

                But in 5 years (or less), I can see a black-box kind of system running at 1000x+ speed that will make GPU LLMs obsolete. All the new GPU farm places that are popping up will have a rude awakening lol.

      • einkorn@feddit.org · 7 hours ago

        Uhm, I guess you missed the news when it was revealed that DeepSeek had a little more backing than they claimed.