US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).
The public does not share this confidence. Only about 11 percent of the public say they are "more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.
But you're using these billionaires' AI models, are you not? Even if you use the free models, they still benefit from your profile and query data.
Nope :)
DeepSeek GitHub fork ftw
Tax billionaires til they don’t exist ! Or some other way!
Yep you can run models without giving $$ to tech billionaires!
Now we are giving it to the power billionaires! unless you own your own power sources.
Luckily DeepSeek uses less power than most other locally run models. You can run it on almost any modern-ish machine.
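To see why "almost any modern-ish machine" is plausible, here's a rough back-of-envelope sketch of the memory a local model's weights need (this is my own illustration, not anything from the thread; it ignores KV cache and activation overhead, which add more on top):

```python
# Rough estimate of RAM/VRAM needed just to hold an LLM's weights.
# Assumes weight storage dominates; real usage is somewhat higher.
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate gigabytes needed for the weights alone."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B-parameter distilled model at 4-bit quantization:
print(round(model_memory_gb(7, 4), 1))   # → 3.5 (GB) — fits in ordinary laptop RAM

# The same model at full 16-bit precision:
print(round(model_memory_gb(7, 16), 1))  # → 14.0 (GB) — already a stretch for many machines
```

That's why quantized distilled models are the ones people actually run locally: 4-bit weights cut the footprint by 4x versus fp16.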
Meh, I like some of the others on Hugging Face a bit more for coding and such. But it's all the same at the end of the day. I do like what you're saying though!
Models + moderate power should be what we strive for. I'm hoping for a Star Trek ending where we live in a post-scarcity world. I'm planning on a post-apocalypse haha.
Once ASIC chips come out (essentially a specific model baked onto a chip), the amount of power we use will be dramatically less.
ASIC AI seems like a troublesome thing. Imagine AI-powered hacker dongles. Wow.
It's an interesting field! I think the reason we have not gone there yet is that the LLM-specific models all have very different architectures/formats/etc. right now, so the algorithms that create and use them need flexibility. GPUs are very flexible with what they can do with multiprocessing.
But in 5 years (or less), I can see a black-box kind of system that runs at 1000x+ speed and makes GPU LLMs obsolete. All the new GPU farms that are popping up will have a rude awakening lol.
Will have a look, thx.
Uhm, I guess you missed the news when it was revealed that DeepSeek had a little more backing than they claimed.
Yeah they were sponsored
But the code is open-sourced.
So
FOSS FTW