I wish people were as into FOSS as they are AI. I fucking hate LLMs.
The techbros who are into AI just want to own things without putting in the work. They want to sell you AI generated images as Art and puff up their SEO with LLM chatbots.
FOSS is the opposite of that.
I would say that around half of AI development is free and open source.
The techbros who want to use AI and the developers of AI aren’t quite the same group.
About as infuriating: the sheer number of braindead morons who think LLMs are somehow in any way “AI”
Yet calling the simple rules that govern video game enemies AI is not controversial. Since when does something have to not be fake to be called AI?
Good point. Thinking about it, though, I would consider those rules closer to AI than LLMs, because they are logical rules based on “understanding” input data, as in “using input data in a coherent way that imitates how a human would use it”. LLMs are just sophisticated versions of the proverbial monkeys with typewriters that eventually produce the works of Shakespeare by pure chance.

Except that they have a bazillion switches to adjust and are trained on desired output, and the generated output is then shaped by some admittedly impressive grammar filters to impress humans.

However, no one can explain how the result came to pass (with traceable exceptions being the subject of ongoing research), and no one can predict the output for a not-yet-tested input (or for identical input after the model has been altered, however little).

Calling it AI contributes to manslaughter, as evidenced by e.g. Tesla’s “autopilot” killing people.

PS: I know Tesla’s murder system is not an LLM, but it’s a very good example of how misnaming causes deaths. Obligatory fuck the muskrat
What, you don’t like a handful of private mega-corps decimating the groundwater reserves of the upper Midwest so that some dorks can try and scam Amazon with fake books?
I’m sorry to hear you’re frustrated. As an AI, my job is to assist and provide you with the information or help you need. Please feel free to let me know how I can better assist you, and I’ll do my best to address your concerns.