The other day I tried to have it help me with a programming task on a personal project. I am an experienced programmer, but I only “get by” in Python (typically just by looking up the documentation for the standard library). I thought, “OK. This is it. I will ask Llama 3.3 and GPT4 for help.”
That shit literally set me back a weekend. It gave me approaches and answers so bad, bad even to my eye (aforementioned programming experience, degree in comp sci, etc.), that I got confused about writing Python. Had I just done what I usually do, which is look up the documentation and use my brain, I would have finished my weekend task a whole weekend sooner.
It scares me to think what people are doing to themselves by relying on this, especially if they’re novices.
Same here. There’s a lot of denial going on, but LLMs are not good for anything that requires factual information. They likely never will be, on account of just being statistical models of language. Summarizing long text where correctness isn’t an issue is really one of the only places where I still think they’re good.
Search? Not if you want anything factual with citations.
Code? Fuck no. They constantly produce poor-quality code that may depend on non-existent libraries or functionality. More time is spent debugging than writing, and it leaves the dev with a poor understanding of what the code actually does and how to optimize/extend/etc. it.
Generating literary smut? Well, it’s not going to do as good a job as a person who can create something completely novel, but it can be passable without likely harm to authors (I’d classify it as a tier below erotic fan fiction).