Meh. Pedant time.
The first step of every test case is supposed to fail, to prove that the testing framework can register a failure within the scope of that test case. In practice nobody acknowledges this, much less incorporates it.
Without that, one is forced to concede that the test environment is not sane, and therefore that no case under test is certifiable, because it cannot be known for certain whether failure was even possible.
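A minimal sketch of the idea, assuming pytest (the framework and test name here are my own illustration, not anything from the thread):

```python
# Canary test: prove the harness can actually register a failure before
# trusting any other result in the suite.
import pytest

def test_harness_can_fail():
    # If assertion failures were silently swallowed, every green result
    # in the suite would be meaningless.
    with pytest.raises(AssertionError):
        assert False, "the framework must be able to fail"
```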
Nobody cares.
Her reaction carries a lot more implication once you hear the question she was asked.
The parent wasn’t looking for an explanation.
They were commenting on how hard your wetware bricked.
not being defined except for being confusing and possibly enraging
You just defined shitposting, my guy.
Are you sure you didn’t set low detail with the viewport cranked way down? I played it on the same model, with a math co-processor, and it could not handle high detail at the viewport size shown in the video.
Edit: I’m fairly certain I had a math co-processor, but I’ll defer to you on this detail just in case. That would certainly make a sizeable difference.
40MHz is plenty for doom.
Ew, no. Even a 386DX-40 is terrible for Doom:
[Video: Doom timedemo, 386DX-40 MHz DOS PC]
A 486SX-33 is certainly playable, but you really want a 486DX2/66.
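For anyone comparing numbers from videos like these: Doom’s built-in benchmark (`doom -timedemo demo3`) reports something like “timed N gametics in M realtics”, and since the engine runs at 35 tics per second, the frame rate works out to N * 35 / M. A quick sketch of the arithmetic (the sample numbers are hypothetical, not taken from the videos):

```python
# Convert Doom timedemo output ("timed N gametics in M realtics") to fps.
# The engine runs at a fixed 35 tics per second.
def timedemo_fps(gametics: int, realtics: int) -> float:
    return gametics * 35 / realtics

# Hypothetical example: "timed 2134 gametics in 3000 realtics" ~= 24.9 fps.
print(round(timedemo_fps(2134, 3000), 1))
```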
Edit: grammar
Edit 2: These videos are accurate, btw. I upgraded from 386SX-25 to 486SX-33 just for Doom while my friend got the 486DX2/66 Packard Bell. Envy.
Edit 3: My memory forced me to go back and properly designate the models.
I am concerned about the energy abuse of LLMs, but it gets worse. AGI is right around the corner, and I fear the law of diminishing returns may not apply because of the advantages it will bring. We need new, sustainable energy like nuclear now, because this will not stop.
Would you kindly find a source for that?
I can only speak personally to the 80s, so that’s not exactly a golden age of reliable information. There was concern about the scale of infinite growth and the power requirements of a perpetual 24/7 full-load timeshare, voiced by people who were almost certainly not qualified to talk about the subject.
I was never concerned enough to look into it, but I sure remember the FUD: “They are going to grow to the size of countries!” - “They are going to drink our oceans dry!” … Like I said, unqualified people.
Another factor is that there aren’t that many supercomputers in the world, only a few thousand of them.
They never took off the way the concerned feared. We don’t even concern ourselves with their existence.
Edit: grammar
While I absolutely agree with everything you’ve stated, I’m not taking a moral position here. I’m just positing that the same concerns have been on the table ever since massive computational power was established, regardless of how, or by whom, it was to be utilized.
Supercomputers were feared to be untenable resource consumers then, too.
Utilizing nuclear to feed AI may be the responsible and sustainable option, but there’s a lot of FUD surrounding all of these things.
One thing is certain: Humans (and now AI) will continue to advance technology, regardless of consequence.
The forefront of technology overutilizes resources?
Always has been.
Edit: Supercomputers have existed for 60 years.
Not even close to the only thing.
Says a lot about society
Sure does. This was mainstream, cutting-edge in 1988.
How have we strayed so far?
Opposite. I get the feeling that Willem Dafoe is a down-to-earth, normal guy with a haunting voice, an intense stare, and an unsettling smile, all played up to extreme comic effect. He strikes me as an individual with the patience of a saint.
And where will it get the cash? An analogue of OnlyFans.
Watch, it’ll happen.
Edit: grammar
Main character syndrome.
“Oh, just Delayed Blast Fireball. Why do you ask?”
“Of course I know where everyone is going to be over the course of the next minute. I mean, really… how far can they go?”
Why do they ban it?
Full disclosure: you’ll notice an edit on my parent comment. I originally submitted “isnt”. No one is immune from the curse.