Yep that’s what nvidia marketing seems to be calling their denoiser nowadays. Gods spare us marketing departments.
Tensor cores have nothing to do with raytracing. They’re cut-down GPU cores specialising in tensor operations (hence the name) and nothing else. Raytracing is accelerated by RT cores, which do BVH traversal and ray intersection tests; the tensor cores are in there to run a denoiser that turns the noisy mess real-time RT produces into something that’s, well, not messy. Upscaling, essentially: the only difference between denoising and upscaling is that in upscaling the noise is all square.
And judging by how AMD has done this stuff before: nope, they won’t do separate cores, but will make sure that the ordinary cores can do all that stuff well.
The trick to nixos, in this instance, is to use a python venv. Python dependencies are fickle and nasty in the first place, triply so when we’re talking fast-churning AI code. I tried specifying everything with nix, I succeeded, and then you have random comfyui plugins assuming they can get a writeable location by constructing a path from comfyui’s main.py. It’s not worth it: let python be the only dependency you feed in, let pip and general python jank do the rest.
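A minimal sketch of that setup, assuming python itself comes from nix (e.g. `nix-shell -p python3`) and literally everything else from pip; the venv directory name and requirements file are my own placeholders, not anything comfyui-specific:

```python
# Bootstrap a writeable venv next to the checkout; only the interpreter
# is nix-provided, so plugins that construct paths from main.py get a
# normal, writeable directory tree instead of the read-only /nix store.
import subprocess
import venv
from pathlib import Path

env_dir = Path("comfyui-venv")       # hypothetical location, pick your own
if not env_dir.exists():
    venv.EnvBuilder(with_pip=True).create(env_dir)

pip = env_dir / "bin" / "pip"
reqs = Path("requirements.txt")      # whatever the project ships
if reqs.exists():
    subprocess.run([str(pip), "install", "-r", str(reqs)], check=True)
```

From then on you run everything through `comfyui-venv/bin/python` and nix never hears about a single pip package.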
5500 here. I can’t use any recent rocm version because the GFX override I use is for a card that apparently has a couple more instructions, and the newer kernels instantly crash with an illegal-operation exception.
I found a build someone made, buried in a docker image, and it does indeed work for the 5500 without the override, but it uses all-generic code for the kernels and is about 4x slower than the ancient version.
What’s ultimately the worst thing about this isn’t that AMD isn’t supporting all cards for rocm – it’s that the support is all or nothing. There’s no “we won’t be spending time on this, but it passes automated tests, so ship it”. Instead it’s “oh, the new kernels broke that old card, tough luck, you don’t get new kernels”.
So in the meantime I’m living with the occasional (every couple of days?) freeze when using rocm, because I can’t reasonably upgrade. And it’s not just that the driver crashes and the kernel tries to restart it: the whole card needs a reset before it will do anything but display a vga console.
Oh, ball point pens. Last I heard, one of the things they do preserve in primary school over here is the good ole progression from pencil to fountain pen, sticking with that for the whole four years. Pencil first because if you use too much force you break the tip without breaking the pen – it’s just annoying, and that’s the point: once they switch to fountain pens they’re not going to bend them. Also, cursive from the start. There are important lessons about connecting up letters in there: writing single letters properly is harder than cursive, because on top of moving your pen over the paper, you have to lift it. Much easier if you already have proper on-paper movement down.
I am quite partial to ink rollers nowadays but still can’t stand ordinary ball points. They feel wrong.
Software is still jank. Well maybe except zfs and sqlite, but the rest is jank. Also seL4.
So my right hand has drifted further to the right over the years,
That should literally never be the case. How do you even find your home position like that.
The quick and simple way to learn proper touch typing is simple: use a typing tutor program. It really is all about writing random stuff without looking at your keyboard, that’s all there is to it; depending on the layout, what you write may make more or less sense. Do that until you can actually type blindly, and if you need a refresher for symbols then do that too, it’s worth the time investment. Just, for the love of everything, don’t look at your keyboard, and don’t ever rest your index fingers anywhere but where you feel that they’re in the right position. Not some feel-good “feel”, but those nubs on the keys (f and j on qwerty). Feel them.
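The “write random stuff and get scored” core of any typing tutor is tiny; here’s a toy sketch (the word list and line length are made up, real tutors like gtypist or ktouch do this properly, per-layout):

```python
# Toy typing-drill core: generate a random practice line, then score
# whatever came back against it, position by position.
import random

WORDS = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dogs"]

def make_drill(n_words=8, seed=None):
    """Produce one practice line of random words."""
    rng = random.Random(seed)
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

def accuracy(target, typed):
    """Fraction of positions typed correctly; missing/extra chars count as wrong."""
    hits = sum(a == b for a, b in zip(target, typed))
    return hits / max(len(target), len(typed), 1)
```

Loop that with input() and a timer and you’ve got the whole genre, minus the nagging about looking down.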
Perilous, eh. Threatening tales of impending doom and destruction. Who are you actually trying to convince, here. I doubt it’s me – I’d be flattered, but I don’t think you care enough.
If Roko’s Basilisk is forcing you, blink twice.
3840 * 1600 * 4 B / 1024 / 1024 = 23.4375 MiB for uncompressed RGBA (four bytes per pixel).
That is, even if that thing were pure random pixels that had to be stored uncompressed, and you used a completely useless alpha channel, you still wouldn’t hit 25M.
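The arithmetic, spelled out:

```python
# Worst case for that image: every pixel stored raw as RGBA.
size_bytes = 3840 * 1600 * 4        # width * height * 4 bytes per pixel
size_mib = size_bytes / 1024 / 1024
print(size_mib)                     # 23.4375 -- under 25M even uncompressed
```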
dismiss at your own peril.
Oooo, I’m scared. Just as much as I was scared of missing out on crypto or the last 10000 hype trains VCs rode into bankruptcy. I’m both too old and too much of an engineer for that BS, especially when the answer to a technical argument, a fucking information-theoretical one on top of that, is “Dude, but consider FOMO”.
That said, I still wish you all the best in your scientific career in applied statistics. Stuff can be interesting and useful aside from AI BS. If OTOH you’re in that career path because AI BS and not a love for the maths… let’s just say that vacation doesn’t help against burnout. Switch tracks, instead, don’t do what you want but what you can.
Or do dive into AGI. But then actually read the paper, and understand why current approaches are nowhere near sufficient. We’re not talking about changes in architecture, we’re talking about architectures that change as a function of training and inference, that learn how to learn. Say goodbye to the VC cesspit, get tenure aka a day job, and maybe in 50 years there’s going to be another sigmoid and you’ll have written one of the papers leading up to it, because you actually addressed the fucking core problem.
But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.
Tell me you haven’t read the paper without telling me you haven’t read the paper. The paper is about T2 vs. T3 systems, humans are just an example.
Most turbo buttons never worked for that purpose, though; they were still way too fast. Like, even ignoring other advances such as better IPC (or rather CPI, back in those days), you don’t get to an 8MHz 8086 by halving the clock speed of a 50MHz 486. You get to 25MHz. And practically all games past that 8086 stuff were written with proper timing code, because devs knew perfectly well that they were writing for more than one CPU. Also, there’s software to do the same job, but more precisely and flexibly.
It probably worked fine for the original PC-AT or something when running PC-XT programs (how would I know, our first family box was a 386), but after that it was pointless. Then it hung on for years, then it vanished.
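“Proper timing code” just means scaling movement by measured frame time instead of assuming a fixed clock. A minimal sketch (not from any particular game, names are mine):

```python
# Frame-rate-independent movement: measure wall-clock time per frame and
# advance by speed * dt, so the game runs at the same on-screen speed on
# an 8MHz 8086 and a 50MHz 486 alike.
import time

def advance(pos, speed, dt):
    """One frame of movement: `speed` in units/second, `dt` in seconds."""
    return pos + speed * dt

def frame_loop(frames=3, speed=100.0):
    pos, last = 0.0, time.monotonic()
    for _ in range(frames):
        now = time.monotonic()
        pos = advance(pos, speed, now - last)   # dt shrinks on faster CPUs
        last = now
    return pos
```

Games that skipped the `dt` part are the ones the turbo button was supposed to rescue, badly.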
And my point isn’t about where we’re at, it’s about how far the same tech progressed on another, domain-adjacent task in three years.
First off, are you extrapolating the middle part of the sigmoid thinking it’s an exponential. Secondly, https://link.springer.com/content/pdf/10.1007/s11633-017-1093-8.pdf
Nuclear energy is more expensive than renewables, so not really, no. Having a good combination of starting materials to minimise the amount of energy you need to fuse everything together, or even starting out with something heavier, would be the way to go.
For more details ask a nuclear physicist, which I’m not. Honestly, there doesn’t seem to be much work on it.
We’ve known how to turn lead into gold for ages: you just remove a few protons, neutrons, and electrons (lead is element 82, gold is 79). Long story short: it takes a fuckton of energy and isn’t worth it.
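The nucleon bookkeeping, for the record:

```python
# Atomic numbers: turning lead into gold means shedding three protons
# (plus, in practice, some neutrons, with the electron count following).
Z = {"lead": 82, "gold": 79, "mercury": 80}
delta_protons = Z["lead"] - Z["gold"]
print(delta_protons)   # 3
```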
Fun fact: when Ernest Rutherford and colleagues put together the first paper about their findings, they avoided the word “transmutation” like the plague. It had been considered impossible since before alchemy became chemistry, and even though he was publishing in physics, chemists would probably still have had his head.
From what I understand this is just the recommended feed so it wouldn’t affect searching for specific stuff, or binging a channel’s backlog.
And frankly speaking this should be a default feature. All too often the algorithm thinks “oh you watched this one video let me drown you in that shit at the expense of everything else”.
The whole thing meshes well with what we know from child/youth psychology, btw: agency makes all the difference, whether they’re seeking information out, or are (in currentyear) doomscrolling it. One tends to involve critical engagement; the other turns them into an osmosis sponge.
Oh. Speaking of youtube fitness channels, here’s a good one. And another one. Like, especially if you haven’t done anything in a while, just watch this.
They don’t keep copies. And learning speed? Why one day? Does it count if I skim through a book?
it is that they didn’t pay for the books they read, like people are supposed to do legally.
If I can read a book from a library, why shouldn’t OpenAI or anybody else?
…but yes from what I’ve heard they (or whoever, don’t remember) actually trained on libgen. OpenAI can be scummy without the general process of feeding AI books you only have read access to being scummy.
So you would rather Ukrainians lay down their weapons and we have 20 years of Bucha and Holodomor, again? I somehow doubt you would prefer that to continued warfare; more likely, “war is awful” is taking precedence over “not fighting it would be a hell of a lot worse”. But that’s why wars are, by and large, fought: because people think that not fighting would be worse. Some because they’re nuts, some, like the Ukrainians, because they’re spot-on.
The only party which can lay down their weapons and not get absolutely kicked in the face for it is Russia. Every minute it continues is on them.
About the only AI company currently alive that I’m sure will survive is CivitAI. Huggingface probably, too. Both are, in the end, in the datacenter business. Huggingface has exposure to VC BS in their client base, they might be in trouble if a significant number suddenly go belly-up but if they have any sense they’ll simply not overextend. And, well, they, too, can switch to cat pictures.