Reusable second stage, and more economical rockets with better turnaround.
Elon is stupid and has a lot of money. SpaceX somehow got competent people managing him, steering the company in a decent direction.
Ah yes. An American itasha to represent the thing they love.
Oh. So that’s why it looks better. Photo stacking is OP.
Found the spreadsheet https://goo.gl/z8nt3A
And the source: https://www.hardwareluxx.de/community/threads/die-sparsamsten-systeme-30w-idle.1007101/
Still, you can calculate how much you would actually save from a 2 W power reduction before selling this one and buying a different NAS.
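A rough sketch of that break-even calculation (the electricity price and swap cost below are made-up assumptions, plug in your own):

```python
# Is it worth swapping NAS hardware to save 2 W at idle?
# All prices here are assumptions for illustration.
HOURS_PER_YEAR = 24 * 365  # 8760

def yearly_saving_eur(watts_saved: float, price_per_kwh: float) -> float:
    """Yearly cost saving from reducing a constant power draw."""
    kwh_saved = watts_saved * HOURS_PER_YEAR / 1000
    return kwh_saved * price_per_kwh

saving = yearly_saving_eur(watts_saved=2, price_per_kwh=0.30)  # assumed 0.30 EUR/kWh
print(f"{saving:.2f} EUR/year")  # -> 5.26 EUR/year

# If the swap (new NAS minus resale value of the old one) costs 100 EUR:
swap_cost = 100  # assumed
print(f"{swap_cost / saving:.1f} years to break even")  # -> 19.0 years
```

At those assumed prices a 2 W saving takes roughly two decades to pay off, which is the point of doing the math first.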
You can reduce the disk spin-down timeout to 5-15 minutes after the last access for better power saving.
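On a Linux NAS that timeout can be set with hdparm (a sketch, not all drives or USB bridges honour it; `/dev/sda` is a placeholder for your data disk):

```shell
# Set the standby (spin-down) timeout to 10 minutes.
# For -S values 1-240 the timeout is value * 5 seconds, so 120 -> 600 s.
hdparm -S 120 /dev/sda

# Check whether the drive is currently active or in standby:
hdparm -C /dev/sda
```

Note this is a runtime setting; to make it persistent you'd put it in your distro's hdparm config.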
Maybe you are looking at the wrong thing. The idle state of the CPU and motherboard controllers matters more than spun-down HDDs.
I saw a spreadsheet somewhere with a lot of CPU + motherboard combinations and their idle power consumption, for ultra-low-energy NAS optimisation.
If this is a textbook that you need to have in class, I would say go to a print shop and order a couple of copies for you and your classmates (they also want cheaper textbooks). I think the biggest problem will be getting a usable binding, as loose or stapled paper won't cut it. A print shop will have the machines and expertise to do it relatively cheaply.
I once saw a pirated textbook in class and it was done like that. I think half the class had a pirated copy.
A modem translates fiber / DSL signals into Ethernet over twisted-pair cable.
An access point translates twisted-pair Ethernet into WiFi.
I think you are looking for an all-in-one router.
For AI/ML workloads, VRAM is king.
As you are starting out, something older with lots of VRAM would be better than something faster with less VRAM at the same price.
The 4060 Ti is a good baseline to compare against, as it has a 16 GB variant.
The "minimum" VRAM for ML is around 10 GB, and the more the better; less VRAM can be usable, but with sacrifices in speed and quality.
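A rough rule of thumb for where numbers like that come from: estimate how much VRAM just the model weights need at a given precision (a sketch; the 1.2x overhead factor is an assumption, and activations/context need extra on top):

```python
def weights_vram_gb(params_billions: float, bytes_per_param: float,
                    overhead: float = 1.2) -> float:
    """Approximate VRAM in GB to hold model weights.
    The overhead multiplier for framework/runtime costs is an assumption."""
    return params_billions * bytes_per_param * overhead

# Example: a 7B-parameter model at different precisions.
print(f"fp16:  {weights_vram_gb(7, 2):.1f} GB")    # ~16.8 GB, tight even on 16 GB cards
print(f"int8:  {weights_vram_gb(7, 1):.1f} GB")    # ~8.4 GB, fits in ~10 GB
print(f"4-bit: {weights_vram_gb(7, 0.5):.1f} GB")  # ~4.2 GB, fits smaller cards
```

This is why quantization matters so much on consumer cards: halving bytes per parameter roughly halves the VRAM floor.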
If you still like that stuff in a couple of months, you could sell the GPU you bought and swap it for a 4090 super.
For AMD, support is confusing: there is no official ROCm support for mid-range GPUs on Linux, but someone said it works anyway.
There is a new project, ZLUDA, that enables running CUDA workloads on ROCm:
https://www.xda-developers.com/nvidia-cuda-amd-zluda/
I don’t have enough info to recommend AMD cards.
Languages create natural barriers on the internet, which leads to sites that deal only in one specific language.
FMHY has quite a few entries in its non-English section.
I think compute per watt and idle power consumption matter more than raw maximum compute power.