We’re still using them on machines where performance doesn’t matter
Our build machines are on a special VLAN and don’t have endpoint protection, but they only download from a protected mirror
Their ftrace hooks caused all disk I/O to be serialized, effectively making your multi-core processor single-core for anything I/O-bound
We saw 500-800% increases in build times with their software installed
Oh god. SentinelOne is horrible. If they’re taking issue with your testing, you’ve really screwed the pooch
Somewhere around 0,0 or 1,1
There are amazing possibilities in the theoretical space, but there hasn’t been enough of a breakthrough on how to practically make stable qubits on a scale to create widespread hype
If an attacker gets access to your system, they will be able to ensure you can’t get rid of their access
It will persist across operating system installs
However, this requires them to get access first
How does one flash a ROM without unlocking the bootloader these days?
Shouldn’t that break Android Verified Boot?
A pure GSI image could use a Google key, I suppose, but others shouldn’t, right?
Isn’t #2 the only option?
Websites that specify a color for the foreground (or background) while assuming browsers will use the color they expect for the other have always existed, and still exist
If you’re getting fancy and specifying colors, you can’t cheap out and not specify all colors
If the browser ignores all your colors at that point, then it’s displaying as the user intended
If you only specified some of the colors, that’s a bug in the website
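A minimal illustration of the point (the class name is hypothetical): if a stylesheet sets the foreground color, it should also set the background it assumes, and vice versa:

```css
/* Fragile: sets only the foreground and assumes the browser's
   default background is light */
.article-body {
  color: #222;
}

/* Safe: every color the design depends on is stated explicitly */
.article-body {
  color: #222;
  background-color: #fafafa;
}
```

With the first form, a browser or extension that applies a dark default background will happily render dark-grey text on a dark page.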
The even crazier part to me is that some chip makers we were working with pulled out of guaranteed projects with reasonably decent revenue to chase AI instead
We had to redesign our boards and they paid us the penalties in our contract for not delivering so they could put more of their fab time towards AI
Separated over the PCIe bus with an IOMMU between it and system memory, as well as hardware switches to disable it if I’m not reachable
I haven’t found a way to remove it entirely. It’s the only option I’ve found so far, but if you know of a better designed option, I’m certainly interested
You have to enable developer mode and install with --bypass-low-target-sdk-block now.
Dunno if they’ll remove that eventually
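As a sketch (the APK filename here is a placeholder), the install step looks like:

```shell
# Enable developer options and USB debugging on the device first.
# Recent Android blocks installing APKs that target a very old SDK
# level; this flag tells the package manager to allow it anyway:
adb install --bypass-low-target-sdk-block legacy-app.apk
```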
Google is certainly planning on it being viable.
They’ve been merging RISC-V support in Android and have documented the minimum extensions over the base ISA that must be implemented for Android certification
Yeah, that’s bizarre. I’d never have guessed /home was created by tmpfiles
The RK3588 is pretty nifty, and its Mali GPU (the G610) is the first where ARM themselves have contributed the firmware upstream and helped Collabora with Panfrost development
Bleeding edge, still, but kernel 6.10 and Mesa 24.1 have GPU support
HDMI TX and DSI/CSI are still in-progress
I’m working off the assumption you are using one GPU for the host and one for the guest
The guest one is permanently blacklisted on the host, and you can select the passthrough settings in the GUI
If you’re dynamically detaching the GPU, my statement was incorrect
If your motherboard supports it, it’s really easy
Ensure IOMMU is enabled and run the little script in section 2.2 to see if you can isolate the graphics card
https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF
After that, you can do everything in the virt-manager GUI
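The script that wiki section describes is essentially the following (reproduced from memory, so treat it as a sketch); it lists each IOMMU group and the devices in it, and ideally your graphics card and its audio function sit in a group by themselves:

```shell
#!/bin/bash
# Print every IOMMU group and the PCI devices it contains.
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```

If no groups are printed, IOMMU support is likely disabled in firmware or missing the kernel parameter (intel_iommu=on or amd_iommu=on).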
If you’re relying on iMessage for privacy, ensure you and everyone you’re messaging have gone to iCloud settings and enabled “Advanced Data Protection”
I can’t say for all of them; I just knew that e.g. the Z790 chipset still ran the Ethernet PHY, audio DSP, SPI, their version of TrustZone, etc. through the chipset
https://www.funkykit.com/wp-content/uploads/2022/10/intel-z790-chipset-diagram.jpg
If you have the block diagrams for the laptop ones, I’d be curious
I enjoy that they literally did. The article says the OTA update is just to ignore a hardware sensor
Which raises the question: why was that sensor needed originally?
I haven’t looked that closely at laptop CPUs
My guess would be partially because there are fewer possible interfaces, and they’re directly connecting the CPU to a separate Ethernet/WiFi MAC, USB hub controller, and audio DSP rather than having a separate chipset arbitrating who’s talking to the CPU and doing some of those functions?
It’s not just betas - it’s in the main release, too