This one terrifies me every time… When you pass a car going the opposite way, and it basically looks like the steering wheel has a wig on… It’s always an old woman… Can they even see the road? Or are they navigating using the sky?
Agree on both parts, but the second part can still be achieved from an unconnected car, you just can’t do it remotely
IPv6 does not require you to open your machine to the Internet, even without making use of a NAT. Sure, you get an IP that’s valid on the whole Internet, but that doesn’t mean that anyone can actually reach you: the router’s stateful firewall typically still drops unsolicited inbound connections by default, much like NAT incidentally does for IPv4.
Are these restrictions set out by the ISP or the dorm?
If you don’t do business with the ISP, then you don’t have to agree to and follow their terms.
So as long as the dorm doesn’t have rules against setting up your own WiFi, you should be well within your rights to purchase an Internet connection from another provider. But since you are likely not allowed to get your own line installed, you are probably restricted to ISPs that provide a service over the cellular network.
Of course using a cellular connection will give you worse latencies for online games, but at least you can have your own WiFi with low latency for your VR.
If you want to be nice, you could then run as much of your network over ethernet as possible, so you congest the airwaves as little as possible, possibly only running the VR headset over WiFi, and maybe even only enabling the WiFi radio when you want to play VR. If all your WiFi devices support 5GHz, you might also completely disable your 2.4GHz WiFi, to leave the most congested frequencies alone.
To lower the chance of someone complaining about your WiFi, you should configure it as a “hidden network”, such that it doesn’t broadcast an SSID, and therefore doesn’t show up when people are looking for WiFi networks to connect to.
I really don’t see much benefit to running two clusters.
I’m also running single clusters with multiple ingress controllers both at home and at work.
If you are concerned with blast radius, you should probably first look into setting up Network Policies to ensure that pods can’t talk to things they shouldn’t.
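For example, a rough sketch of a default-deny setup using the Python kubernetes client (the namespace, labels and port here are made up for illustration; the same thing is usually expressed as YAML manifests):

```python
# Sketch: deny all ingress in a namespace by default, then allow traffic to
# "backend" pods only from "frontend" pods. Names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
api = client.NetworkingV1Api()

deny_all = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],               # no ingress rules listed = deny all
    ),
)

allow_frontend = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-backend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
            )],
            ports=[client.V1NetworkPolicyPort(port=8080, protocol="TCP")],
        )],
    ),
)

for policy in (deny_all, allow_frontend):
    api.create_namespaced_network_policy(namespace="my-app", body=policy)
```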
There is of course still the risk of something escaping the container, but the risk is rather low in comparison. There are options out there for hardening the container runtime further.
You might also look into adding things that can monitor the cluster for intrusions or prevent them. Stuff like running CrowdSec on your ingresses, and using Falco to watch for various malicious behaviour.
No need for a physically separated network, that’s what VLANs are for
That sounds like you need a more serious setup, where you can control the network priorities and set up QoS, so the devices that you use interactively get priority over the other devices.
So as far as I understand, you have
Is that correct?
Why not get the WiFi in the Comcast router disabled, and use your inner network exclusively, such that both WiFi and ethernet devices are on the same network?
That’s what I did with my network, and I even got the ISP to put their modem/router into bridge mode, so it’s completely transparent.
That makes perfect sense, and switching is definitely annoying then… But the person I responded to said they had multiple WiFi networks at home… i.e. not on holiday
Why on earth would you have multiple WiFi networks in your home?
This, but for playing VR games
Immutable distros were originally very focused on servers, and more recently distros for workstations have started gaining more interest as the concept has matured.
With the advent of cloud computing “immutable infrastructure” started becoming more and more popular. This concept started out as someone sitting down and grabbing a normal Linux distribution, and installing all the necessary bits for the server purpose they needed. Then baking that into an image. Now you could launch new copies of that machine whenever you felt like it, and they would behave exactly the same. If any of them started doing something wonky, you just destroyed it and launched a new copy. This was very useful for software developers and operations people who could now more easily reason about how things behaved. And be sure that the difference in behaviour wasn’t because someone forgot to enable a setting, install a tool, or skipped a step in the setup.
On the software development side, you also simultaneously saw more and more developers make use of functional programming methods, and along with those, immutable data structures. Fundamentally, instead of adding an item to a list, you make a new list with all the old items and the new item in it. You never change the data after its creation. Each “change” is a new copy, with the difference already built in.
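As a rough illustration in plain Python:

```python
# Mutable style: the original list is changed in place.
servers = ["web1", "web2"]
servers.append("web3")          # everyone holding a reference now sees the change

# Immutable style: every "change" produces a new value,
# and the old one stays exactly as it was.
servers_v1 = ("web1", "web2")            # tuples can't be modified
servers_v2 = servers_v1 + ("web3",)      # new tuple with the extra item built in

print(servers_v1)  # ('web1', 'web2')            -> still intact, easy to reason about
print(servers_v2)  # ('web1', 'web2', 'web3')
```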
Then containers started becoming popular. Which allowed software developers to build a container image on their local computer, and then ship that image to a server, where the image behaved exactly as it did on their local machine. This also meant that the actual OS became less and less important, as everything needed by the container was already bundled in the container. The containers also worked as “immutable”, since everything you would install or change within the containers would be immediately lost when the container was destroyed, and recreating it would be exactly as when the image was built.
The advent of containerised workloads gave rise to a lot of different Linux distributions. Since the containers pretty much only needed the Linux kernel from the OS, it was pretty easy to make a container-centric operating system, and in turn lock down everything else, even completely omitting a package manager. Stuff like CoreOS, Flatcar, RancherOS, and many others were immutable Linuxes that only catered to containers. I don’t know the exact mechanism for all of these, but at least the original CoreOS and Flatcar made the actual system read-only, and on top of that had two main partitions: one partition would be the current system, and the other would be where updates were downloaded. Once an update was downloaded and ready, you just rebooted the machine, and it would be running off the updated partition. Which also meant easy rollback if you got a broken update: you could just boot off the other, un-updated partition.
Containers were however rather ill-suited for desktop applications, as there was no good way to provide a GUI. You could serve up a web page, but native GUI apps were tricky.
That’s where Flatpak, Snaps and all that came in, which essentially brings the container mentality to normal desktop apps. This brought immutability to individual apps, as they bring their own dependencies, and therefore don’t have to rely on the correct versions of dependencies being available on the machine.
The logical next step was of course to add immutability to workstation distributions. This is where the popularity of Fedora Silverblue, NixOS, and many others really started taking shape.
I believe Fedora Silverblue uses ostree to make the system “immutable”. Of course you can still make changes to your system, but the system is built to be completely aware of the state before and the state after; this is what’s called “atomic”. There’s no such thing as a partially installed package, there is only the state before installing something and the state when the thing is fully installed. You can roll back to any of the previous states, to recover from a broken update or misconfiguration. This also makes it risk-free to try out new things. Tried out a new desktop environment and it broke your system? Just roll back. Accidentally uninstalled a critical package? Just roll back. Want to try out a new display manager? Just apply the config and roll back if you don’t like it.
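Conceptually it’s something like this toy sketch (not how ostree is actually implemented): the system keeps a history of complete states, and “current” is just a pointer that only moves once a new state is fully in place.

```python
# Toy model of atomic updates: each deployment is a complete, immutable
# snapshot, and rollback just moves the "current" pointer back.
deployments = [
    {"id": 0, "packages": frozenset({"kernel", "gnome"})},
]
current = 0

def apply_update(new_packages: frozenset) -> None:
    """An update lands as a whole new deployment, or not at all."""
    global current
    deployments.append({"id": len(deployments), "packages": new_packages})
    current = len(deployments) - 1   # the pointer flips only once the state is complete

def rollback() -> None:
    """Broken update? Point back at the previous, untouched deployment."""
    global current
    if current > 0:
        current -= 1

apply_update(frozenset({"kernel", "gnome", "kde"}))  # try out KDE
rollback()                                           # didn't like it, go back
print(deployments[current]["packages"])              # frozenset({'kernel', 'gnome'})
```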
SteamOS also does the thing with multiple partitions, and even allows you to turn off the immutability. Other distributions aren’t as lenient. There’s no way to turn off the immutability in NixOS or Fedora Silverblue.
I have a few cheap cameras that can handle both WiFi and ethernet, they support an SD card, and they do continuous recording regardless of connection type.
You mean Pozidriv?
ZFS doesn’t really support mismatched disks. In OP’s case it would behave as if there were 4x 2TB disks, making 4 TB of raw storage unusable, and with 1 disk of parity that would yield 6 TB of usable storage. In the future the 2x 2TB disks could be swapped with 4 TB disks, and then ZFS would make use of all the storage, yielding 12 TB of usable storage.
BTRFS handles mismatched disks just fine, however its RAID5 and RAID6 modes are still partially broken. RAID1 works fine, but results in half the storage being used for the second copy of the data, so this would again yield a total of 6 TB usable with the current disks.
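The back-of-the-envelope math for OP’s 2x 4 TB + 2x 2 TB drives (ignoring filesystem overhead and the TB vs TiB difference):

```python
# Capacity estimates for 2x 4 TB + 2x 2 TB drives.
disks_tb = [4, 4, 2, 2]

# ZFS RAIDZ1: every disk is treated as the size of the smallest one,
# and one disk's worth of space goes to parity.
smallest = min(disks_tb)
zfs_usable = smallest * (len(disks_tb) - 1)             # 2 * 3 = 6 TB usable
zfs_wasted = sum(disks_tb) - smallest * len(disks_tb)   # 12 - 8 = 4 TB unusable

# BTRFS RAID1: every block is stored twice, so with these disk sizes
# usable space is roughly half of the total raw capacity.
btrfs_usable = sum(disks_tb) / 2                        # 12 / 2 = 6 TB usable

# After swapping the 2 TB disks for 4 TB ones, ZFS RAIDZ1 gives:
upgraded = [4, 4, 4, 4]
zfs_after_upgrade = min(upgraded) * (len(upgraded) - 1)  # 4 * 3 = 12 TB usable

print(zfs_usable, zfs_wasted, btrfs_usable, zfs_after_upgrade)
```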
SSD longevity seems to be better than HDDs overall. The limiting factor is how many write cycles the SSD can handle, but in most cases the write endurance is so high that it’s unreachable by most home/NAS systems.
SSDs are however really bad for cold storage, as they will lose the charge stored in their cells if left unpowered too long. When the SSD is powered it will automatically refresh the cells in the background to ensure they don’t lose their charge.
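To put some illustrative numbers on the endurance point (these are made-up but typical-looking figures, so check the spec sheet of an actual drive):

```python
# Why write endurance rarely matters for a home NAS: a consumer 1 TB SSD is
# often rated for roughly 600 TBW (terabytes written). Figures are illustrative.
tbw_rating = 600          # TB of writes the drive is rated for
writes_per_day_gb = 50    # a fairly heavy day for a home/NAS workload

years_to_wear_out = (tbw_rating * 1000) / writes_per_day_gb / 365
print(f"~{years_to_wear_out:.0f} years of writes")  # ~33 years
```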
Yes, but Google would not have done that if nobody used Firefox