It’s called Lemmy-Safety or Fedi-Safety, depending on where you look.
One thing to note: I wasn’t able to get it running on a VPS because it requires some sort of GPU.
Yeah, it’s just something like “Tell us why you want to join this instance”. If the answer is “to promote my content” or “qq”, for example, they don’t get approved.
It’s done by the Lemmy software.
We require applications, and most of the ones we get are extremely low-effort, so we don’t approve them. If you have open registration, you’ll be doing a lot of spam moderation.
Run the software that scans images for CSAM. It’s not perfect, but it’s something. If your instance freely hosts whatever without any oversight, word will spread and all of a sudden you’re hosting all sorts of bad stuff. It may not technically be illegal if you don’t know about it, but I personally don’t want anything to do with that.
Looking forward to trying the latest update.
They can check existing code. You have to be able to trust people who are contributing.
They can check new code from these risky people as it comes in, but why risk it?
And as a bonus, presumably you have a nice file filled with historic dates and times!
I’ve noticed the same thing. I think it’s because they’re busy moving it from a distant warehouse to one closer to you, since they can’t possibly keep all of the same crap in every warehouse. So it’s being transported, but not “shipped”, which lets them take longer.
This is the main reason I cancelled Prime. They started advertising “More than just free shipping”, and I realized that I only used it for free shipping, and as Prime got more and more expensive I wasn’t getting any value from it.
Now I just put things in my cart until I have $35 worth and get free shipping anyway. It’s not that much slower, usually an extra day or two, and it doesn’t bother me one bit. I can wait a little while for my $10 guitar strap; it’s not the end of the world.
It’s sarcasm because nobody should be held liable.
Seems like the electric companies should also pay a hefty fine, as they provided the needed infrastructure to enable the piracy. /s
I switched from portainer to dockge. Dockge makes updating a 1-click process which I love. Portainer is overkill for homelab, but I like how it lists things like images and networks.
I use ZFS with Proxmox. I have it as a bind mount to TurnKey File Server (a default LXC template).
I access everything through NFS (via TurnKey File Server). Even other VMs just get the NFS share added to their fstab. File transfers happen extremely fast VM to VM, even though it’s “network” storage.
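For reference, the VM side is just a one-line mount; the hostname and paths below are made-up placeholders for whatever your share actually is:

```
# /etc/fstab on a client VM (placeholder hostname and paths)
fileserver:/srv/storage  /mnt/storage  nfs  defaults,_netdev  0  0
```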
This gives me the benefits of ZFS, and NFS handles the “what ifs”, like what happens if two VMs access the same file at the same time. I don’t know exactly what NFS does in that case, but I haven’t run into any problems in the past 5+ years.
Another thing that comes to mind: make TurnKey File Server a privileged container, so that file ownership goes through the default user (UID 1000, if I remember correctly). Unprivileged containers use wonky shifted UIDs, which requires some magic config you can find in the docs. It works either way, but I chose the privileged route. Others will have different opinions.
Thanks for the suggestion. I ended up using a Raspberry Pi and an old computer monitor to run MagicMirror and MMM-ImmichSlideShow.
I tried ImmichFrame, too, and will revisit it in the future. For now MMM-ImmichSlideShow is working well.
The developer is still active with their other main project, Uptime Kuma. So that’s good.
That’s good
There are two recording types, CMR and SMR; you can read about the differences online. CMR is better because SMR tries to be all fancy (overlapping tracks) to increase capacity, but at the cost of write speed and data integrity.
It won’t be front and center in the specs of a particular drive, but you can usually find the info somewhere.
I wouldn’t worry about higher-capacity drives failing sooner. If you have 10x4TB vs 2x20TB, that’s 5x as many drives that can go bad. So a 20TB drive would need a 5x worse failure rate to come out worse. A bonus of larger (and therefore fewer) drives is lower power consumption: 5-10 watts per drive doesn’t sound like much, but it adds up.
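Quick back-of-the-envelope version of that, with a made-up 1% annual failure rate just as a placeholder:

```python
# Rough sketch: expected annual drive failures for two layouts,
# assuming every drive has the same annualized failure rate (AFR).
afr = 0.01  # placeholder AFR, not a real spec

layouts = {"10 x 4TB": 10, "2 x 20TB": 2}
for name, count in layouts.items():
    print(f"{name}: ~{count * afr:.2f} expected failures per year")

# The 2x20TB layout only catches up to the 10-drive layout's failure
# count if its per-drive AFR is about 5x higher (2 * 5 * afr == 10 * afr).
```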
Good question, and I’m curious what the experts say. Surely it depends on the software that handles DHCP.
I’ve always set static addresses within the DHCP range as reservations, and they’ve stayed reserved and never been handed out to other devices. I’ve used ASUS and MikroTik, for what it’s worth.
If you’re the type to set static addresses on the devices themselves, then that would certainly increase the risk of a conflict if the address is inside the DHCP range.
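On MikroTik, for example, a reservation is just a static lease entry, roughly like this (the IP and MAC are placeholders, and I’m going from memory, so double-check against the RouterOS docs):

```
/ip dhcp-server lease add address=192.168.88.50 mac-address=AA:BB:CC:DD:EE:FF comment="NAS"
```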
I just checked and we have that turned on, too.
We don’t get a lot of applications. A couple per week, maybe.