Give Portainer a try. It’s actually pretty good for getting a bird’s-eye view, and lets you manage more than one Docker server.
It’s not perfect of course.
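For anyone who wants to kick the tires: Portainer CE typically runs as a container itself. A minimal sketch, assuming the commonly documented image name, port, and volume layout (verify against the current Portainer docs before relying on it):

```shell
# Sketch: run Portainer CE as a container on the local Docker host.
# Image tag and port are the commonly published defaults; check current docs.
docker volume create portainer_data   # persistent storage for Portainer's own data
docker run -d \
  --name portainer \
  --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

The Docker socket mount is what gives Portainer its view of the host; additional servers are attached by running its agent container on each of them.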
pfBlockerNG on pfSense is very powerful.
Can you not just back up the pg txn logs (with periodic full backups, purged in accordance with your needs)? That’s a much safer way to approach DBs anyway.
(exclude the online db files from your file system replication)
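To sketch what that looks like in practice: PostgreSQL’s built-in continuous archiving covers the txn (WAL) logs, and pg_basebackup covers the periodic fulls. The paths and the 14-day retention below are placeholders for illustration; adapt them to your setup:

```shell
# postgresql.conf fragment: archive each completed WAL segment.
# /backup/wal is a placeholder path; point it somewhere excluded from
# your live-file replication, per the parent comment.
#   archive_mode = on
#   archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

# Periodic full (base) backup, e.g. from cron; tar format, compressed,
# with the WAL needed for a consistent restore fetched into the backup.
pg_basebackup -D /backup/base/$(date +%F) -Ft -z -X fetch

# Purge old base backups per your retention needs (placeholder: 14 days).
find /backup/base -maxdepth 1 -mtime +14 -exec rm -rf {} +
```

Restores then replay the archived WAL on top of the most recent base backup, which is why this is safer than snapshotting the live DB files.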
My concern (back then) with keeping the greens spun up would be that I’d lose the energy savings potential of them without the benefits of a purpose built NAS drive.
In my current NAS, I just have a pair of WD Red+. I don’t have a NVME cache or anything but it’s never been an issue given my limited needs.
I am starting to plan out my next NAS though, as the current one (Synology DS716+) has been running for a long time. I figure I can get a couple more years out of it, but I want to have something planned in the wings just in case. (Seriously looking at a switch to TrueNAS but grappling with the price of hardware vs. an appliance…). My hope is that SSDs drop in price enough to make the leap when the time comes.
I had WD Greens in my first NAS (they were HDDs, though). This was ill-advised. Definitely better for power consumption, but they took forever to spin up for access to the point where it seemed like the NAS was always on the fritz.
Now I swear by WD Red. Much, much better (in my use case).
(I’m not sure how things pan out in SSD land though. Right now it’s just too pricey for me to consider.)
I wasn’t even thinking about the performance/efficiency impact! That’s a good point. For me, I like my phone to do what I want it to do. I didn’t ask for AI. I don’t need it. I actually don’t even want it - I really don’t want my devices to ‘think’. What I want is fast, efficient, and predictable execution of what I tell it to do. (By predictable, I mean that I don’t want it to anticipate my needs or surprise me - I want a nice, responsive and dumb device with a great display, a good camera, good connectivity, and a good battery.)
If it’s got AI, it means I can feel even better about saving some money and getting an S23 when I finally replace my current phone.
Exactly. The best solution is one that is simple, covers almost all scenarios and generally doesn’t require rethinking when new things come along.
I do wish the Apple stuff played a bit more nicely - my wife uses it and it’s honestly the biggest headache of the design.
OneDrive/Google Drive for immediate stuff. Other stuff (too big for cloud services) goes from local to Synology, or is simply served from Synology. Cloud Sync from OneDrive/Google Drive to Synology. (Periodic verification that things are synced - this is very important!) Snapshots on Synology for local ‘oops’ recovery. Synology Hyper Backup to Wasabi for catastrophic recovery. (I used to use Glacier for this but it was a bit unwieldy for the amount of money saved - I don’t have that much data.)
I’m aware that the loopback from OneDrive/Google Drive to Synology doubles network traffic in the background but, again, I don’t have that much data, and a consistent approach makes things easier/safer in the long run. And with more than one computer sharing a cloud drive link, the redundancy/complexity is further diminished. (Let the cloud drive experts deal with solving race conditions and synchronization/concurrency fun.)
This works because every computer I have can plug into the process. Everything ends up on Synology (direct or via OneDrive/Google Drive) and everything ends up off-site at Wasabi.
I very rarely need to touch the Wasabi stuff (unless to test, or because of boneheaded mistakes I make (not often) while configuring things).
It’s a good model (for me), adapts well to almost every situation and lets me control my data.
Have been looking forward to seeing how they implement this. Once it gels a bit I’ll likely dive in.
It’s 1s and 0s all the way down (notwithstanding qubits…). But it all comes down to workflow and reducing friction of use securely. How will Bitwarden (and others) sit within the process? That remains to be seen. In the meantime, I’m going to see how it goes, as I’m not switching gears until I have a thorough understanding of the actual implementation wrt general operation, multiple devices, family accounts (Bitwarden ‘organizations’), backups and recovery, and how to teach and support non-tech-savvy family members through the change.
It absolutely looks promising, but too risky to be bleeding edge.
I’m waiting until Bitwarden supports passkeys before diving in. From what I could tell, they are aiming to release in late October this year, but I’m not certain (i.e., it should be imminent).
pfSense is fantastic. Extremely flexible. I am contemplating switching to OPNsense when it’s time for an upgrade (it’s been running seamlessly for many years, but someday I’ll need to).
Note that it’s a router, not a wireless access point. For that I use a few Ubiquiti APs (I forget the model).
Note that if you want actual virtualization then perhaps Proxmox (I’m not sure if it manages multiple hypervisors - I haven’t obtained something to test it on yet). Portainer is best for Docker management (it, and its client agents, run as Docker containers themselves). Don’t forget to enable WebSockets if proxying.
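On the WebSocket point: the proxy has to pass the HTTP Upgrade handshake through, or Portainer’s console/exec features break. A hedged nginx sketch - the upstream name and port are placeholders, adjust to your deployment:

```nginx
# nginx reverse-proxy fragment for Portainer.
# "portainer:9000" is a placeholder upstream; use your container/port.
location / {
    proxy_pass http://portainer:9000;
    proxy_http_version 1.1;                      # required for WebSocket upgrades
    proxy_set_header Upgrade $http_upgrade;      # pass the Upgrade handshake through
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```

Other proxies (Traefik, Caddy) have their own equivalents; the key is that plain HTTP proxying without the Upgrade/Connection headers silently drops the WebSocket traffic.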