i’d avoid BIOS-based RAID… it doesn’t really offer many benefits over linux-based raid like MDADM, and MDADM offers a LOT of up-sides for portability, repairability, diagnostics, etc
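a rough sketch of what that looks like in practice (device names are placeholders… don’t run this against disks with data on them!):

```
# create a two-disk RAID1 array (placeholder devices!)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# diagnostics whenever you want them
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# portability: move the disks to any other linux box and reassemble
sudo mdadm --assemble --scan
```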
let’s not go too far though… the patent holders of h264/h265 did put a lot of money and effort into developing the codecs: an actual new thing… they are not patent trolls, who by definition produce nothing new other than legal mess
add tailscale and you’re golden
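rough sketch, assuming a debian-ish box (that’s tailscale’s official install script):

```
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
```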
as a linux professional, congrats you’re a junior and have a lot to learn about the world
kinda different there though… it’s trivial to add whatever data you like to images etc (and that’s without even resorting to steganography), but that data is only accessible with an application. i believe the question was really asking whether you could get a virus just from downloading/playing media files… the content of that “hidden data” isn’t executable, so whilst it’s reasonable to say it’s possible to transport a virus via hidden data in media, it’s not reasonable to say that you can “get” a virus via that method alone
the protocol that allows instances to communicate is, but AFAIK there’s an API that apps use… the protocol is kinda just for how to push raw bulk data around, whilst the instance itself does things like filter based on “top”, “hot”, etc
also, in activitypub things like the actor (user), each comment, post, etc are individual objects which must be requested individually (or in a list via a search i think?), so any app that communicates via activitypub would need to make hundreds of requests to the instance to display a single post, comments, and user information!
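you can see that for yourself with curl… the URLs below are just hypothetical examples of lemmy-style object URLs, and each one returns a single object whose referenced comments/actors are yet more URLs to fetch:

```
# activitypub objects are served as JSON(-LD) when you ask for them explicitly
curl -H 'Accept: application/activity+json' https://lemmy.example/post/12345
curl -H 'Accept: application/activity+json' https://lemmy.example/u/someuser
```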
also as a kbin user
god damn i want a native kbin app!
so i just did a quick search and apparently
Starting with Gitea 1.19, Gitea Actions are available as a built-in CI/CD solution.
*edited:
also they support being a package repo, including a container registry
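e.g. pushing an image to a hypothetical gitea instance (git.example.com, the user, and the image name are all placeholders):

```
# log in with your gitea credentials (or an access token)
docker login git.example.com

# tag the image under your gitea user/org, then push
docker tag myapp:latest git.example.com/me/myapp:latest
docker push git.example.com/me/myapp:latest
```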
not related to the backup solution, but this is a great time to get some home monitoring sorted! put prometheus on it, run prometheus at home too, and have them monitor each other… it’s a great way to know why/when things aren’t working in general, and it adds another level of confidence that your data are nice and safe
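a minimal sketch of one side of that, assuming the other box is reachable at backup.example.com (placeholder):

```
# prometheus config that scrapes the *other* prometheus
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: 'remote-prometheus'
    static_configs:
      - targets: ['backup.example.com:9090']
EOF

# run it (the official image reads /etc/prometheus/prometheus.yml by default)
docker run -d --name prometheus \
  -p 9090:9090 \
  -v "$PWD/prometheus.yml:/etc/prometheus/prometheus.yml" \
  prom/prometheus
```

then the mirror-image config goes on the other box, and each side can tell you when the other stops answering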
so what you ideally want is for people to ONLY be able to access your backend services through caddy, so caddy should be the only one with publicly accessible ports, yes
caddy running in the same docker network as your services can talk to those services on their original ports; they don’t even need to be mapped to the host! in this case, you have 3 containers: caddy, service 1, service 2… caddy is the only one that needs its ports forwarded, and you can just forward caddy:443 and not worry! caddy can then talk directly to the services on :80 or :443 (docker containers are visible to other docker containers in the same network by their container name! so if you run e.g. docker run … --name lemmy, then caddy in the same docker network would be able to connect to http://lemmy:80!)
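rough sketch of that layout (the domain is a placeholder, and nginx is just standing in for whatever backend you actually run):

```
# a shared network for caddy + backends
docker network create web

# backend: note there’s no -p at all, so it’s only reachable from inside the "web" network
# (nginx here is a stand-in listening on :80; use your real image and its real port)
docker run -d --name lemmy --network web nginx

# caddyfile: proxy to the backend by container name
cat > Caddyfile <<'EOF'
example.com {
    reverse_proxy lemmy:80
}
EOF

# caddy is the only container with published ports
docker run -d --name caddy --network web \
  -p 80:80 -p 443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" \
  caddy
```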
… but if you forward say service 1 and 2 on :8443 and :9443 (without a firewall, and even with one it makes me uncomfortable; that’s 1 step away from a subtle security problem), someone would be able to access <yourserver>:8443 directly, right? so they don’t have to go through caddy to get to the backend service… for some services, that can be a big deal in ways that are difficult to understand, so it’s best to just not allow it if possible
an alternative is to make sure your services are firewalled so that nobody from the internet can hit them, but caddy still can… but i like this less, because it’s less explicit what’s happening so it’s easier to forget about
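if you do go the firewall route, it’s something like this with ufw… with the big caveat that ports published by docker with -p punch straight through ufw’s rules by default, which is exactly the kind of subtle, easy-to-forget behaviour i mean:

```
# default-deny inbound, then only let caddy's port through
sudo ufw default deny incoming
sudo ufw allow 443/tcp
sudo ufw enable
```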
if you’re only going to be using those services through the proxy, it can also be a useful security upgrade to not forward their ports at all, and run caddy inside docker to connect to them directly!
if you forward the ports (without firewalling them), people can connect to them directly which can be a security risk (for example, many services require a proxy to add the x-forwarded-for header to show which IP address originally made the request… if users can access the service directly, they can add this header themselves and make it appear as though they came from anywhere! even 127.0.0.1, which can sometimes bypass things like admin authentication)
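e.g. with a backend exposed directly on a hypothetical :8443 (the /admin path is made up too), the header is entirely attacker-controlled:

```
# talking straight to the backend, pretending the request came from localhost
curl -H 'X-Forwarded-For: 127.0.0.1' http://yourserver:8443/admin
```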
also great, and entirely in the web browser so no download needed:
https://www.sharedrop.io/ - also works over the internet! no local network necessary
useful thing to remember about these systems: if you fuck up, there’s a high likelihood that literally nobody at the company can do any work, because all their files are inaccessible
that’s like… $10000/hr in lost man-hours alone, not to mention the reputation hit from not being able to respond to customers accurately, and possibly missed SLAs or other contract obligations
unless your company is all about tech, it’s highly unlikely your IT team has the skills necessary to take on that level of responsibility
the threadiverse wasn’t exactly popular until the reddit exodus
i’d probably flip it: are lemmy users smoking crack? if you’re going to run to a reddit alternative, it’d be wise not to choose alpha software!
yes and no: there are a couple of schools of thought!
of course, code by a lot of people without proper review is… risky
however, at least it’s able to be reviewed! and in time and with enough eyeballs, hopefully that code will become far more robust. that’s the benefit of transparency: anyone can review any line at any time!
remember: closed-source code has plenty of vulnerabilities too! it’s just that since we can’t review it, it’s much harder to work out what they might be… often, closed-source vulnerabilities can exist for years without the vendor ever patching them because nobody is calling them out on it… hell, they can even know that their software is actively being exploited and just… not tell anyone
kinda the same reason people suggest something like linux mint over slackware, gentoo, arch, etc… mint is easy to install and is preconfigured to be an easy-to-use desktop environment. you can configure any other option to behave like that, but they tend to be a bit more “DIY”, which is great if you know what you’re doing!
dedicated NAS OSes come with good software out of the box that makes it easy to configure and manage various common disk-related setups (RAID, SMB, NFS, etc). you can certainly do all this yourself, but it might not have a pretty, unified user interface, or you might have to deal with software that isn’t compatible with some version of a library in your distro of choice… all resolvable things, but they take time to solve: anywhere from installing a package manually to applying a kernel patch and recompiling to get something to work
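for a sense of the DIY version, a single NFS export done by hand on a debian-ish box looks something like this (the path and subnet are placeholders):

```
sudo apt install nfs-kernel-server

# export a directory to the local subnet
echo '/srv/media 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
```

…and that’s before you touch permissions, users, SMB for the windows machines, monitoring, etc, which is where the dedicated NAS OSes earn their keep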