Could it be an MTU issue? Networking can be weird if packets get fragmented unexpectedly, but I mostly see this with IKEv2 and other VPN services. Maybe try lowering the MTU on the WAN side?
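A quick way to check is a ping test with the don't-fragment flag set; the size and host below are just examples:

```sh
# Find the largest payload that passes without fragmentation.
# 1472 = 1500 (typical Ethernet MTU) - 20 (IP header) - 8 (ICMP header).
ping -M do -s 1472 example.com   # Linux: -M do sets "don't fragment"
# If that fails, step the size down until replies come back;
# the largest working size + 28 is roughly your usable path MTU.
```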
I have been running Nextcloud for many, many years. For a long time I hosted it on Hetzner's second-lowest webspace tier. It was not very fast there (you get what you pay for), but fast enough for our needs. Later I moved it to an Azure VM, and after that to my home server, where it runs blazingly fast, especially since the latest updates they pushed out.
In all that time I never reinstalled. I just upgraded to newer versions as they came out. The only times I had problems upgrading were when I was hosting on the cheap webspace instance at Hetzner and the upgrade process took longer than the PHP timeout my very cheap hosting plan allowed. So it was never Nextcloud's fault, just that I hosted it on basically the cheapest plan I could find.
We use it for file sharing, calendar + contacts (+ sync with DAVx), notes and of course Talk. For Talk to make full use of voice + video calls you should have a TURN server, but if you do not use that (if you just text), it ran great even on the webspace instance at Hetzner.
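As a sketch only (the image, realm and secret below are example values, check the coturn docs before using this for real):

```yaml
# Minimal coturn service for Nextcloud Talk - illustrative values only.
services:
  coturn:
    image: coturn/coturn
    restart: unless-stopped
    network_mode: host            # TURN relays need a wide UDP port range
    command: >
      --listening-port=3478
      --realm=cloud.example.com
      --use-auth-secret
      --static-auth-secret=CHANGE_ME   # generate your own secret
```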
We are very happy in our family that it exists, that it is free and that it has served us well for many years.
You would think so, yes. But to my surprise, my well over 60 containers so far consume less than 7 GB of RAM, according to htop. Also, containers can of course network and share services. For external access, for example, I run only one instance of Traefik. Or one coturn for Nextcloud and Synapse.
I would absolutely look into it. Many years ago when Docker emerged, I did not understand it and called it "hipster shit". But a lot of people around me who used Docker at that time did not understand it either. Some lost data, some had services that stopped working and no idea how to fix them.
Years passed and containers stayed, so I started to take a closer look and tried to understand it: what you can do with it and what you can't. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don't just copy a new binary or library into a container to try to fix something.
Today, my homelab runs 50 containers and I am not looking back. When I rebuilt my homelab this year, I went full Docker. The most important reason for me: every application I run dockerized is predictable and isolated from the others (on the binary side; the network side is another story). The issues I had earlier, when running everything directly on the box in Linux, were things like one application needing PHP 8.x while another, older one still only runs with PHP 7.x. Or multiple applications depending on one specific library, and after updating it, one app works but the other doesn't anymore because it would need an update too. Running an apt upgrade was always a very exciting moment… and not in a good way. With Docker I do not have these problems. I can update each container on its own. If something breaks in one container, it does not affect the others.
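Just to illustrate what that isolation looks like (made-up services, not my actual stack), two containers can pin completely different PHP versions side by side:

```yaml
# Two apps with conflicting PHP requirements, coexisting peacefully.
services:
  modern-app:
    image: php:8.2-apache
    ports: ["8081:80"]
  legacy-app:
    image: php:7.4-apache   # end-of-life, but isolated from the rest
    ports: ["8082:80"]
```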
Another big plus is the backups you can do. I back up every docker-compose file + data for each container with Kopia. Since barely anything is installed directly in Linux, I can spin up a VM, restore my backups with Kopia and start all containers again to test my backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files and installing hundreds of packages to get all my services up and running again after a hardware failure.
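Roughly like this, as a sketch; the repository and service paths are placeholders, not my real layout:

```sh
# One-time: create a Kopia repository (here on an attached disk).
kopia repository create filesystem --path /mnt/backup/kopia

# Per service: snapshot the directory holding compose file + data.
kopia snapshot create /srv/docker/nextcloud
```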
I really started to love Docker, especially in my Homelab.
Oh, and you would think resource usage is high when everything is containerized? My 50 containers right now consume less than 6 GB of RAM, and I run stuff like Jellyfin, Pi-hole, Home Assistant, Mosquitto, multiple Kopia instances, multiple Traefik instances with CrowdSec, Logitech Media Server, Tandoor, Zabbix and a lot of other things.
I love Traefik! When I started, I tried nginx but could not wrap my head around it. So I tried Caddy. Pretty easy to understand, and I used it for a while. Then I had demands Caddy could not meet and stumbled upon Traefik. As you said, there's a learning curve, but for me it was much easier than nginx. I like that you can put the Traefik config inside the compose files, and that a service only becomes active in Traefik when the actual containers are up and running. I added CrowdSec to my externally facing Traefik instance, and I use a plain Traefik instance for all my internal services as well. And it can forward HTTP, HTTPS, TCP and UDP.
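A rough sketch of what that in-compose config looks like (hostname, network and certresolver names are placeholders):

```yaml
# Traefik picks this service up from labels as soon as it is running.
services:
  whoami:
    image: traefik/whoami
    networks: [proxy]
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=letsencrypt
networks:
  proxy:
    external: true   # the network the Traefik container lives on
```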
Thank you for your feedback! I get the impression it might work if used on a small scale when it's not public. I guess I will have a new container soon :-)
One reason is because I can. And because of that, I tend to host the things myself that I can. That way the cost and maintenance work land on my side, not on others'. A few fewer users from our household on a public instance means more room for people who are just not as tech-savvy and have no choice but to rely on public instances. So it is a mix of respecting other people's time, effort and money, and partly just the nerd in me who wants to find out how it works and how it's done :-)
Oh wow, that is a lot more usage than I can imagine for all of us here, haha! Thank you very much. That sounds very promising.
Is Revolt an option, maybe?
I was just looking for cheap backup space recently, and Hetzner's Storage Box BX21 is 13€ per month for 5 TB, 20 snapshots and unlimited traffic. I have not compared the service with Backblaze yet, though.
Setting up the HMAC key for CouchDB was indeed the step I struggled with too. The first time, I think I either made a mistake or used a broken website to generate the Base64 value. The second time, my mistake was putting the Base64 value for the HMAC key into both jwt.ini AND docker-compose.yml. But COUCHDB_HMAC_KEY in docker-compose.yml has to be the unencoded value, while hmac:_default in jwt.ini has to be Base64 encoded. Maybe this is what went wrong for you too?
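If it helps, this is roughly how I understand the two places should look; the key itself is a made-up example:

```yaml
# docker-compose.yml - the raw, UNencoded value goes here
services:
  couchdb:
    image: couchdb
    environment:
      - COUCHDB_HMAC_KEY=mysecretkey   # plain text, example key
```

```ini
; jwt.ini - the SAME key, but Base64 encoded
[jwt_keys]
hmac:_default = bXlzZWNyZXRrZXk=   ; base64("mysecretkey")
```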
I bet you are close!
On the other hand, if you are the only person using the shopping list and your current setup offers what you need, maybe it is not worth it for you. For me it was (and updating it once it runs is super easy, I promise!). The instant sync across all devices is great + it keeps working when I lose reception in a shop and syncs again instantly once I have internet again. But what makes Groceries for me are:
Oh, and adding a photo to an item is super useful if you are like me and need very precise instructions on what to get for your partner when standing in front of a shelf with 100 different types of cheese that all look exactly the same to you… having a photo is sometimes a life saver for me :-)
As others mentioned, you probably do not need VMs. If you thought about VMs because of isolation, then yes, that might be a good idea.
In an ideal world, if I had the budget/hardware, I would have a server with multiple NICs (network interface cards) connected to different ports on my firewall for LAN and DMZ. Then I would create VMs for LAN and DMZ and run on them the Docker containers needed for each zone. Everything accessible from the internet goes into the DMZ, the rest into the LAN. I could lock it down further by creating two DMZ zones and putting only, let's say, nginx or Traefik into the zone that gets exposed, with the services behind the reverse proxy in the second DMZ zone, which would still be isolated from the LAN.
But since I only have a small box with one NIC, I instead created VLANs on my router and a Docker network for each VLAN. Every single service I run is a Docker container sitting in one of those VLANs, appropriate to its level of exposure. I have one VLAN called LAN that is obviously connected to my LAN, and two other VLANs where I basically do what I described above: one holds Traefik and has ports exposed to the internet, and the other hosts the services that are accessible through Traefik. With that setup you at least isolate network traffic, and it is something I would look into if you plan to expose any of your services to the internet. Usually when you start with Docker, you would probably just expose ports from the containers, which get mapped to the IP of your host… and so all those containers have access to your LAN. At least try to separate that.
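As a sketch, that per-VLAN setup can be done with macvlan networks; the interface name, VLAN IDs and subnets below are placeholders for illustration:

```sh
# One Docker network per router VLAN, via macvlan sub-interfaces.
docker network create -d macvlan \
  --subnet=192.168.20.0/24 --gateway=192.168.20.1 \
  -o parent=eth0.20 dmz_proxy      # VLAN 20: the exposed reverse proxy

docker network create -d macvlan \
  --subnet=192.168.30.0/24 --gateway=192.168.30.1 \
  -o parent=eth0.30 dmz_services   # VLAN 30: services behind it
```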
The next thing I wanted to do was run my containers rootless, meaning no container has root permissions: if something within a container manages to make the Docker service do something malicious on the host, it should not be able to run as root. The caveat is that Docker does not support VLANs in rootless mode. I spent half a day converting everything to Podman, because people were praising Podman left and right for rootless setups, but then I found out that Podman does not support VLANs in rootless mode either :->
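For reference, rootless mode itself is set up with a helper that ships with Docker, which is fine if you can live without macvlan/VLAN networks (Ubuntu shown; package names may differ elsewhere):

```sh
sudo apt install uidmap                # subuid/subgid support for user namespaces
dockerd-rootless-setuptool.sh install  # ships with docker-ce-rootless-extras
```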
Using VMs as described above would make the "I can not use Docker rootless" problem less of a problem, but I decided against VMs because of resources/budget.
What I can recommend when you start: do not make things too complicated until you are familiar with Docker and understand what you are doing. As you get better, you will want more and learn more stuff as you go.
You could just install a Linux distribution you are familiar with (I use Ubuntu Server 22.04 LTS), install Docker and play around with it a bit to see how everything works. Only start exposing services to the internet once you know what you are doing.
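Something as small as this is enough to start poking around (Ubuntu shown; adapt to your distro):

```sh
sudo apt install docker.io            # Ubuntu's packaged Docker engine
sudo docker run --rm hello-world      # verify the engine works
sudo docker run -d -p 8080:80 nginx   # a first throwaway web server on port 8080
```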
Maybe a few tips and keywords for later, from stuff I went through step by step.
All in all: do not rush it, and do not feel pressure to do everything I wrote. You might even come up with other solutions that fit you much better than what I or others here are doing. The most important things? Have fun, and think twice about what you expose to the public and how :-)
Hmm, what does docker logs -f <container> tell you? I made myself a compose file and use Traefik. Not at my PC atm, but when I had problems getting it running, I had made mistakes with the secrets. That should show up in the logs.
Maybe Tandoor for recipes, and Groceries from David Shay for shopping lists of all kinds. So far the best multi-user shopping list / app I have ever had.
We follow the principle of doing one thing well instead of all things mediocre, so we use two solutions for what you asked. Like others in the thread, we use Tandoor, but only for recipes and meal planning. It does this exceptionally well, but the shopping list part is not a fit for our style of shopping.
As a shopping list, we use David Shay's Groceries, specifically Clementines. Why?
There is more, but this post has gotten too long already. It also has user management, permissions and live sync. Yes, my partner can see live when I tick off items on the list, and can put stuff on the list while I am shopping :-)
Everything in that software feels like it was created by a person who actually goes shopping.
It has a very good web interface (which AFAIK also has the offline mode) and a very good Android app.
Does it look fancy? No. Does it have everything we ever looked for in a shopping list app? Absolutely!