Hi all, sorry if this has been asked/discussed before (I couldn’t find any directly overlapping posts):

I have been running the Nextcloud snap for quite some time now, and although things have run quite smoothly, I never really managed to set up proper backups.

I make weekly backups of the database, config and data, but it’s very hard and time-consuming to glue these elements back together. And as they say: when you can’t check whether a backup works, it’s not really a backup.
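
For what it’s worth, here is a minimal sketch of what such a weekly job could look like with the snap’s built-in export command (the /mnt/backup destination below is just a placeholder):

# export apps, database, config and data (the snap writes these under /var/snap/nextcloud/common/backups/)
sudo nextcloud.export
# copy the exports to a separate disk (placeholder destination)
sudo rsync -a /var/snap/nextcloud/common/backups/ /mnt/backup/nextcloud-exports/
# restoring on another machine would then be roughly: sudo nextcloud.import <exported-folder>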

I have been experimenting with KVM/qemu lately and things look pretty great. The idea of simply backing up the entire OS that runs Nextcloud (a backup that you can easily deploy/run somewhere else to test if it’s working) sounds very attractive.

Reading around, however, tells me that some of you recommend running Nextcloud in Docker (instead of in a VM).

My questions:

  1. What would be the advantage of running Nextcloud in a Docker container instead of within a VM?
  2. What would be a sensible way to have an incremental/differential backup of the VM/Docker?
  3. The storage usage of my Nextcloud instance exceeds 1TB. If I run it within a VM, I will have to connect it to a 2TB SSD. Does it make sense to add the external storage space to the VM? How does that affect the ease of backing the full VM up? Or (as I have read here and there) should I simply put the entire VM on the external SSD?
  • fraydabson@sopuli.xyz · 1 year ago

    From my experience, Docker seems to be the best option for me. I’m no expert in any of this, though.

    What I do is run the container in Docker, and then I use rsync to back up my files to both a secondary hard drive and off-site storage with a backup provider.
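
    A rough sketch of that kind of rsync job (all paths and the remote host below are placeholders, not my actual setup):

    # mirror the Nextcloud data directory to a second drive
    rsync -a --delete /srv/nextcloud/data/ /mnt/backup-drive/nextcloud/
    # push the same data off-site over SSH to the backup provider (placeholder host)
    rsync -a --delete -e ssh /srv/nextcloud/data/ user@backup.example.com:nextcloud/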

    I haven’t looked into database backups yet. Just files.

    • partizan@lemm.ee · 1 year ago
      # dump all databases from the mariadb container and compress the result
      docker exec nextcloud-mariadb-1 /usr/bin/mariadb-dump --defaults-extra-file=/backup/.mylogin.cnf -u root --single-transaction --quick --all-databases | gzip > /mnt/mysql/backup/nc${NUM}_dump.gz
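
      Restoring that dump is roughly the reverse; a sketch, assuming the same container name and credentials file as above:

      # feed the gzipped dump back into the running mariadb container
      gunzip -c /mnt/mysql/backup/nc${NUM}_dump.gz | docker exec -i nextcloud-mariadb-1 /usr/bin/mariadb --defaults-extra-file=/backup/.mylogin.cnf -u root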
      

      You are welcome

  • Swiggles@lemmy.blahaj.zone · 1 year ago

    As a rule of thumb: if you can use containers, use containers. VMs are less efficient in almost every way, and they add some unnecessary complexity.

    For Docker you basically only have to back up the persistent data. In the case of a Docker setup that means the mounts and probably the compose file you are using, which probably also answers your third question. The container files themselves can be left alone and don’t need to be considered for backups, as they should be stateless and can stay in their default location (/var/lib/docker/overlay2 or so by default).

    Overall it is quite simple, as you only really have to consider the mounts and the Docker setup. The mounts are ones you define yourself, so they should be obvious, and the Docker setup is at most a few config files, or just the compose file.
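
    A rough sketch of what that backup could look like, assuming a compose project with bind mounts under /srv/nextcloud (all paths below are placeholders):

    # stop the stack briefly so the database files are in a consistent state
    docker compose -f /srv/nextcloud/docker-compose.yml stop
    # archive the compose file plus the bind-mounted persistent data
    tar czf /mnt/backup/nextcloud-$(date +%F).tar.gz \
        /srv/nextcloud/docker-compose.yml /srv/nextcloud/data /srv/nextcloud/db
    docker compose -f /srv/nextcloud/docker-compose.yml start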

  • maggio@discuss.tchncs.de · 1 year ago

    My friend and I run most things in Kubernetes (k3s), and we use Longhorn to back up volumes, which can then be restored if your cluster crashes. Here’s a blog post describing the process (it isn’t about Nextcloud specifically, but it applies to Nextcloud too; that’s how we back ours up):

    Octopusx blog: Backup and Restore

  • Decronym@lemmy.decronym.xyzB · 1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters    More Letters
    HTTP             Hypertext Transfer Protocol, the Web
    HTTPS            HTTP over SSL
    SSH              Secure Shell for remote terminal access
    SSL              Secure Sockets Layer, for transparent encryption

    2 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

    [Thread #34 for this sub, first seen 13th Aug 2023, 08:55]

  • Landrin201@lemmy.ml · 1 year ago

    I have been FIGHTING TOOTH AND NAIL for about a week to get the AIO working in docker on Linux, and I’m getting extremely frustrated with it.

    I FINALLY got it to actually function yesterday to where I could attempt to do its internal setup, but now I’m stuck with this page:

    I’m genuinely starting to question whether it’s worth it at this point; I haven’t once been able to actually get it all the way set up and functional.

    • p_consti@feddit.de · 1 year ago

      I tried the AIO image as well, and I would recommend against it. https://github.com/nextcloud/docker is a more manual setup, but it’s also much more flexible. AIO forces you to have a domain name, an HTTPS certificate, etc., which might not be necessary for you.
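
      For comparison, the plain image can be started with something as small as this (a sketch along the lines of that repo’s README; the volume name and host port are up to you):

      # minimal setup: one named volume for persistence, plain HTTP on host port 8080
      docker run -d -p 8080:80 -v nextcloud:/var/www/html nextcloud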

      As for the page you are seeing, that is the administration page afaik; the actual Nextcloud interface runs on a different port (HTTPS 443 with AIO).

      • Landrin201@lemmy.ml · 1 year ago

        I just keep hitting issues with the damn AIO. I got past this, and now it’s stuck in maintenance mode. Who the fuck thought this was in a release-ready state? I swear I’ve never had this much trouble with ANY other Docker container, and the documentation doesn’t help at all. I’m at a loss here; I’m super frustrated with this.