• SirEDCaLot@lemmy.today · 37 points · 10 days ago

    About damn time. We got a boost every few years, from 10 to 100 to 1000. Then we just… stopped. Stagnated. It’s understandable why: for a good long time, one gigabit was all anybody needed, and 100 MByte/sec is pretty good even for a NAS.

    Of course, then fiber ISPs got in the game, and now in a lot of places you can buy 7-8 Gbps as a consumer product. And even multi-gig, which was supposed to ‘fix’ this, really ended up being insufficient. You could make a solid argument that multi-gig was a waste of time and we should have just started moving to 10 gig.

    Unfortunately, 10 gig switches still carry a significant premium. But this will start to shake that up. Sooner the better.

    • ftbd@feddit.org · 13 points · 9 days ago

      100 MB/s is frustrating for a NAS. SSDs have been common for a decade, and the old spinning rust in my NAS is still faster than the network can handle?

        • SirEDCaLot@lemmy.today · 1 point · 7 days ago

          100 MByte/sec. 8 bits per byte; call it 10 when you include overhead, CRC, etc.
          1000 Mbit ≈ 100 MByte
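The "call it 10 bits per byte" rule of thumb above can be sketched in a few lines of Python. This is just the commenter's ballpark arithmetic, not a precise model of Ethernet framing overhead:

```python
def line_rate_to_throughput_mb_s(rate_mbit: float, bits_per_byte: float = 10.0) -> float:
    """Rough usable throughput for a given line rate.

    Uses the '10 bits per byte' rule of thumb, which folds
    framing/CRC/protocol overhead into the 8 data bits.
    """
    return rate_mbit / bits_per_byte

print(line_rate_to_throughput_mb_s(1000))   # gigabit -> 100.0 MB/s
print(line_rate_to_throughput_mb_s(10000))  # 10 GbE  -> 1000.0 MB/s
```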

          • sugar_in_your_tea@sh.itjust.works · 2 points · 7 days ago

            Sure. My point was that even at 100 Mbit/s, UHD could probably still be streamed.

            HDDs can probably max out a 1 Gbit/s connection as well (they often hit 150 MB/s sequential), which is more than sufficient for multiple UHD streams. Moving to 10 Gbit/s really isn’t needed for anything, and SSDs aren’t needed to max out a Gbit/s network either, unless you’re doing random reads (i.e. lots of small files).
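To put a rough number on "multiple UHD streams": assuming about 25 Mbit/s per compressed 4K stream (an assumed ballpark, not a figure from this thread), you can budget streams per link like this:

```python
def max_streams(link_mbit: float, per_stream_mbit: float = 25.0) -> int:
    """How many simultaneous streams a link can carry.

    25 Mbit/s per UHD stream is an assumed ballpark for
    compressed 4K video, not a measured figure.
    """
    return int(link_mbit // per_stream_mbit)

print(max_streams(100))   # 100 Mbit/s link -> 4 streams
print(max_streams(1000))  # gigabit        -> 40 streams
```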

            • SirEDCaLot@lemmy.today · 1 point · 2 days ago

              All true. But what if you aren’t just storing media for consumption? What if you’re doing photo editing, video editing, etc? If your NAS is either flash-based or has a flash cache, that extra speed can be really useful.

              • sugar_in_your_tea@sh.itjust.works · 2 points · 2 days ago

                Are you saying you’d be loading all that data strictly over the network instead of having a local copy that gets synced periodically? That would be terrible on a 100 mbit/s line… If that was my workflow, I’d run 10 gbit/s cable everywhere and make sure clients had at least 2.5G.

                I use my NAS for local backups and streaming when we watch something as a family. 100 mbit/s would be fine for that use case.

                • SirEDCaLot@lemmy.today · 1 point · 4 hours ago

                  Yes I am, and that is exactly the point. I do not want spinning disks in my desktop, or anyone’s desktop or laptop. Give the actual computer a fast SSD for the OS and programs, then store the big data on a NAS or server, and have the computer access it from that server in real time.

                  At 100 megabits (10 megabytes per second) that isn’t very fun. Gigabit ethernet is 100 megabytes per second, give or take. That is where it starts to become useful for storage, as most spinning disks themselves have a transfer rate between 100 and 150 megabytes per second.

                  But as you just pointed out, that can become a bottleneck, especially if you have multiple people accessing the server. How much of a problem it becomes depends on what they’re doing. For example, 10 people editing photos can happily share a gigabit link to the server, because each photo is loaded once, cached in RAM, and the link sits idle while they work. But 10 people editing uncompressed high-definition video will each want a constant full gigabit, because they’ll be using almost all of it constantly; you need a gigabit to each desk and 10 gig to the server (and a storage array with sufficient bandwidth).
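The worst-case sizing here (every client saturating its own link at once, no caching helping you) is simple to sketch. The function name is mine, not anything from the thread:

```python
def uplink_needed_gbit(clients: int, per_client_mbit: float) -> float:
    """Aggregate server uplink needed if every client saturates its
    own link simultaneously (worst case, no benefit from caching)."""
    return clients * per_client_mbit / 1000

# 10 video editors each pulling a full gigabit -> 10 Gbit/s at the server
print(uplink_needed_gbit(10, 1000))  # 10.0
```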

                  • sugar_in_your_tea@sh.itjust.works · 1 point · 3 hours ago

                    You could look into automatic local caching for files you’re planning to seed, and stick that on an SSD. That way you don’t hammer the HDDs in the NAS and still get the good feels of seeding. Then automatically delete files once they hit a certain seed ratio or something and you’re golden.
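A minimal sketch of that eviction idea, assuming you can get each cached file's seed ratio from your torrent client (the paths, ratios, and target below are all hypothetical):

```python
def evict_seeded(cache: dict[str, float], target_ratio: float = 2.0) -> list[str]:
    """Return cached file paths whose seed ratio has reached the target,
    so they can be dropped from the SSD cache (the HDD copy remains).

    cache maps file path -> current seed ratio, as reported by the
    torrent client (hypothetical data source).
    """
    return [path for path, ratio in cache.items() if ratio >= target_ratio]

ratios = {"/cache/a.mkv": 2.5, "/cache/b.mkv": 0.7}
print(evict_seeded(ratios))  # ['/cache/a.mkv']
```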

                    How aggressive you go with this depends on your actual use case. Are you actually editing raw footage over the network while multiple other clients are streaming other stuff? Or are you just interested in having it be capable? What’s the budget?

                    But that sounds complicated. I’d personally rather just DIY it; that way you can put an SSD in there for cache, get most of the benefits at a lot less cost, and respond to issues with minimal changes (i.e. add more RAM or another caching drive).