• 0 Posts
  • 33 Comments
Joined 1 year ago
Cake day: June 14th, 2023




  • So if I understand this right, you will need to either change the network on the port attached to the Synology in your UniFi configuration or set the VLAN tag in the Synology OS; I would do the former. It sounds like you just added a second network/VLAN to the existing interface, which means you actually created a trunk: you are getting the old network untagged and the new network tagged, and the Synology is dropping the tagged frames. Synology OS also doesn’t really support trunked ports through the UI (even though it does support a port that only uses a VLAN tag), so it’s much easier to just leave them untagged.
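    If you do decide to tag on the Synology side instead, DSM is just Linux underneath, so the UI setting amounts to a VLAN sub-interface. Purely as a sketch (the interface name, VLAN ID, and address here are made up):

        ip link add link eth0 name eth0.20 type vlan id 20    # eth0.20 only sees frames tagged with VLAN 20
        ip addr add 192.168.20.10/24 dev eth0.20              # address on the new network (placeholder)
        ip link set eth0.20 up

    The untagged traffic keeps flowing on eth0 itself, which is why the old network still works right now.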











  • bigredgiraffe@lemmy.world to Selfhosted@lemmy.world · Weird 10Gbe networking problems

    So just to double check, it doesn’t work across the Ubiquiti switch? If so, you will definitely need to enable jumbo frames on it; they are not enabled by default. That could also explain the throughput: either the switch is having to fragment and then reassemble the frames to cross it, or iperf is using the MSS to determine that it can only send 1500-byte frames. Your slower speed is about line rate for 1500-byte frames no matter the speed of the actual link.

    ETA: you can verify this by pinging with a large payload and setting the “do not fragment” flag, so something like ‘ping -s 2000 -M do ip.addr’ on Linux; Windows uses different flags.
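    If the switch does turn out to be the limit, a rough way to check and bump the MTU on the Linux hosts looks like this (interface names are placeholders):

        ip link show eth0                 # look for the current mtu value
        ip link set dev eth0 mtu 9000     # only once jumbo frames are enabled on every hop in the path
        iperf3 -c ip.addr -M 8960         # force a larger MSS (9000 minus 40 bytes of headers) to see if it passes

    On Windows the equivalent do-not-fragment ping should be ‘ping -l 2000 -f ip.addr’ if I remember right.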



  • I would get one 2x32 kit somewhere you can return it (or even 1x32 if you are worried) and try it out; sometimes it works, but sometimes it won’t POST. Like the other person said, it might work, but there really isn’t a way to know for sure other than trying it. I have run into situations with systems like that where the listed maximum was just the largest DIMM available at release for them to test and validate, and larger DIMMs work fine, so it’s probably worth testing in my opinion.

    I am curious myself, so let me know if you do test it; those look like cool machines for small clusters.




  • Yeah, this smells like a bug in Caddy or something. I agree with trying nginx or something else to see whether it’s Caddy or something in the configuration of the host. The only thing I can think of is that Caddy isn’t caching DNS responses and is maybe getting rate limited, so it appears slower while it’s waiting on the DNS lookup, but I am shooting in the dark as I haven’t spent much time with Caddy.
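    One quick way to test the DNS theory from the host running Caddy is to time the name lookup separately from the rest of the request; just a sketch, with a placeholder URL for whatever upstream is being proxied:

        curl -s -o /dev/null -w 'dns: %{time_namelookup}s  total: %{time_total}s\n' https://your.upstream.example/

    If the dns number is a large chunk of the total, or jumps around between runs, the resolver is the likely culprit rather than Caddy itself.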


  • Yep, you are correct, that’s what I was trying to say when I was talking about the logs on the public instance and forwarding them to a central place if that information is important. Sorry if it didn’t make sense, I must have been tired haha.

    I forgot before: it is also possible to use the PROXY protocol for TCP applications, but the application will need to understand it for the real client IP to show up in its logs. It would also be possible to use this to let the on-prem instance (nginx -> nginx, let’s say) see the true client IP from the public instance; the exact configuration is implementation dependent though, so there is a rough sketch of the nginx case below.
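    For the nginx -> nginx case, something like this is the general shape (IPs and the rest of the server config are made-up placeholders, so treat it as a sketch rather than a drop-in config). The public instance relays the TCP connection with the PROXY protocol header added, and the on-prem instance accepts that header and uses it as the client address:

        # public instance: stream relay that prepends the PROXY protocol header
        stream {
            server {
                listen 443;
                proxy_pass 10.0.0.5:443;    # on-prem instance (placeholder IP)
                proxy_protocol on;
            }
        }

        # on-prem instance: accept the header and use it for the real client IP
        server {
            listen 443 ssl proxy_protocol;
            set_real_ip_from 203.0.113.10;    # public instance’s IP (placeholder)
            real_ip_header proxy_protocol;
            # ...rest of the usual server config (certs, locations, logging)
        }

    With that, the on-prem logs show the original client IP instead of the public relay’s IP.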