Exactly this. Everyone focuses on how fast you can charge a phone, but 99% of the time I’m charging over night and would prefer a slower charge.
I just capped mine to 90%, if that goes well I might go down to 80.
Do you think trickle charging via wireless would be significantly worse?
I was actually thinking of using the battery charge limit feature to prevent charging above 90%. Not sure I could do 80 without a charge during the day, lol
StandardNotes for me
I try to balance things between what I find enjoyable/worth the effort and what ends up becoming more of a recurring headache.
I have a somewhat dated (but decently spec'd) NUC running Proxmox, and it’s the backbone of my home lab. No issues to date.
I was using a WD PR4100, but I upgraded to a Synology RS1221+ and it’s been fantastic :)
I have a beefed-up Intel NUC running Proxmox (with my self-hosted services running in VMs on it) and a standalone NAS that I mount on the necessary VMs via fstab.
I really like this approach, as it decouples my storage and compute servers.
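For anyone curious what that looks like, it’s basically one line per VM; a minimal sketch, assuming the NAS exports the share over NFS (the IP and paths here are just placeholders):

```
# /etc/fstab on the VM -- mount an NFS share exported by the NAS
# 192.168.1.50 and both paths are placeholders; adjust to your setup
192.168.1.50:/volume1/media  /mnt/media  nfs  defaults,_netdev,noatime  0  0
```

The `_netdev` option just makes sure the mount waits for the network to come up before it’s attempted.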
4 currently, with 8GB RAM and no passthrough for transcoding (only direct play)
That’s a good point; my virtualization server is running on a (fairly beefy) Intel NUC, and it has 2 eth ports on it. One is for management, and the other I plug my VLAN trunk into, which is where all the traffic goes. I will limit the connection speed of the client that is pulling large video files in the hope that the line doesn’t saturate, and long term I’ll try to get a different box where I can separate the VLANs onto their own ports instead of glomming them all into one.
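In case it helps anyone in a similar spot, one way to cap a single client without touching the router is traffic shaping on the server’s egress interface with `tc`. This is just a rough sketch; the interface name, client IP, and rates are placeholders:

```
# Shape egress to one client (10.0.20.25 is a placeholder for the Shield's IP)
tc qdisc add dev eth1 root handle 1: htb default 20
tc class add dev eth1 parent 1: classid 1:20 htb rate 1gbit                 # everything else
tc class add dev eth1 parent 1: classid 1:10 htb rate 50mbit ceil 50mbit    # the capped client
tc filter add dev eth1 parent 1: protocol ip u32 match ip dst 10.0.20.25/32 flowid 1:10
```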
Very nice of you to offer. I made a few changes (routing my problem Jellyfin client directly to the Jellyfin server and cutting out the NGINX hop, as well as limiting the bandwidth of that client in case the line is getting saturated).
I’ll try to report back if there’s any updates.
Good bot.
Good point. I just checked, and streaming something to my TV causes IO delay to spike to about 70%. I’m also wondering whether routing my Jellyfin traffic (and some other things) through NGINX (also hosted on Proxmox) has something to do with it… Maybe I need to allocate more resources to NGINX(?)
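For anyone wanting to replicate the setup, that NGINX hop is essentially a reverse proxy block like the rough sketch below; the hostname and IP are placeholders, and Jellyfin’s default HTTP port 8096 is assumed:

```
# Minimal sketch of an NGINX reverse proxy in front of Jellyfin
# jellyfin.lan and 192.168.10.20 are placeholders
server {
    listen 80;
    server_name jellyfin.lan;

    location / {
        proxy_pass http://192.168.10.20:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Websocket support (play state reporting, etc.)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```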
The system running Proxmox has a couple Samsung Evo 980s in it, so I don’t think they would be the issue.
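If anyone wants to sanity-check the same thing on their own box, `iostat` from the sysstat package gives a quick read on whether the disks are actually the bottleneck (device names will obviously differ):

```
# Watch per-device utilization and wait times every 2 seconds
apt install sysstat      # Debian/Proxmox
iostat -x 2
# %util near 100 or a high await on the nvme devices would point at the disks
```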
I typically prefer VMs just because I can change the kernel as I please (containers such as LXC use the host kernel). I know it’s overkill, but I have the storage/memory to spare. Typically I’m at about 80% (memory) utilization under full load.
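A quick way to see the kernel difference in practice on Proxmox: an LXC container reports the host’s kernel, while a VM runs whatever kernel its guest OS installed (the container ID below is a placeholder):

```
# On the Proxmox host
uname -r                  # the host's kernel (shared by every LXC container)
pct exec 101 -- uname -r  # container 101 reports the exact same version
# Inside a VM, uname -r shows whatever kernel that guest has installed
```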
Yeah, I’ve been looking into it for some time. It seems to normally be an issue on the client side (Nvidia Shield): the playback will stop randomly and then restart, and this may happen a couple of times (no one really seems to know why). I recently reinstalled that server on a new VM and a new OS (Debian) with nothing else running on it, and the only client that seems to be able to cause the crash is the TV running the Shield. It’s hard to find a good Jellyfin client for the TV, it seems :(
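If anyone else hits this, following the server log while it happens at least helps rule out the server side. Assuming a standard Debian package install where Jellyfin runs as a systemd service:

```
# Tail Jellyfin's server log while the Shield is playing
journalctl -u jellyfin -f
```

If nothing interesting shows up at the moment playback stops, it’s a decent hint the problem is on the client.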
The one piece is reeeeaaaaal.
I’ll have a specific VLAN for people needing those things
Wish something like that would come back.
Wait, if Windows is 96.21% and Linux is 1.96%, then macOS is at most 1.83% (100 - 96.21 - 1.96 = 1.83)?
Wouldn’t that make Linux 2nd place?
Is keeping everything inside of a local “walled garden”, then exposing the minimum amount of services needed to a WireGuard VPN not sufficient?
There would be no attack surface from WAN other than the port opened for WireGuard.
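To make that concrete, the exposed surface is a single UDP port and a config roughly like this sketch (keys, addresses, and the port are placeholders):

```
# /etc/wireguard/wg0.conf on the server -- the only thing reachable from WAN
# is UDP 51820; everything else stays inside the LAN
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one entry per trusted device
PublicKey  = <client-public-key>
AllowedIPs = 10.8.0.2/32
```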