Not sure what I expected them to cost, but $5.70 is somehow less than I expected.
I had a nicer Acer monitor that I replaced with a similar Samsung model about a year ago. I still kinda miss the Acer. Both were 32" curved LCD and 1440p. The Acer had a much more uniform curve to it, and the Samsung has a bunch of firmware issues that sometimes can only be worked around by unplugging it to power cycle it. The only reason I "upgraded" was that the Samsung had better support for PS5 and scaling 4K inputs down to the native 1440p without artifacts.
Yeah, but can we get Sony to re-release Morbius in theaters one more time?
No way that’s the real reason, the real reason is taxes. So many California millionaires move to Texas for the low tax, only to realize once they’re there that it’s a shitty state with a barely-functioning power grid. Unfortunately it never seems to click for most of them that the low taxes is a big part of why they don’t have a competent state government.
Yeah, I know a lot of the smaller, independent search engines are lacking, but to the people using the "udm=14" trick to remove Google's AI results right now: that will be removed as soon as Google needs to show investors the AI is more profitable.
To add to this, Scarlett Johansson took on Disney and they settled. And Disney is like the final boss of litigious companies (either them or Nintendo). If she has the same legal team for this, and they think she has a case against OpenAI, this could open the door for OpenAI to get rightfully clobbered for their tech-bro ignoring of copyright laws.
You could play for 15 minutes and feel like you were speedrunning carpal tunnel and arthritis.
Just to add a bit of clarification: the image wasn't just a headshot. Yes, that's the part that was originally scanned and used, but it's a cropped-in section of the centerfold, a 3-page fold-out image in the magazine. If I remember the story correctly, they needed a large image to scan, several people brought in images, and one guy brought a Playboy.
I remember seeing an interview with the model, who at the time of the interview was in her 70s or 80s. She apparently wasn't enthusiastic about having become a common test image, but since she had technically consented to be in Playboy (which was only a magazine at the time), there wasn't anything she could do to stop it. I think in this case it's probably best to stop using her image specifically, as it does kinda get into a weird, messy situation of consent, and how her consent to be in a magazine morphed through technology into something more "permanent" than she originally realized. There are plenty of other models who would absolutely be down for that, and given enough time, knowing how nerds are, there will be other test images of women. But I think it's probably for the best that this one gets retired from this use.
And yes, there are people who have tried to use this instance to argue "there shouldn't be images of attractive/implied nude women as standard test images, because it can cause body image issues for women who go into that field." On one hand, I can see where they're coming from, but also people take pictures of people, and some people do look better than most of us. Having more diverse test images would be a good thing, because we don't all look like that. But some do, and they're probably going to get more pictures taken of them than the rest of us.
Not sure exactly how well this would work for your use case of forwarding all traffic, but I use autossh and ssh reverse tunneling to forward a few local ports/services from my local machine to my VPS, where I can then proxy those ports in nginx or apache on the VPS. It might take a bit of extra configuration to go this route, but it's been reliable for years for me. Wireguard is probably the "newer, right way" to do what I'm doing, but personally I find ssh tunnels a bit simpler to wrap my head around and manage.
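For reference, the core of that setup looks roughly like this (hostnames and ports here are placeholders, not my actual config):

```shell
# Keep a reverse tunnel alive: expose local port 8080 on the VPS as
# 127.0.0.1:9080, which nginx/apache on the VPS can then proxy to.
# -M 0 disables autossh's extra monitoring port in favor of ssh's own
# keepalives; -f backgrounds it; -N means no remote command.
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" \
  -o "ServerAliveCountMax 3" \
  -R 127.0.0.1:9080:localhost:8080 user@vps.example.com
```

On the VPS side, an nginx `location` block with `proxy_pass http://127.0.0.1:9080;` (or the apache equivalent) is what actually exposes the service publicly.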
Technically wireguard would have a touch less latency, but most of the latency will be due to the round trip distance between you and your VPS and the difference in protocols is comparatively negligible.
Maybe I'll give it another go soon to see if things have improved for what I need since I last tried. I do have a couple aging servers that will probably need to be upgraded soon anyway, and I'm sure the python scripts I've used in the past to help automate server migration will need updating too.
I think my skepticism, and my desire to have docker get out of my way, has more to do with already knowing the underlying mechanics, being used to managing services before docker was a thing, and then docker coming along and saying "just learn docker instead." Which would be fine, if it didn't mean not only an entire shift from what I already know, but a separation from it, with extra networking and docker configuration to fuss with. If I wasn't already used to managing servers pre-docker, then yeah, I'd totally get it.
That's a big reason I actively avoid docker on my servers: I don't like running a dozen instances of my database software, and considering how much work it would take to go through and configure each docker container to use an external database, to me it's just as easy to configure each piece of software yourself and know what's going on under the hood, rather than relying on a bunch of defaults made by whoever made the docker image.
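To illustrate the kind of per-container fussing I mean, pointing a containerized app at a shared external database usually looks something like this (the image name and env var names here are hypothetical; every image picks its own, and you have to dig them out of each image's docs, one container at a time):

```shell
# Hypothetical example: wiring one container to an external Postgres
# instead of the bundled one. The variable names (APP_DB_HOST, etc.)
# differ per image, so this has to be repeated and researched for
# every service you run.
docker run -d --name myapp \
  -e APP_DB_HOST=db.internal.lan \
  -e APP_DB_PORT=5432 \
  -e APP_DB_USER=myapp \
  -e APP_DB_PASSWORD=changeme \
  example/myapp:latest
```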
I hope a good amount of my issues with docker have been solved since I last seriously tried to use it (which was back when they were literally giving away free tee shirts to get people to try it). But from the times I've peeked at it since, it seems docker gets in the way more often than it solves problems.
I don’t mean to yuck other people’s yum though, so if you like docker, and it works for you, don’t let me stop you from enjoying it. I just can’t justify the overhead for myself (both at the system resource level, and personal time level of inserting an additional layer of configuration between me and my software).
My vote would be NuTwo.
Source has been posted on Internet Archive (along with the latest builds for a bunch of platforms). Something will likely rise from the ashes of YuZu, but it wouldn't surprise me if it takes a few years. Nintendo is probably gonna be extra litigious this year (even more than usual), due to them likely failing to have the Switch's successor ready this year and not really having a full slate of games. With Switch sales projected to be down, it's best to lay low on anything that might get Nintendo's attention for a while.
I just use public trackers and search for “VR180” - more than half the results are usually porn. If you want non-porn 3D movies “HSBS” is a good term to use as it’s probably the most common format for 3D Blu-rays.
I have a similar setup. Even for hard drives and slower SSDs on a NAS, 10g has been beneficial. 2.5 gig would probably be sufficient for most of what I do, but even a few years ago when I bought my used mellanox sfp+ cards on eBay it was basically just as cheap to go full 10g (although 2.5 gig Ethernet ports are a bit more common to find built-in these days, so depending on your hardware, that might be a cheaper place to start). But even from a network congestion standpoint, having my own private link to my NAS is really nice.
I’ve dabbled with some monitoring tools in the past, but never really stuck with anything proper for very long. I usually notice issues myself. I self-host my own custom new-tab page that I use across all my devices and between that, Nextcloud clients, and my home-assistant reverse proxy on the same vps, when I do have unexpected downtime, I usually notice within a few minutes.
Other than that I run fail2ban, and have my vps configured to send me a text message/notification whenever someone successfully logs in to a shell via ssh, just in case.
Based on the logs over the years, most bots that try to log in use usernames like admin or root. I have root login disabled for ssh, and the one account that can be used over ssh has a non-obvious username that would also have to be guessed before an attacker could even try passwords, and fail2ban does a good job of blocking IPs that fail after a few tries.
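The login-notification part doesn't need anything fancy. One way to wire it up (assuming a working `mail` command and an email-to-SMS gateway address, both placeholders here) is a few lines in /etc/ssh/sshrc, which sshd runs on every successful login:

```shell
# /etc/ssh/sshrc -- executed by sshd for each successful ssh login.
# SSH_CONNECTION is "client_ip client_port server_ip server_port",
# so stripping everything after the first space yields the client IP.
client_ip=${SSH_CONNECTION%% *}
printf 'SSH login: %s from %s on %s\n' "$USER" "$client_ip" "$(hostname)" \
  | mail -s "SSH login alert" 5551234567@sms-gateway.example.com
```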
If I used containers, I would probably want a way to monitor them, but I personally dislike containers (for myself, I’m not here to “yuck” anyone’s “yum”) and deliberately avoid them.
Doing just a single pass of one value, like all zeroes, often still leaves the original data recoverable. Doing passes of random data first and then zeroing lowers the chance that the original data can be recovered.
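As a sketch of what that looks like with GNU shred (demoed on a scratch file here; for a real wipe you'd point it at the block device, e.g. /dev/sdX):

```shell
# Demo on a scratch file standing in for a disk. -n 2: two passes of
# random data; -z: a final pass of zeros (which also hides the fact
# that the file was shredded); -x: don't round the size up to a block.
f=$(mktemp)
printf 'secret data' > "$f"
shred -n 2 -z -x "$f"
# The file is the same size as before, but now contains only zero bytes.
```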
So more like Judas and Goliath then?