• 0 Posts
  • 45 Comments
Joined 1 year ago
Cake day: June 10th, 2023


  • I’ve seen it mentioned that Ryzen is more memory-speed sensitive. I’ve seen a Corsair Vengeance LPX 16GB (2 x 8GB) DDR4 3600 MHz CL16 kit for £35 on UK Amazon, and a 32 GB kit for £60 at 3600 or £52 at 3200. 32 GB is still super overkill for most people (shit, I recall when 16GB was considered overkill), but it’s cheap enough that it’s harder to say it’s a waste imo.

    Side note: GOW is what sold me on HDR, and it was the game that got me to upgrade from a 780 Ti and a 3rd-gen i5; that machine literally couldn’t even run it.





  • Could use Polars; afaik it supports streaming from CSVs too, and frankly the syntax is so much nicer than pandas, coming from Spark land.

    Do you need to persist them? What are you doing with them? A really common pattern for analytics is landing those in something like Parquet or Delta (less frequently Avro or ORC) and then working right off that. If the files don’t change, it’s an option. 100 gigs of CSVs will take some time to write to a database, depending on resources, tools, and DB flavour; tbf, writing into a compressed format takes time too, but it saves you managing a database (unless you want one, just presenting some alternatives).

    Could look at a document DB. Again, it will take time to ingest and index, but it’s definitely another tool. I’ve touched Elastic and stood up Mongo before, but Solr is around too and built on top of Lucene, which I knew Elastic was, and apparently so is Mongo.

    Edit: searchable? I’d look into a document DB; that’s quite literally what they’re meant for, and all of the ones I mentioned are used for enterprise search.
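All of those document DBs work the same basic way: build a text index at ingest time, then query the index instead of scanning every document. As a stand-in that runs anywhere without a server (not one of the DBs mentioned above), SQLite’s FTS5 module, bundled with Python’s sqlite3 in most builds, shows the idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table tokenizes and indexes text as rows are inserted.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
conn.executemany(
    "INSERT INTO docs (body) VALUES (?)",
    [("error while parsing csv header",),
     ("nightly export completed",),
     ("csv ingest retried twice",)],
)
# MATCH consults the inverted index rather than scanning every row.
rows = conn.execute(
    "SELECT body FROM docs WHERE docs MATCH 'csv'"
).fetchall()
print(rows)
```

Elastic, Solr, and Mongo’s text search do the same inverted-index trick at much larger scale, with relevance ranking and distribution on top.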





  • I really dislike the locking of the taskbar to the bottom, having to click twice to see all my right-click options, and having to dig through multiple layers of menus to find a setting. I’m not a fan of Copilot being pushed in the OS (though I did totally use Cortana back in the day; it had some somewhat nice assistant features, like traffic monitoring to recommend when I should leave for work), and I’m generally not a fan of, for lack of a better term, “streamlining”. It’s mostly minor annoyances and the like, but they add up.

    I do really like Auto HDR, winget being there ootb (I think? It was amazing when I migrated work computers), and Windows Terminal is straight up fantastic. It’s still definitely usable; it’s just only on my work machines (no choice there, but I live in the terminal, text editors, and browser for almost everything, so the OS doesn’t really matter much to me) and my desktop. I run Linux on everything else.


  • I have an Asus ROG laptop I bought in 2013 with a 3rd-gen i7, whatever the GTX 660 mobile chip was, and 16GB of RAM. It’s definitely old by any definition, but swapping in an SSD makes it super usable; it’s the machine that lives in my garage as a shop/lab computer. To be fair, its job is web browsing, CAD touch-ups, slicing, and PDF viewing most of the time, but I bet I could be more demanding of it.

    I had been running Mint with Cinnamon on it before, as I was concerned about resource usage; it was a Klipper and OctoPrint host for my printer for a year and a bit. I wiped it and went for Debian with Xfce because, again, I was originally concerned about resource usage, but I ended up swapping to KDE and don’t notice any difference, so it’s staying that way.

    I really hate waste, so I appreciate just how usable older hardware can be. Yeah, there’s probably an era for which that’s less true, but I’ll go out on a limb (based on feeling only) and suggest it’ll hold for anything from the last 15 years. That’s going to depend on what you’re trying to do with it; you won’t have all the capability of more modern hardware, but frankly a lot of use cases probably don’t need that anyhow (web browsing, word processing, programming, and music playback for sure, probably some video playback; I pretty much haven’t hit a wall yet with my laptop).




  • I gave it a fair shake after my team members were raving about it saving time last year. I tried an SFTP function and some Terraform modules, and man, both of them just didn’t work. It did, however, do a really solid job of explaining some data-operation functions I wrote, which I was really happy to see. I do try to add a detail block to my functions and be explicit with typing where appropriate, so that probably helped some, but yeah, I was actually impressed by that. For generation though, maybe it’s better now, but I still prefer to pull up the documentation, as I spent more time debugging the crap it gave me than I would have spent piecing it together myself.
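A hypothetical example of the kind of function described above: explicit type hints plus a docstring give a model (or a human) most of the context needed to explain it. The function itself is made up for illustration.

```python
from collections.abc import Iterable

def rolling_mean(values: Iterable[float], window: int) -> list[float]:
    """Return the mean of each consecutive `window`-sized slice of `values`.

    Raises ValueError if `window` is not positive.
    """
    if window <= 0:
        raise ValueError("window must be positive")
    vals = list(values)
    # One mean per window position; empty result if the input is shorter
    # than the window.
    return [
        sum(vals[i : i + window]) / window
        for i in range(len(vals) - window + 1)
    ]

print(rolling_mean([1.0, 2.0, 3.0, 4.0], 2))  # [1.5, 2.5, 3.5]
```

The signature and docstring alone pin down the inputs, output shape, and failure mode, which is exactly the context an explanation tool has to guess at otherwise.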

    I’d use an LLM tool as an interactive documentation and reverse-engineering aid though; I personally think that’s where it shines. Otherwise, I’m not sold on the “gen AI will somehow fix all your problems” hype train.