• 0 Posts
  • 43 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • Reddit had the ability to have a per-subreddit wiki. I never dug into it on the moderator side, but it was useful for some things like setting up pages with subreddit rules and the like. I think that moderators had some level of control over it, at least to allow non-moderator edits or not, maybe on a per-page basis.

    That could be a useful option for communities; I think that in general, there is more utility for per-community than per-instance wiki spaces, though I know that you admin a server with one major community which you also moderate, so in your case, there may not be much difference.

    I don’t know how amenable django-wiki is to partitioning things up like that, though.

    EDIT: https://www.reddit.com/wiki/wiki/ has a brief summary.




  • tal@kbin.social to Android@lemdro.id · “Android helps Apple "Get the Message"” (edited · 1 year ago · +45/−1)

    > The rest of the world doesn’t use SMS/RCS/iMessage as much as WhatsApp and the like

    SMSes use a standard available to any app. WhatsApp is controlled by a single company.

    If you were arguing that XMPP or something like that should be used instead of SMS, okay, that’s one thing, but I have a hard time favoring a walled garden.



  • That ratio doesn’t matter.

    What matters is the value derived from some prohibited activity relative to the fine/lawsuits resulting from that activity.

    Let’s say that Company A sells oranges, and uses some pesticide that isn’t approved, and gets a fine for it.

    Let’s say that Company B sells apples, improperly claimed to grocery stores that the apples were fresher than they were, and is sued for that.

    Let’s say that Company A and Company B merge and form Company C. The value of Company C would be larger, but it would make no sense for either of the above two disincentives to be larger. Being part of Company C doesn’t make engaging in bad behavior any more desirable than it was when A and B were separate, and so the disincentives one establishes for bad behavior shouldn’t grow either.
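    The argument above can be sketched as arithmetic. The numbers here are purely hypothetical, chosen just to illustrate the point: the net incentive for a prohibited activity is (value gained) − (expected penalty), and a merger changes neither term.

    ```python
    # Hypothetical figures, for illustration only: merging two companies
    # changes neither the value gained from a prohibited activity nor the
    # penalty attached to it, so the incentive calculation is unchanged.

    def net_incentive(value_gained, expected_penalty):
        """Payoff a company sees from engaging in a prohibited activity."""
        return value_gained - expected_penalty

    # Company A: unapproved pesticide
    a = net_incentive(10_000, 25_000)
    # Company B: misrepresented freshness
    b = net_incentive(8_000, 20_000)

    # Merged Company C faces the same two decisions independently;
    # its larger total value enters neither calculation.
    c_pesticide = net_incentive(10_000, 25_000)
    c_freshness = net_incentive(8_000, 20_000)

    assert c_pesticide == a and c_freshness == b  # identical incentives
    ```

    The point being that the deterrent works per activity, not per company size, so scaling fines to the merged firm’s value buys nothing.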


  • I mean, scrolling down that list, those all make sense.

    I’m not arguing that Google should have kept them going.

    But I think that it might be fair to say that Google did start a number of projects and then cancel them – even if sensibly – and that for people who start to rely on them, that’s frustrating.

    In some cases, like with Google Labs stuff, it was very explicit that anything there was experimental and not something that Google was committing to. If one relied on it, well, that’s kind of their fault.






  • Maybe they should have called it “Temporary Test Kitchen” to drive the point home with even more of a sledgehammer.

    I suspect that some of it is that the author isn’t used to already-installed apps ceasing to work, but it sounds like this, like most AI things, doesn’t just run on the local Android device; it leverages off-machine computational capacity, and you can’t expect that to be permanent.



  • Speaking generally about ads, the issue is that people (a) don’t like ads, but (b) also don’t like paying for things that could be ad-supported. And the money for things that are ad-supported is going to come from one place or another, or they won’t be done.

    Wanting to get rid of ads is a legitimate preference – but that probably means paying for something that wasn’t paid for before.




  • tal@kbin.social to Selfhosted@lemmy.world · “Why is DNS still hard to learn?” (edited · 1 year ago · +4/−1)

    Yeah, I don’t think I really agree with the author as to the difficulty with dig. Maybe it could be better, but as protocols and tools go, dig and DNS are an example of a tool doing a pretty good job of coverage. Maybe not DNSSEC – I don’t know how dig does there – and knowing to use +norecurse is maybe not immediately obvious, but I can list a lot of network protocols for which I wish there were an equivalent to dig.

    However, a lot of what the author seems to be complaining about is not really stuff at the network level, but stuff happening at the host level. And it is true that there are a lot of parts in there if one considers name resolution as a whole, not just DNS, and no one tool that can look at the whole process.

    If I’m doing a resolution with Firefox, I’ve got a browser cache for name resolutions independent of the OS. I may be doing DNS over HTTPS, and that may always happen or be a fallback. I may have a caching nameserver at my OS level. There’s the /etc/hosts file. There’s configuration in /etc/resolv.conf. There’s NIS/yp. Windows has its own name resolution stuff hooked into the Windows domains machinery, with several mechanisms to do name resolution – via broadcasts when no domain controller is present, or via the DC when one is. Apple has Bonjour, and more generally there’s zeroconf. The order in which these apply isn’t immediately clear to someone, and there’s no tool that can monitor the whole process end to end – these are indeed independent systems that kind of grew organically.

    Maybe it’d be nice to have an API to let external software initiate name resolutions via the browser and get information about what’s going on, and then have a single “name resolution diagnostic” tool that could span multiple of these name resolution systems, describe what’s happening, and help highlight problems. I can say that gethostbyname() could also use a diagnostic call to extract more information about what a resolution attempt did and why it failed; libc doesn’t expose much useful diagnostic information to the application, though it does know what it is doing during a resolution attempt.
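    To make the host-level point concrete, here’s a minimal Python sketch (my own illustration, not from the article) of why a DNS-level tool can’t see the whole picture: the OS resolver API consults the full stack – nsswitch order, /etc/hosts, any caching daemon – but exposes only the final answer, with essentially no intermediate diagnostics.

    ```python
    import socket

    # Resolve a name through the OS's full resolution stack rather than
    # querying DNS directly. This is roughly what applications get via
    # getaddrinfo(); note that it returns only the final answer and, on
    # failure, only an error code -- none of the intermediate steps.
    def resolve(name):
        try:
            infos = socket.getaddrinfo(name, None)
            # Each entry is (family, type, proto, canonname, sockaddr);
            # sockaddr[0] is the address string.
            return sorted({info[4][0] for info in infos})
        except socket.gaierror as e:
            # The only diagnostic the API offers.
            return f"resolution failed: {e}"

    # "localhost" typically comes from /etc/hosts and never touches DNS,
    # so a DNS-level tool like dig wouldn't show this answer at all.
    print(resolve("localhost"))
    ```

    A per-layer diagnostic tool would have to hook each of these systems separately; the resolver API simply doesn’t surface that information.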


  • > make dig’s output a little more friendly. If I were better at C programming, I might try to write a dig pull request that adds a +human flag to dig that formats the long form output in a more structured and readable way, maybe something like this:

    Okay, fair enough.

    > One quick note on dig: newer versions of dig do have a +yaml output format which feels a little clearer to me, though it’s too verbose for my taste (a pretty simple DNS response doesn’t fit on my screen)

    Man, that is like the opposite approach to what you want. If YAML output is easier to read, that’s incidental; it’s intended to be machine-readable, a stable output format.


    Duplicity uses the rsync algorithm (librsync) for efficient transport. I have used that. I’m presently using rdiff-backup, driven by backupninja out of a cron job, to back up to a local hard drive; it does incremental backups (which would address @Nr97JcmjjiXZud’s concern). That also uses librsync. There’s also rsbackup, which likewise builds on rsync, though I have not used it.

    Two caveats I’d note that may or may not be a concern for one’s specific use case (which apply to rdiff-backup, and I believe both also apply to the other two rsync-based solutions above, though it’s been a while since I’ve looked at them, so don’t quote me on that):

    • One property that a backup system can have is to make backups immutable – so that only the backup system has the ability to purge old backups. That could be useful if, for example, the system with the data one is preserving is broken into – you may not want someone compromising the backed up system to be able to wipe the old backups. Rdiff-backup expects to be able to connect to the backup system and write to it. Unless there’s some additional layer of backups that the backup server is doing, that may be a concern for you.

    • Rdiff-backup doesn’t do dedup of data. That is, if you have a 1GB file named “A” and one byte in that file changes, it will only send over a small delta and will efficiently store that delta. But if you have another 1GB file named “B” that is identical to “A” in content, rdiff-backup won’t detect that and only use 1GB of storage – it will require 2GB and store the identical files separately. That’s not a huge concern for me, since I’m backing up a one-user system and I don’t have a lot of duplicate data stored, but for someone else’s use case, that may be important. Possibly more-importantly to OP, since this is offsite and bandwidth may be a constraining factor, the 1GB file will be retransferred. I think that this also applies to renames, though I could be wrong there (i.e. you’d get that for free with dedup; I don’t think that it looks at inode numbers or something to specially try to detect renames).
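    For anyone unfamiliar with the dedup point, here’s a rough sketch (my own illustration, not how rdiff-backup works) of content-addressed dedup: identical files hash to the same digest, so a dedup-aware store keeps one copy regardless of names, and renames come along for free, since identity is by content rather than path or inode.

    ```python
    import hashlib
    import os
    import tempfile

    # Content-addressed dedup sketch: files are identified by a hash of
    # their bytes, so two identical files occupy one storage slot.
    def content_id(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def store(paths):
        """Map digest -> one representative path; duplicates cost nothing extra."""
        seen = {}
        for p in paths:
            seen.setdefault(content_id(p), p)
        return seen

    # Two identical 1 MB files under different names dedup to one entry.
    with tempfile.TemporaryDirectory() as d:
        data = os.urandom(1 << 20)
        for name in ("A", "B"):
            with open(os.path.join(d, name), "wb") as f:
                f.write(data)
        unique = store(os.path.join(d, n) for n in ("A", "B"))
        assert len(unique) == 1
    ```

    A backup tool without this property, as described above, transfers and stores both copies in full, which matters when bandwidth to an offsite target is the bottleneck.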


  • While I realize that there are people for whom having a camera aimed at themselves is really important, I have to say that I have virtually never used the self-facing camera on a phone.

    Honestly, every videoconference I’ve ever done on a computer for work could really have been done just fine with an audio-only call too.

    I’d be pretty comfortable getting a phone that just drops the self-facing camera. I could just use a USB-attached webcam if I ever ran into the very rare situation where I really, critically wanted the ability to videoconference on a phone.

    Now, okay, that’s not true for everyone. For some people’s uses, having a self-facing camera is legitimately important. But at least for my own uses, I’d rather just have the extra pixels.