• Australis13@fedia.io · 3 days ago

    This makes me suspect that the LLM has picked up on the correlation between fascist tendencies and poor cybersecurity, e.g. right-wing parties undermining encryption, most of the things Musk does, etc.

    Here in Australia, the more conservative of the two larger parties has consistently undermined privacy and cybersecurity through policies such as mandatory metadata retention and mandated government backdoors/the ability to break encryption, and it is slowly getting more authoritarian (or it's becoming more obvious).

    Stands to reason that the LLM, with such a huge dataset at its disposal, might pick up on these correlations more readily than a human would.

      • Australis13@fedia.io · 18 hours ago

        Why? LLMs are built by training machine learning models on vast amounts of text data; essentially, the model looks for patterns. We've seen this repeatedly with other LLM behaviour around race and gender, highlighting the underlying bias in the dataset. This would be no different, unless you're disputing that there is a possible correlation between bad code and fascist/racist/sexist tendencies?
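
        To make the "looks for patterns" point concrete, here is a toy sketch (invented snippets and labels, nothing like a real LLM training pipeline): even a trivial count-based classifier reproduces whatever correlations its training text happens to contain, with no notion of whether they are meaningful.

        ```python
        # Toy illustration only: a naive count-based classifier absorbs
        # whatever token/label correlations exist in its (invented) corpus.
        from collections import Counter, defaultdict

        corpus = [
            ("strcpy(buf, input)",     "unsafe"),
            ("gets(line)",             "unsafe"),
            ("system(user_cmd)",       "unsafe"),
            ("strncpy(buf, input, n)", "safe"),
            ("fgets(line, n, stdin)",  "safe"),
        ]

        def tokens(text):
            return text.replace("(", " ").replace(")", " ").replace(",", " ").split()

        # Count token/label co-occurrences -- the only "learning" here.
        counts = defaultdict(Counter)
        labels = Counter()
        for text, label in corpus:
            labels[label] += 1
            for tok in tokens(text):
                counts[tok][label] += 1

        def predict(text):
            # Naive-Bayes-style score: a label wins if its tokens co-occurred
            # with it more often in training. No understanding, just correlation.
            scores = {lab: labels[lab] for lab in labels}
            for tok in tokens(text):
                for lab in labels:
                    scores[lab] *= counts[tok][lab] + 1  # add-one smoothing
            return max(scores, key=scores.get)

        print(predict("strcpy(dest, src)"))  # -> "unsafe": a recalled correlation
        ```

        Scale that idea up by many orders of magnitude and the same failure mode applies: if insecure code and a particular ideology co-occur in the corpus, the model can absorb that link.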