Hot off the back of its recent leadership rejig, Mozilla has announced users of Firefox will soon be subject to a ‘Terms of Use’ policy — a first for the iconic open source web browser.
This official Terms of Use will, Mozilla argues, offer users ‘more transparency’ over their ‘rights and permissions’ as they use Firefox to browse the information superhighway — as well as Mozilla’s “rights” to help them do it, as this excerpt makes clear:
You give Mozilla all rights necessary to operate Firefox, including processing data as we describe in the Firefox Privacy Notice, as well as acting on your behalf to help you navigate the internet.
When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.
Also about to go into effect is an updated privacy notice (aka privacy policy). This adds a crop of cushy caveats to cover the company’s planned AI chatbot integrations, cloud-based service features, and more ads and sponsored content on the Firefox New Tab page.
I’m not. Apologies if I was unclear, but I was specifically referencing the fact that you were saying AI was going to accelerate to the point that it replaces human labor. I was simply stating that I would prefer a world in which human labor is not required for humans to survive and we can simply pursue other passions, if such a world were to exist as a result of what you claim is happening with AI. You claimed AI will get so good it replaces all the jobs. Cool, I would enjoy that, because I don’t believe that jobs are what gives human lives meaning, and thus am fine if people are free to do other things with their lives.
The automation of labor is not even remotely comparable to the creation of a technology whose explicit, sole purpose is to cause the largest amount of destruction possible.
Could there hypothetically be an AI model far in the future, once we secure enough computing power and develop the right architecture, that technically meets the definition of AGI (however subjective it may be) and then decides to do something to harm humans? I suppose, but that’s simply not looking likely in any way (and I’d love it if you could actually show any data or evidence proving otherwise instead of saying “it just is” when claiming it’s more dangerous), and anyone claiming we’re getting close (e.g. Sam Altman) simply has a vested financial interest in saying that AI development is moving more quickly and at a greater scale than it actually is.
It’s called having a disagreement and refuting your points. Just because I don’t instantly agree with you doesn’t mean that I’m automatically mistaken. You’re not the sole arbiter of truth. Judging from how you have, three times now, assumed that I must be secretly suppressing the fact that AI is actually going to do more damage than nuclear bombs, just because I disagree with you, it’s clear that you are the one making post-hoc justifications here.
You are automatically assuming that because I disagree, I actually don’t disagree, and must secretly believe the same thing as you, but am just covering it up. Do not approach arguments from the assumption that the other person involved is just feigning disagreement, or you will never be capable of even considering a view other than the one you currently hold.
The fact you’d even consider the possibility of me using AI to write a comment is ridiculous. Why would I do that? What would I gain? I’m here to articulate my views, not some approximation of my views, stripped of my personal context and run through a statistical probability machine.
I’m sorry, but you seem to have misinterpreted what I was saying. I never claimed that AI would get so good it replaces all jobs. I stated that the potential consequences were extremely concerning, without necessarily specifying what those consequences would be. One consequence is the automation of various forms of labor, but there are many other social and psychological consequences that are arguably more worrying.
Your conception of labor is limited. You’re only taking into account jobs as they exist within a capitalist framework. What if AI were statistically proven to be better at raising children than human parents? What if AI were a better romantic partner than a human one? Can you see how this could be catastrophic for the fabric of human society and happiness? I agree that jobs don’t give human lives meaning, but I would contend that a crucial part of human happiness is feeling that one is a valued, contributing member of a community or family unit.
If you actually understood my point, you wouldn’t be saying this. The intended purpose of a technology often turns out to be completely different from its actual consequences. We first harnessed fire to keep warm and cook food, but it eventually came to be used in weapons and explosives. We intended the printing press to spread knowledge and understanding, but it ultimately came to spread hatred and fear as well. This pattern applies to almost every technological development. Human creators are never wise enough to foresee the negative externalities that will ultimately result from their creations.
Again, you’re the one who has been positing some type of AI singularity and simultaneously arguing it would be a good thing. I never said anything of the sort, you simply attached a meaning to my comment that wasn’t there.
And again, nuclear weapons have been used twice in wartime. Guns, swords, spears, automobiles, man-made famines, aeroplanes, and literally hundreds of other technologies have killed more human beings than nuclear weapons have. Nuclear fission has also provided one of the cleanest sources of energy we possess, and has probably saved untold amounts of environmental damage and prevented additional warfare over control of fossil fuels.
Just because nuclear weapons make a big boom doesn’t make them more destructive than other technologies.
I’m glad that you didn’t use AI. I was wrong to assume you were feigning disagreement, but sometimes it just baffles me how things that I consider so obvious can be so difficult for other people to grasp. My apologies for my tone, but I still think you’re very naive in your dismissal of my arguments, and quite frankly you come off as somewhat arrogant and closed-minded in the way you attempt to systematically refute everything that I say instead of engaging with my ideas in a more constructive way.
As far as I can tell, all three of your initial retorts about the relative danger of nuclear weapons are basically incoherent word salads. Even if I were to concede your arguments regarding the relative dangers of AI (which I am absolutely not going to do, although you did make some good points), you would still be wrong about your initial statement because you clearly overestimated the relative danger of nuclear weapons. I essentially dismantled your position from both sides, and yet you refuse to concede even a single inch of ground, even on the more obvious issue of nuclear weapons only being responsible for a relatively paltry number of deaths.