• 0 Posts
  • 114 Comments
Joined 1 year ago
Cake day: November 22nd, 2023


  • We’re already living in a dystopia. Companies are selling your work to be used in training sets already. Every social media company that I’m aware of has already tried it at least once, and most are actively doing it. Though that’s not why we live in a dystopia - it’s just one more piece on the pile.

    When I say licensing, I’m not talking about licensing fees like the ones social media companies are already collecting - I’m talking about open-source-software-style licensing: groups of predefined rules that artists can apply to their work and that AI companies must abide by if they want to use it. These licenses would range from “do whatever you want with my code” to “my code can only be used to make not-for-profit software,” with all derivative works inheriting the same license. Obviously, the closed-source alternative doesn’t apply here - the genie’s already out of the bottle, and as you said, once your work is out there, there’s always the risk somebody is going to steal it.

    I’m not against AI, I’m simply against corporations being left unregulated to do whatever the hell they want. That’s one of the reasons to make the distinction between a person taking inspiration from a work and an LLM being trained by analyzing that work as part of its data set. The profit motive is largely antithetical to progress. Companies hate taking risks. Even at the height of corporate research spending, the era of so-called Blue Skies Research, the majority of research was funded by the government. Today, medical research is done at colleges and universities on government dollars, with companies coming in afterward to patent a product out of the research once there is no longer any risk. AI companies currently work the same way: letting people like you and me do all the work, then swooping in to take it and turn it into multi-billion-dollar profits. The work that made the COVID vaccines possible was done decades before, but no company could figure out how to profit from it until COVID happened, so nothing was ever done with it.

    As for walled-off communities of artists, you should check out Cara, a new social media platform that’s a mix of ArtStation and Instagram and 100% anti-AI. I forget the details, but AI art is banned on the site, and I believe they have Nightshade or something similar built in. I believe that when it was first announced, they had something like 200,000 people create accounts in the first 3 months.

    People aren’t anti-AI. They’re anti late-stage capitalism. And with what little power they have, they’d rather poison the well or watch it all burn than be trampled on any further.


  • The worst I have to do is use a different Proton version or add in a launch option.

    And therein lies the problem that keeps most people from switching to Linux. It’s a super simple thing to do, but Linux users fall into the same trap as experts in any field: forgetting just how little the average person knows about the subject. The fact that something doesn’t just work when you try to open it would leave many people stumped - especially with tech literacy declining, thanks to kids growing up with cell phones as their daily driver rather than an actual computer, and to the plug-and-play nature of Windows and Macs. Asking your average gamer to add command-line arguments to a launcher is like telling them they just have to hot-wire their car if it doesn’t start when they turn the key.
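    For what it’s worth, the fix usually amounts to a one-line entry in a game’s Properties → Launch Options box in Steam. The variables below are real, commonly cited Proton options, but which one (if any) a given game needs varies - treat this as an illustrative sketch, not a universal recipe:

```
# Typed into Steam's per-game "Launch Options" field.
# %command% is Steam's placeholder for the game's own launch command.
PROTON_USE_WINED3D=1 %command%   # fall back to OpenGL-based rendering
PROTON_LOG=1 %command%           # write a Proton log for troubleshooting
```

    Simple for an enthusiast - but, as above, opaque to someone who has never seen an environment variable.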


  • And what free infrastructure would that be? Social media is privately run, as are websites. Art posted online largely falls under the category of advertising, as artists are advertising their services for commission purposes.

    AI bros say that image generators have democratized art. Do you know what actually democratized art? The pencil. The chisel and slate. Taking the effort of other people and using it for your own convenience without giving them proper credit isn’t democracy or fair use - it’s corporate middle management. People don’t want to put in the effort to learn a valuable skill, and they don’t want to pay for it either, but they still want the reward for that effort. It’s like expecting your friend to fix your computer for free because they work in IT.


  • It’s not about “analysis” but about for-profit use. Public domain still falls under Fair Use. I think you’re being too optimistic about support for UBI, but I absolutely agree on that point. There are countries that believe UBI will be necessary within a decade’s time, as more and more of the population becomes permanently unemployed by jobs being replaced. I’ve said myself that I don’t think anybody would really care if their livelihoods weren’t at stake (aside from dealing with the people who look down on artists and claim that writing prompts makes them just as good, if not better). As it stands, artists are already forming their own walled-off communities to keep their work from being publicly available and creating software to poison LLMs. So either art becomes largely inaccessible to the public, or some form of horrible copyright action is taken, because those are the only options available to artists.

    Ultimately, I’d like a licensing system put in place, like the one for open-source software, where people can license their works and companies have to cite the sources of their training data. Academics have to cite their sources in research; holding for-profit companies to the same standard seems like a step in the right direction. Simply require data scrapers to keep track of where they got their data in a publicly available list. That way, if a company has used work it legally shouldn’t have, it can be proven.
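    Maintaining such a public provenance list would be trivial for a scraper. As a minimal sketch (the function and manifest format here are hypothetical, just to illustrate the idea), each ingested work gets logged with its source URL and a content hash:

```python
import hashlib
import json

def record_source(manifest: list, url: str, content: bytes) -> dict:
    """Append a provenance entry (source URL plus a content hash) to the manifest."""
    entry = {
        "url": url,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    manifest.append(entry)
    return entry

# A scraper would call record_source() for every item it ingests,
# then publish the manifest so artists can check whether their work appears.
manifest = []
record_source(manifest, "https://example.com/art/123.png", b"fake-image-bytes")
print(json.dumps(manifest, indent=2))
```

    The hash matters: it lets an artist prove a specific file was ingested, not just a URL that might have changed since.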





  • Have you ever heard the saying that there are only 4 or 5 stories in the world? That’s basically what you’re arguing, and we’re getting into heavy philosophical areas here.

    The difference is in the process. Anybody can take a photo, but it takes knowledge and experience to be a photographer. An artist understands concepts in the way that a physicist understands the rules that govern particles. The issue with AI isn’t that it’s derivative in the sense that “everything old is new again” or “nature doesn’t break her own laws,” it’s derivative in the sense that it merely regurgitates a collage of vectorized arrays of its training data. Even somebody who lives in a cave would understand how light falls and could extrapolate that knowledge to paint a sunset if you told them what a sunset is like. Given A and B, you can figure out C. The image generators we have today don’t understand how light works, even with all the images on the internet to examine. They can give you sets of A, B, and AB, but never C. If I draw a line and then tell you to draw a line, your line and my line will be different even though they’re both lines. If you tell an image generator to draw a line, it’ll spit out what is effectively a collage of lines from its training set.

    And even this would only matter for prompters claiming to be artists because they wrote the phrase that caused the tool to generate an image. But we live in a world where we must make money to live, and the way the companies that make these tools operate amounts to wage theft.

    AI is like a camera. It’s a tool that will spawn entirely new genres of art and be used to improve the work of artists in many other areas. But like any other tool, it can be put together and used ethically or unethically, and that’s where the issues lie.

    AI bros say that it’s like when the camera was first invented and all the painters freaked out. But that’s a strawman. Artists are asking, “Is a man not entitled to the sweat of his brow?”


  • Copyright is a whole mess and a dangerous can of worms, but before I get any further, I just want to quote a funny meme: “I’m not doing homework for you. I’ve known you for 30 seconds and enjoyed none of them.” If you’re going to make a point, give the actual point before citing sources because there’s no guarantee that the person you’re talking to will even understand what you’re trying to say.

    Having said that, I agree that anything around copyright and AI is a dangerous road. Copyright is extremely flawed in its design.

    I compare image generators to the Gaussian Blur tool for a reason - each is a tool that runs an algorithm over its inputs to produce an output. Your prompt and its training set, in this case. And like any other tool’s output, its work is derivative of all the works in its training set, so the burning question is whether that training data was ethically sourced, i.e., used with permission. In other words: did the companies behind the tool have the right to use the images they did, and how do we prove it? I’m a fan of requiring generators to list the works used in their training data somewhere - basically, a licensing system similar to open-source software. That way, people could openly license their work for use (or not) and would have a legal way to prove whether their works were used without permission. Some companies are actually moving to commissioning artists to create works specifically for their training sets, and I think that’s great.
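    To make the licensing idea concrete, here’s a minimal sketch - the tag names and structure are hypothetical, loosely modeled on how machine-readable open-source license identifiers work - of how a compliant scraper could filter works by their declared license:

```python
# Hypothetical license tags an artist might attach to a work
# (illustrative names, not an existing standard).
ALLOWS_TRAINING = {"open-use", "credit-required"}

def may_train_on(work: dict) -> bool:
    """Return True only if the work's declared license opts in to training."""
    return work.get("license") in ALLOWS_TRAINING

works = [
    {"title": "sunset study", "license": "open-use"},
    {"title": "portrait commission", "license": "no-ai"},
]
usable = [w["title"] for w in works if may_train_on(w)]
print(usable)  # → ['sunset study']
```

    Note the default: a work with no license tag at all is excluded, mirroring how copyright already works - no permission means no use.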

    AI is a tool like any other, and like any other tool, it can be made using unethical means. In an ideal world, it wouldn’t matter because artists wouldn’t have to worry about putting food on the table and would be able to just make art for the sake of following their passions. But we don’t live in an ideal world, and the generators we have today are equivalent to the fast fashion industry.

    Basically, I ask, “Is a man not entitled to the sweat of his brow?” And the AI companies of today respond, “No! It belongs to me.”

    There’s a whole other discussion to be had about prompters and the attitude that they created the works generated by these tools and how similar they are to corporate middle managers taking credit for the work of the people under them, but that’s a discussion for another time.



  • But no artist is reproducing a still from The Mandalorian in the middle of a picture as though they’d right-clicked and hit “save as” on a Google Image result - which these generators have done multiple times. A “sufficiently convoluted machine model” would be a sentient machine. At the level required for what you’re talking about, we’re getting into the philosophical territory of what it means to be a sentient being, which is so far removed from these generators as to be irrelevant to the point. And at that point, you’re not creating anything anyway - you’ve hired a machine to create for you.

    These models are tools that use an algorithm to collage pre-existing works into a derivative work. They cannot create. If you tell a generator to draw a cat but it doesn’t have any pictures of cats in its data set, you won’t get anything. If you feed AI images back into these generators, they quickly degrade into garbage, because they don’t have a concept of anything. They don’t understand color theory or two-point perspective or anything else. They’re simply programmed to output their collection of vectorized arrays in an algorithmic format based on certain keywords.





  • The issue has never been the tech itself. Image generators are basically just a more complicated Gaussian Blur tool.
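    The comparison is worth unpacking: a Gaussian Blur is nothing but a weighted average computed over its input pixels. A minimal one-dimensional sketch of that algorithm:

```python
import math

def gaussian_kernel(radius: int, sigma: float) -> list:
    """Discrete 1-D Gaussian weights, normalized to sum to 1."""
    weights = [math.exp(-(x * x) / (2 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur(signal: list, radius: int = 2, sigma: float = 1.0) -> list:
    """Convolve a row of pixel values with the kernel (edges clamped)."""
    kernel = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in zip(range(-radius, radius + 1), kernel):
            j = min(max(i + k, 0), len(signal) - 1)  # clamp at the edges
            acc += w * signal[j]
        out.append(acc)
    return out

print(blur([0, 0, 255, 0, 0]))  # the sharp spike spreads into its neighbors
```

    An image generator is vastly more complicated than this, but the point stands: both are algorithms applied to input data, and the ethics live in where that data came from.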

    The issue is, and always has been, the ethics involved in the creation of the tools. The companies steal the work they use to train these models without paying the artists for their efforts (wage theft). They’ve outright said that they couldn’t afford to make these tools if they had to pay copyright fees for the images that they scrape from the internet. They replace jobs with AI tools that aren’t fit for the task because it’s cheaper to fire people. They train these models on the works of those employees. When you pay for a subscription to these things, you’re paying a corporation to do all the things we hate about late stage capitalism.