I’m afraid that would not be sufficient.
These instructions are a small part of what makes a model answer the way it does. Much more important is the training data: if you want to make a racist model, training it on racist text is sufficient.
AI companies put great care into the training data of these models to ensure that their biases are socially acceptable. If you train an LLM on the internet without that care, a user will easily be able to prompt it into producing racist text.
Gab is forced to use this prompt because they're unable to train a model, but as other comments show, it's a pretty weak way to force a bias.
The ideal solution for transparency would be public sharing of the training data.
I have a similar experience: I was mugged at knifepoint and spat on by two adolescents. After that, I was jumpy around groups of teens.
That said, I do not think my fear of teens was rational, nor was it healthy. Only a small minority of teens will mug people. Fearing a whole group for the actions of a few is part of human nature, but it is something we must fight against.
I mean, what is the end goal if women are afraid of men? You can probably reduce violent crime even further, but it remains a rare event: only 31 out of 1000 people were victims of a violent crime in the UK in 2010. If that doesn't work, what remains? Sex segregation?