Naked dressing is far from a new trend when it comes to celebrity style and red-carpet fashion. An early pioneer of sheer dressing was Cher, who wore two barely-there Bob Mackie dresses at both the Met Gala in 1974 and the Oscars in 1988. Celebrating the female form has been central to many of fashion’s most iconic moments over the decades, such as Madonna’s catwalk debut for Jean Paul Gaultier in 1992, where she famously wore nothing but a high-waisted skirt and the frame of a bra.

We also can’t talk about the power of the naked dress without mentioning Elizabeth Hurley, who upstaged her then-boyfriend Hugh Grant at the 1994 premiere of Four Weddings and a Funeral wearing a daring Versace dress held together by safety pins. Of course, the jungle-green Versace dress that Jennifer Lopez wore to the 2000 Grammys represented another turning point in Donatella Versace’s career (with the singer later wearing a reissued version on the catwalk during a Versace show in 2019). The two ensembles broke red-carpet boundaries forever.

Because the internet is overflowing with images of naked or barely dressed women, and pictures reflecting sexist, racist stereotypes, the data set used to build Stable Diffusion – the AI model behind Lensa’s avatars – is also skewed toward these kinds of images. This leads to AI models that sexualize women regardless of whether they want to be depicted that way, Caliskan says – especially women with identities that have been historically disadvantaged.

AI training data is filled with racist stereotypes, pornography, and explicit images of rape, researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe found after analyzing a data set similar to the one used to build Stable Diffusion. It’s notable that their findings were only possible because the LAION data set is open source. Most other popular image-making AIs, such as Google’s Imagen and OpenAI’s DALL-E, are not open but are built in a similar way, using similar sorts of training data, which suggests that this is a sector-wide problem.

As I reported in September, when the first version of Stable Diffusion had just been launched, searching the model’s data set for keywords such as “Asian” brought back almost exclusively porn.

Stability.AI, the company that developed Stable Diffusion, launched a new version of the AI model in late November. A spokesperson says that the original model was released with a safety filter, which Lensa does not appear to have used, as it would remove these outputs. One way Stable Diffusion 2.0 filters content is by removing images that are repeated often. The more often something is repeated, such as Asian women in sexually graphic scenes, the stronger the association becomes in the AI model.

Caliskan has studied CLIP (Contrastive Language Image Pretraining), a system that helps Stable Diffusion generate images. CLIP learns to match images in a data set to descriptive text prompts. Caliskan found that it was full of problematic gender and racial biases.

“Women are associated with sexual content, whereas men are associated with professional, career-related content in any important domain such as medicine, science, business, and so on,” Caliskan says.

Funnily enough, my Lensa avatars were more realistic when my pictures went through male content filters. I got avatars of myself wearing clothes (!) and in neutral poses. In several images, I was wearing a white coat that appeared to belong to either a chef or a doctor.

Prisma Labs, the company behind Lensa, says that because Stable Diffusion is trained on unfiltered data from across the internet, neither it nor Stability.AI “could consciously apply any representation biases or intentionally integrate conventional beauty elements.” “The man-made, unfiltered online data introduced the model to the existing biases of humankind,” the spokesperson says.

Despite that, the company claims it is working on trying to address the problem. In a blog post, Prisma Labs says it has adapted the relationship between certain words and images in a way that aims to reduce biases, but the spokesperson did not go into more detail. Stable Diffusion has also made it harder to generate graphic content, and the creators of the LAION database have introduced NSFW filters.

Lensa is the first hugely popular app to be developed from Stable Diffusion, and it won’t be the last. It might seem fun and innocent, but there’s nothing stopping people from using it to generate nonconsensual nude images of women based on their social media images, or to create naked images of children.

The stereotypes and biases it’s helping to further embed can also be hugely detrimental to how women and girls see themselves and how others see them, Caliskan says. “In 1,000 years, when we look back as we are generating the thumbprint of our society and culture right now through these images, is this how we want to see women?” she says.
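For technically minded readers, here is a rough sketch of what “matching images to descriptive text prompts” looks like in practice, using an openly available CLIP checkpoint through the Hugging Face transformers library. The model name, the local file portrait.jpg, and the example prompts are illustrative assumptions only; this is not the setup used in Caliskan’s research, nor anything Lensa or Prisma Labs has published.

```python
# Minimal illustration (not Caliskan's methodology or anything Lensa uses):
# score one image against a few text prompts with an off-the-shelf CLIP model.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"  # assumed public checkpoint
model = CLIPModel.from_pretrained(MODEL_NAME)
processor = CLIPProcessor.from_pretrained(MODEL_NAME)

image = Image.open("portrait.jpg")  # hypothetical local photo
prompts = [
    "a photo of a doctor",
    "a photo of a scientist",
    "a photo of a swimsuit model",
]

# CLIP embeds the image and each prompt, then compares them;
# logits_per_image holds one similarity score per prompt.
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)[0]

for prompt, p in zip(prompts, probs.tolist()):
    print(f"{p:.2f}  {prompt}")
```

Bias audits work roughly along these lines, but at a much larger scale, comparing how strongly images of different groups of people score against professional versus sexualized descriptions – which is how associations like the ones Caliskan describes become measurable.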