From left: composite heterosexual faces, composite homosexual faces and “average facial landmarks”

A facial recognition experiment that claims to be able to distinguish between gay and heterosexual people has sparked a row between its creators and two leading LGBT rights groups.

The Stanford University study claims its software recognises facial features relating to sexual orientation that human observers cannot perceive, suggesting that machines can have significantly better “gaydar” than humans.

The work has been accused of being “dangerous” and “junk science”.

But the scientists involved say these are “knee-jerk” reactions.

The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks” – sophisticated mathematical systems that learn to analyse visuals from a large dataset.
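The pipeline described above – a deep neural network turning each face into a numeric feature vector, with a simple classifier trained on top – can be sketched roughly as follows. This is a minimal illustration, not the study's actual code: the embeddings here are synthetic random vectors standing in for the network's output, and the class separation is an invented parameter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in for face embeddings: in the study these came from a deep
# neural network; here we draw synthetic 128-dimensional vectors whose
# means differ slightly between two hypothetical classes.
n = 2000
labels = rng.integers(0, 2, n)
embeddings = rng.normal(0.0, 1.0, (n, 128)) + labels[:, None] * 0.15

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.25, random_state=0)

# A plain logistic-regression classifier on top of the embeddings.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

The point of the sketch is the division of labour: the heavy lifting is in the learned embedding, while the final decision layer can be as simple as logistic regression.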

The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.
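The jump in accuracy from one image to five is what you would expect from averaging out per-image noise. The simulation below illustrates the effect with entirely made-up numbers (signal strength, noise level, and the 0.5 decision threshold are all assumptions, not values from the study): pooling five noisy scores per person yields a more accurate decision than any single score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a per-image classifier score: a weak true signal plus
# image-level noise (lighting, pose, expression, etc.).
n_people, n_images = 1000, 5
truth = rng.integers(0, 2, n_people)             # hypothetical binary label
signal = np.where(truth == 1, 0.6, 0.4)          # weak per-person signal
scores = signal[:, None] + rng.normal(0.0, 0.3, (n_people, n_images))

# Decide from one image vs. the mean of five images per person.
single_acc = ((scores[:, 0] > 0.5) == truth.astype(bool)).mean()
pooled_acc = ((scores.mean(axis=1) > 0.5) == truth.astype(bool)).mean()
```

Averaging five scores shrinks the noise standard deviation by a factor of √5, so the pooled decision is reliably more accurate, mirroring the 81%→91% improvement reported for men.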

This is an exceedingly bad idea. I bet Saudi Arabia, or other hellholes where being gay is punishable by death, would love to have a machine like this.
