As smart assistants and voice interfaces become more common, we’re giving away a new form of personal data — our speech. This goes far beyond just the words we say out loud.
Speech lies at the heart of our social interactions, and we unwittingly reveal much about ourselves when we talk. When someone hears a voice, they immediately pick up on accent and intonation and make assumptions about the speaker's age, education, personality, and more. Humans do this so we can make a good guess at how best to respond to the person speaking. But what happens when machines start analyzing how we talk?
"Hearing generic language to describe a category of people, such as 'boys have short hair,' can lead children to endorse a range of other stereotypes about the category, a study by researchers at NYU and Princeton University has found. Their research, which appeared in the Proceedings of the National Academy of Sciences (PNAS), also points to more effective methods to reduce stereotyping and prejudice."
Prejudice about regional accents is still prevalent in Britain, and can lead to discrimination, according to leading UCL neuroscientist Professor Sophie Scott.
“Studies have shown that whether you are from the North or South, a Southern twang pegs the speaker as comparatively dimwitted, but also likely to be a nicer person than folks who speak like a Yankee.”
“People in Appalachia consume the same national media as everyone else, and they fully realize how other parts of the nation look down on them. These negative portrayals can have a harmful impact on perceptions of Appalachian people, both inside and outside the region.”