Monday, December 30, 2019
Why good people turn bad online: Defeat your inner troll
On the evening of 17 February 2018, Professor Mary Beard posted on Twitter a photograph of herself crying. The eminent University of Cambridge classicist, who has almost 200,000 Twitter followers, was distraught after receiving a storm of abuse online. This was the reaction to a comment she had made about Haiti. She also tweeted: "I speak from the heart (and of course I may be wrong). But the crap I get in response just isn't on; really it isn't."

In the days that followed, Beard received support from several high-profile people. Greg Jenner, a fellow celebrity historian, tweeted about his own experience of a Twitterstorm: "I'll always remember how traumatic it was to suddenly be hated by strangers. Regardless of whether I may have been wrong or right in my opinion, I was amazed (later, when I recovered) at how psychologically destabilizing it was to me."

Those tweeting support for Beard, irrespective of whether they agreed with the initial tweet that had triggered the abusive responses, were themselves then targeted. And when one of Beard's critics, fellow Cambridge academic Priyamvada Gopal, a woman of Asian heritage, set out her response to Beard's original tweet in an online article, she received her own torrent of abuse.

There is overwhelming evidence that women and members of ethnic minority groups are disproportionately the target of Twitter abuse. Where these identity markers intersect, the bullying can become particularly intense, as experienced by black female MP Diane Abbott, who alone received nearly half of all the abusive tweets sent to female MPs during the run-up to the 2017 UK general election. Black and Asian female MPs received on average 35 percent more abusive tweets than their white female colleagues, even when Abbott was excluded from the total.

The constant barrage of abuse, including death threats and threats of sexual violence, is silencing people, pushing them off online platforms and further reducing the diversity of online voices and opinion. And it shows no sign of abating. A survey last year found that 40 percent of American adults had personally experienced online abuse, with almost half of them receiving severe forms of harassment, including physical threats and stalking. 70 percent of women described online harassment as a major problem.

The business models of social media platforms, such as YouTube and Facebook, promote content that is more likely to get a response from other users, because more engagement means better opportunities for advertising. But this has a consequence of favoring divisive and strongly emotive or extreme content, which can, in turn, nurture online bubbles of groups who reflect and reinforce each other's opinions, helping propel the spread of more extreme content and providing a niche for fake news. In recent months, researchers have revealed many ways that various vested interests, including Russian operatives, have sought to manipulate public opinion by infiltrating social media bubbles.

Our human ability to communicate ideas across networks of people enabled us to build the modern world. The web offers unparalleled promise of cooperation and communication between all of humanity.
But instead of embracing a massive extension of our social circles online, we seem to be reverting to tribalism and conflict, and belief in the potential of the internet to bring humanity together in a glorious collaborating network now begins to seem naive. While we generally conduct our real-life interactions with strangers politely and respectfully, online we can be horrible. How can we relearn the collaborative techniques that enabled us to find common ground and thrive as a species?

Don't overthink it, just press the button

I click an amount, impoverishing myself in an instant, and quickly move on to the next question, aware that we're all playing against the clock. My teammates are far away and unknown to me. I have no idea if we're all in it together or whether I'm being played for a fool, but I press on, knowing that the others are depending on me.

I'm playing in a so-called public goods game at Yale University's Human Cooperation Lab. The researchers here use it as a tool to help understand how and why we cooperate, and whether we can enhance our prosocial behavior.

Over the years, scientists have proposed various theories about why humans cooperate so well that we form strong societies. The evolutionary roots of our general niceness, most researchers now believe, can be found in the individual survival advantage humans experience when we cooperate as a group. I've come to New Haven, Connecticut, in a snowy February, to visit a cluster of labs where researchers are using experiments to explore further our extraordinary impulse to be nice to others, even at our own expense.

The game I'm playing, on Amazon's Mechanical Turk online platform, is one of the lab's ongoing experiments. I'm in a team of four people in different locations, and each of us is given the same amount of money to play with. We are asked to choose how much money we will contribute to a group pot, on the understanding that this pot will then be doubled and split equally among us.

This sort of social dilemma, like all cooperation, relies on a certain level of trust that the others in your group will be nice. If everybody in the group contributes all of their money, all the money gets doubled, redistributed four ways, and everyone doubles their money. Win-win!

"But if you think about it from the perspective of an individual," says lab director David Rand, "for each dollar that you contribute, it gets doubled to two dollars and then split four ways, which means each person only gets 50 cents back for the dollar they contributed."

Even though everyone is better off collectively by contributing to a group project that no one could manage alone (in real life, this could be paying towards a hospital building, or digging a community irrigation ditch), there is a cost at the individual level. Financially, you make more money by being more selfish.

Rand's team has run this game with thousands of players. Half of them are asked, as I was, to decide their contribution rapidly, within 10 seconds; the other half are asked to take their time and carefully consider their decision. It turns out that when people go with their gut, they are much more generous than when they spend time deliberating.

"There is a lot of evidence that cooperation is a central feature of human evolution," says Rand. Individuals benefit, and are more likely to survive, by cooperating with the group. And being allowed to stay in the group and benefit from it relies on our reputation for behaving cooperatively.
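Rand's point about individual incentives is simple arithmetic, and a few lines of code make it concrete. This is only a minimal sketch under the rules described above (four players, pot doubled and split equally); the function and numbers are mine, not the lab's.

```python
# Minimal sketch of the four-player public goods game described above:
# every contribution is doubled, pooled, and split equally.

def payoffs(contributions, endowment=10, multiplier=2):
    """Return each player's final holdings after one round."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone contributes everything: the whole group doubles its money.
print(payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]

# One free-rider among full contributors does best individually...
print(payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]

# ...because each dollar you contribute comes back as only 50 cents
# to you: doubled to $2, then split four ways.
```

The marginal return is the crux: contributing one more dollar adds two to the pot but only 50 cents to your own share, so selfishness pays at the individual level even though full cooperation pays at the group level.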
"In the small-scale societies that our ancestors were living in, all our interactions were with people that you were going to see again and interact with in the immediate future," Rand says. That kept in check any temptation to act aggressively or take advantage and free-ride off other people's contributions. It makes sense, in a self-interested way, to be cooperative.

Cooperation breeds more cooperation in a mutually beneficial cycle. Rather than work out every time whether it's in our long-term interests to be nice, it's more efficient and less effort to make "be nice to other people" the basic rule. That's why our unthinking response in the experiment is a generous one.

Throughout our lives, we learn from the society around us how cooperative to be. But our learned behaviors can also change quickly.

Those in Rand's experiment who play the quickfire round are mostly generous and receive generous dividends, reinforcing their generous outlook; those who consider their decisions are more selfish, resulting in a meager group pot and reinforcing the idea that it doesn't pay to rely on the group. So, in a further experiment, Rand gave some money to people who had played a round of the game. They were then asked how much they wanted to give to an anonymous stranger. This time, there was no incentive to give; they would be acting entirely charitably.

It turned out there were big differences. The people who had got used to cooperating in the first stage gave twice as much money in the second stage as the people who had got used to being selfish did. "So we're affecting people's internal lives and behavior," Rand says. "The way they behave even when no one's watching and when there's no institution in place to punish or reward them."

Rand's team have tested how people in different countries play the game, to see how the strength of social institutions, such as government, family, education and legal systems, influences behavior. In Kenya, where public-sector corruption is high, players initially gave less generously to the stranger than players in the US, which has less corruption. This suggests that people who can rely on relatively fair social institutions behave in a more public-spirited way; those whose institutions are less reliable are more protectionist. However, after playing just one round of the cooperation-promoting version of the public goods game, the Kenyans' generosity equaled the Americans'. And it cut both ways: Americans who were trained to be selfish gave a lot less.

So is there something about online social media culture that makes some people behave meanly? Unlike ancient hunter-gatherer societies, which relied on cooperation and sharing to survive and often had rules for when to offer food to whom across their social network, social media has weak institutions. It offers physical distance, relative anonymity, and little reputational or punitive risk for bad behavior: if you're mean, no one you know is going to see.

I trudge a couple of blocks through driving snow to find Molly Crockett's psychology lab, where researchers are investigating moral decision-making in society. One area they focus on is how social emotions are transformed online, in particular moral outrage. Brain-imaging studies show that when people act on their moral outrage, their brain's reward center is activated: they feel good about it.
This reinforces their behavior, so they are more likely to intervene in a similar way again. So, if they see somebody acting in a way that violates a social norm, by allowing their dog to foul a playground, for instance, and they publicly confront the perpetrator about it, they feel good afterwards. And while challenging a violator of your community's social norms has its risks (you may get attacked), it also boosts your reputation.

In our relatively peaceful lives, we are rarely faced with outrageous behavior, so we rarely see moral outrage expressed. Open up Twitter or Facebook and you get a very different picture. Recent research shows that messages with both moral and emotional words are more likely to spread on social media: each moral or emotional word in a tweet increases the likelihood of it being retweeted by 20 percent (if that effect compounds, a tweet with three such words would be roughly 1.2³ ≈ 1.7 times as likely to be retweeted as one with none).

"Content that triggers outrage and that expresses outrage is much more likely to be shared," Crockett says. "What we've created online is an ecosystem that selects for the most outrageous content, paired with a platform where it's easier than ever before to express outrage."

Unlike in the offline world, there is no personal risk in confronting and exposing someone. It only takes a few clicks of a button and you don't have to be physically nearby, so there is a lot more outrage expressed online. And it feeds itself. "If you punish somebody for violating a norm, that makes you seem more trustworthy to others, so you can broadcast your moral character by expressing outrage and punishing social norm violations," Crockett says. "And people believe that they are spreading good by expressing outrage; that it comes from a place of morality and righteousness.

"When you go from offline, where you might boost your reputation for whoever happens to be standing around at the moment, to online, where you broadcast it to your entire social network, that dramatically amplifies the personal rewards of expressing outrage."

This is compounded by the feedback people get on social media, in the form of likes and retweets and so on. "Our hypothesis is that the design of these platforms could make expressing outrage into a habit, and a habit is something that's done without regard to its consequences. It's insensitive to what happens next; it's just a blind response to a stimulus," Crockett explains.

"I think it's worth having a conversation as a society as to whether we want our morality to be under the control of algorithms whose purpose is to make money for giant tech companies," she adds. "I think we would all like to believe and feel that our moral emotions, thoughts, and behaviors are intentional and not knee-jerk reactions to whatever is placed in front of us that our smartphone designer thinks will bring them the most profit."

On the upside, the lower costs of expressing outrage online have allowed marginalized, less-empowered groups to promote causes that have traditionally been harder to advance. Moral outrage on social media played an important role in focusing attention on the sexual abuse of women by high-status men.
And in February 2018, Florida teens railing on social media against yet another high-school shooting in their state helped to shift public opinion, as well as shaming a number of big corporations into dropping their discount schemes for National Rifle Association members.

"I think that there must be ways to maintain the benefits of the online world," says Crockett, "while thinking more carefully about redesigning these interactions to do away with some of the more costly bits."

Someone who's thought a great deal about the design of our interactions in social networks is Nicholas Christakis, director of Yale's Human Nature Lab, located just a few more snowy blocks away. His team studies how our position in a social network influences our behavior, and even how certain influential individuals can dramatically alter the culture of a whole network.

The team is exploring ways to identify these individuals and enlist them in public health programmes that could benefit the community. In Honduras, they are using this approach to influence vaccination enrolment and maternal care, for example. Online, such people have the potential to turn a bullying culture into a supportive one.

Corporations already use a crude system of identifying so-called Instagram influencers to advertise their brands for them. But Christakis is looking not just at how popular an individual is, but also at their position in the network and the shape of that network. In some networks, like a small isolated village, everyone is closely connected and you're likely to know everyone at a party; in a city, by contrast, people may live closer together overall, but you are less likely to know everyone at a party there. How thoroughly interconnected a network is affects how behaviors and information spread around it, he explains.

"If you take carbon atoms and you assemble them one way, they become graphite, which is soft and dark. Take the same carbon atoms and assemble them a different way, and they become diamond, which is hard and clear. These properties of hardness and clearness aren't properties of the carbon atoms; they're properties of the collection of carbon atoms, and depend on how you connect the carbon atoms to each other," he says. "And it's the same with human groups."

Christakis has designed software to explore this by creating temporary artificial societies online. "We drop people in and then we let them interact with each other and see how they play a public goods game, for example, to assess how kind they are to other people."

Then he manipulates the network. "By engineering their interactions one way, I can make them really sweet to each other, work well together, and they are healthy and happy and they cooperate. Or you take the same people and connect them a different way and they're mean jerks to each other and they don't cooperate and they don't share information and they are not kind to each other."

In one experiment, he randomly assigned strangers to play the public goods game with each other. In the beginning, he says, about two-thirds of people were cooperative. "But some of the people they interact with will take advantage of them and, because their only option is either to be kind and cooperative or to be a defector, they choose to defect because they're stuck with these people taking advantage of them. And by the end of the experiment everyone is a jerk to everyone else."

Christakis turned this around simply by giving each person a little bit of control over who they were connected to after each round. Players had to make two decisions: am I kind to my neighbors or am I not, and do I stick with this neighbor or do I not? The only thing each player knew about their neighbors was whether each had cooperated or defected in the round before. "What we were able to show is that people cut ties to defectors and form ties to cooperators, and the network rewired itself and converted itself into a diamond-like structure instead of a graphite-like structure." In other words, a cooperative, prosocial structure instead of an uncooperative one.
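A toy simulation can illustrate the rewiring mechanism Christakis describes. The sketch below is my own simplification, not the lab's software: agents either cooperate with all neighbors or defect, each round some players cut a tie to a defecting neighbor and connect to a cooperator instead, and everyone imitates their most successful neighbor. Whether cooperation stabilizes depends on the parameters; the point is only to show the cut-defectors, befriend-cooperators rule in code.

```python
import random

# Toy model of the rewiring experiment (my simplification, not the
# lab's actual software).

N, ROUNDS, REWIRE_PROB = 40, 30, 0.3
BENEFIT, COST = 3.0, 1.0      # helping costs 1 per neighbor, delivers 3

rng = random.Random(1)
nbrs = {i: set() for i in range(N)}
for i in range(N):            # sparse random starting network
    for j in rng.sample(range(N), 3):
        if j != i:
            nbrs[i].add(j)
            nbrs[j].add(i)

coop = {i: rng.random() < 2 / 3 for i in range(N)}  # ~two-thirds start nice

for _ in range(ROUNDS):
    payoff = {i: 0.0 for i in range(N)}
    for i in range(N):
        if coop[i]:
            payoff[i] -= COST * len(nbrs[i])
        for j in nbrs[i]:
            if coop[j]:
                payoff[i] += BENEFIT

    for i in range(N):        # rewiring: drop a defector, add a cooperator
        bad = [j for j in nbrs[i] if not coop[j]]
        new = [j for j in range(N) if coop[j] and j != i and j not in nbrs[i]]
        if bad and new and rng.random() < REWIRE_PROB:
            drop, add = rng.choice(bad), rng.choice(new)
            nbrs[i].discard(drop); nbrs[drop].discard(i)
            nbrs[i].add(add); nbrs[add].add(i)

    new_coop = {}
    for i in range(N):        # imitate the most successful neighbor
        best = max(nbrs[i], key=lambda j: payoff[j], default=None)
        if best is not None and payoff[best] > payoff[i]:
            new_coop[i] = coop[best]
        else:
            new_coop[i] = coop[i]
    coop = new_coop

print(f"cooperators after {ROUNDS} rounds: {sum(coop.values())}/{N}")
```

The design choice that matters is the rewiring step: defectors gradually lose their audience, which starves them of the payoff that made defection attractive, mirroring the "diamond-like" restructuring in the experiment.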
In an attempt to generate more cooperative online communities, Christakis's team have started adding bots to their temporary societies. He takes me over to a laptop and sets me up on a different game. In this game, anonymous players have to work together as a team to solve a dilemma that tilers will be familiar with: each of us has to pick one of three colors, but the colors of players directly connected to each other must be different. If we solve the puzzle within a time limit, we all get a share of the prize money; if we fail, no one gets anything. I'm playing with at least 30 other people. None of us can see the whole network of connections, only the people we are directly connected to; nevertheless, we have to cooperate to win.

I'm connected to two neighbors, whose colors are green and blue, so I pick red. My left neighbor then changes to red, so I quickly change to blue. The game continues and I become increasingly tense, cursing my slow reaction times. I frequently have to switch my color, responding to unseen changes elsewhere in the network, which send a cascade of changes along the connections. Time's up before we solve the puzzle, prompting irate responses in the game's comments box from remote players condemning everyone else's stupidity. Personally, I'm relieved it's over and there's no longer anyone depending on my cack-handed gaming skills to earn money.

Christakis tells me that some of the networks are so complex that the puzzle is impossible to solve in the timeframe. My relief is short-lived, however: the one I played was solvable. He rewinds the game, revealing the whole network to me for the first time. I see now that I was on a lower branch off the main hub of the network. Some of the players were connected to just one other person, but most were connected to three or more. Thousands of people from around the world play these games on Amazon Mechanical Turk, drawn by the small fee they earn per round. But as I'm watching the game I just played unfold, Christakis reveals that three of these players were actually planted bots. "We call them dumb AI," he says.

His team is not interested in inventing super-smart AI to replace human cognition. Instead, the plan is to infiltrate a population of smart humans with dumb bots, to help the humans help themselves.

"We wanted to see if we could use the dumb bots to get the people unstuck, so they can cooperate and coordinate a little bit more, so that their native capacity to perform well can be revealed by a little assistance," Christakis says. He found that if the bots played perfectly, that didn't help the humans. But if the bots made some mistakes, they unlocked the potential of the group to find a solution.

Some of these bots made counter-intuitive choices. Even though their neighbors all had green, and they should have picked orange, they instead also picked green. "When they did that, it allowed one of the green neighbors to pick orange, which unlocks the next guy over; he can pick a different color and, wow, now we solve the problem." Without the bot, those human players would probably all have stuck with green, not realizing that was the problem. Increasing the conflicts temporarily allowed their neighbors to make better choices.
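Why would a deliberately "wrong" move help? A minimal sketch (my own construction, not the lab's code) shows the mechanism: players who only ever change color when it strictly improves their own situation can deadlock, while a bot that occasionally picks a random, possibly conflicting color shakes them loose.

```python
import random

# Toy version of the network color game: a node changes color only if
# that strictly reduces its own conflicts, so the group can get stuck.
# A "dumb bot" that sometimes picks at random can unstick everyone.

COLORS = ["green", "orange", "purple"]

def run(edges, color, bots=frozenset(), noise=0.2, steps=200, seed=0):
    rng = random.Random(seed)
    nbrs = {}
    for a, b in edges:
        nbrs.setdefault(a, set()).add(b)
        nbrs.setdefault(b, set()).add(a)
    conflicts = lambda n, c: sum(color[m] == c for m in nbrs[n])
    for step in range(steps):
        if all(conflicts(n, color[n]) == 0 for n in color):
            return step                        # proper coloring found
        for n in color:
            if n in bots and rng.random() < noise:
                color[n] = rng.choice(COLORS)  # bot "mistake"
                continue
            best = min(COLORS, key=lambda c: conflicts(n, c))
            if conflicts(n, best) < conflicts(n, color[n]):
                color[n] = best                # humans improve strictly
    return None                                # still conflicted: stuck

# A small tree where nodes 0 and 1 both clash (green-green) but neither
# can strictly improve by changing, so the humans alone are deadlocked.
edges = [(0, 1), (0, 2), (0, 3), (1, 4), (1, 5)]
start = {0: "green", 1: "green", 2: "orange", 3: "purple",
         4: "orange", 5: "purple"}

print("humans only:", run(edges, dict(start)))            # None: deadlocked
print("with a bot :", run(edges, dict(start), bots={2}))  # usually solves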
By adding a little noise into the system, the bots helped the network to function more efficiently. Perhaps a version of this model could involve infiltrating the newsfeeds of partisan people with occasional items offering a different perspective, helping to shift people out of their social media comfort-bubbles and allowing society as a whole to cooperate more.

Much antisocial behavior online stems from the anonymity of internet interactions: the reputational costs of being mean are much lower than offline. Here, bots may also offer a solution. One experiment found that the level of racist abuse tweeted at black users could be dramatically slashed by using bot accounts with white profile images to respond to racist tweeters. A typical bot response to a racist tweet would be: "Hey man, just remember that there are real people who are hurt when you harass them with that kind of language." Simply cultivating a little empathy in such tweeters reduced their racist tweets almost to zero for weeks afterwards.

Another way of addressing the low reputational cost of bad behavior online is to engineer in some form of social punishment. One game company, League of Legends, did that by introducing a Tribunal feature, in which negative play is punished by other players. The company reported that 280,000 players were "reformed" in one year, meaning that after being punished by the Tribunal they changed their behavior and went on to achieve a positive standing in the community. Developers could also build in social rewards for good behavior, encouraging more cooperative elements that help build relationships.

Researchers are already starting to learn how to predict when an exchange is about to turn bad: the moment at which it could benefit from pre-emptive intervention. "You might think that there is a minority of sociopaths online, which we call trolls, who are doing all this harm," says Cristian Danescu-Niculescu-Mizil, at Cornell University's Department of Information Science. "What we actually find in our work is that ordinary people, just like you and me, can engage in such antisocial behavior. For a specific period of time, you can actually become a troll. And that's surprising."

It's also alarming. I mentally flick back through my own recent tweets, hoping I haven't veered into bullying in some awkward attempt to appear funny or cool to my online followers. After all, it can be very tempting to be abusive to someone far away, who you don't know, if you think it will impress your social group.

Danescu-Niculescu-Mizil has been investigating the comments sections below online articles. He identifies two main triggers for trolling: the context of the exchange (how other users are behaving) and your mood. "If you're having a bad day, or if it happens to be Monday, for example, you're much more likely to troll in the same situation," he says. "You're nicer on a Saturday morning."

After collecting data, including from people who had engaged in trolling behavior in the past, Danescu-Niculescu-Mizil built an algorithm that predicts with 80 percent accuracy when someone is about to become abusive online.
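The article doesn't describe the model's internals, so the following is only a schematic: a logistic-regression-style score over the two trigger signals named above, mood and the context of the exchange. The feature names and weights are invented for illustration and are not the Cornell model's actual parameters.

```python
import math

# Schematic only: feature names and weights below are invented; the
# real Cornell model's features and parameters are not published here.

WEIGHTS = {
    "bias": -2.0,
    "bad_mood": 1.3,              # e.g. user had a bad day, or it's Monday
    "monday_or_late_night": 0.6,
    "prior_comments_toxic": 1.8,  # thread context: others already abusive
}

def p_will_troll(features: dict) -> float:
    """Logistic model: probability the next comment turns abusive."""
    z = WEIGHTS["bias"] + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

p = p_will_troll({"bad_mood": 1, "monday_or_late_night": 1,
                  "prior_comments_toxic": 1})
if p > 0.5:
    # a high score could trigger the cooling-off delay described next
    print(f"risk {p:.2f}: delay posting, ask the user to re-read")
```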
Such a prediction provides an opportunity to intervene: for example, by introducing a delay in how fast would-be trolls can post their response. If people have to think twice before they write something, that improves the context of the exchange for everyone: you're less likely to witness people misbehaving, and so less likely to misbehave yourself.

The good news is that, in spite of the horrible behavior many of us have experienced online, the majority of interactions are nice and cooperative. Justified moral outrage is usefully employed in challenging hateful tweets. A recent British study looking at anti-Semitism on Twitter found that posts challenging anti-Semitic tweets are shared far more widely than the anti-Semitic tweets themselves. Most hateful posts were ignored or only shared within a small echo chamber of similar accounts. Perhaps we're already starting to do the work of the bots ourselves.

As Danescu-Niculescu-Mizil points out, we've had thousands of years to hone our person-to-person interactions, but only 20 years of social media. "Offline, we have all these cues, from facial expressions to body language to pitch, whereas online we discuss things only through text. I think we shouldn't be surprised that we're having so much difficulty in finding the right way to discuss and cooperate online."

As our online behavior develops, we may well introduce subtle signals, digital equivalents of facial cues, to help smooth online discussions. In the meantime, the advice for dealing with online abuse is to stay calm: it's not your fault. Don't retaliate, but block and ignore bullies or, if you feel up to it, tell them to stop. Talk to family or friends about what's happening and ask them to help you. Take screenshots and report online harassment to the social media service where it's happening, and if it includes physical threats, report it to the police.

If social media as we know it is going to survive, the companies running these platforms are going to have to keep steering their algorithms, perhaps informed by behavioral science, to encourage cooperation rather than division, positive online experiences rather than abuse. As users, we too may well learn to adapt to this new communication environment, so that civil and productive interaction remains the norm online as it is offline.

"I'm optimistic," Danescu-Niculescu-Mizil says. "This is just a different game and we have to evolve."

Advice and support on dealing with online abuse is available from a range of organizations, such as HeartMob, Stop Online Abuse, ConnectSafely, and the social media services themselves, for example Twitter, Facebook and Instagram.

This article first appeared on Mosaic. Wellcome, the publisher of Mosaic, has shares in Facebook, Alphabet and other social media companies as part of its investment portfolio.