Last week in class, we discussed internet trolling at length. Essentially, trolls are people who engage negatively with others online purely for their own amusement. I found similarities between how Samuel Woolley talks about bots and how we talked about trolls in class.
The first thing that made me think of this was the story of a bot sending a death threat on Twitter. A Twitter user made a bot that took chunks of his tweets and rearranged them into new tweets. One of these new tweets sounded a lot like a death threat, and the Dutch police came knocking. Trolling on the more malicious end of the spectrum often involves sending death threats to a target, as in the Gamergate “shitshow,” where women received violent messages from online trolls.
The article’s discussion of bots also reminded me of trolls because of the difficulty of moderating both. In each case, the issue of free speech comes into play: limiting bots or trolls could curtail free speech as a consequence. As Woolley says, “rumination on bots should also work to avoid policies or perspectives that simply blacklist all bots.”
The problem with comparing bots to trolls is that bots lack the sentience that trolls possess. A defining trait of trolls is that they get a kick out of what they do to other people, and that satisfaction is their motivation for continuing to troll. Bots, on the other hand, don’t have feelings, and therefore can’t derive satisfaction from what they’re doing.
Can a bot be an evil troll if it isn’t even aware of what it’s doing? Is sentience a critical part of being a bully? These are questions that further investigation, regulation, and revision of bots will hopefully answer.