Chapter 1 Propagandistic Masquerade
Breitbart Story 15, Example 1
No collusion, no impact. Just a bunch of Russians dressed up like liberal trolls hacking unsecured Democrat servers.
Dressing up with a mask conceals one’s identity. And a mask can create chaos by instilling uncertainty about who does what on whose behalf, as illustrated in the example above. Online masks are constructs that are mediated by, or shaped through, technology. Thus, an internet user who opts to wear an online mask can choose how to shape it—for example, by choosing to be a troll or not, or to be a specific kind of troll. In online spaces, masks are worn by using elements that constitute one’s online presence—mostly through text. A mask in online spaces can also be represented through a visual element or a GIF (Fichman & Dainas, 2019). In fact, masks can be signified by other online identity markers, and as such, they can reflect the self-determined option of identity disclosure or concealment—for example, by registering online with an actual legal name or opting for an anonymous online identity. Thus, masks are mediated by the online spaces in which they thrive or come to life. The life of a given mask is partially defined through the media infrastructures in which that mask operates. In some news portals, masks are constructed through anonymous posting, while in others, registration is required to sustain them.
Online identity is also mediated by text-based linguistic expressions—the argument strategies one uses and the sides one takes on an issue. Identity can be projected through various text-based faces and through authentic or nonauthentic narratives, such as those projected by the Internet Research Agency’s instructions established in a propaganda playbook. While masks enable the performance of Russian trolling, the phenomenon itself can figuratively be treated as a “worn” mask. Furthermore, the user narratives behind online performance scripts live their own lives. For instance, reactions surrounding Russian trolling messages mirror, amplify, and make Russian trolling come to life. While Russian trolls are operating online, they do not necessarily want to be called out, since calling them out would expose the fact that Russian trolls are lurking beneath masks. Yet at the same time, the act of calling out Russian trolls and “catching” them exposes an emblematic mask of its own. The wearing of a mask, reflected through text-based practices, exemplifies an orchestrated performativity.
This chapter delves into the construction of Russian trolling online as a form of performativity by analyzing the sociotechnical contexts in which Russian trolling takes place. This process of performativity, when captured as a still shot, illustrates disinformation that projects doubt and leads to chaos. This uncertainty is evident in representations of Russian trolls—and these representations, in turn, have been theorized and examined in several ways. This chapter is theoretically grounded in Goffman’s (1959) analysis of the presentation of self. It also uses Brenda Danet’s (1998) notion of the online mask as an identity marker.
To situate the manipulation of the propagandistic mask, this book’s first approach analyzes automation by counting comment duplicates; the second assesses anonymity by comparing private and public posts. Such comparison yields the conclusion that private posting is a standard means of hiding and repurposing a mask. The third method analyzes individual user activity as a proxy for one’s online masquerade: individual users can choose to perform Russian trolling by calling themselves Russian trolls, while others can elect to call out Russian trolls. This chapter is based on news portal comments across media platforms in which users justify Russian trolling, make sense of it, and call it out. Yet such unmasking does not authenticate one’s identity. Even though the focus of this book is not on identity verification, online masks may instill uncertainty and chaos.
Text as a Mask
Prior to the emergence of the Russian trolling phenomenon, online trolls projected themselves through a set of imaginary masks. Trolls in different spaces have materialized in various guises. If one visits Seattle, Washington, and walks down Troll Avenue, the street eventually leads to a bridge. A colossal statue of a lurking troll has been planted beneath it. Clearly, the statue represents a grumpy, unpleasant creature that lives under the bridge in enigmatic secrecy. This grotesque caricature of a troll has been reified in online spaces through text-based means. The image has, in fact, accumulated a plethora of textual descriptors that appear vividly in online news story comments. Russian troll masks are unique, even if, as argued throughout this book, they are, by definition, invisible. These invisible masks, paradoxically, obscure the visibility of Russian trolls and invite us to ask: What types of masks are worn by Russian trolls?
The text as a mask provides a critical lens through which self-presentation online is expressed. The mask performed through text allows for simulation, or what scholars like Danet (1998) called textual masquerade. In contrast to a masquerade that allows for further exploration of one’s identity, as argued by Danet (1998), masks can also serve to persuade, or at least to disseminate information with the goal of propagating it as a new fact.
Masks can be viewed as face management. The concept of face is used here as defined by Goffman (1967)—as an image of self delineated in terms of approved social attributes: “Face is a positive social value a person effectively claims for himself by the line others assume he has taken during a particular contact” (p. 5). Goffman argued that face maintenance is a condition of interaction, not an objective. People engage in “facework,” whereby, for example, face-saving strategies allow for neutralizing a given threat (Goffman, 1967). This chapter describes how face saving can be used to sustain the justification of Russian trolling over time.
This chapter showcases masks as creating an alternative reality through elements traceable through sociotechnical materiality, that is, the locations, timing, and types of actors involved. These three approaches—locations, timing, and actors—and numerous instances show how chaos is created by the emergence of alternative realities mediated by what Starbird et al. (2019) called crisis actors. In this chapter the mechanics of the wearing of masks are described as frames used to spread and justify Russian trolling—e.g., by framing it as a hoax (it didn’t happen) or a false flag (i.e., it happened, but not as the media portrayed it).
Paradoxes of a Mask
Why is it hard to uncover a mask? The mask entails obscuring, or covering, the face, whereby the face denotes the self and online identity. In the performative process, the mask can be put on, changed, or taken off. Such gestures are enacted in online spaces through sociotechnical means—the online technological affordances, such as modalities of the text, online profile, and self-presentation—for example, by creating an online account or making posts anonymous. In either case, there is always the assumption that beyond the mask, there is a real or different or authentic self. Several paradoxes are involved when approaching Russian trolling as performative self-masking. Thus, this specific discursive phenomenon is discussed in this sequence: the paradox of invisibility, the paradox of tricks of invisibility, and the modus operandi of the propagandist mask.
Invisibility
Just as faces can be hidden behind masks, online influence can be invisible—and paradoxically, invisibility guarantees its effectiveness. The sociotechnical online infrastructure renders online trolls invisible, as they operate in the back end of the online environment they inhabit—be it through algorithms or programmed spaces. Additionally, the availability to online developers of tools such as application programming interfaces (APIs), along with machine learning or synthetic data-driven artificial intelligence, makes some online spaces more accessible for automation than others (see discussion in Zelenkauskaite & Bucy, 2016). As discussed earlier, these online spaces can be exploited to enable invisible acting, or infiltration into online communities.
Invisible forces in the current media landscape can be unleashed not only by human users but also by automated bots that run according to algorithmic programs. As nonhuman actors, bots are invisible while operating in the background system through programming commands that run the surface content. The system is algorithmic because information can be driven by bots or algorithms that push, promote, or circulate content. Russian trolls can use to their advantage algorithmic tools that involve automated responses or frequent posting. The encoding and circulation of algorithmic information further enable Russian trolls to hide, and these can be customized to generate influence.
From the perspective of information structure, online spaces are invisible because of the inaccessible identities of actors that populate and promote them. For example, it is difficult to pinpoint who is posting and to determine whether that user is an actual person or a bot, even if tools have been developed for some platforms, such as Botometer for Twitter (Botometer, n.d.). Moreover, bot detection is currently unavailable to users accessing online news comments. Because general public users lack both knowledge about bot technologies and the ability to recognize disinformation and its actors, hiding in online spaces is guaranteed. Furthermore, leading scholars of the digital divide might argue that such invisibility results not merely from a lack of technology access but from a lack of the requisite skills and online tools for understanding how online influence takes place (Van Dijk, 2013).
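To make the idea of bot detection concrete, the sketch below shows a minimal, purely illustrative heuristic that scores an account’s bot-likeness from posting metadata. This is not how Botometer actually works (its model is a trained classifier over many account features); the `Account` fields and every threshold here are assumptions chosen only to illustrate the kind of behavioral signal such tools rely on.

```python
from dataclasses import dataclass


@dataclass
class Account:
    """Hypothetical posting metadata for one commenter."""
    username: str
    posts_per_day: float
    median_seconds_between_posts: float
    duplicate_post_ratio: float  # share of posts duplicating earlier ones


def bot_likeness(acct: Account) -> float:
    """Return a score in [0, 1]; higher means more bot-like.

    Thresholds are illustrative assumptions, not validated cutoffs.
    """
    score = 0.0
    if acct.posts_per_day > 100:  # sustained, inhumanly high volume
        score += 0.4
    if acct.median_seconds_between_posts < 10:  # near-instant replies
        score += 0.3
    if acct.duplicate_post_ratio > 0.5:  # mostly copy-pasted content
        score += 0.3
    return round(score, 2)


fast = Account("fastposter", 500, 3, 0.8)
slow = Account("reader", 5, 3600, 0.0)
print(bot_likeness(fast), bot_likeness(slow))  # → 1.0 0.0
```

A real detector would weight many more features (account age, network position, language statistics) and learn the weights from labeled data; the point here is only that bot-likeness is inferred from behavioral traces, not from identity.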
To fully understand how information systems work, general readers in online spaces need to move beyond individual posting levels to access the contexts in which messages are posted. Such contexts require a bird’s-eye view of big data on all posting flows, so that the circulation of information can be observed, patterns of influence discerned, and the techniques that enable their proliferation uncovered; such a view is currently lacking. Technologically speaking, online users typically do not have access to or the capability of viewing patterns of information circulation, as critical accounts of remediation and big data uphold (Zelenkauskaite, 2017). However, it is worth mentioning that news portals have implemented access to users’ own activity, especially when they register.
Similar arguments hold for fighting disinformation, where an understanding of the digital tools used becomes instrumental. While lists of tools for bot and spam detection, credibility scoring, disinformation tracking, education and training, verification, whitelisting, and establishing codes and standards are compiled publicly (e.g., RAND’s tool list is thorough and comprehensive) (“RAND,” n.d.), their use in everyday life is still limited, given that these lists appear post hoc and are not implemented as preemptive measures.
Given that users in online spaces typically cannot focus on how systems process the information surrounding them, invisibility becomes a key element that facilitates Russian trolling. Such invisibility can be enabled by the masks that mediated environments create. Thus, technological affordances of system-based seamlessness can be exploited to hide the influence of Russian trolling. Yet it remains crucial to uncover the mechanisms through which influence can take place and to enable users to recognize them.
Masks at Play
Trolling as a masquerade has been contextualized by Bakardjieva’s (2008) analysis of news comments within a larger national popular culture, which describes online news comments as carnivalesque. As carnivalesque discourse, comments can be grotesque, humorous, or loud. Thus, Bakardjieva (2008) emphasized the loose and informal aspects of online commenting. How can masks be interpreted through text? How can masks instill chaos? And how are these informal practices translatable into contexts where not all actors involved are genuine?
Since the carnivalesque nature of online comments allows discursive participants to wear masks, such masks depend on the wearer’s targeted audience. Moreover, given that any user can participate with any type of mask, and with any form of commenting, carnivalesque masks can also include ideological influence and disinformation. In such instances, trolling is rendered visible by approaching discourse as a mask—especially when Russian trolling is defended or called out. These discursive practices expose Russian trolling’s subverted masks. The purpose here is to outline how subversion through online news comments takes place and which tactics are used to persuade. Specific masks that are examined here show how Russian trolling appears in comment spaces.
Propagandistic trolls are characterized by the intention to subvert authentic participation in the Habermasian public sphere, where everyone is invited to discuss news story issues. This subversion can take place in several ways. To be effective, ideological or propagandistic trolls need to blend into a conversation that is already taking place by presenting topics that would be approved by the group they join. Rather than employing a classical trolling strategy of aimlessly opposing ideas that have already been presented by group members, such trolls infiltrate conversations by amplifying, introducing new interpretations, or contesting a given issue at a specific moment (see, e.g., Herring et al., 2002). Such discursive tactics allow for a propagandistic mask to be perceived as one of the voices that constitutes online public debate. Thus, because online incivility attracts user scrutiny, the Russian troll, who intends to engage in disinformation, tries to hide in the crowd of other users to become one of the alternative or amplifying voices.
In short, such Russian trolls, to be effective, deliberately aim for online invisibility through discursive assimilation. Such deliberate efforts are sustained through the creation of masks and their continuous use. The true (authentic) self is hidden behind a customized mask, as is the case, for example, with the Internet Research Agency’s employees who pretend to be someone else when posting online, as further detailed in Chapter 4. The mask is based on a predefined ideological framework that is created for a specific case of influence. And the invisibility of trolls can be directly linked to the presence of a mask. Rather than the usual physical mask that may come to mind, the mask, in these online instances, is a nebulous abstraction that is based on text, video, and internet links—and most importantly, an abstraction that projects a specific ideology used to craft messages for disinformation.
Within the parameters of this discussion, Russian troll masks need to be distinguished from the masks that other kinds of internet trolls wear in the online social sphere. Typically, online masks have been associated with internet trolls to explain uncivil behaviors. However, the behavior of an internet troll as disruptor is geared toward visibility because the objective is to monopolize attention or to stand out from the user crowd. Thus, internet trolls are represented as users who wear masks or who crave attention. While they may be masked by their social media name or handle, they still want their online comments to attract notice.
Although their masks assume numerous forms, they are invariably represented as grotesque. Yet Russian trolls can be distinguished from other internet trolls because they take advantage of user crowds to conceal themselves while advancing disinformation agendas. Furthermore, because Russian trolls endorse such agendas, they prefer denial and obfuscation to adopting a series of standardized troll masks to engage in uncivil behaviors in online spaces. For instance, they will deny the existence of Russian trolling in order to defend its operations while obscuring its effects. Thus, numerous questions arise: What types of masks can hide online identities or be replaced? How are masks worn? Who is behind the masks? How can Russian troll voices be amplified? And, why are masks convenient for engaging in information warfare? However, rather than confirm the presence of Russian trolls behind masks, the goal is to authenticate their discursive practices and their effects—in other words, to explain how troll masks can be created if they do not originate in authentic user behaviors.
Location as a Mask
Is a troll a mask, or is a troll the one who wears the mask? Or, when anyone can wear a mask, who is behind the troll mask? These questions permeate the problematic issue of Russian trolling in the current media landscape. Geographical location online can become a mask, since it is marked through multiple sociotechnical affordances: automatic ones, such as the Internet Protocol (IP) address of the device where the content is produced, and user-created ones, such as geotagging, whereby users can tag their own location or the location of the post, or mention a location in the text. Location has been analyzed prolifically to draw insights on war and conflict by mapping physical and online spaces (see, e.g., Siapera et al., 2015).
Analyzing tweets associated with Internet Research Agency–based accounts confirms, so far, that users adopt a mask to present themselves as authentic (Xia et al., 2019). Therefore, the question is not about the existence of Russian trolling as self-masking performance. Rather, we might ask how that masquerade is performed. In her book on cyberwar, Jamieson (2018) observed that Russian trolls wore masks by harnessing the power of impersonation: “Because the geographic location of the communicator is not evident to those viewing posts and tweets, in a single sitting, a troll in St. Petersburg could masquerade as a housewife in Harrisburg, Pennsylvania; a black nationalist in Atlanta, Georgia; and a disaffected Democrat in Ripon, Wisconsin. Accordingly, @TEN_GOP was not in Tennessee, as its inhabitants alleged, but continents away. Likewise, there were no longhorns named Bevo or Boris anywhere near the Heart of Texas account’s authors” (p. 12).
This excerpt specifies how trolls can masquerade by impersonating various prototypes embedded in the American public consciousness. The key to understanding such impersonations is the intentionality of the acting—its specific goal to influence or, at the very least, to “muddy the waters” or create chaos. Jamieson’s (2018) examples reveal location-based, deceptive self-representations, or masks designed to generate a perception of authentic participation in various online spaces, such as social media and hypertext.
Manipulating one’s IP address is a practice that has been attributed to the generation of fraudulent accounts online, which constituted around 3% of Twitter and 1.5% of Facebook accounts (Thomas et al., 2013). However, in news portal contexts, IP-based location evidence of foreign activity in Lithuania’s online spaces has also been exposed.
User IP address concealment is treated in IP analysis as the blueprint mask in online spaces. Typically, an IP address is automatically attached to a message (or any content) on the basis of the parameters of the device through which the message is transmitted. Masking these parameters requires intentional alteration of the IP address. Analysis of news portal content in the Russian-language version of Delfi.lt in previous research has shown how prevalent IP concealment is online (Zelenkauskaite & Balduccini, 2017). To trace such intentionality about altering locations, Zelenkauskaite and Balduccini (2017) analyzed 1,304 stories published between February 15 and March 15, 2015, on the Delfi.lt news portal, together with all related comments. This sample consisted of 4,940 users who contributed 34,038 comments to the portal, an average of 6.9 comments per user. The sample contained 14 content categories, as defined by the news portal. Geolocation analysis of the data showed the following distribution of countries from which comments originated: the highest percentage of comments was traced back to Lithuania (53.3%), followed by Russia (16.8%) and then by cases in which locations could not be identified (7.9%). These statistics further confirm the “mask” enacted through IP concealment.
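The tallying step behind percentage breakdowns of this kind can be sketched in a few lines. The code below is a hypothetical reconstruction, not the study’s actual pipeline: the function name, the toy sample (scaled so its shares happen to match the reported Lithuania, Russia, and unidentified figures), and the handling of unresolved IPs as `None` are all assumptions made for illustration.

```python
from collections import Counter


def country_shares(countries):
    """Percentage of comments per geolocated country.

    Each element is a country name, or None when the IP could
    not be resolved to a location.
    """
    total = len(countries)
    counts = Counter("unidentified" if c is None else c for c in countries)
    return {c: round(100 * n / total, 1) for c, n in counts.most_common()}


# Toy sample of 1,000 labels standing in for the geolocated comments
sample = ["Lithuania"] * 533 + ["Other"] * 220 + ["Russia"] * 168 + [None] * 79
print(country_shares(sample))
# → {'Lithuania': 53.3, 'Other': 22.0, 'Russia': 16.8, 'unidentified': 7.9}
```

In practice the hard part is upstream of this tally: mapping each comment’s IP address to a country requires a geolocation database, and, as the chapter argues, deliberately altered IPs make those labels themselves unreliable.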
It was also discovered that a location’s specificity was yet another indicator of Russian trolling. When scholars analyzed Twitter accounts associated with the IRA’s activities, they identified markers that revealed differences between typical Twitter users and IRA-orchestrated tweets (Zannettou et al., 2019). One difference was that IRA users included generic locations in their self-descriptions, while other types of users included specific ones (Zannettou et al., 2019). Other attributes of atypical posting in the news portals involved users who post quickly—that is, users who simulate the automated, bot-like behaviors known from social media (Zelenkauskaite & Balduccini, 2017). Indeed, a noticeable proportion of users post very quickly when a news story is released and respond to multiple stories on similar topics synchronously. These behavioral traits indicate an orchestrated effort behind the accounts. Thus, accounts can function as masks.
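One way to operationalize “posting very quickly” is to flag comments whose delay after a story’s publication falls under some threshold. The sketch below is an assumption-laden illustration, not the method used in the cited study: the 30-second cutoff, the function name, and the example timestamps are all invented for demonstration.

```python
from datetime import datetime, timedelta


def rapid_reaction_flags(story_published, comment_times, threshold_seconds=30):
    """Flag comments posted implausibly soon after a story goes live.

    Returns one boolean per comment; True means the delay was under
    the (illustrative) threshold and thus looks bot-like.
    """
    return [
        (t - story_published).total_seconds() < threshold_seconds
        for t in comment_times
    ]


story = datetime(2015, 2, 20, 9, 0, 0)
comments = [story + timedelta(seconds=s) for s in (5, 12, 300, 4000)]
print(rapid_reaction_flags(story, comments))  # → [True, True, False, False]
```

A fuller analysis would also look across stories, flagging accounts that react near-instantly to many similar stories at once, since synchrony across threads is a stronger orchestration signal than speed on a single thread.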
Subversiveness of a Mask
A specific type of Russian trolling mask relates to propagandistic manipulation, according to which ill intentions must be successfully concealed, or rendered invisible to message recipients. At the heart of propagandistic manipulation is continuous reality-testing (Lasswell, 1950). Choukas (1965) wrote: “It is the chief characteristic of propaganda to be elusive; characteristically, propagandists are secretive in their work, avoid the limelight as much as possible and seek the shelter of the shadow” (p. 10). However, both the propagandist and the Russian troll aim at influencing internet users through the messages that they promote, as Choukas (1965) has stated: “The function of propaganda agency is not to inform but to persuade” (p. 11). In other words, there is intentionality behind the persuasion process. Or, the persuader intends to craft messages that are “deliberately designed to influence opinions or actions of other individuals or groups with reference to predetermined ends” (Choukas, 1965, p. 13). Other definitions of propaganda as influence that were prevalent during World War II state that “the control of opinion” was exercised through “significant symbols, or, to speak more concretely and less accurately, by stories, rumors, reports, pictures and other forms of social communication” (Choukas, 1965, p. 14).
Yet another type of Russian trolling mask involves self-positioning as a disruptive actor who engages in a communicative process based on a counteraction. This counteraction in online commenting is observable when expectations for communicative norms are not fulfilled. Typically, people engage in communicative strategies to achieve a successful communication exchange. One of these strategies is impression management.
When describing impression management, Goffman (1959) emphasized the need to prevent what he called the occurrence of incidents. An incident violates impression management in a communicative exchange by introducing something that does not conform to a given expectation. When incidents do occur, handling them to diminish the “damage” falls to face-saving strategies. Face saving indicates the need for redress in situations that are potentially uncomfortable—for example, when they occur unexpectedly, as with incidents. In such scenarios, the ideal interaction is one in which interlocutors collaborate to foster a civil dialogue. Face-saving strategies minimize the possibilities for conflict or confrontation. Thus, it can be argued that in democratic contexts, confrontation is expected, as each party presents and defends ideas in ways that are both civil and face saving.
Goffman (1959) further postulated that both participants and nonparticipants are included in interactions for the purpose of avoiding embarrassing incidents that originate in miscommunication. To avoid such incidents, three sets of strategies are employed: “a) the defensive measures that performers use to save their own shows; b) the protective measures that an audience, or other non-performers, use to assist performers to save their show; and c) the measures the performers must take to enable the audience, and other non-performers, to employ protective measures on the performers’ behalf” (Goffman, 1959, p. 212). These behavioral strategies are applicable to Russian trolling in online news comments. The presence of these strategies indicates the artificiality involved in the creation of Russian trolling masks. After all, masks represent constructed personas, or what Goffman (1959) described as dramaturgical characters that serve to disseminate disinformation.
Sabotage: Calling Out Russian Trolls
Examples of calling out Russian trolls can be considered a sabotage of defensive measures: face saving is subverted by audiences who do not comply with the script promoted by Russian trolling. Goffman (1959) identified performative face saving as a defensive measure enabling performers to save their own show. Defensive attributes and practices include dramaturgical loyalty (teammates must act as if they have accepted certain moral obligations), dramaturgical discipline (teammates must stay on top of the performance), and dramaturgical circumspection (constant awareness of the performance).
Since these codified attributes can also apply to Russian trolling, the performance of Russian trolls can be sabotaged when an audience calls them out, as evidenced by multiple forms of such sabotage exemplified in the subsequent section. In other words, the sabotage can result from a violation of this tacit code. Violation occurs when expectations of audience cooperation remain unfulfilled. More specifically, an audience sabotages a Russian troll’s masquerade by acting in ways that are contrary to typical audience behavior expectations in specific online spaces. According to the ideal scenarios for Russian trolls’ success, such behavioral expectations include unconditional acceptance of Russian troll participation in online discussions. Such user acceptance can be demonstrated by playing along with trolls or cooperating with them by allowing them to propagate information.
Masks also represent tactics of self-presentation that are verified by audiences, who are presented with two choices: to accept masks at face value by treating Russian trolls as legitimate participants of a given media ecosystem, or to call them out by sabotaging their performance and exposing them. An exposé of Russian trolling illustrates performance sabotage. Sabotage can counteract disinformation through user activities that do not conform to standardized communicative rituals—thus, breaking with the preestablished norms for avoiding embarrassing incidents. Consequently, disinformation can be nullified through the knowledge that a propagandistic act is taking place. It is when the act’s occurrence has become a known fact that one can choose to expose or ignore it.
Comments calling out Russian trolls, found across the analyzed platforms, functioned as exposés. Many of these exposé frames allude to Russian trolls being paid by the government to comment online. Russian trolls, in these instances, were treated as mercenaries paid by the Russian government—specifically, the Internet Research Agency, as exemplified below.
Breitbart Story 15, Example 2
I believe you are one of the Russian Internet Research trolls not yet pick-up. Nemesis will catch up with you soon. Take cover Dodger-even if there are 300 of you!
In some cases, Russian trolls were called out on the basis of linguistic features that provoked the suspicion that they were nonnative speakers of English:
Breitbart Story 15, Example 3
Hey Vlad, do yourself a favor and tell your handler you need some more English lessons before you can do a decent job of trolling. And lay off that WODKA!
Some exposé frames used a sarcastic tone:
Breitbart Story 15, Example 4
Give your handlers their money back, you’re not good enough to troll.
Yet others called out Russian trolls by demanding their permanent departure from the Breitbart comments section.
Breitbart Story 15, Example 5
For any trolls checking in here, it’s time to pull up your big boy pants and find a new hobby.
Others called them out through specific reference to typical Russian troll attributes, such as being paid and their intent to influence.
Breitbart Story 15, Example 6
[S]aid the troll being paid to influence the public narrative.
Similarly, another user posted:
Breitbart Story 6, Example 1
It is pretty clear the Russians trained you well. You ignore patriots who are trying to wake you up and you believe trolls and bots who tell you what you want to hear. And trump reinforces it. That is the problem. YOU ARE PART OF THAT PROBLEM.
Some comments included a variation of the line “I am not a Russian troll, like you are,” generating opposition through distancing.
Breitbart Story 4, Example 1
Little bit. but I am not fluent in Russian like you.!!!
Another user insinuated that Russian trolls are trained by an “apparatus.”
Breitbart Story 4, Example 2
I am not trained by the Russian troll apparatus.
Similarly, claims that Russian trolls not only exist but indeed exist here to sow discord are expressed by this user:
Breitbart Story 4, Example 3
Like, no duh! 90% of the trolls here alone are just here to sow discord -- Russian style.
Others called out the opposition by stating that there are also “Soros trolls.”
Breitbart Story 7, Example 1
Kinda ironic considering you can’t spell “you’re”. But keep trying Soros-troll!
Some referred specifically to foreign interference.
Breitbart Story 12, Example 1
Comment: Don’t bet against Donald Trump.
Response: Why do you foreign troll care?
In this exchange, the second comment calls out the user who posted the pro-Trump comment, referring to them as a “foreign troll.” Similarly, other users pointed out that on Breitbart there are Russian trolls and bots who act like regular citizens:
Breitbart Story 15, Example 7
Russian trolls and bots prefer Breitbart.
Similarly, another user perceived that Breitbart comments contain many posts by Russian trolls:
Breitbart Story 15, Example 8
Jesus, seems like there is a lot of russian trolls here now. By the way, how is the weather in St. Petersburg?
Some users stated that Russian trolls exist and were “here” on Breitbart.
Breitbart Story 15, Example 9
Yep, Breitbart is indeed a Russian asset. I come here to keep up on the latest Russian troll talking points. I am deadly serious about this. You have undoubtedly noticed how so many of them keep repeating the same tired stuff over and over. Variations on a theme—but it always comes down to Hillary, Obama, Soros and the DNC—even when Trump is crashing and burning.
The statement “Russian trolls are here” is repeated in the following comment.
Breitbart Story 15, Example 10
The Russian trolls are busy today trying to discredit the indictment. They get the ignorant and stupid so lathered up all they can say is STFU or call someone a commie.
Similarly, some users refer to Russian trolls by calling them “comrades” and referring to “rubles” in reference to Russian trolls being paid by their government:
Breitbart Story 15, Example 11
Only somebody with a name and avatar like yours could be a troll. Sorry comrade, hope you earn a few rubles posting on here. Tough life
In the New York Times sample, a user flagged a comment as a Russian troll post based on its content.
New York Times Story 4, Example 1
Ann
California Nov. 7
@trump basher-Me thinks this reads a lot like a Russian troll post. Sigh.;->
Page 52 →Some users were hesitant to publicize Russian troll exposés. Others argued that certain posts “might have been written” by Russian trolls, as the following comment exemplifies.
New York Times Story 5, Example 1
W.A. Spitzer
Faywood, NM September 20, 2018
Your comment reminds me that Russian trolls frequently post in NYT opinion section.
One user sarcastically called out another user’s posts as those authored by Russian trolls.
New York Times Story 6, Example 1
Bryce Ross
Bozeman Montana Nov. 15
How’s the winter in Russia this year? Pretty nice here in the US
Some users offered media literacy lessons on how trolling works.
Breitbart Story 8, Example 1
So many people don’t realize the Russians troll both sides. I doubt they even cared who won. The whole point is to divide and sow discord and ultimately diminish the American people’s faith in our own nation. Everyone who joins in with the us vs. them mentality is being played. Our own politicians do it too.
Yet others explicitly praised the New York Times for exposing Russian trolling. For example, the following post received 185 “Recommend” hits.
New York Times, Story 5, Example 2
S Norris
London September 20, 2018
This is indeed a remarkable article, detailed and containing very specific information I did not realise was actually out in the public domain. Cudos for the research. However, only in the last part does it address the shift in Republicans favourable views of Russia. Has there been any research into how or how much the russians have succeeded in influencing republican Page 53 →representatives in congress and the senate? It also does not address the NRA in any depth. Are these stories for a future date, I hope?
Some users provided not only media literacy frameworks but also broader explanations for why conservatives find Russia compelling.
New York Times Story 5, Example 3
Kjensen
Burley Idaho September 20, 2018
In the eyes of Americans conservatives, and this includes Evangelical Christians, Putin’s Russia is seen as the last bastion of white Christian power. Putin has been cultivating this base for some time. Anti-gay laws, and promotion of the Russian Orthodox Church by Putin’s government, has endeared him to American religious conservatives. Franklin Graham has had nothing but effusive praise for Putin. And he is not alone. I would suggest that you read Malcolm Nance’s the Plot to Destroy Democracy, or David Korn’s book Russian Roulette. Additionally, this information is not new, as Chris Hedges exposed the Christian right’s fascist leanings and their hero worship of Putin in his book of American Fascists published in 2007. I don’t know whether we’re seeing the culmination of these efforts, but I certainly hope that enough of this information has been exposed to cause us to take a long skeptical look at American conservatives and where their movement come to.
Some users called out Russian trolling by commenting on the behavioral traits and language of Russian trolls.
New York Times Story 3, Example 1
Larry Dipple
New Hampshire March 9, 2018
Page 54 →of Turkey. Interesting because Turkey doesn’t buy into that principle. Over 110,000 have been locked up after the coup attempt with only about 41,000 charged. That leaves over 60,000 detained without charges. So much for innocent until proven guilty.
These examples illustrate how Russian trolls embedded in news portal comment spaces have been caught and called out wearing propagandistic masks. Yet even when exposés occur in online spaces, they do not necessarily sabotage the performances of Russian trolls. Instead, such exposés suggest that, because Russian trolls are prevalent in a given online forum, information posted there is generally unreliable. Such projections of impending online chaos can suppress democratic debate by inciting users to leave spaces permanently, to request that websites shut down user commenting due to the challenges of comment moderation, or to merely ridicule the seriousness of Russian trolling.
It is possible to interpret such sabotage through the lens of Goffman’s theory of performativity. In Goffman’s (1959) terms, however, the disloyalty of the online community, exemplified here through callouts of Russian trolls, risks backfiring. In short, callouts can strengthen the opposition’s in-group solidarity. Even if some audience members were to expose Russian trolls, those who endorse other perspectives (e.g., the conviction that Russian trolling is unproblematic) could form an opposing coalition. The opponents could then amplify the disinformation originating from Russian trolls and realize what Goffman called the “dramaturgical discipline” ideal. From the perspective of teammates, dramaturgical discipline involves the cooperation that enables them to follow along with the scripted performance. As a concept, then, dramaturgical discipline permits Russian trolls to play the role of trolls consistently. Sustaining such role-playing over time is possible because they can always attract complicit supporters, despite the presence of online community members who expose them and thus sabotage their masquerades.
Yet for Russian trolls to be successful, they need to find the right topics to attract followers. Thus, dramaturgical circumspection is required—a quality defined as constant (self-)awareness or prudence in communicative acts. Goffman (1959) stated that “the prudence is needed while staging the event, exploiting contingencies that are presented with them and opportunities that remain” (p. 218). He also specified requirements for the successful enactment of prudence, such as the selection of loyal and disciplined team members and awareness of how much loyalty can be expected from the team. In other words, Russian trolling needs support from loyal followers,Page 55 → or collaborators, who succumb to the lure of disinformation messaging and treat it as an authentic discursive form. These loyal followers can also be mere believers in disinformation messaging overall, if parts of the messaging appeal to their values. Consequently, they become enablers of Russian trolling’s success. As Jamieson (2018) observed, such followers can also be paid operatives who create a critical mass, or members of the general public who enable the proliferation of ideas held by Russian trolls.
To target enablers, another tactic can be utilized for the performative masking of Russian trolling, one involving what Goffman (1959) called the circumspect performer. More specifically, this tactic requires “selecting the kinds of audiences that will give a minimum of trouble in terms of the show the performer wants to put on” (Goffman, 1959, p. 219). Thus, theoretically speaking, before casting a wide messaging net, Russian trolls should test out online spaces where contestation of ideas is minimal. Then, they can deploy tactics for persuading audiences to take sides on issues that have already been endorsed for online debate. Such tactics reveal which alternatives appeal to a specific segment of the audience: by providing various justifications for Russian trolling, trolls can observe which justifications become amplified or which audiences find most appealing. After such testing, messages can be seeded through repetition across multiple platforms.
Another technique that supports the circumspect performer is keeping information closely tied to the facts and keeping those facts minimal, so as to produce simple and succinct scripts. In other words, the shorter and simpler the script, the less risky it is (as in the case of alibis) and the more likely the performance is to withstand sabotage. While this technique reduces the margin for error, Goffman (1959) cautioned that it could decrease audience interest and engagement. Thus, short, simple, persistently repeated messages have the greatest likelihood of success as tactics of persuasion and influence. For Russian trolling specifically, such messages need to address topics that appeal to public affect and should be duplicated.
Another aspect of visibility that performers must consider is the number of information sources available to the audience during the interaction process. This concern is important because it allows the performers (i.e., masked Russian trolls) to adapt to situations depending on the audience type they face—examples appear in subsequent chapters, which refer to cracks in the society pertinent to each sociocultural context. In other words, different news portals draw specific audiences—whether the portal is Breitbart, the New York Times, Gab, or Delfi.lt. Thus, issues need to be customized for each of these audiences.
Page 56 →News portals that waive user registration requirements for posting comments also enable successful online masquerades. In such cases, invisibility can be more readily negotiated by performers, and automation further facilitates such masquerades. More specifically, the creation of myriad bots that act autonomously and promote certain messages becomes possible. If the human team behind the bots has an orchestrated agenda, it can render Russian trolling effective due to its ability to dispatch messages on a global scale.
Protective Measures
Protective measures can also be variations of the Russian trolling mask. While such measures have been described as modes of impression management, they usually rely on voluntary discretion—that is, on interlocutors who ask individuals to refrain from entering spaces to which they have not been invited. Once individuals are invited to a performance, they still need to maintain what Goffman (1959) called tactful inattention.
Where Russian trolls and news portals are concerned, the uninvited aspect of online audiences is rendered irrelevant by the general assumption that democratic deliberation invites all users to participate and that such discursive participants self-select. Consequently, performances in online spaces can be deliberately sabotaged by any actor. While some participants are more active than others, the nature of participation varies, depending on news story categories and participant (inter)activity levels, for which an indicator is the number of messages posted.
Performativity and Modus Operandi of a Propagandist Mask: Self-Sabotage
As previously mentioned, Russian trolling as online masquerade can be examined through the critical lens of performativity. If influence is considered crucial for the perception of Russian trolling, the performativity of anonymous and automated Russian trolls in online spaces must also be considered. Yet questions linger: How do online masks shift, and how are they adopted for discursive performance? How can the masked self be presented according to Goffman’s performance theory? After all, Russian trolls must present themselves within consistent frames that maintain the front through text, since message posting takes place online.
Page 57 →An example of performativity can be observed where users sarcastically called themselves Russian trolls to downplay the seriousness of Russian trolling. In such cases, self-sabotage was found to be another unmasking strategy, even if it is geared to obscure rather than uncover. An example of this strategy is confessing that one is a Russian troll (regardless of whether such a confession is true). Self-sabotage geared to cause confusion regarding Russian trolling was expressed through some variation of the statement “I am a Russian troll.” This statement can exemplify the deployment of covert propaganda because such commenters establish their affiliation with the so-called opposition, or Russian trolls. At times, their comments exceed mere sarcasm, which indicates that in some instances they adopted the “I am a Russian troll” narrative to demonstrate solidarity with Russian trolls. The manner in which this statement emerged showcases a push for its viral memetic circulation in a presumably ironic tone. The following statements exemplify such rhetorical maneuvers.
Breitbart Story 9, Example 1
That’s it, I am changing my screen name to “Russian Troll.”
Users resorted to sarcastic “I am a Russian troll” remarks when mocking the Russian trolling investigation.
Breitbart Story 15, Example 12
Seems like there are more Russian trolls here.
I am a russian bot. not troll 💂♀
The “I am a Russian troll” comment was used as a technique to discredit the face value of the investigation. In this case, users presented themselves as trolls.
Gab Example 1
#intelligence Report talks of #Russian paid #trolls on social media before the election to discredit #HRC & her campaign.
1. Where is my paycheck for hours of trolling?
2. HRC/Podesta did a much better job of discrediting themselves than any troll could!
3. I am a proud troll!
[image of seven trolls with colored hair]
Page 58 →This example illustrates how the user projects themselves as a Russian troll, arguing that they are a “proud troll” who deserves to be paid for their work and that Russian trolling is merely a subcultural activity, thereby downplaying its seriousness.
On Delfi.lt, some users sarcastically called themselves Russian trolls. In so doing, they mocked the phenomenon of Russian trolling.
Delfi.lt Example by Anonymous Users 1
Headline: I am putin
Comment: I have been and I will be a troll. They will put me in the jail, I will troll from the jail, no problem :D
In this example, the user, who chose to be called “Putin,” insinuates that it is impossible to put an end to trolling.
“I am a Russian troll” evokes similarities with the #IAm movement, whose hashtag derives from the French Je suis Charlie. In her Slate article, Hess (2015) describes #IAm as the default mode of showing solidarity in the hashtag era. The movement originated in 2015, when gunmen targeted and killed staff members at the Paris headquarters of the weekly satirical newspaper Charlie Hebdo. Since then, the movement’s hashtag has been translated into multiple languages and has come to include the names of victims. For example, according to Hess (2015), the hashtag fractured into countless iterations, such as #JeSuisAhmed, in support of the Muslim police officer Ahmed Merabet, and it was later used to show support in general as it was translated into multiple languages: #JeSuis, #IchBin, #IAm. As Hess (2015) observed, “This is now the standard opener for expressions of social media support” (para. 2).
In news portals, however, hashtags are not the typical sociotechnical means of content tagging or signaling; thus, they do not exist within those contexts. Nevertheless, the “I am a Russian troll” line has been repeated across multiple news portals and contexts. While the #JeSuis movement acquired momentum through both traditional and social media platforms, the examples analyzed from news portals show how users have adopted it to emphasize their solidarity with allegedly falsely accused Russian trolls. Because “I am a Russian troll” insinuates that Russian trolls are victims, the statement is readable as a rallying call for support and solidarity. It also shows how frames that are popular in other contexts have been appropriated and reutilized to create new masks. Thus, the Je suis Charlie movement allows for the “I am a Russian troll” cross-referentiality, which, in turn, Page 59 →invites sympathy toward Russian trolls. Similarly, as Chapter 4 shows, Russian trolls were found to position themselves through the Russophobia frame as alleged victims to evoke sympathy.
Modus Operandi 1: The Troll Mask as Camouflage and Alter Ego
Constantly changing message flows in online spaces permit users to camouflage themselves within message clusters while enabling them to assume various roles. Russian trolls in particular, if employing classical propaganda techniques, can assume at least two forms: overt propaganda and covert propaganda. Choukas (1965) traced these propaganda techniques to World War II. Overt propaganda involves the propagandist’s infiltration of an audience to simulate agreement with the majority of its members; the goal is infiltration, followed by further polarization. By contrast, covert propaganda techniques require the propagandist to side with the opposition.
Such treatment of propagandistic masks is useful where Russian trolls can join movements and present themselves as members of a generally informed citizenry or as merely opinionated online forum participants. Consequently, group membership enables Russian trolls to operate as masked users who infiltrate spaces to sway opinions. A case of Russian troll infiltration was recently identified in the conflict among groups within the Black Lives Matter movement (CNN video, Russian trolls exploit, 2018; Stewart et al., 2018; Zannettou et al., 2019). In such cases, infiltrating trolls can feign participation in social movements through the tactics of overt or covert propaganda. Assimilation into the general online population further enables them to deploy these propaganda tactics. For example, they can assimilate by impersonating activists and by pretending to endorse their values. Infiltrating trolls can also impersonate members of oppositional teams. Thus, by assuming both activist and oppositional roles, trolls enable disinformation through the consolidation of control over messaging, as in the “I am a Russian troll” examples.
Furthermore, when referring to propaganda as a mask, Choukas (1965) stated that the mask represents the covert position of propagandists. He also claimed that unlike overt propaganda scenarios, in which opponents are openly exposed or attacked, the covert propagandist assumes the role of the opponent’s friend or—better yet—the role of the opponent. Thus, covert propaganda is understood to be fully orchestrated with complete objectivity, with no personal elements included in its agenda. Moreover, covert propaganda involves using lies Page 60 →to create objective facts. Yet when such facts become verifiable, such “covert” propagandistic messages become evident lies, and the propagandist is compelled to change strategies. Before that need arises, however, the propagandist can freely circulate pushed agendas as objective truths, due to debate distortion or sabotage in the online public sphere.
Modus Operandi 2: “Fake It Till You Make It”
Through the symbology of masks, online trolling can be viewed as a staged performance, especially if it is orchestrated by external forces of influence. Yet its performative element implies that to be perceived as “real,” one must engage in a persistent performance. This persistent performance can be encapsulated in the dictum “fake it till you make it.” Goffman (1959) described the sustained performance accordingly: “Performers may even attempt to give the impression that their present poise and proficiency are something they have always had and that they have never had to fumble their way through a learning period” (p. 47).
While Goffman noted that performers are better off if they enter a performance site without doubts, faking can be a useful technique for persuading an audience that the performance they are about to witness is actually authentic. Goffman then cited examples of high-executive jobs, noting how applicants are hired on the basis of the qualities they appear to embody rather than the skills they actually possess. In the case of executive positions, applicants land jobs because of these “quasi-inherent,” performed-in-advance qualities.
So, how does all this apply to Russian trolling? First, Russian trolling in the comments has been positioned through specific frames. These frames are conveniently packaged for the reader as lenses through which Russian trolling should be interpreted. For example, “Russian trolls exist” and “Russian trolls do not exist” are two major oppositional frames through which Russian trolling was treated online. Russian trolling thus requires a great deal of “fake it till you make it” effort. The need for sustained positioning of a given idea suggests that there is no agreement on the topic, such as the existence of Russian trolling.
Sustained performances in professional settings were found to involve degrees of signaling. Goffman (1959) observed that “fake it till you make it” in service occupations is accompanied by tangible signs such as the appearance of “cleanliness, competence, [and] integrity” (p. 26). Such signs, he argued, are related to self-presentation. Thus, in online performance Page 61 →contexts, Russian trolls have to take stances by crafting messages targeted toward the groups and online spaces they intend to influence.
Additionally, the “fake it” aspect of trolling requires the adaptive selection of a presentation that maximizes approval of an issue related to disinformation. Such issues typically refer to sensitive topics that are relevant to a given group or are customized according to context. Then, to sustain that performance of authenticity, frames need to be repeated until audiences validate or accept them. Performers thus encounter risks when adopting “fake it till you make it” strategies: they can underperform or overperform when using given masks. A successful Russian troll therefore needs to calibrate performance capacities. According to Goffman (1959), one calibration method involves setting the scenic parts of the expressive equipment. Thus, for disinformation to be effective, masks require constant adjustment as new topics emerge.
Goffman (1959) also claimed that there are preestablished fronts, or modes of self-presentation, that require particular performance acts for various roles. Users can fake authenticity by repeatedly employing such predesigned frames, or fronts. Specific positioning of Russian trolls through repeated frames can thus relate to fronts—the sustained presentation of a given idea over time as an authentic façade pertaining to a given matter. While sustained performance typically requires extensive effort, the current media landscape allows for automation: when the concept of preestablished fronts is applied to Russian trolling, a range of self-presentation activities on the web can be automated.
Automation can be achieved through programming, which online is typically associated with bots. Gorwa and Guilbeault (2020) categorized bots based on structure (whether systems are algorithmically or human based), function (the bot’s task in a specific online space, e.g., emulating accounts or communicating with others), and uses (how bots are employed). Bots (short for “robots”) were originally designed to automate tediously repetitive online tasks based on programmed parameters. For instance, bots were launched in the early days of the web to perform tasks such as the systematic cleanup of Wikipedia entries to ensure their conformity to formatting requirements (Geiger, 2017). Bots can also exploit the advantages that online environments provide for communication, described by Van Dijck and Poell (2013) as programmability, popularity, connectivity, and datafication in social media.
While it is true that not all bots are malicious, Twitter, for example, outlined what constitutes prohibited activity by citing the following: “Malicious use of automation to undermine and disrupt the public conversation, like Page 62 →trying to get something to trend; Artificial amplification of conversations on Twitter, including through creating multiple or overlapping accounts; Generating, soliciting, or purchasing fake engagements; Engaging in bulk or aggressive tweeting, engaging, or following; Using hashtags in a spammy way, including using unrelated hashtags in a tweet (a.k.a. ‘hashtag cramming’)” (Roth, 2020, para. 9).
Automated actors such as bots in current online media systems can be programmed to accomplish lists of tasks: search users by keyword, account, or ID; follow and classify users based on predefined parameters such as user types, trends, and keywords; “like” content, based on predefined parameters, such as user types, trends, and keywords; tweet and mention users and keywords based on AI-generated content, fixed-template content, or cloned content from other users; retweet users and trending content, and mass tweet based on specific parameters; chat to (reply) or with other users; use pauses to mimic API or human expectations; and store information for later use (Daniel & Millimaggi, 2020).
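Purely as an illustration, the first tasks in this list—searching and classifying users by keyword—can be sketched as a simple filter over a stream of posts. Everything below is hypothetical: the data, the function name, and the logic are invented for exposition and use no real platform API.

```python
# Illustrative sketch of a keyword-driven bot task (search and classify users
# by keyword), in the spirit of the task list from Daniel and Millimaggi (2020).
# All data and names are hypothetical; no real platform API is used.

def classify_posts(posts, keywords):
    """Return the users whose posts mention any of the predefined keywords."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        if any(kw.lower() in text for kw in keywords):
            flagged.append(post["user"])
    return flagged

posts = [
    {"user": "user_a", "text": "The election was rigged"},
    {"user": "user_b", "text": "Nice weather today"},
    {"user": "user_c", "text": "Election fraud everywhere"},
]

# A bot could use such a filter to decide whom to follow, "like," or reply to.
targets = classify_posts(posts, ["election"])  # → ["user_a", "user_c"]
```

Each remaining task in the list (following, retweeting, mass posting) would be a comparable small routine driven by such predefined parameters.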
These automated tasks enable Russian trolls operating through bots, as Im et al. (2020) argued, to express themselves with simulated authenticity. A specific type of bot, the sockpuppet bot, is designed to fake identities and is deployed to interact with other users online. These bot accounts are controlled manually; however, automatic control has also been detected (Gorwa & Guilbeault, 2020). To sustain a sense of authenticity, the bot programmer simply needs to scan constantly posted user-generated content and extract relevant aspects—be it from social media posts or news portal comments. Based on these extractions, the programmer can simulate “authentic” content for dispatch. Through machine-learning techniques, typically used for artificial intelligence and deep-learning applications, these texts can assume any type of mask. Such prepackaged messages are just as easy to dispatch online by automated means. In such instances, “fronts” can function as online platform properties. These fronts can represent, and also enable, different behavior types. At the same time, the signaling system of messaging can be simulated through location identifiers or the kind of self-representation that involves impersonating someone else.
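The “fixed-template content” strategy named in the task list above can likewise be sketched in a few lines. This is a minimal illustration under invented assumptions—the templates and topic are made up—not any documented troll implementation.

```python
import random

# Hypothetical sketch of "fixed-template" message generation, one of the
# automation strategies named in the text. Templates and topics are invented.
TEMPLATES = [
    "Nobody is talking about {topic}!",
    "Why does the media ignore {topic}?",
]

def generate_message(topic, seed=None):
    """Fill a randomly chosen template; a seed makes the choice repeatable."""
    rng = random.Random(seed)
    return rng.choice(TEMPLATES).format(topic=topic)

message = generate_message("the hearings", seed=42)
```

Swapping the topic per target audience is what allows one script to produce superficially “authentic” variations of the same prepackaged message.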
However, through the concept of automation, bots can also contribute to the rhetorical contouring of propaganda. In the specific context of Russian trolling, the discussion concerns bots that are designed to deliver specific messages. According to Ferrara et al. (2016), these are “social bots” that “automatically produce content, and interact with humans on social media” (p. 96). Thus, these bots simulate human online behaviors. Specifying Page 63 →bot typologies—for example, for advancing political agendas—becomes crucial for exposing and disambiguating misconceptions about online information and communication ecosystems, where various actors coexist, as shown by Golovchenko et al. (2018). Such political bots were found to drive discussion about the shooting down of Malaysia Airlines Flight 17 in the Ukrainian war zone in 2014, even though the discussion looked like the work of a general public of “curators” or “involved” citizens tweeting about the incident (Golovchenko et al., 2018). However, a closer analysis of this tweet sample, performed to detect bots, revealed that a large number of tweets were posted by bots acting on behalf of “citizens” and adopting impostor masks with the goal of targeted circulation of centralized disinformation about the airplane shooting (Stukal et al., 2019).
Thus, the purported human users in this case had actually been masked automated bots that simulated active human engagement in information propagation, exemplifying the difficulty of unmasking online behaviors. Automation through nonhuman means, in other words, is a readily available option in the current media landscape.
Modus Operandi 3: Dramatic Self-Realization
Dramatic self-realization is another technique that enables Russian trolls to simulate the authenticity of self-presentation. This technique enables performers to appear spontaneous or authentic. To describe the practice of dramatic self-realization, Hilton (1953) coined the term “calculated spontaneity,” which refers to the need to achieve a conversational or spontaneous tone when reading a script. While this competence is crucial for professions such as TV or radio presenting, it is also applicable to online scenarios. Dramatic self-presentation is thus acceptable or even desirable in certain situations, such as those involving TV, radio, or the internet. In fact, calculated spontaneity is a professional expectation for some occupations, especially those related to public speaking. For Russian trolling, however, calculated spontaneity is a tactic of influence.
Russian trolling as performance entails another strategic principle—what Goffman (1959) called expressive coherence, whereby “performers tend to foster an impression that their current performance of their routine and their relationship with their current audience have something special and unique about them” (p. 4). Consequently, once the mask is on, the expectations Page 64 →of that mask must be fulfilled. On a similar note, Enli (2015) spoke of the concept of mediated authenticity and the craft of constructing the authentic self. Even if that authenticity is highly constructed, what matters is how the general public perceives these self-presentations. In other words, if spectators perceive interactions as authentic, they become authentic.
In the social media world, a similar concept, strategic authenticity, has been proposed as an instrumental logic wherein the value of authenticity is based on ensuring a loyal base of followers (Gaden & Dumitrica, 2015). Calculated spontaneity—or what Gaden and Dumitrica (2015) referred to as strategic authenticity—dominant in the current social media world, allows the self to act differently in various settings. In these settings, under what Marwick and boyd (2011) called context collapse, the expectations of different audiences encourage the compartmentalization of behaviors. For example, in an academic setting, a professor could opt for a dramatic classroom entry, while in daily life that same professor could behave with extreme modesty. In the case of Russian trolling, however, constructed authenticity allows trolls to perform in masks based on the specific expectations of authenticity within the political spectra they want to target.
Multiple Faces for the Masks: Commenting User Typology
Multiple masks and different faces constantly appear in online news portal spaces. While this book’s objective is not the identification of the real faces behind masks, it does explain how masks create online chaos regardless of the users who adopt them. To advance studies of online social influence, online user performance can be categorized by three types of behavioral data points, according to the commenting user typology (CUT): content level (topic of the story category), user level (frequency of posting), and timing and location of postings (Zelenkauskaite & Balduccini, 2017). Additionally, online participation has been differentiated by frequency—that is, through the identification of specific online behaviors, such as the hyperactive posting behavior known as “superposting” (Graham & Wright, 2014). Thus, CUT categories can be used to assign online participants to spaces where they repeatedly contribute and where Russian trolling callouts take place. In fact, active online participation was found to lead to a paradox in which greater engagement in political talk led to greater information spread, as shown by empirical evidence from private messaging (Rossini et al., 2020).
To uncover Russian trolling justification as a manifest phenomenon online, this book also traced how that justification circulated across platforms. Although masks can be associated with individual users, they primarily represent prototypes of the Russian troll. Prototypical masks are considered in order to shed light on users’ commenting behavior, as viewed through the critical lens of Goffman’s performance theory. These prototypes are presented here as aggregates—in the form of frequently repeated comments that exemplify user commenting hyperactivity. Such online hyperactivity requires consideration of posting frequency, sustained over a specified time period. Frequency of posting was traced by taking into consideration the range and number of commented stories, as well as the content, style, language, and tone of posted comments.
Based on the premises of the CUT framework introduced earlier, where commenting behaviors in online news portals are concerned, a typical user is expected to post several comments per story. For example, the Breitbart sample for all stories analyzed in this book totals 4,049 users, all of whom contributed through public accounts. While the average user posted 4.2 comments, some users posted significantly more frequently than others. Of the 4,049 total users, 25 posted more than 50 times, with a maximum of 152 posts by a single user. Users can be active in response to a wide range of news stories or target specific ones. A review of all 13 news stories on Russian trolling published throughout 2018 identified variations in user posting. For example, while several users posted in response to a single news story on Russian trolling, a noticeable percentage of users seemed to claim specialization in that topic. Specifically, such users were found to return to comment after the release of Russian trolling stories—even if that release had occurred several months earlier. Some of those users appear to be particularly dedicated to Russian trolling stories; in some cases, the same users were found to comment on five separate Russian trolling stories.
This section specifies several techniques for unmasking trolls. The first asks whether news comments left on stories covering Russian trolls are public or private. This allows us to identify the degree to which users are “covering themselves” when discussing Russian trolling as a phenomenon. The second deals with automation. To assess whether users were striving to circulate repeated comments, the number of duplicate comments was assessed. Such repetition can indicate an intent to circulate specific content, whether manually or by automated means. These two unmasking techniques are applied by comparing news stories that covered Russian trolling with stories unrelated to it, such as Breitbart stories focusing specifically on sports. Following this procedure, prototypes of frequent posters are analyzed and presented.
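The duplicate-counting step described above can be sketched in Python. This is an illustrative sketch only: the input format is an assumption, and since the book does not specify its exact counting convention, the sketch treats one occurrence of each repeated text as the original and tallies every further occurrence as a duplicate.

```python
from collections import Counter

def count_duplicates(comments):
    """Count duplicate comments in a sample.

    `comments` is a list of comment texts (an assumed input format,
    not the book's actual data schema). Every occurrence of a text
    beyond its first appearance is tallied as a duplicate.
    """
    counts = Counter(text.strip() for text in comments)
    n_duplicates = sum(n - 1 for n in counts.values() if n > 1)
    rate = n_duplicates / len(comments) if comments else 0.0
    return n_duplicates, rate
```

Under a different convention (counting all occurrences of repeated texts, originals included), the reported percentages would be slightly higher; either way, the comparison between the Russian trolling and sports samples holds.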
Finally, the masks exposed in the news portal comments are presented through text-based means. As discussed earlier, these masks are accompanied by tactics, such as calling out Russian trolls or impersonating them. These analyses highlight the dichotomy involved in the positioning of Russian trolls within a single news portal or across multiple ones. In some instances, users acknowledge the existence of Russian trolls and the fact that their primary objective is persuasion. In other instances, users comment aggressively to justify or to deny the existence of such trolls. Although these arguments are widely divergent, they are invariably expressed with noticeable frequency across news portals.
Visibility of the Masks Through Private Versus Public Posting
Users who posted on Russian trolling stories were analyzed according to automation and anonymity, the main tools of computational propaganda. Automation of posting and self-presentation through anonymity can provide masks for Russian trolling to masquerade online by merely adjusting user privacy settings that make one’s comments visible or invisible. Trolls can thus take advantage of the various levels of anonymity or identity masking that news portals provide to create the perception of genuine public sphere debate. Sociotechnical configurations of news portal comments therefore matter. For example, the Lithuanian news portal Delfi.lt allows anonymous posting. The majority of comments on the portal are anonymous, with no user identifiers other than IP address. Even when users register to post comments, they use social media screen names or opt for Delfi.lt accounts that permit them to select any self-identifying screen name. Previous studies of Delfi.lt indeed found that anonymous IP posting dominated commenting there (Zelenkauskaite & Balduccini, 2017). This shows how anonymity can be a tool for projecting publics and what scholars call counterpublics—movements that challenge the established status quo (Asen & Brouwer, 2001). Discussion of Russian trolling thus becomes a terrain for uncovering the relationship between various types of publics and anonymity.
Let’s first look at the media systems at play and how they allow anonymity or reveal identity. As described above, Delfi.lt account settings and user options do not enable greater transparency. They are, in fact, very similar to those employed by the New York Times or Breitbart. As for Gab, users create accounts, as they would for any social networking site. During the data collection period for this study, all Gab posts were publicly accessible. Breitbart, by using the third-party Disqus platform, allows users not only to create accounts to comment on multiple sites but also to keep their accounts private. The platform promotes such user options by stating: “Most importantly, by utilizing Disqus, you are instantly plugging into our web-wide community network, connecting millions of global users to your small blog or large media hub” (“Disqus, What is Disqus,” n.d., para. 1). Making accounts private prevents other users from accessing a given account’s posts across multiple news stories when they open that account.
The analyses of individual user activity and anonymous versus public commenting practices presented in this section are based exclusively on Breitbart, because it is the only news portal in the sample that uses Disqus, a third-party platform built on individual user accounts rather than message-level presentation of comments. These individual accounts provide a lens for understanding online masks, as users can choose to make their archives visible or invisible to the public. About 53% of comments, or 19,152 comments, in the analyzed sample of responses to Breitbart news stories on Russian trolling were sent from private or “masked” accounts. These private accounts do not allow a reader to see any other content associated with a given account or that account’s frequency of posting.
To account for the typicality or atypicality of private commenting on stories related to Russian trolling, stories on unrelated topics were also collected, such as Breitbart’s sports stories on the 2018 Winter Olympics in Pyeongchang, South Korea. This additional sample was collected with the expectation that users would behave similarly in both samples in terms of choosing anonymity when posting. Twenty stories generated 4,554 comments. Of this total, 1,761 (or 39%) were posted privately. Yet user posting on Russian trolling and sports stories differed in two ways: First, Russian trolling stories received more comments overall; second, they also had a higher share of comments from private accounts. This finding further supports the claim that masking is a major behavioral characteristic of commenters on Russian trolling news stories.
To identify the level of content circulation through masking, the number of repeated or duplicate comments was assessed in both samples. In the analyzed sample of 13 Breitbart stories on Russian trolling and 37,137 comments, 7.6% (n = 2,851) were duplicate posts. By contrast, examination of Breitbart’s sports stories yielded far fewer duplicate comments, at 2% (n = 94). Comments on Russian trolling were thus more likely to recirculate the same content, indicating either automated content circulation or repeated frames intended to persuade, whereas comments on the sports stories had a limited number of duplicates.
Analysis of public comments also yielded divergent results across the analyzed samples. This part of the analysis excluded comments for which users had selected private commenting options. On average, 1.9 comments per user were publicly posted in response to sports stories. For these stories, 1,467 users posted a total of 2,793 public comments; 965 of these users (or 66%) posted only one comment, and the maximum posted by a single user was 36. The Russian trolling news story sample, by contrast, included 16,985 public comments posted by 4,050 users, with an average of 4.2 comments per user. In this sample, 1,847 users (or 46%) posted one comment, while one user posted a maximum of 152. Commenting on news stories that covered Russian trolling was thus much more active, with twice as many comments posted per user on average than for the sports stories. As conceptualized by CUT, these contrasts in commenting, together with posting frequency for a given story at a steady rate of one comment per minute, point to automation or at least an intentional intensity in the posting process.
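The per-user statistics reported above (mean comments per user, share of one-time posters, maximum per user) can be reproduced with a short sketch; the function name and input format are illustrative assumptions, not the book’s actual pipeline.

```python
from collections import Counter

def user_posting_stats(user_ids):
    """Summarize per-user posting frequency.

    `user_ids` lists one entry per comment (an assumed input
    format): e.g., ["u1", "u1", "u2"] means u1 posted twice
    and u2 once.
    """
    per_user = Counter(user_ids)
    n_users = len(per_user)
    return {
        "users": n_users,
        "mean_per_user": len(user_ids) / n_users,
        "one_time_posters": sum(1 for n in per_user.values() if n == 1),
        "max_per_user": max(per_user.values()),
    }
```

Applied to the sports sample this would yield roughly 1.9 comments per user, versus 4.2 for the Russian trolling sample, the contrast that CUT flags as a marker of hyperactivity.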
Reviewing duplicates within the comment sample for Breitbart news stories that covered Russian trolling yielded the results tabulated in table 1.
Table 1. Number of duplicates

| Type of comment | Total comments | Duplicates | Percentage |
|---|---|---|---|
| Private user comments | 19,152 | 2,613 | 13.6 |
| Public user comments | 16,985 | 136 | 0.8 |
| Total | 37,137 | 2,749 | 7.6 |
Results in table 1 indicate, for example, a significantly greater number of duplicates among private comments than among their public counterparts. Timing is also relevant for the successful continuation of disinformation. If the audience is privy to only a brief performance, that fleeting glimpse diminishes the potential for embarrassment that results from exposed inconsistencies within that performance. Posting frequency varied between commenting behaviors for Russian trolling and sports stories. Staging of that which can be seen is another technique that Goffman elaborated on: the performer must be cautious about the conditions under which the performance is staged. The same caution must be exercised for messages left in news comment spaces. For Russian trolls to succeed, it is crucial to send the right persuasive messages at the right time. Such messages need to follow news story cycles—that is, to appear immediately after a news story’s release—for maximum exposure. In cases where online trolling is a masquerade, timing is particularly critical. Trolls need to latch onto relevant news portal stories the moment they are released. Interestingly, previous research on tactical commenting in online news portals found that a percentage of users posted frequently right after news stories were published (Zelenkauskaite & Balduccini, 2017).
The following repeated comment sequence exemplifies such frequency. The comments were spawned by an anonymous user within roughly 15 minutes in response to a Russian trolling story:
Breitbart Example 1
2018-02-16 19:13:54 Wired Sources @WiredSources
BREAKING: DOJ issues indictments against 13 Russian nationals and three Russian entities for election interference, organized anti-Trump ‘resist’ rallies. 964566099865030656?ref_src=twsrc%5Etfw&ref_url=https%3A%2F%2Fproxy.goincop1.workers.dev%3A443%2Fhttp%2Fredstatewatcher.com%2Farticle.asp%3Fid%3D117940
SO THE RUSSKIES WERE COLLUDING WITH GRANNY! BWHAHAHAHAHAHAHAHA
2018-02-16 19:14:44 Wired Sources @WiredSources
BREAKING: DOJ issues indictments against 13 Russian nationals and three Russian entities for election interference, organized anti-Trump ‘resist’ rallies. 964566099865030656?ref_src=twsrc%5Etfw&ref_url=https%3A%2F%2Fproxy.goincop1.workers.dev%3A443%2Fhttp%2Fredstatewatcher.com%2Farticle.asp%3Fid%3D117940 SO THE RUSSKIES WERE COLLUDING WITH GRANNY! HOHOHOHOHOHOHOHO
2018-02-16 19:17:33 Wired Sources @WiredSources
BREAKING: DOJ issues indictments against 13 Russian nationals and three Russian entities for election interference, organized anti-Trump ‘resist’ rallies 964566099865030656?ref_src=twsrc%5Etfw&ref_url=https%3A%2F%2Fproxy.goincop1.workers.dev%3A443%2Fhttp%2Fredstatewatcher.com%2Farticle.asp%3Fid%3D117940 SO THE RUSSKIES WERE COLLUDING WITH GRANNY! BWHAHAHAHAHAHAHAHAaa
2018-02-16 19:29:12 Wired Sources@WiredSources
BREAKING: DOJ issues indictments against 13 Russian nationals and three Russian entities for election interference, organized anti-Trump ‘resist’ rallies.
2018-02-16 13:24
GRANNY COLLUDED WITH THE RUSSKIES ON THE ELECTION article.asp?id=117938
The example above illustrates a sequence of repeated comments by a given user. The time stamps show rapid reposting. The consistent frequency of the repeated comment suggests automation behind its propagation; otherwise, it can be deduced that the posting user intended the comment to stand out among the others. In either case, it is worth noting that one of the comments in the sequence linked to a tweet from a suspended Twitter account. This illustrates how users can try to promote content through news portal comment sections even after they have violated other platforms’ regulations and been blocked from using them. Moreover, the RedStateWatcher link directs to a page with a story claiming that Russian trolling does not exist. That story also vouched for the innocence of Donald Trump and the need to investigate his political opponents (e.g., Hillary Clinton). Public users posted the following duplicate comments:
Breitbart Example 2
2018-02-17 03:14:01 “There is no serious person out there who would suggest somehow that you could even rig America’s elections.” ~Barry Obama
2018-02-17 00:55:03 “There is no serious person out there who would suggest somehow that you could even rig America’s elections.” ~Barry Obama
For a story headlined, “Putin ‘Couldn’t Care Less’ About Russian Interference Claims,” these duplicate posts emerged:
Breitbart Example 3
2018-03-11 06:25:27 He doesn’t care, because he is SANE.
2018-03-11 01:49:41 He doesn’t care, because he is SANE.
2018-03-11 02:01:25 He doesn’t care, because he is SANE.
As seen in the examples above, although posting times vary, these comments were sent on the same day and in response to the same story. These examples illustrate how easily duplicates can be concealed through private posting. In other words, private posting obscures the sequence of duplicated posts. Duplicate posting as a type of memetic content circulation has been found on other platforms, such as 4chan, where it is typically used to circulate hate and anti-Semitic content (Zelenkauskaite et al., 2020).
Facets of Commenting
Looking at specific users and their posting patterns in response to Russian trolling stories on Breitbart provides more fine-grained insight into the interplay of attack and defense. Such a coexistence of attack and defense tactics (i.e., comments that support Russian trolling or oppose it) may resemble what was referred to earlier as perceived counterpublics. To identify the types of narratives projected in discussing publics and counterpublics, the most frequent posters, those who posted intensely and repetitively within a condensed period of time, were further analyzed.
Within the sample of users who posted more than 50 comments within a comparatively short time, atypical behavioral traits, including posting frequency, were identified. For example, user 276975563 (users were assigned arbitrary numbers to preserve anonymity) posted on two separate days in response to two different stories. In the first instance, posting began on 16 February 2018 at 20:43:28 and ended the same day at 22:34:06. During those roughly 111 minutes, the user generated 67 comments, an average of approximately one every 2 minutes; one comment per minute appeared in the first 17 minutes before the posting rate decreased. The remaining comments were posted in response to a different Russian trolling story nearly six months later, on 21 August 2018, when the user generated three comments within 2 minutes. According to the user’s self-description and overall activity via Disqus, the user is from a broad geographical location (“the Midwest”), joined the platform on 13 January 2018, and since that date has produced 25,292 comments and registered 58,420 “likes” across the Disqus platform. Comments posted by the user include the following sequence:
Breitbart Example 4
20:43:28 Trump never colluded–EVER I hate liberals
20:44:13 He’s your leader.....for seven more years
20:44:28 TRUTH
20:45:12 Liberals are the true enemy. Pray to God one doesn’t get elected president. . . . or we’re all f­ucked
20:45:48 ((((((WAN)))))))
20:46:56 There are more beasts.....never forget
20:47:45 The FBI found 13 Russian f­ucks but couldn’t find one teenage boy after two alerts?
20:50:06 Chris Steele colluded with the Kremlin to meddle in our election!
The Frequent Poster
User 67599310 generated 152 comments, the greatest number of posts in the analyzed sample, focused on three stories related to Russian trolling. The user’s self-description and overall activity via Disqus reveal that the user did not provide a location, joined the platform on 15 August 2013, and has since posted 10,085 times and registered 4,315 “likes” across the platform. Although user 67599310 had been on the platform five years longer than user 276975563 from the first example, this user generated approximately half as many comments overall. This indicates that the posting bursts of user 67599310 on the Breitbart news portal are atypical where posting frequency is concerned.
User 67599310 focused on stories in the sample covering Vladimir Putin and a video report on the Russian troll indictment. All three commenting sessions occurred on the three separate days when the news stories were released. Seven comments were posted in response to the first story, five of them within five consecutive minutes at 5 p.m. and two within two consecutive minutes at 6 p.m. The greatest number of comments from this user (134) pertained to the 16 February 2018 Russian troll indictment story. The user posted 130 comments continuously from 19:27:34 on the story’s release date to 02:46:20 on 17 February 2018. Of these, the first four were posted within the first 4 minutes. The user resumed posting after 30 minutes and within the next hour produced 33 comments, a rate of approximately one comment every 2 minutes, similar to that of user 276975563. Within the following hour, 48 comments were generated, or more than one comment every 1.5 minutes. In the next hour (10 p.m.), the user’s comment count dropped to 24, or less than one comment every 2 minutes. At 11 p.m., the comment production rate decreased further.
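The posting-rate figures used in these user profiles (e.g., “one comment every 2 minutes”) can be reproduced from time stamps with a small sketch; the function name and list-of-strings input are illustrative assumptions, though the timestamp format matches the comment excerpts quoted in this chapter.

```python
from datetime import datetime

def minutes_per_comment(timestamps):
    """Average spacing, in minutes, between consecutive comments
    within one posting session.

    `timestamps` are strings like "2018-02-16 19:27:34", the
    format shown in the quoted comment excerpts.
    """
    fmt = "%Y-%m-%d %H:%M:%S"
    times = sorted(datetime.strptime(t, fmt) for t in timestamps)
    span_minutes = (times[-1] - times[0]).total_seconds() / 60
    # n timestamps define n - 1 intervals between consecutive posts.
    return span_minutes / (len(times) - 1)
```

For instance, 130 comments spanning 19:27:34 to 02:46:20 the next day (about 439 minutes) average roughly one comment every 3.4 minutes over the whole session, with the denser bursts described above inside it.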
Multi-Story Poster
User 2832674 was unique due to the 131 posts generated in response to five different stories related to Russian trolling. The user defended Russian trolls with comments like these:
Breitbart Example 5
2018-02-16 20:15:38 It’s possible some Russians did . . . but not Putin . . . He is as opposed to the Soros Globalist ilk as American Conservatives are . . . for different reasons of course. Russians are no longer ideologically “pure Communist”, as they once were.
2018-02-16 18:59:39 It’s nothing new that Russia would like to influence elections, politicians, even Charities or Foundations ran by ex-Presidents. Lets get on with more Indictments.
2018-02-16 21:38:24 Now a bunch of Ruskies who had fake twitter accounts . . . you have a vivid imagination . . . i’ll give you that. Trump was vindicated today . . . Did you miss that part?
Most of user 2832674’s posts focus on two stories about the Russian troll indictment released on 16 February 2018. This multi-story user engaged in frequent posting: in response to one 16 February 2018 indictment story, the user posted 53 comments from 18:45:55 to 21:07:25. In the first hour, 21 comments were posted at an average rate of one comment every 3 minutes. Thirty-two comments emerged during the second hour.
In response to another 16 February 2018 story covering Russian trolling, the same user posted 62 comments between 9:28 p.m. and 11:52 p.m. In the first hour, 31 comments appeared, an average rate of roughly one comment every 2 minutes. A similar rate continued into the second hour, in which 24 comments appeared.
The user also posted comments in response to news stories published throughout 2018 (e.g., on 20 February, 1 March, and 3 October). In response to the 20 February 2018 story, the user left seven comments at 11 p.m., written within 6 minutes.
It can thus be deduced that this user is yet another hyperactive online commenter. Examination of user 2832674’s self-description and overall activity via Disqus reveals that the user did not provide a location, joined the platform on 29 April 2010, and has since produced 88,573 posts and registered 163,609 “likes” across the platform.
The Opposition Poster
Within the same sample, another user, 280308805, was found to have registered on 16 February 2018, the same date the user commented on a story about the indictment of Russian trolls. The user produced a total of 227 comments and registered 72 “likes.” In response to a specific story on the indictment, the user left 53 comments between 20:18:08 and 21:40:21, a frequency of roughly one new post every 1.5 minutes over 82 minutes.
This user embodied opposition to Russian trolling in the form of a sequence of rebuttal comments acknowledging that Russian trolling exists. The tone used degraded the political opposition (i.e., the Republicans). The following user posts were created in sequence:
Breitbart Example 6
21:15:22 The kremlin has waited 75 years for the orange low-IQ anti-American racist bigot. The Kremlin become the number one power in the world.
21:17:30 The orange traitor has more ties to the Kremlin than he does to the United States.
21:19:06 Putin rubs his hands together when he sees Trump.
21:20:38 Putin rubs his hands together when he sees Trump. like a child predator sees a 12 yr old little girl lost in the woods.
21:23:40 The question is, why did the Russians want Trump to win the election so bad. Russians see Trump and his deplorables for what they are.. low-IQ uneducated Un-American bigots that can be used as tools to destroy the greatness of America.
21:24:30 Putin is counting on Trump and his deplorables to destroy America from within.
21:25:23 Trump destroys a bit of America daily..... as Putin commands.
21:26:40 American Patriots are gonna take the White House back from Putin’s MAGAts!
21:28:12 Deplorables, Why pussyfoot around with Donald Trump when the person you really want to lead the country is Vladimir Putin.
21:29:47 Putin is most displeased with the performance of his trained dog Trump.
Similarly, this user is critical of President Trump’s supporters, invoking their supposed lack of literacy:
Breitbart Example 7
2018-02-16 20:56:16 You can tell the difference between the Russian Trolls here and the Trump sheep. The Russian trolls speak better English. Russians know the difference between there, their, and they’re.
Further, this user invoked the “cracks in society” discussed earlier by bringing into the online forum issues of racism that are not directly related to the topic of Russian trolling. These are some examples of the user’s posts.
Breitbart Example 8
2018-02-16 20:57:37 Fact is, Trump supporters couldn’t care less even if Trump gives the whole country to Russia. they are happy as long as Trump keeps throwing them pieces of red meat regarding their grievances against black and brown people.
2018-02-16 20:46:41 Trump supporters forefathers are the southern conservative confederate traitor trash who waged war on America to keep slavery. Old habits die hard.
The user brought in other “cracks in society” issues, such as the Ku Klux Klan and racism, although they are unrelated to the indictment of Russian trolls.
Breitbart Example 9
2018-02-16 21:12:11 the Republican party really is a big tent! It holds both the Kremlin and the KKK.
2018-02-16 21:06:08 Trumps horrid, evil supporters are so blinded by racist hatred and hyper-partisan idiocy that they don’t care that they are helping Russia rape our democracy.
It can be concluded that this Breitbart user is an opposing voice, advocating the anti-Russian-troll stance. Moreover, the user’s comments exemplify attack-based language. This user either represents authentic “opposition” or, given the unusual posting frequency, the focus on contentious topics, and the degrading language, can be considered an example of camouflaged siding with the opposition to further stir chaos.
Discussion
Arguments that support Russian trolls or call them out not only create divisive chaos in online spaces but also demonstrate the difficulty of neutralizing denial of the Russian trolling phenomenon. Identified user techniques included reactive strategies, such as rebuttal responses. Some users simply stated, “You are a Russian troll.” Or, depending on the content type, they claimed that commenters were paid Russian trolls. Such rhetorical strategies are typically referred to as debunking—that is, the exposure or uncovering of something after it has happened. Yet such rhetorical strategies are hardly effective, since it is much more difficult to change entrenched attitudes. Instead, “prebunking,” grounded in inoculation theory, has been proposed as a more effective alternative for combating misinformation (Cook et al., 2017).
While debunking and prebunking can both be used to fight disinformation, they differ in timing. Debunking is a post-factum strategy, whereas prebunking aims to prevent disinformation from taking hold in the first place. Russian trolling might thus be effective when it taps into existing divisiveness and preexisting attitudes. Inoculation theory postulates that people can be inoculated against misinformation through preexposure to refuted versions of misleading claims (Cook et al., 2017). The question lingers, however: To what degree can inoculation counteract disinformation? Based on evidence throughout this book, tapping into vulnerabilities and preexisting partisan divisiveness is a tactic for justifying Russian troll interference in online spaces.
News portal comment spaces resemble backstage areas where information is packaged, and posting news story comments can be considered a backstage performance. Goffman (1959) described such front- and backstage areas of discourse: “The character staged in a theater is not in some ways real, nor does it have the same kind of real consequences as does the thoroughly contrived character performed by a confidence man; the successful staging of either of these types of false figures involves use of real techniques—the same techniques by which everyday persons sustain their real social situations” (p. 255).
While an actor on a theater stage does not experience real consequences, in online scenarios malicious self-staging can provoke the negative repercussions that typically accompany foreign influence (e.g., Russian troll exposés). And while unmoderated online comment spaces are open to all users for civil discussion, their undeniable backstage quality, an element Goffman (1959) elaborated on, renders user news commenting both authentic and vulnerable. Goffman (1959) described, at a societal level, a leveling out of society through the guarding of the front stage, for example, when institutions create spaces to which everyone is invited. Such spaces are authentic because they allow comments to be less filtered. Yet at the same time, anonymous posting invites behaviors that can exceed the scope of individual opinion. Additionally, commenting in online spaces such as news portals can be influenced by foreign governments.
Thus, this chapter records public ambivalence toward the construction of Russian trolling and its various interpretations. On the one hand, such interpretations involve the “noncooperative audience” described earlier—that is, users who call out Russian trolls. On the other hand, they involve recurring Russian troll denial frames that recall Soviet propaganda tactics. Such tactics have been adapted and converted into today’s digital propaganda techniques, operating according to active measures. Interestingly, these propaganda techniques are closely related to the tactics of performance, whereby the online performer achieves discursive objectives through repeated messaging and the self-concealment that anonymous posting enables.
Tactics of successful performance have been contextualized according to Goffman’s (1959) theory of defensive and protective measures. Although propaganda deployment tactics have multiplied, they still resemble these Goffmanian measures. The resemblance illustrates how disinformation, as a set of government-influenced tactics, can be repositioned within the discourses of social anthropology. Where foreign influence is concerned, however, such strategic behaviors for influencing public opinion are not mere performative acts. Performative acts in daily interactions can nonetheless shape perceptions of reality—especially if those acts do not appear contrived and if they manage to resemble typical discursive rituals in news portal comments (e.g., democratic debate). The complication is that foreign governments can appropriate such measures: foreign government operatives can use them to influence public perception by creating division and chaos and by subverting the democratic debate process.
If Russian trolling is assumed to be a masquerade, the assumption explains the paradox concerning the invisibility of real “faces.” The assumption is especially relevant for online spaces where invisibility is inscribed in technological affordances, such as anonymous posting that can be automated. A popular cartoon, revisited by the New York Times in 2000, illustrates this online invisibility with the motto that anyone on the web can be a dog (Fleishman, 2000). While performativity encourages the staging of authentic self-presentation, it also enables users to hide beneath a created mask that can be crafted and circulated as authentic. In other words, these tricks of identity subversion show how Russian trolling can be constructed in online spaces.
Russian trolling, when centrally coordinated and ideologically orchestrated, has the power to subvert, deconstruct, and obfuscate reality. Consequently, the impersonation of authentic online debaters can invalidate news portals’ comment spaces as legitimate forums for democratic debate. Instead of providing clarity, such impersonations generate public distrust. If that distrust escalates into the suspicion that public online spaces are being infiltrated by foreign agents, news portal comment readers can become paranoid. Specifically, claims that Russian trolls are omnipresent yet invisible generate a destabilizing sense of helplessness—the paranoia that Russian trolls are watching constantly.
And while identity performance through text allows Russian trolls to remain masked like typical online participants, the same text, in the form of callout comments, renders them visible. Online text thus involves a paradox of visibility: it can hide or highlight not only specific facts but also people. Yet online comments are not equally exposed; a comment may be found by its targeted audience while remaining invisible to undesirable ones. Russian trolls want to be seen by the people who amplify their arguments, but they do not want to be called out by the opposition. What measures do online commenters take to be seen? And since visibility is key in what Davenport and Beck (2001) called the attention economy, what kind of information do Russian trolls need to conceal to retain their performative masks?
The concept of Russian trolling as a masquerade involves acknowledging that trolls can wear masks and that Russian trolls, in particular, are actors whose performances are based on the requirements of given masks. Moreover, if Russian trolls are assumed to be Russian government operatives, it can only be hypothesized how the mask functions as a fluctuating barrier between real and performed identities. Thus, we return to the fundamental questions of how identity can be performed and which performative elements can be optimized in online spaces. After all, "real" identity in online spaces is tenuous: a construct that can be inferred only through sociotechnical information fragments, such as location, the frequency and timing of posting, and the intentionality embedded in posted comments.
The mask, by contrast, can be created through multiple comment types and through technological affordances that allow for anonymity and automation in online spaces, including the visibility and content-propagation features of social media and news portal comments. Whether a troll is a performed identity or a person paid to serve as an online actor, the troll's messages will be archived along with the plethora of other news portal comments. Thus, the online mask comes to life when the performative packaging is fully prepared and the troll's performance is carefully pre-scripted.
The concept of performative packaging can be explained by comparing masking techniques to those of classical propaganda, discussed in more detail in Chapter 2. More specifically, the Russian troll mask is analogous to the propagandist mask whose wearer simulates authentic news commenting behaviors. Russian trolling masks come to life through overt and hidden propagandist techniques. As discussed earlier, Choukas (1965) distinguished different types of propaganda, such as covert and overt. Choukas argued that while the hidden, or covert, propagandist adopts the mask of an opponent's ally, or better yet impersonates an actual member of the opposition, the overt propagandist addresses the opponent as "you," producing demands such as "You have to do this" or "You must think about this."
By contrast, Choukas (1965) argued that on infiltrating the opponent's camp, the covert propagandist becomes an opposition member who addresses others within the group with first-person plural pronouns (e.g., we, us, our, ours). Examples of such propaganda techniques can be found in Russian troll callout comments. Complications arise when such comments are read as infiltrated texts, generating the question, Who are the opponents of Russian trolling? The possibility of infiltration thus creates more uncertainty rather than greater understanding about what happens in online spaces, where anyone can be a Russian troll and where online chaos is a constant source of public anxiety.
Russian trolls have been detected behind online locations, IP addresses, and anonymous posting masks, all of which are used to construct online identities. The mask is supported by the online platforms that allow the troll to craft identities in controlled ways, namely by highlighting specific parameters while de-emphasizing others: an online profile can be created without providing a real name or user photograph, and certainly without explaining or justifying one's personal beliefs. Such user information privacy is central to the democratic principles that govern the online public sphere. As discussed earlier, online news portals typically allow anonymous posting. According to democratic ideals, the wider the range of voices and the greater the multiplicity of viewpoints, the more fulfilling the democratic deliberation. Yet democratic deliberation can also be challenged by the anonymity that enables users to conceal their identities. And those users concealed behind masks could very well be paid workers who disseminate the kind of propaganda that can undermine democracies.
Summary
Invisibility with malicious intent can fuel antidemocratic processes by seeding chaos and uncertainty about what constitutes truth. While such processes work against the construction of clarity, they endorse turbulence and opacity. And given the range of tasks that bots can already perform in online spaces, Russian trolling is easily added to their repertoire.
Chaos is the desired outcome for Russian trolls whose goal is to sway online public opinion. Public opinion can be influenced by local actors who have the resources to do so, but such influence can also be activated in any nation's public sphere, provided that nation has an active public online presence; such a presence enables persuasive efforts to appear genuine. The tactics for influencing public opinion have been discussed here in the contexts of masking and self-presentation. Through online news portals, any nation can influence another nation's political arena by replicating the content sets used in classical propaganda. Yet it is online news portal comments and social media, rather than traditional mass media, that provide the spaces for the individualized messaging that promises authenticity. Such projected authenticity, however, masks foreign government operatives. Ultimately, it is not crucial to distinguish among these various types of comment senders. It is more urgent to acknowledge that the goal of such persuasive efforts is to instill mistrust or to seed unresolvable doubt, all of which leads to chaos.
Chaos creates an unprecedented state of uncertainty in which governments and intelligence communities are compelled to act on something intangible, ephemeral, and dynamic. Online spaces invite all users to join the "public debate," especially where societally urgent topics like Russian trolling are concerned. Chaos is ephemeral, since one comment can diminish the significance of another. It is dynamic, since there is no clear evidence of its source; after all, access to comments can be altered by creating "attention" patterns, and messaging can be manipulated through the technological affordances of news portals. And chaos is intangible, since online spaces lack materiality.