
Thread: How to spot a bot

  1. #1 | Top
    Join Date
    Jul 2006
    Posts
    127,633
    Thanks
    36,344
    Thanked 20,958 Times in 16,309 Posts
    Groans
    0
    Groaned 18,578 Times in 17,195 Posts
    Blog Entries
    5


  4. #2 | Top

    #BotSpot: Twelve Ways to Spot a Bot
    Some tricks to identify fake Twitter accounts


@DFRLab
    Aug 28, 2017


    “Bots” — automated social media accounts which pose as real people — have a huge presence on platforms such as Twitter. They number in the millions; individual networks can number half a million linked accounts.
    These bots can seriously distort debate, especially when they work together. They can be used to make a phrase or hashtag trend, as @DFRLab has illustrated here; they can be used to amplify or attack a message or article; they can be used to harass other users.
    At the same time, many bots and botnets are relatively easy to spot by eyeball, without access to specialized software or commercial analytical tools. This article sets out a dozen of the clues, which we have found most useful in exposing fake accounts.
    First principles
A Twitter bot is simply an account run by a piece of software, analogous to an airplane being flown on autopilot. Just as an autopilot can be switched on and off, an account can behave like a bot at some times and like a human user at others. The clues below should therefore be viewed as indicators of botlike behavior at a given time, rather than a black-or-white verdict on whether an account “is” a bot.



    A cryptic post from the @AutoShakespeare poetry bot.
    Not all bots are malicious or political. Automated accounts can post, for example, poetry, photography or news, without creating any distorting effect.



    What the bot sees… A post from @cloudvisionbot.
    Our focus is therefore on bots which masquerade as humans and amplify political messaging.
In all cases, it is important to note that no single factor can be relied upon to identify bot-like behavior: it is the combination of factors which matters. In our experience, the three most significant can be called the “Three A’s”: activity, anonymity, and amplification.
    1. Activity
    The most obvious indicator that an account is automated is its activity. This can readily be calculated by looking at its profile page and dividing the number of posts by the number of days it has been active. To find the exact date of creation, hover the mouse over the “Joined …” entry.


    Screenshot of @Sunneversets100, taken on August 28, and showing the exact creation date. Account archived on January 13, 2017, and again on August 28, 2017, showing the change in posts over that period.
    The benchmark for suspicious activity varies. The Oxford Internet Institute’s Computational Propaganda team views an average of more than 50 posts a day as suspicious; this is a widely recognized and applied benchmark, but may be on the low side.
    @DFRLab views 72 tweets per day (one every ten minutes for twelve hours at a stretch) as suspicious, and over 144 tweets per day as highly suspicious.
    For example, the account @sunneversets100, an amplifier of pro-Kremlin messaging, was created on November 14, 2016. On August 28, 2017, it was 288 days old. In that period, it posted 203,197 tweets (again, the exact figure can be found by hovering the mouse over the “Tweets” entry).
    This translates to an average of 705 posts per day, or almost one per minute for twelve hours at a stretch, every day for nine months. This is not a human pattern of behavior.
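That arithmetic is simple enough to script. A minimal Python sketch, using the figures quoted above (the helper function itself is illustrative, not part of the article’s method):

```python
from datetime import date

def posts_per_day(total_posts: int, created: date, observed: date) -> float:
    """Average posts per day between account creation and observation."""
    days_active = max((observed - created).days, 1)
    return total_posts / days_active

# @Sunneversets100: created November 14, 2016; 203,197 tweets by August 28, 2017
rate = posts_per_day(203_197, date(2016, 11, 14), date(2017, 8, 28))
print(round(rate))  # 708 by this day count; the article's inclusive count gives 705
print(rate > 144)   # True: far beyond the "highly suspicious" threshold
```

Either way the figure lands nearly five times above @DFRLab’s 144-per-day benchmark.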
    2. Anonymity
    A second key indicator is the degree of anonymity an account shows. In general, the less personal information it gives, the more likely it is to be a bot. @Sunneversets100, for example, has an image of the cathedral in Florence as its avatar picture, an incomplete population graph as its background, and an anonymous handle and screen name. The only unique feature is a link to a U.S.-based political action committee; this is nowhere near enough to provide an identification.
    Another example is the account @BlackManTrump, another hyperactive account, which posted 89,944 tweets between August 28, 2016 and December 19, 2016 (see archive here), an average of 789 posts per day.


    Screenshot of an archive of the @BlackManTrump profile. Note at bottom left the creation date, and at top right the date on which the archive was created.
    This account gives no personal information at all. The avatar and background are non-specific, the location is given as “USA” and the bio gives a generic political statement. There is thus no indication of what person lies behind the account.
    3. Amplification
    The third key indicator is amplification. One main role of bots is to boost the signal from other users by retweeting, liking or quoting them. The timeline of a typical bot will therefore consist of a procession of retweets and word-for-word quotes of news headlines, with few or no original posts.
The most effective way to establish this pattern is to machine-scan a large number of posts. However, a simpler, eyeball identification is possible by clicking on the account’s “Tweets and replies” bar and scrolling down the last 200 posts. The number 200 is largely arbitrary, chosen to give a reasonably large but manageable sample; researchers who have more time and tougher eyeballs can view more.
    As of August 28, for example, 195 of @Sunneversets100’s last 200 tweets were retweets, many of them from Kremlin outlets RT and Sputnik:


    Screenshot of the @Sunneversets100 timeline, taken on August 28, showing the series of Sputnik retweets. Note that the account seems not to have posted since late April. If the tweets-per-day count is recalibrated to April 30, it rises to 1,210 posts per day.
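The 195-out-of-200 eyeball count can also be sketched in code. This assumes the timeline has already been collected as a list of post records with an `is_retweet` flag; the data shape is hypothetical, though Twitter’s API does mark retweets explicitly:

```python
def amplification_ratio(posts, sample_size=200):
    """Fraction of the most recent posts that are retweets."""
    sample = posts[:sample_size]
    if not sample:
        return 0.0
    retweets = sum(1 for p in sample if p["is_retweet"])
    return retweets / len(sample)

# 195 retweets out of the last 200 posts, as observed for @Sunneversets100
timeline = [{"is_retweet": True}] * 195 + [{"is_retweet": False}] * 5
print(amplification_ratio(timeline))  # 0.975
```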
    Showing one more degree of sophistication, most of @BlackManTrump’s posts until November 14 appeared to be retweets with the telltale phrase “RT @” removed:


    Posts from @BlackManTrump in November 2016. Note that each tweet starts with a username and colon, suggesting that these are retweets from which the “RT @” has been removed.
Thus both @BlackManTrump and @Sunneversets100 show clear botlike behavior, combining very high activity, anonymity, and amplification.
    As a caveat, it should be noted that @BlackManTrump was silent from November 14 to December 13, 2016; when it resumed posting, it was at a far lower rate, and with a higher proportion of apparently authored tweets. It would therefore be entirely correct to say that it behaved like a bot until mid-November, but not that it is a bot now.
    Another amplification technique is to program a bot to share news stories direct from selected sites without any further comment. Direct shares are, of course, a standard part of Twitter traffic (readers are more than welcome to share this post, for example), and are not suspicious in themselves; however, an account which posts long strings of such shares is likely automated, as in this account opposed to U.S. President Donald Trump, identified in July:


    Screenshot from @ProletStrivings on August 28; note how the posts simply replicate the headlines they share, without comment. Account archived on August 28.
    4. Low posts / high results
    The bots above achieve their effect by the massive amplification of content by a single account. Another way to achieve the same effect is to create a large number of accounts which retweet the same post once each: a botnet.
    Such botnets can quickly be identified when they are used to amplify a single post, if the account which made the post is not normally active.
For example, on August 24, an account called @KirstenKellog_ (now suspended, but archived here) posted a tweet attacking U.S. media outlet ProPublica (propublica.org).


    Profile page for @KirstenKellog_, showing the number of tweets and followers, and the one visible post. Archived on August 24, 2017.
    As the above image shows, this was a very low-activity account. It had only posted 12 times; 11 of them had already been deleted. It had 76 followers, and it was not following any accounts at all.
Nevertheless, its post was retweeted and liked over 23,000 times:


The tweet, showing the number of likes and retweets. Archived on August 24, 2017.
    Similarly, the following day, another apparently Russian account posted an almost identical attack, and it scored over 12,000 retweets and likes:


    The follow-up attack. Archived on August 25, 2017. By August 28, the retweets and likes had topped 20,000.
This account was just as idle: it had posted only six tweets, the earliest on August 25, and followed just five other accounts:


    The follow-up attacker’s profile page.
    It is beyond the bounds of plausibility that two such idle accounts should be able to generate so many retweets, even given the use of hashtags such as #FakeNews and #HateGroup. This disparity between their activity and their impact suggests that the accounts which amplified them belong to a botnet.
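This activity-versus-impact gap can be put in numbers. The score below is our own illustrative heuristic, not a DFRLab metric: it simply divides the engagement one post received by the account’s own footprint, so values far above 1 point to outside amplification.

```python
def engagement_disparity(engagements: int, lifetime_posts: int, followers: int) -> float:
    """Engagement on one post divided by the account's own footprint.
    Values far above 1 suggest the amplification came from outside: a botnet."""
    footprint = max(lifetime_posts + followers, 1)
    return engagements / footprint

# @KirstenKellog_: 12 lifetime posts, 76 followers, 23,000+ retweets and likes
score = engagement_disparity(23_000, 12, 76)
print(round(score))  # 261: wildly out of proportion for such an idle account
```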
    5. Common content
    The probability that accounts belong to a single network can be confirmed by looking at their posts. If they all post the same content, or type of content, at the same time, they are probably programmed to do so.
    In the suspected botnet which amplified @KirstenKellog_, for example, many of the accounts shared identical posts such as this:






    Left to right: Identical retweets from “Gail Harrison”, “voub19” and “Jabari Washington”, who also amplified @KirstenKellog_.
Sometimes, bots share whole strings of posts in the same order. The three accounts below are part of the same anti-Trump network identified in July:






    Left to right: identical posts, in the identical order, by @CouldBeDelusion, @ProletStrivings and @FillingDCSwamp, on July 26. Note also the way in which the posts copy verbatim the headlines of the articles they share.
    On August 28, the same three accounts shared identical posts in identical order again; @ProletStrivings added a retweet to the mix:






    Left to right: Screenshots of the profiles of @CouldBeDelusion, @FillingDCSwamp and @ProletStrivings, showing the identical order of shares. Note also the text “Check out this link” in each first tweet, a likely marker of another auto-shared post. Screenshots and archives made on August 28.
    Such identical series of posts are classic signs of automation.
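Spotting such identical runs is straightforward to automate. A sketch, assuming each account’s recent posts are available as an ordered list of strings (the data collection itself is left aside):

```python
from itertools import combinations

def shared_sequences(timelines: dict, min_len: int = 3) -> bool:
    """True if any two accounts posted an identical run of min_len posts
    in the same order: a classic automation signature."""
    for a, b in combinations(timelines, 2):
        seq_a, seq_b = timelines[a], timelines[b]
        runs_a = {tuple(seq_a[i:i + min_len]) for i in range(len(seq_a) - min_len + 1)}
        if any(tuple(seq_b[i:i + min_len]) in runs_a
               for i in range(len(seq_b) - min_len + 1)):
            return True
    return False

# Toy timelines echoing the pattern in the screenshots above
net = {
    "@CouldBeDelusion": ["story A", "story B", "story C"],
    "@FillingDCSwamp":  ["story A", "story B", "story C"],
    "@ProletStrivings": ["a retweet", "story A", "story B", "story C"],
}
print(shared_sequences(net))  # True
```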
    6. The Secret Society of Silhouettes
The most primitive bots are especially easy to identify, because their creators have not bothered to upload an avatar image. Once called “eggs,” after the egg image Twitter formerly displayed for accounts without an avatar, they now resemble silhouettes.
    Some users have silhouettes on their accounts for entirely innocuous reasons; thus the presence of a silhouette account on its own is not an indicator of botness. However, if the list of accounts which retweet or like a post looks like this…


    Screenshot of the list of retweets from an @AtlanticCouncil post which was subject to a particularly blatant and hamfisted bot spike on August 28.
    … or if an account’s “Followers” page begins to look like the meeting place for the Secret Society of Silhouettes…


    Screenshot of the Followers page for Finnish journalist @JessikkaAro, after an unexpected bot visit on August 28.
    …it is a certain sign of bot activity.
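Counting silhouettes in a retweet or follower list can be scripted too. The sketch assumes each account comes as a dict carrying a `default_profile_image` flag, as the Twitter v1.1 user object did; how the list is fetched is left aside:

```python
def silhouette_share(users: list) -> float:
    """Fraction of accounts still using the default avatar."""
    if not users:
        return 0.0
    return sum(1 for u in users if u.get("default_profile_image")) / len(users)

# Nine silhouettes out of ten retweeters would be a glaring red flag
retweeters = [{"default_profile_image": True}] * 9 + [{"default_profile_image": False}]
print(silhouette_share(retweeters))  # 0.9
```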
    7. Stolen or shared photo
Other bot makers are more meticulous, and try to mask their anonymity by taking photos from other sources. A good test of an account’s authenticity is therefore to run a reverse search on its avatar picture. Using Google Chrome, right-click on the image and select “Search Google for Image”.


    Searching on Google Chrome for the photo of “Shelly Wilson”, a suspected bot.
    Using other browsers, right-click the image, select “Copy Image Address”, enter the address in a Google search and click “Search by image”.




    In either case, the search will show up pages with matching images, indicating whether the account is likely to have stolen its avatar:


    In the case of “Shelly Wilson”, a number of accounts in the same network actually used the image, confirming that they were fakes:






8. Bot’s in a name?
    A further indicator of probable botness is the handle (account name starting with “@”) that it uses. Many bots have handles which are simply alphanumeric scrambles generated by an algorithm, such as these:


    Twitter handles of some of the bots which retweeted the @KirstenKellog_ tweet. Note the only apparent name, @ToddLeal3, which will be discussed below.
    Others have handles which appear to give a name, but it does not match the screen name:






    Left to right: “Sherilyn Matthews”, “Abigayle Simmons”, and “Dorothy Potter”, whose handles call them “NicoleMcdonal”, “Monique Grieze”, and “Marina”, respectively.
    Yet others have typically male names but female images (an occurrence which appears far more common among bots than a female handle with a male image, perhaps to target male users)…






    Three more accounts in the same botnet: “Todd Leal”, “James Reese” and “Tom Mondy”, archived on August 24 and 28, 2017.
    … or male handles, but female names and images…






    Left to right, “Irma Nicholson”, “Poppy Townsend” and “Mary Shaw”, whose handles proclaim them to be David Nguyen, Adrian Ramirez and Adam Garner.
    … or something different entirely.


    “Erik Young,” a woman who loves Jesus, from the same net.
    All these indicate that the account is a fake, impersonating someone (often a young woman) to attract viewers. Identifying the type of fake, and whether it is a bot, will depend on its behavior.
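Both naming patterns above lend themselves to cheap heuristics. The thresholds and flag names below are our own illustration, not an established rule: one checks for long trailing digit blocks typical of generated handles, the other for handles sharing no token with the screen name.

```python
import re

def handle_flags(handle: str, screen_name: str) -> list:
    """Cheap heuristics on a handle; each flag is one more reason for suspicion."""
    flags = []
    # long trailing digit blocks are typical of algorithmically generated handles
    if re.search(r"\d{4,}$", handle):
        flags.append("digit-scramble")
    # a handle and display name sharing no token suggests a mismatched identity
    tokens = set(re.findall(r"[a-z]+", screen_name.lower()))
    if tokens and not any(t in handle.lower() for t in tokens):
        flags.append("name-mismatch")
    return flags

print(handle_flags("@NicoleMcdonal", "Sherilyn Matthews"))  # ['name-mismatch']
print(handle_flags("@ToddLeal3", "Todd Leal"))              # []
```

As throughout, a flag here is a prompt for closer inspection, not proof of automation.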
    9. Twitter of Babel
    Some bots are political, and only ever post from one point of view. Others, however, are commercial, and seem hired out to the highest bidder regardless of the content. Most of their posts are apolitical; but they, too, can be used to boost political tweets.
Such botnets are often marked by extreme diversity of language use. A look at the retweets posted by “Erik Young,” the “woman who loves Jesus,” for example, shows content in Arabic, English, Spanish, and French:






    A similar look at posts from the anonymous and imageless account @multimauistvols (screen name “juli komm”) shows tweets in English…


    … Spanish…


    … Arabic…


    … Swahili (according to Google Translate)…


    … Indonesian…


    … Chinese…


    … Russian…


    … and Japanese.


    In real life, anyone who has mastered all those languages probably has better things to do than advertising YouTube videos.
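A rough script-diversity check needs no language identifier at all: collecting the Unicode script family of each letter across a timeline is enough to flag an account posting in Latin, Arabic, Cyrillic, and CJK scripts at once. The sample posts are invented for illustration.

```python
import unicodedata

def scripts_used(posts: list) -> set:
    """Collect the Unicode script family of every letter across the posts.
    One account spanning many scripts is a red flag."""
    scripts = set()
    for text in posts:
        for ch in text:
            if ch.isalpha():
                name = unicodedata.name(ch, "")
                scripts.add(name.split(" ")[0])  # e.g. 'LATIN', 'ARABIC', 'CYRILLIC'
    return scripts

posts = ["great video", "فيديو رائع", "отличное видео", "すごい動画"]
print(sorted(scripts_used(posts)))
```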
    10. Commercial content
    Advertising, indeed, is a classic indicator of botnets. As noted above, some botnets appear to exist primarily for that purpose, only occasionally venturing into politics. When they do, their focus on advertising often betrays them.
    A good example is the curious net of bots which retweeted a political post from an account, @every1bets, usually devoted to gambling.
    The retweeters claimed a variety of identities, as this listing shows:


    But they all tended to post a high proportion of commercials.






    Accounts which largely show retweets like this, especially if they do so in multiple languages, are most probably members of commercial botnets, hired out to users who want to amplify or advertise their posts.
    11. Automation software
    Another clue to potential automation is the use of URL shorteners. These are primarily used to track traffic on a particular link, but the frequency with which they are used can be an indicator of automation.
    For example, one recently-exposed fake account called “Angee Dixson”, which used the avatar image of German supermodel Lorena Rae, shared a large number of far-right political posts. Every one was marked with the URL shortener ift.tt:


The ift.tt shortener belongs to IFTTT (ifttt.com), a service which allows users to automate their posts according to a number of criteria (for example, retweeting any post with a given hashtag). A timeline which is full of ift.tt shorteners is therefore likely to belong to a bot.
Other URL shorteners can also indicate automation, if they occur repeatedly throughout the timeline. The shortener ow.ly, for example, belongs to the social media management tool Hootsuite; some bots have been known to post long strings of ow.ly shares from websites, indicating likely automation. Twitter’s own TweetDeck facility allows users to embed a variety of URL shorteners, such as bit.ly or tinyurl.com.
    Yet again, the use of such shorteners is part and parcel of online life, but an account which obsessively shares news articles using the same shortener should be assessed for other indications that it is a bot.
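Counting shortener domains across a timeline makes the pattern obvious at a glance. The domain list below covers only the shorteners named above, and the sample tweets are invented:

```python
import re
from collections import Counter

SHORTENERS = {"ift.tt", "ow.ly", "bit.ly", "tinyurl.com"}

def shortener_counts(posts: list) -> Counter:
    """Count how often each known shortener domain appears across a timeline."""
    counts = Counter()
    for text in posts:
        for domain in re.findall(r"https?://([^/\s]+)", text):
            if domain.lower() in SHORTENERS:
                counts[domain.lower()] += 1
    return counts

timeline = ["MAGA http://ift.tt/abc", "Sad! http://ift.tt/def", "read http://example.com/x"]
print(shortener_counts(timeline))  # Counter({'ift.tt': 2})
```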
    12. Retweets and likes
    A final indicator that a botnet is at work can be gathered by comparing the retweets and likes of a particular post. Some bots are programmed to both retweet and like the same tweet; in such cases, the number of retweets and likes will be almost identical, and the series of accounts which performed the retweets and likes may also match, as in this example here:




Left, retweets, and right, likes, of the second account to attack ProPublica.
    In this example, the variation between the number of retweets and likes is just 11 responses — a difference of less than 0.1 percent. Exactly the same accounts retweeted and liked the tweet, in the same order, and at the same time. Across a sample of 13,000 users, this is too unlikely to be a coincidence. It indicates the presence of a coordinated network, all programmed to like, and retweet, the same attack.
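The comparison of the two lists can be sketched as a positional match rate; the score and the sample handles are illustrative, not drawn from the case above.

```python
def engagement_overlap(retweeters: list, likers: list) -> float:
    """Fraction of positions where the retweet and like lists name the same
    account in the same order; near 1.0 suggests coordinated automation."""
    if not retweeters or not likers:
        return 0.0
    matches = sum(1 for r, l in zip(retweeters, likers) if r == l)
    return matches / max(len(retweeters), len(likers))

rt  = ["@a8kz09", "@ToddLeal3", "@bq11x", "@mm20a1"]
fav = ["@a8kz09", "@ToddLeal3", "@bq11x", "@mm20a1"]
print(engagement_overlap(rt, fav))  # 1.0
```

Genuine audiences retweet and like in different orders and different numbers; a score near 1.0 over thousands of accounts is the fingerprint of a programmed network.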
    Conclusion
    Bots are an inseparable part of life on Twitter. Many are entirely legitimate; those which are not legitimate tend to have key characteristics in common.
    The most common indicators are activity, anonymity, and amplification, the “Three A’s” of bot identification; but other criteria also exist. The use of stolen images, alphanumeric handles, and mismatched names can reveal a fake account; so, too, can a slew of commercial posts, or posts in a wide range of languages.
    What is most important, however, is awareness. Users who can identify bots themselves are less likely to be manipulated by them; they may even be able to report the botnets and have them shut down. Ultimately, bots exist to influence human users. The purpose of this article is to help human users spot the signs.


  7. #3 | Top
    Join Date
    May 2018
    Posts
    30,844
    Thanks
    11,894
    Thanked 8,522 Times in 6,611 Posts
    Groans
    382
    Groaned 2,080 Times in 1,911 Posts
    Blog Entries
    1


    Thanks, Natasha.


  9. #4 | Top
    Join Date
    Aug 2017
    Posts
    8,497
    Thanks
    796
    Thanked 3,180 Times in 2,409 Posts
    Groans
    376
    Groaned 244 Times in 225 Posts


    Bots are only conservatives. Democrats would never dream of having such a thing.

  10. #5 | Top
    Join Date
    Dec 2017
    Location
    Flyover Country
    Posts
    5,568
    Thanks
    3,383
    Thanked 3,184 Times in 2,211 Posts
    Groans
    222
    Groaned 160 Times in 155 Posts


Quote Originally Posted by evince
    How to spot a bot
    Pssst. I'll teach you how to spot a twat. Look in a mirror!

  11. #6 | Top

Quote Originally Posted by Kacper
    Bots are only conservatives. Democrats would never dream of having such a thing.
    we don't need to cheat


    we have the voter advantage


    all we have to do is get people to vote and stop you and the Russians from cheating us.


    Democracy works when evil cant tamper with it



  16. #8 | Top
    Join Date
    Dec 2006
    Posts
    38,489
    Thanks
    1,315
    Thanked 4,083 Times in 3,166 Posts
    Groans
    6
    Groaned 259 Times in 240 Posts


    what do we do about globalists though?
    Morality is a set of attitudes and behaviors which facilitate voluntary, cooperative and mutually beneficial relationships. --AssHatZombie

    Obamagate is Operation Crossfire Hurricane

    "AssHat rocks and is fun to have around." -- Damocles

    https://qanon.pub <--- qanon project


  19. #10 | Top

Quote Originally Posted by evince
    what do you suggest shit swallower
    ignoring their stupidity.

  20. #11 | Top

Quote Originally Posted by AssHatZombie
    ignoring their stupidity.
    outline why you think a world working together is bad MPBF?


    because you are a chaos want to be king who in reality is merely an internets cockroach


  22. #12 | Top

Quote Originally Posted by evince
    outline why you think a world working together is bad MPBF?


    because you are a chaos want to be king who in reality is merely an internets cockroach
    soveriegn nation states have always worked together.

    globalism could have been done well, but the oligarchs running it now have mass murder, economic enslavement, and totalitarianism as their guiding principle.

    putting all free people out of work with offshore slavery from despotic regimes is just a bad idea.

  23. #13 | Top

Quote Originally Posted by evince
    we don't need to cheat


    we have the voter advantage


    all we have to do is get people to vote and stop you and the Russians from cheating us.


    Democracy works when evil cant tamper with it
    but your voters are illegaly here.


  26. #15 | Top
    Join Date
    Jul 2009
    Posts
    96,221
    Thanks
    6,674
    Thanked 27,659 Times in 21,999 Posts
    Groans
    2,251
    Groaned 2,222 Times in 2,118 Posts


    they post under the name "evince"......

