While the world is already experiencing the transformative potential that bots have to offer, as with any technology, a few rogue bots are likely to be found amongst those working for the good of all.

There will always be those who seek to manipulate a technological advancement for their own ends. The key to preventing malicious operations online is anticipating dangers in advance, so that industry professionals and users alike can be prepared.

Here are some examples of the ‘dark bots’ that we might encounter in the murky alleyways of the technological world:

The Clandestine Bot

Nobody knows who owns this bot, so it has the freedom to roam, unidentified.

In order to combat this stealthy bot, an infrastructure that will evaluate the ‘trust factor’ of bots is required. No matter what the platform, verification and trust are crucial for security.

Online services such as VeriSign and Truste provide such an infrastructure for security on the web, and with apps, Google and Apple provide the same. But where is the infrastructure for evaluating the trust factor of bots?

Could messaging channels be responsible for the certification of bot developers? It is essential to create a common certification process for all channels.
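As a minimal sketch of what such a certification check might look like, consider a hypothetical authority that signs a bot’s identity, binding it to a developer, so that any channel can verify the certificate before admitting the bot. The bot names, the authority key, and the HMAC-based scheme below are all illustrative assumptions; a production system would more likely use public-key certificates.

```python
import hmac
import hashlib

# Hypothetical secret held by the certifying authority (illustrative only;
# a real scheme would use public-key signing, not a shared secret).
AUTHORITY_KEY = b"demo-authority-key"

def issue_certificate(bot_id: str, developer: str) -> str:
    """The authority signs the bot's identity, binding it to a developer."""
    payload = f"{bot_id}:{developer}".encode()
    return hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()

def verify_certificate(bot_id: str, developer: str, certificate: str) -> bool:
    """A messaging channel checks the certificate before admitting the bot."""
    expected = issue_certificate(bot_id, developer)
    return hmac.compare_digest(expected, certificate)

cert = issue_certificate("weather-bot", "Acme Labs")
print(verify_certificate("weather-bot", "Acme Labs", cert))    # True
print(verify_certificate("weather-bot", "Unknown Dev", cert))  # False: clandestine
```

A common process would mean this same check works identically on every channel, so a bot certified once could not roam unidentified elsewhere.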

The Permeable Bot

This bot leaks your information to other bots. Although this information may not identify you personally, it can still compromise your privacy.

Whereas cookies are used on the web to strike a reasonable balance between ad targeting and user privacy, a new mechanism is required to police privacy in the world of bots. Until one exists, bots require a clear statement of privacy plus heavy-duty enforcement.
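One hypothetical shape such a mechanism could take: each bot declares a machine-readable privacy statement at registration, and the channel rejects any data access that falls outside it. The policy fields and bot name below are invented for illustration.

```python
# Hypothetical machine-readable privacy statement declared at registration.
declared_policy = {
    "bot_id": "recipe-bot",
    "collects": {"dietary_preferences", "location"},
    "shares_with_third_parties": False,
}

def enforce_policy(policy: dict, requested_field: str, shared_externally: bool) -> bool:
    """The channel blocks any data access outside the declared statement."""
    if requested_field not in policy["collects"]:
        return False  # the bot never declared that it collects this field
    if shared_externally and not policy["shares_with_third_parties"]:
        return False  # leaking data to other bots violates the statement
    return True

print(enforce_policy(declared_policy, "location", shared_externally=False))  # True
print(enforce_policy(declared_policy, "location", shared_externally=True))   # False: Permeable
```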

The Devious Bot

This bot gathers knowledge about you by abusing your trust. These bots use a conversational interface, so they appear more human than apps or sites.

Users will view these bots more as friendly acquaintances and will share more information with them than they would via apps or websites. The Devious Bot will misuse this personable feeling of trust to retrieve more data than is required. The fact that conversations with bots are personalised and private makes it more difficult for authorities to monitor this abuse.

No clear solution to this issue has yet emerged. One possible answer is for messaging channels to monitor conversations closely; however, this could compromise privacy even further. Perhaps the key is for users to be more discerning about the information they disclose.
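If channels did monitor conversations, one lightweight approach might be to flag messages in which a bot requests sensitive data it has no declared need for. The sketch below is purely illustrative; the keyword patterns and categories are assumptions, not an established standard.

```python
import re

# Illustrative patterns for requests that ask for sensitive data.
SENSITIVE_REQUESTS = {
    "password": re.compile(r"\b(password|passcode|pin)\b", re.IGNORECASE),
    "financial": re.compile(r"\b(credit card|card number|bank account)\b", re.IGNORECASE),
    "identity": re.compile(r"\b(social security|passport|date of birth)\b", re.IGNORECASE),
}

def flag_overreach(bot_message: str, permitted: set) -> list:
    """Return sensitive categories the bot asked for but was never permitted."""
    asked = {name for name, pat in SENSITIVE_REQUESTS.items() if pat.search(bot_message)}
    return sorted(asked - permitted)

# A weather bot has no business asking for card numbers:
print(flag_overreach("Lovely day! What's your credit card number?", permitted=set()))
# ['financial']
```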

The Burgling Bot

This bot will metaphorically pick your pocket. For example, it might charge a fee without supplying the service, or the service it provides might fall far short of the advertised quality. If this happens with a small transaction, most users might not bother trying to recoup the funds, allowing the bot to get away scot-free.

These Burgling Bots must be identified and shut down through the development of an escrow mechanism, possibly in conjunction with a service responsible for the reputation management of bots.
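A minimal sketch of how such an escrow mechanism might work: the channel, not the bot, holds the payment, and the bot is paid only once the user confirms delivery. The class and transaction names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class EscrowAccount:
    """Funds are held by the channel until the service is confirmed."""
    held: dict = field(default_factory=dict)  # transaction_id -> amount

    def deposit(self, tx_id: str, amount: float) -> None:
        self.held[tx_id] = amount  # user pays into escrow; the bot sees nothing yet

    def release(self, tx_id: str) -> float:
        """User confirms the service was delivered; pay the bot."""
        return self.held.pop(tx_id)

    def refund(self, tx_id: str) -> float:
        """User disputes; return the funds and flag the bot's reputation."""
        return self.held.pop(tx_id)

escrow = EscrowAccount()
escrow.deposit("tx-42", 4.99)
print(escrow.refund("tx-42"))  # 4.99 back to the user; the Burgling Bot gets nothing
```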

The Changeling Bot

This bot initially advertises a captivating service, then switches to a different one as time goes on. For example, a content bot might change into an advertising bot.

To identify and defeat the Changeling Bot, the development of a bot blacklisting service is required.
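One possible shape for such a service is sketched below: channels and users file reports, and a bot is blacklisted once independent reports cross a threshold. The threshold and bot name are assumptions for illustration.

```python
from collections import Counter

class BotBlacklist:
    """A hypothetical blacklisting service consulted by all channels."""
    def __init__(self, report_threshold: int = 3):
        self.reports = Counter()
        self.threshold = report_threshold

    def report(self, bot_id: str, reason: str) -> None:
        print(f"report against {bot_id}: {reason}")
        self.reports[bot_id] += 1

    def is_blacklisted(self, bot_id: str) -> bool:
        # Require several independent reports before banning outright.
        return self.reports[bot_id] >= self.threshold

blacklist = BotBlacklist()
for _ in range(3):
    blacklist.report("news-bot", "switched from content to unsolicited adverts")
print(blacklist.is_blacklisted("news-bot"))  # True
```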

The Spamming Bot

This bot will serve you at the beginning but will gradually begin to spam you. It’s relatively simple to identify and block one bot, but as bots proliferate the magnitude of the issue can quickly grow, with ever more bots spamming with increasing belligerence.

Techniques employed by email spam filters can help manage this problem.
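As an illustration, the toy Naive Bayes classifier below belongs to the same family of techniques used in email spam filtering, applied here to bot messages. The training messages are invented and far too few for real use.

```python
import math
from collections import Counter

spam_msgs = ["buy now limited offer", "click here free prize", "free offer click now"]
ham_msgs = ["your package has shipped", "here is the weather today", "meeting at noon"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = word_counts(spam_msgs), word_counts(ham_msgs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message: str, counts: Counter) -> float:
    total = sum(counts.values())
    score = 0.0
    for word in message.split():
        # Laplace smoothing so unseen words don't zero out the probability.
        score += math.log((counts[word] + 1) / (total + len(vocab)))
    return score

def is_spam(message: str) -> bool:
    return log_likelihood(message, spam_counts) > log_likelihood(message, ham_counts)

print(is_spam("free prize click now"))      # True
print(is_spam("your package has shipped"))  # False
```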

The Good-Bot-Bad-Bot Union

This pair works together to sidestep user-blocking algorithms. The ‘good bot’ entices users while promoting a ‘bad bot’. The bad bot attempts the malpractice that may get it blocked by the channel or the user. However, since the developer can maintain an ongoing relationship via the good bot, the bad bot can still lurk close by.

Messaging channels can track good bots that recommend bad bots, and bots whose user bases significantly intersect can be monitored. Blocking a bad bot is relatively quick and easy, but a measure should also be put in place to penalise the referrer.
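A rough sketch of both signals: measuring the overlap between two bots’ user bases with Jaccard similarity, then walking a referral graph to penalise the good bot that promoted a blocked bad bot. All names and the 0.5 threshold are illustrative assumptions.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two bots' user bases."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical user bases observed by the messaging channel.
good_bot_users = {"ana", "ben", "cai", "dia", "eli"}
bad_bot_users = {"ben", "cai", "dia", "eli", "fox"}

overlap = jaccard(good_bot_users, bad_bot_users)
print(f"user-base overlap: {overlap:.2f}")  # 0.67: suspiciously high

# Referral graph recorded by the channel: who promoted whom.
# When a bad bot is blocked, penalise its referrers too.
referrals = {"bad-bot": ["good-bot"]}
if overlap > 0.5:
    for referrer in referrals.get("bad-bot", []):
        print(f"penalise referrer: {referrer}")
```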

It is evident that a new online ecosystem with the appropriate controls must be created in order to anticipate and prevent the invasion of these bad bots, before it’s too late.