Advanced chatbots such as ChatGPT have shown us that the technology is indispensable. The trick is to keep them reined in …
When a pioneering developer of advanced AI quits Google to warn the world about the dangers of AI automation, it is time for the world to take notice of the potential excesses that humans are capable of bringing upon themselves.
With the super-acceleration of global digitalization driving an incessant need for generative AI capabilities, the tendency and temptation to cede too much autonomy to bots will be strong. Against this backdrop, Amit Yossi Siva Levi, co-founder and CTO of Immue (a bot detection firm), shares with CybersecAsia.net his views on safely straddling the two sides of the bot technology coin.
CybersecAsia: The malicious use of bot technology has evolved fast and furious: can you share your insider perspectives on this dark side of the technology?
Amit Yossi Siva Levi (AYSL): Part of that evolution is due to the fact that it is a hot area in the cybercriminal community. And part of it is just that it makes sense, for two reasons: impact and ROI.
- Impact: A bot dramatically expands the scale and speed at which someone can work. It automates pesky parts of an attack, greatly increasing efficiency — meaning more thought can go into continual improvements and new tricks (which are often, in time, incorporated into future bots).
- ROI: A bot maker can use the bot themselves in attacks that steal data or items. Or they can sell bots they have created to other criminals for a steady income. They can also rent out the use of their bot on a subscription basis. One way and another, it creates great returns on investment.
Due to both of these factors, it is worthwhile for criminals to make better and more effective bots. It used to be that DDoS attacks were popular; now you see that kind of “scale attack mindset” being expressed through bots instead.
You can use a bot for practically anything nasty. Steal items, scrape details, create accounts, attempt logins with stolen username/email and password combinations — you name it, they can build it. And they, or someone else, can use it at scale.
CybersecAsia: What is the state of good-vs-bad bot development now, in your experience?
AYSL: I think what is most alarming right now is how sophisticated the developers are becoming. There is a big community around bots now, and it has matured. There is a lot of experience and knowledge, as well as creativity and drive. By and large, it is a helpful community: they share advice, suggest fixes, and share code snippets.
The script kiddies I met back when I was a teenager myself are all grown up now, and some of them are still in this line of work. The more ROI bot makers get, the more gets invested back into the community and bot creation. It is sort of like a successful startup field in that way.
Of course, the anti-bot companies and solutions are evolving as well, and they are getting smarter, so the arms race continues. However, for a lot of the bot creators (the skilled ones), that is a feature rather than a bug. They find the race exhilarating. I admit, I do too.
CybersecAsia: In what direction do you see bot abuse evolving in the near future?
AYSL: I think the main direction I see coming is the move toward multi-purpose bots. It used to be that a bot creator would write a program for something specific. The bot would be for account takeover (ATO) — and often, for one part of the ATO attack. Or it may be for creating fake reviews or whatever. But it only did one job.
Now, I am seeing a trend towards making bots that you can use for lots of things, which is great for user experience on the criminal side. I think that trend is going to continue and become the new norm, at least for a while.
Of course, the more this happens, the more likely it is to draw attention from the bot detection and defense side.
CybersecAsia: What are some of the greatest lessons about bot attacks that defenders and AI administrators need to know?
AYSL: That bots are not just about bots. Whatever fraud your company is struggling with, it probably starts with bots or has bots in the mix somewhere. If you miss the bot part of the picture, or if you silo your bot detection separately from the rest of your fraud-fighting work, you are going to miss out on what is really going on.
It is easiest to see this with ATO: you may be worrying about the fraudulent transactions that follow an account takeover, but the attack actually started days or weeks earlier, with a credential stuffing bot attack that looked inconsequential at the time. The fraudsters were not doing nothing; they were preparing for the later attack.
The same pattern happens with other kinds of fraud as well. You need to understand, identify, and include bots as part of your fraud-fighting strategy.
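The credential-stuffing signal Amit describes — one source hammering logins across many different accounts — can be illustrated with a minimal detection sketch. This is not Immue's method; the thresholds and event format here are assumptions for illustration only.

```python
from collections import defaultdict

# Hypothetical thresholds -- a real system tunes these to its own traffic.
MIN_FAILURES = 20
MIN_DISTINCT_USERS = 15

def flag_credential_stuffing(events):
    """Flag source IPs whose failed logins span many distinct usernames:
    a classic credential-stuffing pattern (one source, many accounts)."""
    failed_users = defaultdict(set)  # ip -> usernames with failed logins
    failure_count = defaultdict(int)  # ip -> total failed attempts
    for ip, username, success in events:
        if not success:
            failed_users[ip].add(username)
            failure_count[ip] += 1
    return {
        ip for ip in failure_count
        if failure_count[ip] >= MIN_FAILURES
        and len(failed_users[ip]) >= MIN_DISTINCT_USERS
    }

# Example: one IP spraying leaked credentials across many accounts,
# versus a legitimate user mistyping their own password a few times.
events = [("203.0.113.9", f"user{i}", False) for i in range(25)]
events += [("198.51.100.4", "alice", False)] * 3
print(flag_credential_stuffing(events))  # -> {'203.0.113.9'}
```

The point of the sketch matches the interview: the stuffing attempt itself causes no visible loss, so flagging it only pays off if the signal feeds into the broader fraud-fighting picture rather than sitting in a silo.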
As someone who spends so much time hanging out on the Dark Net, I feel that the challenge presented by bots is growing. But the attitude of the fraud-fighting and cybersecurity community is evolving quickly, and we are improving not only in technological sophistication but also in awareness of bots, what they can do, and why it matters.
The more clearly they see the problem bots represent, the more confident I am that the solutions we find will be powerful, cutting edge, and effective.
CybersecAsia thanks Amit for sharing his thoughts on the duality of bot evolution.