AI in Telecoms Security

Opportunities and Risks
Written by Neil Anderson

Since the release of OpenAI’s GPT-1 in June 2018, innovation in the AI landscape has been accelerating, with more use cases and workloads coming within the capability of the new AI models being released on what seems like a daily basis.

The advent of Large Language Models (LLMs) has made AI more accessible to ordinary users than ever before, and in spectacular fashion. Agent-based or “agentic” AI opens the door to models that can operate autonomously, further increasing the utility of the technology, while the ongoing development of older AI and machine learning techniques, such as artificial neural networks, has similarly brought additional benefits and capabilities.

In the field of cybersecurity—and especially for telecommunications, an arena where threats multiply by the minute—the promise of advanced AI agents capable of detecting and automatically interdicting threats in real time is especially tantalizing. But, like many great leaps forward in technology, these advances carry both opportunities and risks.

Let’s start with the opportunities. Threat actors are continually adapting their attacks to new environments, typically with the objective of either monetizing malicious access to systems or disrupting them. For a critical service like telecommunications—often thought of as Critical National Infrastructure (CNI)—protecting against these threats is of paramount importance.

New AI capabilities can help telecoms firms in many ways, including fraud detection, network monitoring, anomaly detection, threat identification, research, and automated response to intrusions. Many cybersecurity firms are seeking to leverage new AI capabilities and have announced the availability of security-focused chatbots, such as Streaming Defense’s SDAIX, with the objective of assisting humans in the threat detection and incident response role.
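As a toy illustration of the anomaly-detection idea, a robust statistical baseline can flag outliers in a traffic metric. The thresholds and data below are invented for illustration; real deployments use far richer models than a single univariate score.

```python
from statistics import median

def flag_anomalies(samples, threshold=3.5):
    """Flag values with a modified z-score above `threshold`.

    Uses the median absolute deviation (MAD), which, unlike the mean and
    standard deviation, is not dragged toward a single extreme value.
    """
    med = median(samples)
    mad = median(abs(x - med) for x in samples)
    if mad == 0:
        return []
    # 0.6745 scales MAD to be comparable with a standard deviation
    return [x for x in samples if 0.6745 * abs(x - med) / mad > threshold]

# Hourly login counts for a subscriber portal; the final burst is suspicious.
logins = [102, 98, 110, 95, 105, 101, 99, 103, 5000]
print(flag_anomalies(logins))  # -> [5000]
```

The MAD-based score matters here: a plain mean-and-standard-deviation test can be masked by the very outlier it is trying to detect.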

So, you can imagine a scenario where an advanced machine learning algorithm detects an anomaly associated with malicious activity, and hands off to an agent which automatically remediates the threat and then flags it to the Security Team for further analysis. This is exactly the promise of Streaming Defense’s Attack Operations Theater, which combines machine learning, real-time visibility across the network, and privately trained LLMs to deliver the ability for security staff to interdict security threats as they happen.
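The hand-off just described can be sketched in a few lines. The event fields, detection rule, and function names below are hypothetical illustrations, not Streaming Defense’s actual API.

```python
def handle_event(event, detector, responder, notify):
    """Run one event through the detect -> remediate -> escalate flow."""
    if detector(event):
        responder(event)   # automatic remediation, e.g. block the source IP
        notify(event)      # flag to the Security Team for further analysis
        return "remediated"
    return "ignored"

blocked, alerts = [], []
result = handle_event(
    {"src_ip": "203.0.113.7", "bytes_out": 10**9},   # invented sample event
    detector=lambda e: e["bytes_out"] > 10**8,       # crude volumetric rule
    responder=lambda e: blocked.append(e["src_ip"]),
    notify=lambda e: alerts.append(e),
)
print(result, blocked)  # -> remediated ['203.0.113.7']
```

The point of the structure is that remediation and escalation always happen together: the agent acts immediately, but a human still reviews every action.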

AI assistants are often competent programmers and have a broad range of knowledge that can be extended through retrieval-augmented generation (RAG) and fine-tuning. They are also capable of identifying vulnerabilities in code faster than human testers and, crucially, don’t get bored doing so.
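To make the RAG idea concrete, the retrieval step can be sketched with simple keyword overlap; a real system would use vector embeddings, and the document snippets here are invented examples.

```python
def retrieve(query, docs, k=2):
    """Return the k snippets sharing the most words with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

docs = [
    "SIM swap fraud requires an immediate account lock.",
    "Roaming tariffs are billed per zone.",
    "Report suspected sim swap cases to the fraud desk.",
]
context = retrieve("how do we respond to sim swap fraud", docs)
# The retrieved snippets are prepended so the model answers from them,
# grounding its reply in your own documentation rather than its training data.
prompt = "Answer using only this context:\n" + "\n".join(context)
```

This is why RAG reduces (though does not eliminate) hallucination: the model is steered toward text you supplied rather than whatever its weights happen to contain.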

To pick a more fun example, imagine an AI agent with voice capabilities that can be used to waste scammers’ time, such as Virgin Media O2’s Daisy, which has been frustrating scammers since last year.

Is it therefore fair to say that, with the new AI capabilities arriving on the market, the security possibilities are limited only by our imagination? Doesn’t this mean the end of deploying complex security point products that all too often fail to deliver on their promises?

Well, sadly no. As with all new technologies, adopting AI into the cybersecurity role comes with challenges. The most immediately obvious is that it increases the attack surface of your organization. An attacker may be able to gain access to your AI and manipulate it into providing sensitive data, compromise your security posture, or simply take the system offline, which may leave you without the resources to respond to a future intrusion.

Another major consideration is that, even with paid subscriptions, many AI companies will take any data put into their models and use it for future training. This loss of data sovereignty is compounded by the attraction of cloud-based AI systems, which avoid the expense of building costly AI infrastructure but often prove a false economy: you surrender control of any data put into the model. That loss of sovereignty poses major risks to any business, but especially to telecoms firms, which are typically subject to additional regulatory attention.
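Where a third-party model must be used, one partial mitigation is to redact obvious identifiers before any prompt leaves your boundary. The patterns below are deliberately simplistic examples, not a complete PII filter.

```python
import re

# Illustrative redaction pass run on text before it is sent to an external
# model, so a leaked or retained prompt reveals less about your customers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "MSISDN": re.compile(r"\+?\d{10,15}"),  # naive phone-number pattern
}

def redact(text):
    for label, pat in PATTERNS.items():
        text = pat.sub(f"<{label}>", text)
    return text

print(redact("Customer j.doe@example.com on +447700900123 reports fraud."))
# -> Customer <EMAIL> on <MSISDN> reports fraud.
```

Redaction complements, rather than replaces, contractual and architectural controls over where your data goes.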

Generic models such as ChatGPT, Copilot, and others have a very broad range of knowledge but a poor level of expertise. They are easily biased by their training data, which is becoming scarcer now that most publicly available data has already been consumed, and AI companies are increasingly resorting to distillation—the process of training AI models on the outputs of earlier models—to make progress. This has the advantage of producing a much more efficient model at the expense of real expertise. DeepSeek’s R1 model is a great example of both problems: it promises extremely impressive performance on cheaper hardware than its competitors, but at the cost of a lower level of expertise and the inclusion of intentional bias toward Chinese government positions on certain matters.

Even models trained in the standard way are subject to “hallucinations” and will sometimes invent answers or guess if they do not know the correct answer. This is a particular problem when the models are used by non-experts—a model’s responses are often highly plausible, which can lead a non-expert user to rely on answers that are wrong. In specialist businesses, such as telecoms or cybersecurity, this could have disastrous results. It has led to the creation of new disciplines, such as prompt engineering, intended to drive better results from AI models. While this increases the utility of the technology, it also creates new skills gaps: these functions are highly technical, and because the discipline is so new, people with these skills are relatively rare.
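A tiny example of the kind of guardrail prompt engineering produces: instruct the model to admit uncertainty rather than guess. The wording and function name are illustrative only.

```python
def grounded_prompt(question, context):
    """Build a prompt that forbids answers beyond the supplied context."""
    return (
        "Answer strictly from the context below. If the context does not "
        "contain the answer, reply 'unknown' rather than guessing.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

print(grounded_prompt("Which port does SIP use?",
                      "SIP signalling commonly uses port 5060."))
```

Instructions like this do not eliminate hallucination, but they give the model an explicit, acceptable way to decline instead of inventing a plausible-sounding answer.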

Based on these potential risks, how can you take advantage of the opportunities presented by AI to help secure your business? There are several methods this author would recommend.

Firstly, have a clear strategy. If your business already has an overall AI strategy, whatever you choose to implement for security should align with it. You should have a good idea of exactly what you want to achieve with AI, preferably aligned with an organizational threat model that can be used to derive security use cases (or, more simply, things you want to prevent), which in turn identify opportunities for AI enhancement. This allows you to start small and expand. For example, don’t get rid of your Security Operations Centre and fire all your analysts in favour of an AI-based approach. Instead, start the migration to an Attack Operations Theatre that combines real-time visibility, machine learning, and AI to enhance the capabilities of your security analysts, allowing them to detect threats across your network and the networks of your customers that they would otherwise simply never have seen.

It is also critical to maintain sovereignty of the data you put into AI models and to understand how your data and that of your customers will be used by the vendor. This has two main benefits: it reduces your exposure to supply chain attacks, and it reduces the risk that a breach of your AI provider will lead to regulatory action against your business. Using private models such as SDAIX is a great way to achieve this, with the added benefit that you can also fine-tune and customize the model with your own documentation and knowledge. By doing so, you deepen the knowledge of the model and reduce the likelihood of it improvising or inventing answers. Using a private local model also reduces the potential attack surface of your AI operations, and you can even put the model behind an airgap to minimize the threat of an attacker gaining access to your AI remotely.

Finally, consider the impact on your staff: do they require training? Do you need to bring in new skill sets to work with AI systems? Are there opportunities to reduce headcount in other areas? How will adopting AI in this way make your employees’ lives easier, and how will it improve your customers’ security? All these aspects should be considered and an effective communication plan developed to describe your implementation and the benefits it will bring.

There are great opportunities to leverage AI to improve telecommunications security, and by working with the strengths of AI systems while being aware of their weaknesses and the risks they can present, you can effectively use AI to improve your business’s security posture. The key to achieving this is implementing an AI security strategy that fits with the rest of your business. By aligning your AI implementation with a known set of security use cases, you will ensure that your security personnel can make an immediate impact.

AI is a new frontier in cybersecurity, and by leveraging its high pace of innovation in the right way, you can have something to shout about and bring benefits both to the field of cybersecurity and that of telecommunications.

Sidebar:
Next-generation Cybersecurity Solutions from Streaming Defense
Streaming Defense is redefining cybersecurity by transforming traditional Security Operations Centers (SOCs) into proactive Attack Operations Theaters (AOTs). Our cutting-edge AI-driven solutions provide real-time threat detection, network-wide visibility, and automated response, ensuring businesses stay ahead of evolving cyber threats.

Designed for organizations facing sophisticated attacks, our platform offers wire-speed protection, continuous exposure management, and full-stack security insights, enabling security teams to neutralize threats before they escalate.

Whether securing critical infrastructure, operational technology (OT) environments, telecom networks, or supply chains, Streaming Defense delivers unparalleled threat visibility, automated containment, and compliance enforcement.

By integrating advanced machine learning, private LLMs, and behavioral analytics, we empower businesses to see, contain, and eliminate cyber threats in real time, at wire-speed—without disruption.
