The architecture gap: why your security gateway and your training program have never shared a single data point

Your security gateway logs every threat targeting your employees. Your training platform runs on a quarterly calendar. These two systems were built for different buyers, measured by different metrics, and were never designed to exchange data — and that gap is where incidents happen.
Security awareness
Social engineering
Human risk management
Written By:
Natalia Bochan

TL;DR

Most enterprise security stacks run two systems designed to address human risk: a gateway that detects threats targeting employees, and a training platform that tries to change employee behavior. These systems were built separately, bought separately, and measured by completely different metrics. As a result, they have never shared a single data point — and that gap produces specific, predictable failures every week. This post names the gap, explains why it exists, and gives you four questions to test whether your current stack has it.

A targeted phishing campaign hits your organization. Over three weeks, the same employee receives fourteen social engineering attempts, each one more tailored than the last. Your gateway catches and blocks twelve of them. The behavioral signals are clear: this user is actively targeted, probably by a persistent threat actor who has already done reconnaissance.

Your training platform knows none of this.

That employee is enrolled in the same annual security awareness module as everyone else. Their training record shows 100% completion. No alert was raised. No intervention was triggered. The gateway's signal — high targeting frequency, specific social engineering vector — never reached the training system.

When the fifteenth attempt gets through and a credential is compromised, the incident report will show that training was current and compliant. But the training program never failed. It was never told there was a risk.

This is the architecture gap. It exists in the vast majority of enterprise security stacks right now. And it is not a configuration issue. It is a structural consequence of how these two categories of software came to exist.

Two systems built for two different organizations

Secure email gateways and web proxies were built for the security operations center. The buyers were security engineers and CISOs. The success metric was threats blocked.

Security awareness training platforms came from the compliance and HR side of the organization. The buyers were compliance directors and legal teams. The success metric was training completion at a minimum, and simulation results reported at best.

In most organizations, these were separate budget lines, separate vendor relationships, and separate renewal cycles. The CISO who owned the gateway often had no visibility into the learning management system. The compliance team that managed training had no access to gateway logs. There was no shared data model and no organizational reason for either system to ask for what the other one knew.

Look at what each system logs and the incompatibility becomes structural. A gateway logs threat events keyed to message identifiers, sending domains, IP addresses, and verdicts. A training platform logs completion records, simulation click rates, and post-course quiz scores keyed to employee identifiers. These two schemas have no native common join key. Connecting them requires agreeing on a shared user identity layer and a consistent definition of what constitutes a risk signal — work that most organizations have never done.
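To make the missing join concrete, here is a minimal sketch of the work a shared identity layer has to do. All field names (msg_id, recipient, verdict, employee_id, email) are illustrative assumptions, not any vendor's actual schema:

```python
from collections import Counter

# Gateway side: threat events keyed to message IDs and recipients.
gateway_events = [
    {"msg_id": "m-101", "recipient": "a.kim@example.com", "verdict": "blocked"},
    {"msg_id": "m-102", "recipient": "a.kim@example.com", "verdict": "blocked"},
    {"msg_id": "m-103", "recipient": "b.lee@example.com", "verdict": "delivered"},
]

# Training side: completion records keyed to employee identifiers.
training_records = [
    {"employee_id": "E-001", "email": "a.kim@example.com", "completed": True},
    {"employee_id": "E-002", "email": "b.lee@example.com", "completed": True},
]

# The shared identity layer: map gateway recipients onto employee IDs.
identity = {r["email"]: r["employee_id"] for r in training_records}

# The join neither system performs natively: threat events per employee.
threats_per_employee = Counter(
    identity[e["recipient"]]
    for e in gateway_events
    if e["recipient"] in identity
)
print(threats_per_employee)  # Counter({'E-001': 2, 'E-002': 1})
```

Trivial as the join looks here, in practice it requires identity reconciliation (shared mailboxes, aliases, contractors) that neither product category was built to own.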

According to a report by Blink Ops, 72% of organizations report that security and operational data remain siloed, and the majority state that siloed data directly slows incident response and degrades security posture. (Blink Ops, 2024)

The gap is not a bug. It is the predictable output of two product categories that were never designed around a shared definition of what they are collectively trying to accomplish.

What falls through the gap

The gap produces three specific failures that appear regularly in any organization running a conventional two-system stack.

The threat event that generated no training response. Your gateway blocked six phishing attempts targeting a specific user this week. Your training platform assigned that user the same module as the rest of the department. The gateway signal — high targeting frequency, specific attack vector — never triggered a training response. The user is being actively pressured by an attacker. Their training program is running on a quarterly calendar.

The high-risk user your training platform has never identified. Security operations teams develop a working knowledge of behavioral outliers: users who repeatedly trigger alerts, interact with suspicious senders, or show patterns that suggest elevated susceptibility. That knowledge lives in the SIEM and in gateway logs. It does not automatically flow into the training platform. That user has a clean training record. Under the training platform's model, they are not high risk. Under the gateway's signal model, they are one of the most exposed people in the organization. These two assessments have never been reconciled.

The individual risk score you cannot complete. A meaningful per-user risk score requires input from two sources: detection data and training data. Detection data captures what threats targeted a user and what they clicked on. Training data captures how they respond to interventions and which behaviors change. Without both, any score is incomplete. Most organizations report a training score and call it a risk score. The two are not the same.
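A sketch makes the difference between a training score and a risk score visible. The weights, caps, and signal names below are illustrative assumptions, not a proposed scoring standard:

```python
def risk_score(threat_events: int, real_clicks: int,
               sim_clicks: int, modules_completed: int) -> float:
    """Blend detection signals (threats seen, real clicks) with training
    signals (simulation clicks, completions) into one 0-100 score.
    Weights are illustrative only."""
    exposure = min(threat_events * 5 + real_clicks * 25, 70)  # detection side
    behavior = min(sim_clicks * 10, 30) - min(modules_completed * 2, 10)
    return float(max(0, min(100, exposure + max(behavior, 0))))

# A user with a clean training record but heavy real-world targeting:
print(risk_score(threat_events=14, real_clicks=1,
                 sim_clicks=0, modules_completed=4))  # 70.0
```

On training data alone this user scores zero; the detection input is what makes the score meaningful. That asymmetry is exactly what a training-only "risk score" hides.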

What "closed loop" actually means

The term gets used frequently enough in human risk management marketing that it has started to lose precision. Most platforms that use it are describing something narrower: data sharing within their own product suite. If you buy training modules and a simulation tool from the same vendor, those products share data. That is useful. It is not a closed loop in the architectural sense.

A genuine closed-loop human risk management platform treats detection signals and training outcomes as two inputs to a single behavioral model. A threat event updates a user's behavioral profile. That profile triggers a training response calibrated to the actual threat pattern. The training outcome feeds back into the risk score. The cycle runs continuously, without manual handoff between systems.
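The cycle above can be sketched as three event handlers writing to one shared profile. The thresholds, event names, and profile shape are illustrative assumptions about what such a behavioral model might look like:

```python
profiles: dict[str, dict] = {}

def on_threat_event(user: str, vector: str) -> None:
    """Detection input: a threat event updates the user's profile and,
    past a threshold, triggers training matched to the observed vector."""
    p = profiles.setdefault(user, {"threats": 0, "trained_on": set(), "score": 0})
    p["threats"] += 1
    p["score"] += 5
    if p["threats"] >= 3 and vector not in p["trained_on"]:
        assign_training(user, vector)

def assign_training(user: str, vector: str) -> None:
    """Training response calibrated to the actual threat pattern."""
    profiles[user]["trained_on"].add(vector)

def on_training_outcome(user: str, passed: bool) -> None:
    """Training output feeds back into the same risk score."""
    profiles[user]["score"] += -10 if passed else 5

# Three real threat events trigger vector-specific training; the outcome
# then lowers the same score the events raised.
for _ in range(3):
    on_threat_event("a.kim", "invoice-fraud")
on_training_outcome("a.kim", passed=True)
print(profiles["a.kim"]["score"])  # 5 + 5 + 5 - 10 = 5
```

The point of the sketch is the single `profiles` store: detection and training read and write the same record, so no manual handoff exists to skip.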

That is the architectural standard any platform claiming to manage human risk should be required to meet.

Four questions to test whether your stack has the gap

These four questions are sufficient for a first-pass diagnostic.

Can your gateway automatically trigger a training event for a user who was targeted or clicked? If this requires manual SOC involvement, the gap is present.

Does your training platform receive external threat signals from your detection tools — not only simulation results? If the platform's data on a user consists entirely of module completion and phishing simulation performance, it has no visibility into real-world threat exposure.

Does a click on a real phishing attempt update the same employee profile as a click on a simulation? If producing that combined view requires manually exporting and joining data from two systems, the gap is present.

Can you generate a single risk score per user that draws on both gateway and training data, in near real time? If the answer is that this requires a quarterly reporting exercise, it is not a risk score. It is a retrospective.
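The first question is the easiest to picture in code: a gateway verdict flowing straight into a training enrollment, with no SOC handoff. The payload shape and the `enroll` call are illustrative assumptions, not any gateway's actual webhook API:

```python
pending_training: list[tuple[str, str]] = []

def enroll(user: str, course: str) -> None:
    """Stand-in for the training platform's enrollment call."""
    pending_training.append((user, course))

def handle_gateway_verdict(payload: dict) -> None:
    """Called once per gateway verdict; a blocked or clicked threat
    triggers a course matched to the observed attack vector."""
    if payload["verdict"] in ("blocked", "clicked"):
        enroll(payload["recipient"], f"refresher-{payload['vector']}")

handle_gateway_verdict(
    {"recipient": "a.kim", "verdict": "clicked", "vector": "credential-phish"}
)
print(pending_training)  # [('a.kim', 'refresher-credential-phish')]
```

If wiring up something this small between your gateway and your training platform would require exports, tickets, or a human in the loop, the gap is present.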

Most organizations that work through these questions honestly find the gap in at least two of them. That is not a failure of either system in isolation. It is a design problem that predates the category of human risk management — one that requires a platform built from the ground up around a shared behavioral model, not two legacy systems connected at the edges.

For a deeper look at what a closed-loop architecture requires at the data layer — including the four signal types that need to cross the boundary between detection and training — we're working on a more detailed guide. Reach out if you'd like a personalized consultation on it.

On what regulations now require from training programs beyond completion rates, see our cybersecurity training compliance requirements post.

Act now before attackers do
Unify deepfake simulations, personalized training, and risk analytics into a single platform that builds measurable defense.
Talk to an expert