
One call to a vendor. 15,661 records exposed. The Ericsson breach shows where security awareness ends.

[Figure: Timeline of the Ericsson vishing breach: five days of attacker access in April 2025, then 196 days before the vendor notified Ericsson]
TL;DR

A single vishing call to a third-party service provider gave attackers access to Ericsson customer data for five days in April 2025 — and Ericsson itself wasn’t notified for seven months. The breach exposed 15,661 records including Social Security numbers, financial account details, and medical information. Most security awareness programs protect an organization’s own employees. Almost none extend to vendor populations. That is the gap attackers are finding.

One call to a vendor. 15,661 records exposed.

In April 2025, an attacker called an employee at one of Ericsson’s third-party service providers. The call used social engineering over voice — a vishing attack. It worked. Between April 17 and April 22, the attacker accessed systems holding names, Social Security numbers, financial account numbers, and medical information belonging to 15,661 people.

The vendor discovered the intrusion on April 28. Ericsson wasn’t notified until November 10, 2025, nearly seven months later. The forensic review wrapped up February 23, 2026. Ericsson publicly disclosed the breach on March 9, 2026. (Source: The Register, March 10, 2026, theregister.com)

This wasn’t a sophisticated technical exploit. It was a phone call.

Why vendors are the target

Attackers don’t go after the hardest target. They find the path of least resistance inside a trusted relationship.

Third-party vendors often carry the same data access as the organizations they serve. A service provider handling customer records, billing, or support functions processes sensitive data under the primary organization’s name, but manages that data under its own security posture. The gap between the two is where breaches happen.

Third-party breaches now account for 30% of all data breaches globally, double the previous year's share, according to the Verizon Data Breach Investigations Report 2025. (Source: Verizon DBIR 2025, verizon.com) When a breach originates at a vendor, the average cost climbs to $4.91 million, against a $4.44 million global average. (Source: IBM Cost of a Data Breach 2025, ibm.com)

The economics are clear. Vendors are higher-value targets than they appear from the outside.

The security awareness program that stops at the door

Most organizations run a security awareness training program. Many of those programs are well designed: they include phishing simulations, run on a regular cadence, and produce completion metrics that look good in a board report.

They cover the organization’s own employees.

Vishing attacks increased 442% between the first and second half of 2024. (Source: CrowdStrike, cited in Secureframe Data Breach Statistics 2025, secureframe.com) The rise points directly at why voice-based social engineering is so effective: most security awareness training is built around email phishing, not phone calls. Vendor employees receive even less.

The structural problem is that phone calls operate differently from email. There is no link to hover over, no sender address to inspect, no time to reflect. Voice creates urgency and perceived authority in real time. Security programs that train primarily around email recognition don’t build the reflexes that a vishing call requires. And that gap is wider for vendor employees who sit outside the organization’s security culture entirely.

Extend that problem to vendor populations — employees who received less training, less simulation, and less security culture investment — and the gap becomes a structural exposure. An attacker who gets through doesn’t need to break a firewall. They call.

Seven months of silence

The Ericsson timeline has a detail that matters beyond the breach itself: the seven-month notification gap.

The vendor discovered the breach in April. Ericsson learned about it in November. That’s 196 days during which Ericsson customers had no way to protect themselves — no credit freeze, no password change, no fraud monitoring.
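The 196-day figure follows directly from the disclosed timeline. As a quick sanity check, the gap between the vendor's discovery date and Ericsson's notification date can be computed with standard date arithmetic:

```python
from datetime import date

# Dates taken from the disclosed Ericsson breach timeline.
discovered = date(2025, 4, 28)   # vendor discovers the intrusion
notified = date(2025, 11, 10)    # Ericsson is notified

gap = (notified - discovered).days
print(gap)  # 196 days between discovery and notification
```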

That gap is not unusual. The median disclosure delay for third-party breaches is 73 days. (Source: Black Kite, cited in Industrial Cyber, March 2026, industrialcyber.co) Many vendor contracts specify breach notification windows only in general terms. Few attach consequences for missing them, or require vendors to treat the primary organization as an active stakeholder in incident response.

When a vendor is breached, the primary organization finds out when the vendor decides to tell them. For 196 days, Ericsson had no information it could act on.

What the supply chain boundary actually means for human risk

Here is the reframe worth carrying out of this case.

Human risk management programs that stop at the organizational boundary reduce human risk only for the people inside that boundary. Every vendor, partner, and service provider that handles your data is also a human risk exposure. Their employees receive phone calls. They get social engineering attempts. They make decisions under pressure.

Whether those decisions happen inside your security culture or outside it depends entirely on what you’ve required of your vendors. Most organizations haven’t required much.

Behavior is the control that technical security cannot replace. A vendor employee who can recognize a vishing call and escalate correctly is a better control than any policy requiring them to do so.

Bridging the security gap at the supply chain boundary

If vendors are a behavioral risk exposure, the question is how to extend security capability outward, not just how to write better contracts.

The organizations making progress here don’t treat vendor security as a compliance checkbox. They treat it as a genuine extension of their own human risk posture. That shift changes what they look at during vendor reviews, what conversations they have at onboarding, and how they think about incidents when they happen.

A few areas where the gap is most bridgeable:

Behavioral readiness, not just completion rates. A vendor employee who completed an annual phishing awareness course is not the same as one who has practiced responding to a voice-based social engineering scenario. The metric worth asking about is exposure to realistic simulation, not completion of training modules — and whether that includes voice-based scenarios specifically.

Shared visibility into risk. Most third-party risk processes produce a point-in-time risk score at contract signing and then go quiet. The Ericsson timeline — 196 days between discovery and notification — reflects what happens when there is no ongoing shared visibility. Closer relationships with key vendors include regular security reviews, not just contractual language.

Incident response as a shared process. A vendor that handles sensitive data on your behalf should have you named as an active stakeholder in their incident response plan, with defined communication timelines. The alternative is finding out in month seven.

Notification timelines grounded in reality. For EU-relevant vendors, NIS2’s 72-hour initial notification window provides a useful reference point for what “timely” means in practice. For others, the conversation is worth having before a breach happens rather than after.

None of this requires a vendor to have the same security program as your organization. It requires a shared understanding of where their exposure ends and yours begins — and an acknowledgment that, from an attacker’s perspective, that boundary doesn’t exist.

The boundary is a policy decision, not a technical one

The Ericsson breach didn’t require a zero-day vulnerability. It required a phone call and a vendor employee who wasn’t prepared for it.

The organizations that reduce exposure at the vendor level are the ones that treat vendor human risk as their own, not as a third-party’s responsibility. They extend behavioral security requirements into procurement. They verify, not assume.

Security culture doesn’t travel through a contract by default. It travels through deliberate extension.

Related reading

If you’re reviewing your third-party risk posture alongside emerging AI-enabled social engineering threats, our Human Risk and GenAI 2026 ebook covers vishing, AI voice cloning, and shadow AI risks across enterprise environments.

Written by:

Natalia Bochan
