CRQ Champions | Issue 1
SAFE Signal
Mar 2, 2026
Letter From the Editors: Welcome!
Welcome to the inaugural issue of the CRQ Champions Newsletter! We’re excited you’re here, and we’re confident the insights ahead will make you glad you are too.
Amid the constant flow of information in the field of cyber risk quantification, we seek what’s clear, relevant, and actionable, and that’s exactly what you’ll find here. Each edition of this newsletter is designed to make it easy to stay sharp in CRQ: no jargon, no hype, and no endless pages between you and the insight. We’ll deliver practical tools, timely guidance on emerging standards, and stories from experts facing common challenges, so you do more than just learn; you apply.
Our goal is simple: separate signal from noise and connect numbers to human decisions, because excellence in this field isn’t defined by math alone; it’s established by measurably improving choices under uncertainty.
We’re committed to making your time spent here invaluable. Let’s get to work.
FAIR in Plain English: The Only Framework Explanation You’ll Ever Need
The Three Questions That Change Everything
What if every cybersecurity decision your company makes came down to three questions: What’s the bad thing that could happen? How often could it happen? And when it does happen, how bad could it be?
For years, we’ve relied on red-yellow-green heat maps and high-medium-low ratings to make security decisions. But boards, executives, and engineers are all asking the same question: “What does ‘high risk’ actually mean for our business?” Those color-coded approaches worked when cybersecurity was simpler, but today’s leaders need tools that translate risk into business language.
That’s where FAIR (Factor Analysis of Information Risk) comes in. FAIR provides the framework for thinking systematically about risk. When you pair FAIR with Monte Carlo simulation, you can run thousands of scenarios to show the full range of what could happen. With AI tools and public data sources, this approach is now accessible to anyone willing to ask the right questions and start applying answers in real-world risk analysis, where insight turns into impact. Don’t let fear of making mistakes or the desire for perfection keep you from taking that first step. The key is moving beyond theory and putting these tools to work.
- Monte Carlo simulation: turning uncertainty into a probability distribution you can see, measure, and act on
What’s the Bad Thing That Could Happen?
Start simple: “What risks could actually hit our organization?”
Don’t overthink this. Several excellent reports have already done the heavy lifting for you. The Verizon Data Breach Investigations Report analyzes tens of thousands of real security incidents every year. ENISA’s Threat Landscape Report does the same for European organizations. Cyentia Institute publishes data-driven research on information security risks. All of these reports categorize actual incidents that happened to real organizations.
Here’s what these reports consistently show: Third-party and supply chain incidents are surging. Ransomware remains a dominant threat across most industries. Social engineering and credential theft continue to be primary attack vectors. These aren’t theoretical risks – they’re patterns emerging from thousands of real incidents affecting organizations just like yours.
Think of it like checking crime statistics before buying a house. You’re not trying to predict every possible break-in. You’re looking at what actually happens in neighborhoods like yours. Cyber attacks follow patterns, and these reports have mapped them out.
How Often Could It Happen?
Once you know what to worry about, ask: “How often does this actually happen?”
Here’s where FAIR gets practical. Take data breaches – companies have to report them publicly, and those industry reports we mentioned track these patterns across sectors and company sizes. The frequencies are already published and analyzed for you, giving you a real foundation to work from without having to do the research yourself.
But FAIR doesn’t stop at simple division. Instead of saying “breaches happen once every 5 years” like it’s written in stone, you work with ranges that reflect reality: “anywhere from once every 2 years to once every 10 years, with our best estimate being every 5 years.” When you pair this with Monte Carlo simulation, you can run thousands of scenarios within that range and see what the distribution of outcomes actually looks like.
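As a concrete sketch, a range like that can be fed straight into a Monte Carlo run using nothing but the Python standard library. The triangular distribution below is an illustrative stand-in (many FAIR tools use beta-PERT instead), and the numbers simply restate the example above:

```python
import random

random.seed(7)  # reproducible demo

# Hypothetical range: once every 10 years (low) to once every 2 years
# (high), with a best estimate of once every 5 years (mode).
low, mode, high = 1 / 10, 1 / 5, 1 / 2

# Monte Carlo: draw 10,000 plausible annual frequencies from that range.
trials = sorted(random.triangular(low, high, mode) for _ in range(10_000))

p10, p50, p90 = (trials[int(len(trials) * q)] for q in (0.10, 0.50, 0.90))
print(f"Annual frequency: p10={p10:.2f}, median={p50:.2f}, p90={p90:.2f}")
```

Instead of one frozen number, you get a distribution you can read off at any percentile.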
Those industry reports consistently show that stolen credentials, phishing, and vulnerability exploitation are the dominant attack vectors. These aren’t just statistics to put in a PowerPoint – they’re your starting point for understanding what might actually hit your organization and how often.
The key insight here: You don’t need perfect data to get started. You need reasonable ranges based on what’s happening to companies like yours, and you can refine those estimates as you gather more information.
How Bad Could It Be?
The final question: “If this happens to us, what’s the damage?”
FAIR breaks this into the Six Forms of Loss, and FAIR-MAM extends them into standardized, business-relevant dimensions of materiality. Instead of guessing one huge number, estimate ranges for different types of impact:
Direct costs (hit your organization immediately):
- Productivity loss – staff or systems can’t function normally
- Response costs – investigation, containment, and recovery work
- Replacement costs – repairing, reinstalling, or replacing systems and data
Indirect costs (come from outside reactions):
- Fines and judgments – regulatory penalties and legal settlements
- Reputation damage – lost customers, trust erosion, stock hits
- Competitive advantage loss – stolen trade secrets or strategic information
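To make those categories concrete, here is one way to capture them as estimate ranges. The structure is the point; every dollar figure below is a made-up placeholder, not a benchmark:

```python
# Hypothetical per-incident loss ranges (low, most likely, high) in USD,
# organized by FAIR's six forms of loss. All numbers are placeholders.
loss_forms = {
    "productivity":          (10_000,   50_000,   250_000),
    "response":              (25_000,  100_000,   500_000),
    "replacement":           ( 5_000,   30_000,   200_000),
    "fines_and_judgments":   (     0,        0, 1_000_000),
    "reputation":            (     0,   75_000, 2_000_000),
    "competitive_advantage": (     0,        0,   500_000),
}

# A single scenario's most-likely total, just to show the structure in use.
most_likely_total = sum(mode for _, mode, _ in loss_forms.values())
print(f"Most-likely per-incident loss: ${most_likely_total:,}")  # → $255,000
```

Note that several forms have a most-likely value of zero: they usually don’t materialize, but the high end keeps the tail risk in view.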
We actually know more about incident costs than you might think. Those same industry reports track financial impacts across different types of incidents, company sizes, and sectors. The data shows that costs can vary wildly depending on your situation – you might face anywhere from thousands to millions in direct costs, and that’s before you factor in reputation damage, customer churn, and productivity hits.
This is where Monte Carlo simulation becomes especially valuable because it can explore thousands of combinations of these cost variables. What if an incident happens during your busiest season versus a slow period? What if it hits your most sensitive data versus something less critical? What if your incident response team executes flawlessly, versus if everything goes wrong? Instead of picking one number and hoping it’s right, you get to see the full range of what’s possible.
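A minimal sketch of that idea combines a hypothetical frequency range with a hypothetical per-event cost distribution; again, every number here is a placeholder, not a benchmark:

```python
import math
import random

random.seed(42)  # reproducible demo

def poisson(rate: float) -> int:
    """Knuth's method: number of events in one year at the given rate."""
    threshold, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

SIMS = 10_000
annual_losses = []
for _ in range(SIMS):
    # Frequency: hypothetical range, once per 10 years to once per 2 years.
    rate = random.triangular(1 / 10, 1 / 2, 1 / 5)
    events = poisson(rate)
    # Magnitude per event: lognormal with a median near $160k (placeholder).
    annual_losses.append(sum(random.lognormvariate(12.0, 1.0) for _ in range(events)))

annual_losses.sort()
mean = sum(annual_losses) / SIMS
p95 = annual_losses[int(SIMS * 0.95)]
# Most simulated years see no incident at all; the tail carries the risk.
print(f"Mean annual loss: ${mean:,.0f}   95th percentile: ${p95:,.0f}")
```

Sorting the simulated annual losses gives you a loss exceedance view for free: pick any percentile and read off how bad that year looks.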
The beauty is that you can start with publicly available data and industry benchmarks, then adjust for your organization’s specific circumstances. As you gather more information about your own environment, you refine the estimates and get more accurate results.
Ready to Start
You just learned what risk professionals use every day, and it’s not advanced mathematics or expensive software. It’s asking three fundamental questions and working with ranges instead of pretending you can predict the future with false precision.
Pick one risk scenario that keeps you up at night and ask yourself: “Based on what’s happening to companies like ours, how often might this hit us?” Then follow up with: “If it did happen, what’s the realistic range of damage we might see?” Don’t aim for perfection right out of the gate. Work with reasonable estimates, think in ranges, and start small.
A few years ago, this level of analysis required specialists and big consulting budgets. Now you have comprehensive public reports, AI tools to help with analysis, and frameworks like FAIR to guide your thinking. The building blocks are sitting right there, waiting for you to use them.
Every expert started exactly where you are right now: curious and asking questions. The gap between where you are today and where you want to be isn’t technical complexity or advanced degrees. It’s simply taking that first step and trying the approach on a real problem.
Your very next security decision is your chance to test this out. Instead of relying on gut feelings or letting fear drive the conversation, use the framework. Start today, keep it simple, and watch how much clearer your security decisions become. Cybersecurity isn’t about having perfect information. It’s about making better decisions with the information you actually have. And you already know enough to begin.
Fireside Chat: Where We’ve Failed, Where We’ve Succeeded
What happens when two long-time risk practitioners sit down with no script and swap stories? In this candid conversation, Tony Martin-Vegue and Zach Cossairt share the real wins and painful lessons that shaped their approach to cyber risk quantification. It is part storytelling, part therapy, and all about learning what actually works in the field.
We keep it informal, just two colleagues chatting, but the insights are practical and immediately useful for anyone building or improving a CRQ program. If you have ever wondered, “Am I the only one making these mistakes?”, this is your chance to hear how others stumbled, recovered, and found success.
Books mentioned in this conversation:
- The Failure of Risk Management: Why It’s Broken and How to Fix It by Douglas W. Hubbard
- Presilience: How to Navigate Risk, Embrace Opportunity, and Build Resilience by Dr. Gavriel Schneider
How Did I Do It?
Peer Benchmarking in 60 Seconds
When I don’t have perfect data, I turn to peer benchmarks. Take MFA adoption. According to Okta, about 66% of workforce users have MFA enabled. My client’s number was 40%.
That gap matters. In FAIR terms, lower MFA adoption raises threat event frequency because attackers have more opportunities to succeed.
Here is how I framed it for leadership:
“Our MFA gap means attackers have about twice as many doors open compared to peers. Closing it will not eliminate risk, but it will reduce how often we should expect successful attempts.”
By tying a control gap to frequency, leaders see exactly why it matters and what to do next.
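The “twice as many doors” framing holds up to quick arithmetic if you treat accounts without MFA as the exposed surface. That is a simplification (real threat event frequency depends on more than one control), but it makes the benchmark comparison transparent:

```python
# MFA adoption: Okta's published benchmark vs. the client's measured rate.
peer_mfa, our_mfa = 0.66, 0.40

# Simplifying assumption: accounts without MFA are the "open doors."
peer_exposed = 1 - peer_mfa
our_exposed = 1 - our_mfa

# Rough proxy for relative threat event frequency vs. peers.
ratio = our_exposed / peer_exposed
print(f"Exposed-account ratio vs. peers: {ratio:.2f}x")  # → 1.76x
```

Roughly 1.8x rounds comfortably to “about twice,” which is exactly the level of precision this kind of leadership framing needs.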
Closing Thought: Why is this Newsletter Important?
Cyber risk quantification is still finding its footing, and that’s very exciting. Yes, the space is crowded with conflicting opinions and steep learning curves, but our collective knowledge sharing can push it forward faster. That’s why this newsletter exists: to cut through the clutter, bridge gaps in understanding, and make CRQ more accessible and actionable for everyone.
Why now? Because we’re at an inflection point where CRQ is moving from theory to practice, and together, we have the chance to shape how it takes root. Being part of this movement is meaningful. It’s about moving past bystander mode and stepping into a shared identity: the community that will define how CRQ is practiced going forward. People like us don’t wait on the sidelines – we shape the future together.
Now is the time to lean in, learn with one another, and help push this innovation from the edges of the field to the center of how risk decisions are made.
Brought to you by:
- Zach Cossairt : Zach leads Safe Security’s Risk Advisory practice, helping organizations blend automated risk modeling with human-centered strategies. A former U.S. Navy Submariner, he managed mission-critical data systems before building and scaling the cyber risk quantification program at Equinix. He holds a B.S. in Security and Quantitative Risk Analysis from Penn State and an M.A. in Behavioral Economics from The Chicago School of Professional Psychology.
- Shreya: Shreya is a social media strategist and data-driven researcher with experience across cybersecurity, consulting, and consumer technology. She has worked in analytics and research roles and was selected for McKinsey & Company’s Next Generation Women Leaders program. An alumna of Miranda House (CS) and IIT Madras (Data Science & Engineering), she’s passionate about using data and storytelling to drive impact at scale.
- Tony Martin-Vegue: Tony is a security and technology risk advisor with 25+ years of experience helping Fortune 500 and high-growth companies build and scale quantitative risk programs. Author of the upcoming From Heatmaps to Histograms (Apress, 2026), he’s a frequent speaker at FAIRcon, SIRAcon, RSA, and ISACA events. His work bridges analytics and decision-making to advance practical, effective risk management. Learn more at heatmapstohistograms.com.