
The New Guardians: Cybersecurity in an Open World

Nation-states, criminal networks, and AI-powered tools are reshaping digital threats faster than our defenses can keep up. The answer isn’t more walls — it’s smarter openness. Here’s what the evidence says, and what it demands of all of us.

There’s a peculiar kind of tension at the heart of modern cybersecurity. On one hand, we’re told that information wants to be free — that openness fuels innovation, accountability, and progress. On the other, every piece of openly available data, from corporate infrastructure details to employee email conventions, can be reverse-engineered into an attack vector by anyone patient enough to look.

This isn’t a new problem. But it’s gotten significantly sharper. The world of 2025 looks nothing like the cyber landscape of even a decade ago. The attackers are more organized, better funded, and increasingly assisted by the same artificial intelligence tools that security professionals are scrambling to adopt. And the targets — hospitals, water systems, election infrastructure, small businesses that don’t even know what a threat model is — are everywhere.

At B Red Magazine, we’ve been watching this space closely, not just as a technology story, but as a governance story, an equity story, and frankly, an American story. Because when you start pulling on the threads, what you find isn’t just a technical gap. You find institutional inertia, political fragmentation, and a troubling disconnect between who gets protected and who gets left exposed.

How We Got Here: A Brief, Uncomfortable History

For a long time, cybersecurity was largely treated as an IT department problem — a back-office concern for system administrators who wore badges on lanyards and talked in acronyms. The broader public assumed that someone, somewhere, had it handled.

That assumption started cracking in the early 2010s, when breaches at major retailers and financial institutions began making front-page news. Then came the 2016 election interference investigations, which brought nation-state hacking into mainstream political conversation. Then ransomware attacks against hospitals. Then critical infrastructure intrusions. Each incident widened the aperture of what “cybersecurity” actually meant.

What we’re dealing with now, according to a 2024 holistic review published in an open-access scientific journal, is a threat landscape that spans “nation-state actors, criminal groups, and opportunistic attackers all exploiting increasingly connected digital infrastructure and AI-enabled tools.” [Emerging Trends in Cybersecurity, 2024]. That’s a sentence that sounds clinical until you start mapping it onto actual events — a ransomware gang shutting down a hospital network in the Midwest, a foreign intelligence service quietly reading sensitive government emails for months, a small business owner in Ohio losing her company’s entire customer database to a phishing email.

The threat landscape didn’t just expand. It democratized downward — and not in a good way.

The Data Problem Nobody Wants to Talk About

Here’s something that rarely makes headlines: despite the massive volume of academic research on cyber risk, we actually know very little in a rigorous, empirical sense about how often incidents occur, how much they cost, and what interventions actually work.

A systematic review published in 2022 tells a story that should give every policymaker pause. Researchers screened 5,219 peer-reviewed cyber studies and found just 79 unique, reusable open datasets. [Cyber Risk and Cybersecurity: A Systematic Review of Data Availability, 2022]. Think about that ratio for a moment. Thousands of published studies, and only 79 datasets that could actually be put to systematic use. The authors didn’t mince words: they identified what they called “a lacuna in open databases that undermine collective endeavours to better manage this set of risks.”

Part of this is a legal paradox hiding in plain sight. In many countries — including the United States — companies are legally required to report data breaches to regulatory authorities. That sounds like progress. Except the resulting data is “usually not accessible to the research community,” according to the same review. We’re generating compliance paperwork that vanishes into regulatory silos, never to inform public understanding or independent analysis. The obligation exists; the openness does not.

The downstream effects of this are real and serious. Without good data, insurers can’t accurately price cyber risk. Policymakers can’t design effective interventions. Researchers can’t benchmark which defenses actually work. And perhaps most consequentially, companies can’t learn from each other’s mistakes in any systematic way — because those mistakes are quietly buried in confidentiality agreements and regulatory submissions nobody reads.

The same 2022 review noted that “many companies still underestimate their cyber risk,” citing earlier research from 2020. That underestimation isn’t simply corporate negligence — it’s a rational response to an environment where there’s very little public signal about what the actual baseline of harm looks like. If you don’t know what’s normal, you can’t know how exposed you are.

What this means for American organizations: The fragmented U.S. reporting landscape — where breach notification requirements vary by state, by sector, and by the type of data involved — makes the data problem even worse domestically. There is no single, comprehensive public view of what’s happening to American organizations at a national level. That’s a governance gap masquerading as a compliance framework.

The Open-World Paradox: When Transparency Becomes a Weapon

The instinct to fix the data problem by making more information open runs straight into a genuinely uncomfortable counter-argument: open information also helps attackers.

A 2022 qualitative study published in Oxford Academic’s Cybersecurity journal examined what types of open-source intelligence (OSINT) are most useful to malicious actors planning cyber-attacks. [Accessible from the Open Web, 2022]. The findings aren’t shocking if you’ve spent any time thinking about this — job postings reveal technology stacks, LinkedIn profiles map organizational charts, GitHub repositories expose code structures — but the systematic nature of the analysis is sobering. Everything an organization puts out into the open, for entirely legitimate reasons, can be assembled by a patient adversary into something dangerous.
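To make the mechanism concrete, here is a toy sketch of the defensive mirror image: an organization auditing its own public footprint by tallying what each open channel exposes, so the noisiest sources get reviewed first. The channel names and items below are invented for illustration, not drawn from the study.

```python
def footprint_summary(channels):
    """Rank public channels by how many potentially sensitive items
    they expose, so the noisiest sources get reviewed first."""
    ranked = sorted(channels.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [(name, len(items)) for name, items in ranked]

# Hypothetical examples of legitimate public data an adversary could assemble.
channels = {
    "job_postings": ["Kubernetes", "PostgreSQL", "Okta SSO"],   # tech stack
    "linkedin": ["finance team roster", "IT admin names"],      # org chart
    "github": ["public client library with API endpoints"],     # code structure
}
print(footprint_summary(channels))
# job_postings comes first: it leaks the most items in this toy dataset
```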

This is the open-world paradox, and it doesn’t resolve cleanly. The same transparency that makes organizations accountable to customers, employees, and regulators also makes them legible to threat actors. The same open research infrastructure that allows security professionals to share threat intelligence also allows criminal groups to study defensive techniques and route around them.

There’s no clean answer to this. But there is a more sophisticated way of thinking about it — one that moves past the binary of “open is good, closed is safe” and asks instead: open to whom, for what purpose, under what governance conditions?

That reframing is, in our view at B Red Magazine, one of the most important conceptual shifts needed in how Americans think and talk about cybersecurity. The question isn’t whether to share information. It’s how to build the trust infrastructure that makes sharing both possible and safe.

AI: The Guardian That Cuts Both Ways

No honest account of today’s cybersecurity landscape can avoid the AI question, even if the AI question doesn’t have clean answers yet.

A 2024 paper in Frontiers in Computer Science applied what’s called a “risk society” framework — borrowed from sociological theory — to analyze how public discourse has been processing the intersection of AI and cybersecurity. [AI and Cybersecurity: A Risk Society Perspective, 2024]. The themes they identified are worth sitting with: the global nature of AI risks, their pervasive influence across multiple sectors, the alteration of public trust, the individualization of risk, and — this one matters — “the uneven distribution of AI risks and benefits.”

On the defensive side, AI genuinely helps. Machine learning models can flag anomalous network behavior at speeds no human analyst could match. Natural language processing can scan threat intelligence feeds and surface relevant signals from noise. Automated response systems can isolate compromised endpoints faster than attackers can pivot. These capabilities are real, and they’re being deployed by sophisticated organizations right now.
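The anomaly-flagging idea above can be sketched in a few lines. This is an illustrative toy, not a production detector: it flags samples whose z-score against the series sits well above the baseline. The traffic numbers and threshold are assumptions for the example.

```python
import statistics

def flag_anomalies(request_counts, z_threshold=2.5):
    """Return indices of samples whose z-score exceeds the threshold --
    a toy stand-in for ML-based network anomaly detection. The threshold
    is modest because a large spike also inflates the global stdev."""
    mean = statistics.mean(request_counts)
    stdev = statistics.stdev(request_counts)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(request_counts)
            if (c - mean) / stdev > z_threshold]

# Hypothetical per-minute request counts: steady baseline, then a spike.
traffic = [102, 98, 105, 101, 99, 103, 100, 97, 104, 2500]
print(flag_anomalies(traffic))  # the spike at index 9 stands out
```

Real deployments use richer features and learned baselines, but the core move is the same: model what normal looks like, then surface the outliers for a human or an automated response system.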

But the same paper is clear about the flip side: AI also “amplifies attack capabilities, generates novel vulnerabilities, and unevenly distributes risk across societies.” Phishing emails that used to be detectable by their awkward grammar are now grammatically flawless, personalized, and generated at scale. Voice and video deepfakes are being used in business email compromise schemes to impersonate executives. AI tools lower the technical barrier for conducting attacks, meaning the pool of capable threat actors is getting larger, not smaller.

The uneven distribution point is one we don’t discuss enough in mainstream tech coverage. Large, well-resourced organizations — major banks, government agencies, Fortune 500 companies — are building AI-enhanced security operations centers. Small businesses, nonprofits, local government agencies, and underserved communities are largely not. The same technological shift that improves security at the top of the resource distribution is expanding the vulnerability gap at the bottom. This isn’t an abstract equity concern. It’s a structural weakness in national cyber resilience, because adversaries specifically target the weakest links in interconnected systems. If you can’t get into Bank of America, you try the small accounting firm that does their vendor payroll.

We’ve written before at B Red Magazine about how AI is quietly reshaping the spaces we inhabit — and cybersecurity is one of the most consequential of those spaces, even when it’s invisible to us.

Who’s Actually Guarding the Gate?

So who are the “new guardians” the title of this piece promises? That framing is intentional, and slightly provocative, because the honest answer is: it’s not who you might expect.

The 2024 holistic threat landscape review is explicit that effective cybersecurity “requires not only technical controls but also governance, information-sharing between public and private sectors, and incentives for transparency rather than secrecy about incidents.” [Emerging Trends in Cybersecurity, 2024]. That’s a very different picture from the lone genius in a hoodie that pop culture has handed us as the face of cybersecurity.

The new guardians are plural. They include:

Regulators who enable, not just require. The current model in much of the U.S. treats breach reporting as a compliance checkbox. What the evidence calls for is a regulatory framework that collects standardized incident data and makes at least aggregated, anonymized versions of it publicly accessible — not just filed away. Countries like the UK and members of the EU are ahead of the U.S. on aspects of this, though the EU’s own record on making GDPR breach data truly open for research is also imperfect.

Organizations that share, not just patch. There are existing structures for this — Information Sharing and Analysis Centers (ISACs) for various sectors, CISA’s threat intelligence sharing programs — but uptake is uneven and the cultural reluctance to disclose incidents remains a significant barrier. The 2022 systematic review’s recommendation for “improved information flow between public and private actors” isn’t just good policy theory; it has direct implications for insurability, risk pricing, and collective learning. The financial sector has done this better than most — partly because regulators forced the issue, partly because the interconnectedness of financial systems creates strong self-interested incentives to share.

Researchers who build common infrastructure. The finding that only 79 usable datasets exist across thousands of studies points to an opportunity, not just a gap. Building standardized, open, rigorously maintained cyber incident databases — with appropriate privacy protections — is a public good that benefits everyone from insurance actuaries to national security planners. It requires sustained funding and institutional commitment, neither of which is glamorous, but both of which matter.

Civil society that broadens the conversation. This is perhaps the least conventional item on the list. The 2024 AI and cybersecurity paper found that public discourse about these issues “predominantly favors a functionalist and solutionist perspective,” marginalizing “ordinary individuals and non-Western voices.” That bias isn’t just an academic concern. It shapes which risks get prioritized, which populations are seen as stakeholders, and which solutions get funded. A cybersecurity policy debate that only includes enterprise CISOs, government contractors, and Silicon Valley voices will consistently underestimate the risks faced by small businesses, rural communities, hospitals operating on thin margins, and the global majority who are increasingly connected but minimally resourced.

The American Gap: Real and Consequential

We want to be specific about the U.S. context here, because it’s easy for this kind of analysis to slide into vague global generalities.

The United States has more regulatory frameworks governing cybersecurity than almost any other country. Sector-specific rules cover healthcare (HIPAA), financial services (GLBA, various banking regulators), defense contractors (CMMC), critical infrastructure (NERC CIP for energy), and more recently, public companies (SEC disclosure rules). That is not nothing.

But the result is a patchwork that creates compliance burdens without always generating the public benefit of shared data. Organizations spend money on meeting their reporting obligations to specific regulators, but those reports don’t automatically flow into any system where they can inform national-level analysis, academic research, or cross-sector learning. The 2023 state-of-the-art review in Elsevier’s journal noted the global trend of “increased connectivity and reliance on digital infrastructure” as a core driver of escalating risk — and the U.S., with its deeply interconnected financial, healthcare, energy, and communications systems, is particularly exposed to cascading failures. [Cyber Security: State of the Art, 2023]

There’s also a resourcing reality that doesn’t get enough attention in policy circles. CISA, the primary federal civilian cybersecurity agency, operates on a budget that is a rounding error compared to the scale of the systems it’s meant to protect. State and local governments — which run election systems, water treatment facilities, and emergency services — are largely on their own. The new guardians model only works if there’s actual capacity behind it, not just mandate.

Practical Steps: What Actually Moves the Needle

Given everything above, here’s what the evidence actually supports — not as a comprehensive policy blueprint, but as a grounded set of directions for organizations and individuals trying to figure out where to start:

For organizations of any size: The single highest-return investment in most cases is still basic hygiene — multi-factor authentication, patching, backups, and staff training. The research consistently finds that most successful attacks exploit known, patchable vulnerabilities or human behavior, not exotic zero-days. Sophisticated AI-powered defenses don’t help if someone clicks on a phishing link because they weren’t trained to recognize one. Building the habit of basic security — like building the habit of financial resilience through preparation rather than reaction — compounds over time.
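As a concrete illustration of the MFA piece, here is a minimal sketch of the TOTP algorithm (RFC 6238) that most authenticator apps implement. The secret shown is the standard RFC test key, included only so the output can be checked against the spec; it is not something to use in production.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, step=30, digits=6):
    """Generate a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in Base32); at t=59s the
# 6-digit SHA-1 code matches the RFC's published test vector.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at_time=59))  # -> 287082
```

The point of showing it is how little machinery is involved: a shared secret, a clock, and a hash. The cost of adopting MFA is organizational habit, not technology.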

For understanding risk: Engage with whatever sector-specific information sharing exists for your industry. ISACs, CISA advisories, and sector-specific threat intelligence feeds are underutilized by mid-market and smaller organizations. You don’t need to build a security operations center to benefit from shared threat intelligence — you just need to actually read what’s being shared.

For policymakers: The most impactful single change would be creating mechanisms for breach data collected under existing regulatory frameworks to be made available, in appropriately anonymized and aggregated form, for research and public policy analysis. This doesn’t require new reporting burdens. It requires treating the data that’s already being collected as a public asset rather than a compliance archive.
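One plausible shape for that "anonymized and aggregated" release is simple small-cell suppression: publish incident counts by sector, but withhold any cell small enough to risk identifying a specific reporting organization. The sector labels, counts, and threshold below are hypothetical, chosen only to illustrate the mechanism.

```python
from collections import Counter

def aggregate_incidents(reports, min_cell=5):
    """Count incidents per sector, suppressing cells below min_cell so that
    rare categories cannot be traced back to individual reporters."""
    counts = Counter(sector for sector, _ in reports)
    return {sector: (n if n >= min_cell else "suppressed")
            for sector, n in counts.items()}

# Hypothetical breach reports: (sector, incident_id) pairs.
reports = [("healthcare", i) for i in range(12)] \
        + [("finance", i) for i in range(7)] \
        + [("water_utility", i) for i in range(2)]  # too few to publish

print(aggregate_incidents(reports))
# healthcare and finance counts appear; water_utility is suppressed
```

Production-grade releases would layer on stronger protections, but even this minimal step would turn compliance archives into something researchers and insurers could actually use.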

For everyone: The AI and cybersecurity paper’s point about broadening who participates in these conversations is important and actionable. Cybersecurity shouldn’t be a conversation that happens only between enterprise technology leaders and government officials. Local businesses, community organizations, journalists, and ordinary citizens all have stakes in how these systems are governed, and their perspectives tend to surface concerns — about access, equity, proportionality — that purely technical conversations miss.

The Shape of What’s Coming

Here’s what the research actually supports as a forward-looking picture, without inflating the uncertainty into drama:

The threat landscape will continue to evolve faster than available data can capture it. The 2022 systematic review’s description of cyber risk research as “still in its infancy because of the dynamic and emerging nature” of these risks wasn’t a dismissal of progress — it was an honest acknowledgment that the domain moves fast. AI will accelerate this dynamic in both directions. Defenders will get better tools; attackers will get better tools; the advantage will probably continue to shift somewhat based on who has more resources and who has sharper incentives to use them.

The structural changes that matter most aren’t technical. They’re institutional: whether breach data becomes genuinely open, whether public-private information sharing actually reaches small and mid-sized organizations, whether AI risk benefits get distributed more equitably, and whether the people designing cybersecurity governance systems start listening to a broader range of voices about what’s actually at stake.

None of that is inevitable. But all of it is possible, and the evidence for why it matters is now substantial enough that the conversation has shifted from “should we do this” to “how do we actually get there.”

At B Red Magazine, that’s the conversation we want to be part of — not the one that treats every new attack as evidence of inevitable digital doom, and not the one that treats every new security product as a solution, but the serious, evidence-grounded, human-scaled discussion about how we govern the open, connected, increasingly AI-permeated world we actually live in.

Key Takeaways

The cyber threat landscape is broader, more sophisticated, and more consequential than most public conversations reflect, spanning nation-state actors, organized crime, and opportunistic attackers all operating in an environment that AI is rapidly reshaping.

The most significant gap isn't technical; it's a structural shortage of open, standardized incident data that constrains evidence-based policy, insurance modeling, and collective learning across organizations.

Openness carries genuine dual-use risk. The same transparency that enables accountability and research also creates attack surface, and the governance response needs to be more nuanced than either "share everything" or "lock it down."

AI amplifies both defensive and offensive capabilities, but its benefits and risks are distributed unequally, with better-resourced organizations pulling ahead and smaller, more vulnerable ones falling further behind.

The new guardians aren't heroes in hoodies; they're the regulators, researchers, companies, and civil-society voices who are quietly building the data infrastructure, trust frameworks, and governance mechanisms that collective security in an open world actually requires.

Frequently Asked Questions

Why does the U.S. have so many cybersecurity regulations but still such poor public data on incidents?

The U.S. regulatory framework for cybersecurity is sector-specific and fragmented — different rules apply to healthcare, financial services, energy, and defense, administered by different agencies. Each creates reporting obligations, but the resulting data typically flows into regulatory silos rather than any unified, publicly accessible system. That means the compliance cost is real but the collective benefit of shared learning is largely unrealized. Fixing this doesn’t require new mandates — it requires treating the data already being collected as a public resource.

Is AI making cybersecurity better or worse overall?

Both, genuinely. AI enhances detection, automates responses, and helps security teams process threat intelligence at scale. It also enables more sophisticated, personalized attacks at lower cost and technical barrier. The net effect depends significantly on who has access to which tools — and right now, the resource gap between large organizations and small ones is widening, which is a structural concern for overall national resilience.

What does “open-source intelligence” (OSINT) have to do with cyber attacks?

OSINT refers to publicly available information — job postings, social media profiles, company websites, technical documentation, GitHub repositories, and more. Attackers can systematically aggregate this information to map organizational structures, identify technology in use, find personnel worth targeting, and design more convincing social engineering attacks. Everything a company puts into the public domain for legitimate business reasons can become part of an adversary’s reconnaissance. This is why “security through obscurity” has its limits, but it’s also why organizations should be thoughtful about what technical details they expose unnecessarily.

How should a small business approach cybersecurity with limited resources?

Start with basic hygiene: multi-factor authentication on all accounts, regular patching and updates, offline or cloud backups tested for restoration, and at minimum one round of phishing awareness training for staff. Most successful attacks exploit basic gaps, not exotic vulnerabilities. Then look at free resources — CISA offers free tools and guidance specifically for small and medium-sized organizations. If your industry has an ISAC (Information Sharing and Analysis Center), joining it gives you access to sector-specific threat intelligence you’d otherwise never see. None of this is glamorous, but the evidence consistently shows it moves the needle more than one-off technology purchases.

What does “risk society” mean in the context of cybersecurity, and why does it matter?

The “risk society” concept, originally developed by sociologist Ulrich Beck, refers to a social condition where the distribution of risks becomes as central a political question as the distribution of wealth. Applied to cybersecurity, it draws attention to who bears the burden of digital threats — often not the organizations best positioned to defend against them — and questions who gets to define what’s acceptable risk. It’s a useful corrective to purely technical framings, because it surfaces questions about equity, voice, and governance that those framings tend to leave out. The 2024 Frontiers paper that used this lens found that mainstream media discourse about AI and cybersecurity consistently underrepresents ordinary people and non-Western perspectives, which has real implications for whose concerns actually shape policy.

Are data breaches actually getting more common, or does it just seem that way?

This is genuinely hard to answer with precision — which is itself part of the problem this article describes. Reporting requirements have expanded significantly over the past decade, meaning more incidents now surface publicly that previously would have been quietly handled. At the same time, the attack surface has grown substantially as more systems go online and interconnect. The honest answer from the research is that cyber risk research is still catching up to the pace of the threat environment, and available data likely understates both the frequency and cost of incidents. What we do know is that the range of impacted sectors has widened considerably, and the types of organizations affected include many — hospitals, local governments, schools — that have limited capacity to respond.

Stay Ahead of What’s Changing

At B Red Magazine, we track the stories that sit at the intersection of technology, governance, and everyday American life — because the issues shaping our digital world rarely announce themselves as the important ones until it’s too late to ignore them. Explore more in our Technology section, and if this kind of in-depth, evidence-grounded coverage matters to you, consider following us for more.