In partnership with

STAT WORTH SHARING:

China's government banned OpenClaw from state bank computers on March 11 — the same week local governments were offering $289,000 subsidies to build on it. That's not a contradiction. It's what governance failure looks like in real time.

If someone on your leadership team needs to see this, forward it their way.

TL;DR:

The White House released its national cyber strategy on March 6 — five pages, six pillars, and a clear signal that the private sector will be expected to play a more active role in cyber defense. The same day, OpenAI launched Codex Security, an AI agent that scans codebases for vulnerabilities autonomously — finding nearly 800 critical issues across 1.2 million commits in beta. And China's government told state banks and agencies to remove OpenClaw from office devices, even as Chinese consumers were adopting it by the millions and local governments were offering subsidies to build on it.

Development 1: The White House Has a New Cyber Strategy. It's Five Pages.

What Happened

On March 6, the Trump administration released "President Trump's Cyber Strategy for America," alongside an Executive Order on combating cybercrime and fraud. At five pages it's the shortest national cyber strategy in a decade — six pillars, light on implementation detail, and notable for two things your organization should be paying attention to.

First, the strategy envisions a meaningful private sector role in identifying and disrupting adversary networks. It stops short of authorizing companies to conduct offensive operations, but it creates a clear expectation of closer collaboration — particularly for critical infrastructure operators and defense contractors. Second, it signals deregulation, specifically calling for "common-sense" harmonization of cybersecurity compliance requirements. The National Cyber Director indicated the SEC's 2023 cyber incident disclosure rule may be revisited.

The accompanying EO creates a dedicated operational cell to target transnational cybercriminal organizations and directs the AG to prioritize prosecutions of cyber-enabled fraud.

Why It Matters

The SEC disclosure signal affects every public company. If the rule gets revised, public companies will have more discretion over what and when they report after a breach. Boards need to understand that reduced reporting requirements don't reduce breach risk — they change what's visible afterwards.

"Adversary vendors" language has real supply chain implications. The strategy explicitly calls for removing adversary-linked technology from critical sector supply chains. If you operate in energy, finance, healthcare, or telecom, vendor reviews against that language are not optional anymore.

Private sector cyber collaboration is coming whether you're ready or not. The government will be reaching out to critical infrastructure operators. It's better to have a policy framework for that engagement before the call comes than after.

Enterprise / SMB: Review your third-party vendors against the strategy's adversary vendor language — especially any infrastructure touching sensitive data or government contracts. If you're in a regulated sector, this is the year to get ahead of that audit.

→ One action this week: If you fall under the SEC cyber disclosure rule, document your materiality criteria now — regardless of whether the rule changes. Ambiguity in that definition is a liability either way.

Development 2: OpenAI Wants to Be Your Security Researcher

What Happened

On March 6, OpenAI launched Codex Security in research preview — an AI agent that autonomously scans codebases, validates vulnerabilities in sandboxed environments, and proposes fixes. Available free for the first month to ChatGPT Pro, Enterprise, Business, and Edu customers.

The beta numbers are hard to ignore. Across 1.2 million commits in 30 days, it found 792 critical vulnerabilities and 10,561 high-severity findings in widely-used open-source projects including OpenSSH, GnuTLS, Chromium, and PHP. Fourteen CVEs were assigned. False positives dropped more than 50% over the course of the beta — in one case, noise was cut by 84%.

One honest limitation: Codex Security currently runs through the ChatGPT interface, not inside IDEs or CI/CD pipelines. That matters because security that requires developers to leave their workflow tends not to happen consistently. Whether OpenAI builds those integrations post-preview is the question that determines its enterprise value.

Why It Matters

AI-assisted development is creating code faster than security teams can review it. This is OpenAI's direct answer to that problem — using the same capabilities generating the code to find the vulnerabilities in it.

Alert fatigue has killed automated security tooling adoption at many organizations. Cutting false positives by 50%+ is the single most important improvement a scanner can make. Teams that abandoned automated tools because of noise should take a second look.

OpenAI entering AppSec is a competitive signal, not just a product launch. Legacy vendors like Snyk and SonarQube now have a well-resourced competitor with a different cost model and a ChatGPT distribution advantage.

Enterprise / SMB: If you're a ChatGPT Enterprise or Business customer, the research preview is free for a month. Run it against one active codebase. The cost of finding out whether it's useful is zero.

One action this week: Activate the Codex Security research preview and run one scan. If the signal-to-noise ratio holds, you have a case for adding it to your security review process before AI-generated code becomes a larger share of your codebase.

Does your team have a security review process for AI-generated code? I'm collecting data on how organizations are handling this — hit reply with a one-liner on where you are with it. Even "we don't yet" is useful.
FROM OUR PARTNERS

A comprehensive guide for addressing the tax talent crisis

A labor shortage in tax is driving the need for a new skill set: one that blends technical tax knowledge with digital fluency.

Automation, AI and data-driven insights now define the role of tax professionals.

This new era of tax is not simply about adopting new tools, it’s about reshaping the skill set and mindset required to thrive in this field. Check out this guide for actionable insights into how to cultivate these skills with your team. See how advanced technologies can help bridge the tax tech gap to increase efficiency, ensure compliance, and drive better decision-making.

Development 4: China Banned the AI Agent Its Employees Couldn't Stop Downloading

What Happened

On March 11, Bloomberg reported that Chinese authorities sent notices to state-owned enterprises and government agencies — including the country's largest banks — instructing them not to install OpenClaw on office devices. Employees who had already installed it were told to report to supervisors for security checks and possible removal. At some institutions, restrictions extended to personal phones connected to company networks.

The technical concerns are documented. China's CERT flagged OpenClaw for "extremely weak default security configuration," vulnerability to prompt injection via malicious web content, and several disclosed flaws that can result in credential theft. The Ministry of Industry and Information Technology published security guidelines the same week.

Here's the context that makes this worth your attention: the ban landed in the middle of a mass adoption frenzy. OpenClaw's adoption in China was so rapid it earned a consumer nickname — "raising lobsters." Tencent, Alibaba, Baidu, and MiniMax all launched compatible tools. Local governments in Shenzhen and Wuxi were offering subsidies of up to 2 million yuan to startups building on the platform. Premier Li Qiang mentioned AI agents in the government's annual work report, calling for their "large-scale commercial application."

The central government was banning it while local governments were subsidizing it. Same week. Same tool.

Why It Matters

This is the BYOA problem at national scale. Employees adopting OpenClaw faster than policy could respond is the same dynamic playing out in enterprises everywhere. China made it a headline because it happened at state banks — but the underlying dynamic is universal.

The CERT's concerns apply everywhere. Broad data access, external communication capability, persistent memory — these are the same characteristics Western security researchers flagged months ago. The tool hasn't changed. Only the jurisdiction did.

Adoption pressure and security governance on separate tracks is a policy failure, not a technology problem. China's situation is an extreme example of what most organizations are managing in smaller ways right now.

Enterprise / SMB: The question isn't whether to allow or ban agent tools — it's whether you have a defined evaluation process before employees make that call independently. If you don't have one yet, someone on your team has already made the decision for you.

→ One action this week: Ask your team — informally, over Slack or email — what AI tools they're using that IT didn't approve. The answer will tell you whether you need a policy conversation or a governance conversation. They're different problems.

💡 FINAL THOUGHTS

Security leaders are getting clearer on what AI risk looks like. The harder problem is building governance that moves at the same pace as adoption — without becoming a reason to slow it down. All three stories this week sit on that same fault line. The technology is not waiting for the frameworks to catch up.

If someone in your organization needs to be reading this brief, it's probably the person making AI tool decisions without a security lens. Forward it their way

How helpful was this week's email?

Login or Subscribe to participate

We are out of tokens for this week's security brief.

- Hashi

Follow the author:

Keep Reading