Feed aggregator
Sustaining open source in the age of generative AI
Open source has always evolved alongside shifts in technology.
From distributed version control to CI/CD, and from containers to Kubernetes, each wave of tooling has reshaped how we build, collaborate, and contribute. Generative AI is the newest wave, and it introduces a tension that open source communities can no longer afford to ignore.
AI has made it simple to generate contributions. It has not, however, made the necessary review process any simpler.
Recently, the Kyverno project introduced an AI Usage Policy. This decision was not driven by resistance to AI. It was driven by something far more practical: the scaling limits of human attention.
Where this conversation began
Like many governance changes in open source, this one didn’t begin with theory. It began with a Slack message.
“20 PRs opened in 15 minutes?”
What followed was a mixture of humor, curiosity, and a familiar undertone many maintainers recognize immediately as discomfort.
“Were they good PRs?”
“Maybe they were generated by bots?”
“Are any of them helpful, or are they mostly noise?”
One maintainer captured the sentiment perfectly:
“Just seeing this number is discouraging enough.”
Another jokingly suggested we might need a:
“Respect the maintainers’ life policy.”
Behind the jokes was something deeply real. Our maintainers, and our project at large, were feeling the weight of something very new, very real, and clearly on the verge of changing how open source projects like ours are maintained.
The maintainer reality few people see
Modern AI tools are extraordinary productivity amplifiers.
They generate code, documentation, tests, refactors, and design suggestions in seconds. But while output scales infinitely, review does not. The bottleneck in open source has never been code generation.
It has always been human cognition.
Every pull request, regardless of how it was produced, must still be:
- Read
- Understood
- Evaluated for correctness
- Assessed for security implications
- Considered for long-term maintainability
- More often than not, commented on, questioned, or simply clarified
- Viewed by more than one set of eyes
- Merged
In open source, there is always a human in the loop. That human is typically a maintainer, a reviewer, or a combination of both.
When low-effort or poorly understood AI-generated PRs flood a project, the burden of validation shifts entirely onto the humans who bear the majority of the weight in this loop. Even the most well-intentioned contributions become costly when they lack clarity, context, demonstrated understanding, and ownership.
Low-effort AI contributions don’t just exhaust maintainers, they quietly tax every thoughtful contributor waiting in the queue.
AI boomers, AI rizz, and the reality of change
We’re currently living through a fascinating cultural split in the developer ecosystem.
On one side, we see what might playfully be called “AI boomers”: folks deeply skeptical of AI, hesitant to adopt it, or resistant to its growing presence in development workflows. Hard as it may be to believe, many of these people work in and contribute to open source software development.
On the other side, we see contributors with undeniable “AI rizz”: enthusiastic adopters eager to automate, generate, accelerate, and experiment with AI tooling in the open source space and everywhere else possible.
Both reactions are understandable.
Both are human.
But history has taught us something consistent about technological change:
Projects, like businesses, that refuse to adapt rarely remain relevant.
It’s become clear that AI is not a passing trend. It is a structural shift in how software is created. Resisting it entirely is unlikely to be sustainable, and blindly embracing it without guardrails is equally risky.
AI as acceleration vs. AI as substitution
Open source contributions have traditionally served as one of the most powerful learning engines in our industry. Developers deepen expertise, explore systems, build portfolios, and give back to the communities they rely on.
But the arrival of AI seems to have changed how many contributors produce work. Unfortunately, this hasn’t happened in a globally productive way; rather, it has happened in a way that undermines the one thing a meaningful contribution requires:
Understanding.
Using AI to bypass understanding is not acceleration. It’s debt for both the contributor and the project.
Superficially correct code that cannot be explained, reasoned about, or defended introduces risk. It also deprives contributors of the very growth that open source participation has historically enabled.
Across open source communities, we’re hearing the same message shared with AI-touting contributors: AI can amplify learning, but it cannot replace learning.
Ownership still matters — perhaps more than ever
During an internal discussion about AI-generated contributions, Jim Bugwadia, Nirmata CEO and Kyverno founder, made a deceptively simple observation about what needs to happen with AI generated and assisted contributions:
“Own your commit.”
In a world of AI-assisted development, that idea expands naturally.
If AI helped generate your contribution, you must also own your prompt and whatever it generates.
Ownership means:
- Understanding intent
- Verifying correctness
- Taking responsibility for outcomes
- Standing behind the change
AI can generate output, but it can’t and shouldn’t assume accountability. The idea of having a human in the loop isn’t something that can or should ever be only maintainer-facing. To be fair, this concept must be contributor-facing too.
Disclosure as trust infrastructure
Transparency has always been foundational to open source collaboration.
AI introduces new complexities around licensing, copyright, provenance, and tool terms of service. Legal frameworks are still evolving, and uncertainty remains a defining characteristic of this space.
Disclosure is not about tools or bureaucracy.
Disclosure is about accountability. It is trust infrastructure.
Requiring contributors to disclose meaningful AI usage helps preserve:
- Transparency
- Reviewer trust
- Licensing integrity
- Contribution clarity
- Responsible authorship
This approach aligns with guidance from the Linux Foundation and discussions across the broader CNCF community, both of which acknowledge that AI-generated content can be contributed provided contributors ensure compliance with licensing, attribution, and intellectual property obligations.
When AI meets open source: Kyverno’s approach
Kyverno is not a hobby project. Our project is used globally, in production, across organizations ranging from startups to enterprise-scale companies. Adoption continues to grow, and the project is actively moving toward CNCF Graduation.
Kyverno itself exists to create:
- Clarity
- Safety
- Consistency
- Sustainable workflows
All through policy as code.
In this case, we are applying the same philosophy to something new: AI usage.
If policy as code provides guardrails and golden paths in platform engineering, then we should be considering how to provide similar guidance in the AI-assisted development space.
Developers can’t sustainably leverage AI within open source ecosystems if projects fail to define the appropriate expectations for them to keep in mind as they develop.
AI-friendly does not mean AI-unbounded
There is an important distinction emerging across open source communities: Being AI-friendly does not mean accepting unreviewed AI output.
Maintainers themselves are often enthusiastic adopters of AI tools and rightly so. Across projects, maintainers are using AI to:
- Accelerate repetitive tasks
- Improve documentation
- Generate scaffolding
- Explore design alternatives
One emerging pattern is the use of AGENT.md-style configurations, designed to guide how AI tools interact with repositories and project conventions.
Kyverno is actively exploring similar approaches. The goal is not simply to manage AI-assisted contributions, but to improve their quality at the source.
Discomfort, growth, and privilege
AI is forcing open source communities to confront unfamiliar challenges:
- Scaling review processes
- Defining authorship norms
- Navigating licensing uncertainty
- Re-thinking contributor workflows
Discomfort is inevitable. But as Jim often reminds our team:
“Discomfort in newness is typically a sign of growth.”
The pressure to navigate these new challenges and answer these pressing questions is not a burden. Rising to this challenge is a privilege. It means:
- Our project matters
- The ecosystem is evolving
- We’re participating in shaping the future
A shared challenge across open source
Kyverno’s AI policy work was informed by thoughtful discussions and examples across the ecosystem. We dove into a variety of projects, each reflecting different constraints and priorities for us to keep in mind as we embark on our own journey.
Moving forward, what matters most is that communities and community members from different projects and industries around the globe engage deliberately with these questions rather than simply reacting to the tooling.
Open source sustainability increasingly depends on shared governance patterns, not isolated experimentation.
An invitation to the ecosystem
AI is not going away, nor should it.
The question is not whether AI belongs in open source. The question is how we integrate it responsibly.
Sustainable open source in the AI era requires:
- Human ownership
- Transparent authorship
- Respect for reviewer time
- Context-aware contributions
- Community-driven guardrails
AI is a powerful tool. But open source remains, at its core, a human system.
While AI changes the tools and accelerates output, it does not change the responsibility.
Acknowledgements and influences
Kyverno’s AI Usage Policy was shaped by the openness and thoughtfulness of many communities and leaders, including:
- Ghostty
- KubeVirt
- Linux Foundation working groups
- QEMU maintainers
- Mitchell Hashimoto’s writings on AI adoption
Open source benefits enormously when governance knowledge is shared. Thanks to everyone who has already shared and to those who will help us continue to adapt our AI policies as we grow our project.
Jailbreaking the F-35 Fighter Jet
Countries around the world are becoming increasingly concerned about their dependencies on the US. If you’ve purchased US-made F-35 fighter jets, you are dependent on the US for software maintenance.
The Dutch Defense Secretary recently said that he could jailbreak the planes to accept third-party software.
SBOMscanner 0.10 Release
Announcing the AI Gateway Working Group
The community around Kubernetes includes a number of Special Interest Groups (SIGs) and Working Groups (WGs) facilitating discussions on important topics between interested contributors. Today, we're excited to announce the formation of the AI Gateway Working Group, a new initiative focused on developing standards and best practices for networking infrastructure that supports AI workloads in Kubernetes environments.
What is an AI Gateway?
In a Kubernetes context, an AI Gateway refers to network gateway infrastructure (including proxy servers, load-balancers, etc.) that generally implements the Gateway API specification with enhanced capabilities for AI workloads. Rather than defining a distinct product category, AI Gateways describe infrastructure designed to enforce policy on AI traffic, including:
- Token-based rate limiting for AI APIs.
- Fine-grained access controls for inference APIs.
- Payload inspection enabling intelligent routing, caching, and guardrails.
- Support for AI-specific protocols and routing patterns.
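To make the first capability concrete: token-based rate limiting means charging each request for the LLM tokens it consumes against a per-client budget, rather than counting requests. The sketch below is purely illustrative (the class and method names are not part of any Gateway API proposal), using a simple token-bucket that refills over time:

```python
import time

class TokenBudgetLimiter:
    """Illustrative token-based rate limiter for AI APIs: each client has a
    budget of LLM tokens that refills continuously up to a per-minute cap.
    Names and structure are hypothetical, not from the working group's APIs."""

    def __init__(self, tokens_per_minute: int):
        self.rate = tokens_per_minute / 60.0      # tokens replenished per second
        self.capacity = tokens_per_minute
        self.budgets = {}                         # client_id -> (tokens_left, last_ts)

    def allow(self, client_id: str, requested_tokens: int, now=None) -> bool:
        """Charge `requested_tokens` to the client's budget; True if it fits."""
        now = time.monotonic() if now is None else now
        tokens, last = self.budgets.get(client_id, (self.capacity, now))
        # Refill proportionally to elapsed time, capped at full capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if requested_tokens <= tokens:
            self.budgets[client_id] = (tokens - requested_tokens, now)
            return True
        self.budgets[client_id] = (tokens, now)
        return False
```

A gateway would apply this per API key or per namespace, using the token count reported by the inference backend (or an estimate from the request payload) as the charge.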
Working group charter and mission
The AI Gateway Working Group operates under a clear charter with the mission to develop proposals for Kubernetes Special Interest Groups (SIGs) and their sub-projects. Its primary goals include:
- Standards Development: Create declarative APIs, standards, and guidance for AI workload networking in Kubernetes.
- Community Collaboration: Foster discussions and build consensus around best practices for AI infrastructure.
- Extensible Architecture: Ensure composability, pluggability, and ordered processing for AI-specific gateway extensions.
- Standards-Based Approach: Build on established networking foundations, layering AI-specific capabilities on top of proven standards.
Active proposals
WG AI Gateway currently has several active proposals that address key challenges in AI workload networking:
Payload Processing
The payload processing proposal addresses the critical need for AI workloads to inspect and transform full HTTP request and response payloads. This enables:
AI Inference Security
- Guard against malicious prompts and prompt injection attacks.
- Content filtering for AI responses.
- Signature-based detection and anomaly detection for AI traffic.
AI Inference Optimization
- Semantic routing based on request content.
- Intelligent caching to reduce inference costs and improve response times.
- RAG (Retrieval-Augmented Generation) system integration for context enhancement.
The proposal defines standards for declarative payload processor configuration, ordered processing pipelines, and configurable failure modes - all essential for production AI workload deployments.
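The shape of an ordered pipeline with configurable failure modes can be sketched in a few lines. This is an illustration of the pattern, not the proposal's actual API; the names here are invented:

```python
from enum import Enum

class FailureMode(Enum):
    FAIL_OPEN = "allow"     # on processor error, skip it and continue
    FAIL_CLOSED = "reject"  # on processor error, reject the request

def run_pipeline(payload: str, processors, mode: FailureMode) -> str:
    """Run payload processors in declared order; each may inspect or
    transform the payload. The failure mode decides whether an erroring
    processor aborts the request or is skipped."""
    for process in processors:
        try:
            payload = process(payload)
        except Exception:
            if mode is FailureMode.FAIL_CLOSED:
                raise  # reject: surface the error to the gateway
            # fail open: leave the payload as-is and continue the chain
    return payload
```

A security-focused deployment would run guardrail processors fail-closed (a broken prompt filter should block traffic), while an optimization processor like a semantic cache might run fail-open.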
Egress gateways
Modern AI applications increasingly depend on external inference services, whether for specialized models, failover scenarios, or cost optimization. The egress gateways proposal aims to define standards for securely routing traffic outside the cluster. Key features include:
External AI Service Integration
- Secure access to cloud-based AI services (OpenAI, Vertex AI, Bedrock, etc.).
- Managed authentication and token injection for third-party AI APIs.
- Regional compliance and failover capabilities.
Advanced Traffic Management
- Backend resource definitions for external FQDNs and services.
- TLS policy management and certificate authority control.
- Cross-cluster routing for centralized AI infrastructure.
User Stories We're Addressing
- Platform operators providing managed access to external AI services.
- Developers requiring inference failover across multiple cloud providers.
- Compliance engineers enforcing regional restrictions on AI traffic.
- Organizations centralizing AI workloads on dedicated clusters.
Upcoming events
KubeCon + CloudNativeCon Europe 2026, Amsterdam
AI Gateway working group members will be presenting at KubeCon + CloudNativeCon Europe in Amsterdam, discussing the problems at the intersection of AI and networking, including the working group's active proposals, as well as the intersection of AI gateways with Model Context Protocol (MCP) and agent networking patterns.
This session will showcase how AI Gateway working group proposals enable the infrastructure needed for next-generation AI deployments and communication patterns.
The session will also include the initial designs, early prototypes, and emerging directions shaping the WG’s roadmap.
For more details see our session here:
Get involved
The AI Gateway Working Group represents the Kubernetes community's commitment to standardizing AI workload networking. As AI becomes increasingly integral to modern applications, we need robust, standardized infrastructure that can support the unique requirements of inference workloads while maintaining the security, observability, and reliability standards that Kubernetes users expect.
Our proposals are currently in active development, with implementations beginning across various gateway projects. We're working closely with SIG Network on Gateway API enhancements and collaborating with the broader cloud-native community to ensure our standards meet real-world production needs.
Whether you're a gateway implementer, platform operator, AI application developer, or simply interested in the intersection of Kubernetes and AI, we'd love your input. The working group follows an open contribution model - you can review our proposals, join our weekly meetings, or start discussions on our GitHub repository. To learn more:
- Visit the working group's umbrella GitHub repository.
- Read the working group's charter.
- Join the weekly meeting on Thursdays at 2PM EST.
- Connect with the working group on Slack (#wg-ai-gateway) (visit https://slack.k8s.io/ for an invitation).
- Join the AI Gateway mailing list.
The future of AI infrastructure in Kubernetes is being built today. Join us to learn how you can contribute and help shape the future of AI-aware gateway capabilities in Kubernetes.

New Attack Against Wi-Fi
It’s called AirSnitch:
Unlike previous Wi-Fi attacks, AirSnitch exploits core features in Layers 1 and 2 and the failure to bind and synchronize a client across these and higher layers, other nodes, and other network names such as SSIDs (Service Set Identifiers). This cross-layer identity desynchronization is the key driver of AirSnitch attacks.
The most powerful such attack is a full, bidirectional machine-in-the-middle (MitM) attack, meaning the attacker can view and modify data before it makes its way to the intended recipient. The attacker can be on the same SSID, a separate one, or even a separate network segment tied to the same AP. It works against small Wi-Fi networks in both homes and offices and large networks in enterprises...
Admission Controller 1.33 Release
How AI Assistants are Moving the Security Goalposts
AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.
The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.
If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.
Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.
“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”
You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.
“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.
There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.
Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.
With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.
“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”
O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.
WHEN AI INSTALLS AI
One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.
A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their device without consent.
According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.
“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.
“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”
VIBE CODING
AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.
Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.
Moltbook’s creator Matt Schlict said on social media that he didn’t write a single line of code for the project.
“I just had a vision for the technical architecture and AI made it a reality,” Schlict said. “We’re in the golden ages. How can we not give AI a place to hang out.”
ATTACKERS LEVEL UP
The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.
AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.
“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”
“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”
For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.
“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”
BEWARE THE ‘LETHAL TRIFECTA’
This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.
“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”
One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.
“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.
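The trifecta is a checklist, and it can be expressed as one. The toy sketch below (capability names are invented for illustration, not from any real agent framework) shows the rule: any two capabilities may be tolerable, but all three together enable exfiltration:

```python
# Toy illustration of Willison's "lethal trifecta": an agent that combines
# private-data access, exposure to untrusted content, and external
# communication can be tricked into stealing data via prompt injection.
# The capability names here are hypothetical.
LETHAL_TRIFECTA = {"private_data_access", "untrusted_content", "external_comms"}

def exfiltration_risk(capabilities: set[str]) -> bool:
    """True if the agent holds all three trifecta capabilities."""
    return LETHAL_TRIFECTA <= capabilities

exfiltration_risk({"private_data_access", "untrusted_content"})  # False
exfiltration_risk(LETHAL_TRIFECTA)                               # True
```

The practical mitigation is to drop one leg of the triangle: isolate the agent from untrusted input, deny it secrets, or cut off its outbound channels.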
As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.
The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.
“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”
DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.
“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”
Friday Squid Blogging: Squid in Byzantine Monk Cooking
This is a very weird story about how squid stayed on the menu of Byzantine monks by falling between the cracks of dietary rules.
At Constantinople’s Monastery of Stoudios, the kitchen didn’t answer to appetite.
It answered to the “typikon”: a manual for ensuring that nothing unexpected happened at mealtimes. Meat: forbidden. Dairy: forbidden. Eggs: forbidden. Fish: feast-day only. Oil: regulated. But squid?
Squid had eight arms, no bones, and a gift for changing color. Nobody had bothered writing a regulation for that. This wasn’t a loophole born of legal creativity but an oversight rooted in taxonomic confusion. Medieval monks, confronted with a creature that was neither fish nor fowl, gave up and let it pass...
Anthropic and the Pentagon
OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic’s insistence that the US Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons,” provisions the defense secretary Pete Hegseth ...
Claude Used to Hack Mexican Government
An unknown hacker used Anthropic’s LLM to hack the Mexican government:
The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.
[…]
Claude initially warned the unknown user of malicious intent during their conversation about the Mexican government, but eventually complied with the attacker’s requests and executed thousands of commands on government computer networks, the researchers said...
CoreDNS-1.14.2 Release
Israel Hacked Traffic Cameras in Iran
Multiple news outlets are reporting on Israel’s hacking of Iranian traffic cameras and how they assisted with the killing of that country’s leadership.
The New York Times has an <a href="https://www.nytimes.com/2026/03/01/us/politics/cia-israel-ayatollah-compound.html">article</a> on the intelligence operation more generally.
Hacked App Part of US/Israeli Propaganda Campaign Against Iran
Wired has the story:
Shortly after the first set of explosions, Iranians received bursts of notifications on their phones. They came not from the government advising caution, but from an apparently hacked prayer-timing app called BadeSaba Calendar that has been downloaded more than 5 million times from the Google Play Store.
The messages arrived in quick succession over a period of 30 minutes, starting with the phrase ‘Help has arrived’ at 9:52 am Tehran time, shortly after the first set of explosions. No party has claimed responsibility for the hacks...
Uncached I/O in Prometheus
Do you find yourself constantly looking up the difference between container_memory_usage_bytes, container_memory_working_set_bytes, and container_memory_rss? Pick the wrong one and your memory limits lie to you, your benchmarks mislead you, and your container gets OOMKilled.
You're not alone. There is even a 9-year-old Kubernetes issue that captures the frustration of users.
The explanation is simple: RAM is not used in just one way. One of the easiest things to miss is the page cache semantics. For some containers, memory taken by page caching can make up most of the reported usage, even though that memory is largely reclaimable, creating surprising differences between those metrics.
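The relationship between these metrics can be sketched numerically. The helper below is purely illustrative, not cAdvisor's actual code, and assumes the commonly documented semantics: the working set is roughly reported usage minus the inactive (reclaimable) page cache.

```go
package main

import "fmt"

// workingSet is an illustrative approximation of how the cAdvisor
// memory metrics relate, under their commonly documented semantics:
//
//   container_memory_usage_bytes       ~ rss + page cache + kernel memory
//   container_memory_working_set_bytes ~ usage - inactive page cache
//
// The inactive file cache is reclaimable, which is why the working set
// (the OOM-relevant number) can sit far below reported usage.
func workingSet(usageBytes, inactiveFileBytes uint64) uint64 {
	if inactiveFileBytes > usageBytes {
		return 0
	}
	return usageBytes - inactiveFileBytes
}

func main() {
	usage := uint64(2 << 30)       // 2 GiB reported usage
	inactive := uint64(1536 << 20) // 1.5 GiB of it is inactive page cache
	fmt.Println(workingSet(usage, inactive)) // prints 536870912 (512 MiB)
}
```

In this sketch a container "using" 2 GiB is only holding 512 MiB of hot memory; the rest is reclaimable cache, which is exactly the gap that surprises people comparing the two metrics.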
NOTE: The feature discussed here currently only supports Linux.
Prometheus writes a lot of data to disk. It is, after all, a database. But not every write benefits from sitting in the page cache. Compaction writes are the clearest example: once a block is written, only a fraction of that data is likely to be queried again soon, and since there is no way to predict which fraction, caching it all offers little return. The use-uncached-io feature flag was built to address exactly this.
Bypassing the cache for those writes reduces Prometheus's page cache footprint, making its memory usage more predictable and easier to reason about. It also relieves pressure on that shared cache, lowering the risk of evicting hot data that queries and other reads actually depend on. A potential bonus is reduced CPU overhead from cache allocations and evictions. The hard constraint throughout was to avoid any measurable regression in CPU or disk I/O.
The flag was introduced in Prometheus v3.5.0 and currently only supports Linux. Under the hood, it uses direct I/O, which requires proper filesystem support and kernel v2.4.10 or newer, though you should be fine, as that version shipped nearly 25 years ago.
If direct I/O helps here, why was it not done earlier, and why is it not used everywhere it would help? Because direct I/O comes with strict alignment requirements. Unlike buffered I/O, you cannot simply write any chunk of memory to any position in a file. The file offset, the memory buffer address, and the transfer size must all be aligned to the logical sector size of the underlying storage device, typically 512 or 4096 bytes.
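As a rough sketch of what those constraints mean in practice, the hypothetical helper below rounds a write length up to the next sector boundary. Both `alignUp` and the 4096-byte sector size are illustrative assumptions, not Prometheus's actual implementation.

```go
package main

import "fmt"

// alignUp rounds n up to the next multiple of align. Direct I/O imposes
// exactly this kind of math: the file offset, the buffer address, and
// the transfer size must all be multiples of the device's logical
// sector size, typically 512 or 4096 bytes.
func alignUp(n, align int64) int64 {
	return (n + align - 1) / align * align
}

func main() {
	const sector = 4096
	// A 10,000-byte chunk must be padded to the next sector boundary
	// before it can be written with O_DIRECT.
	fmt.Println(alignUp(10000, sector)) // prints 12288
}
```

A buffered writer hides this padding from callers: it accumulates arbitrary-length writes into an aligned internal buffer and flushes only aligned, sector-multiple slices to disk, which is the role the post's directIOWriter plays.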
To satisfy those constraints, a bufio.Writer-like writer, directIOWriter, was implemented. On Linux kernels v6.1 or newer, Prometheus retrieves the exact alignment values via statx; on older kernels, conservative defaults are used.
The directIOWriter currently covers chunk writes during compaction only, but that alone accounts for a substantial portion of Prometheus's I/O. The results are tangible: benchmarks show a 20–50% reduction in page cache usage, as measured by container_memory_cache.
The work is not done yet, and contributions are welcome. Here are a few areas that could help move the feature closer to General Availability:
Covering more write paths
Direct I/O is currently limited to chunk writes during compaction. Index files and WAL writes are natural next candidates, although they would require some additional work.
Building more confidence around directIOWriter
All existing TSDB tests can be run against the directIOWriter using a dedicated build tag: go test --tags=forcedirectio ./tsdb/. More tests covering edge cases for the writer itself would be welcome, and there is even an idea of formally verifying that it never violates alignment requirements.
Experimenting with RWF_DONTCACHE
Introduced in Linux kernel v6.14, RWF_DONTCACHE enables uncached buffered I/O, where data still goes through the page cache but the corresponding pages are dropped afterwards. It would be worth benchmarking whether this delivers similar benefits without direct I/O's alignment constraints.
Support beyond Linux
Support is currently Linux-only. Contributions to extend it to other operating systems are welcome.
For more details, see the proposal and the PR that introduced the feature.
MCP security: Implementing robust authentication and authorization
Europe Under Pressure
Manipulating AI Summarization Features
Microsoft is reporting:
Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters….
These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated...
Scaling organizational structure with Meshery’s expanding ecosystem
As a high-velocity project and one of the fastest-growing in the CNCF ecosystem, Meshery has reached a scale of complexity and community contribution that calls for a revised governance and organizational structure. To best serve its expansive ecosystem, Meshery maintainers have opted to partition the numerous GitHub repositories into two distinct organizations: github.com/meshery for the core platform and github.com/meshery-extensions for extensions and integrations.
This post explains the rationale behind the shift, outlines the proposed governance structure, sets expectations around support, and describes project mechanics, drawing inspiration from other successful CNCF projects.
Rationale for Repository Partitioning
The decision to partition repositories aims to improve project structure, manageability, scalability, and community engagement.
Project architecture
Meshery is a highly extensible, self-service management platform. Every feature is developed with extensibility in mind, as is evident from the ubiquity of extension points throughout Meshery’s architecture.
Modularity and focus
Separating the core platform from extensions allows the Meshery core team to concentrate on maintaining and enhancing the primary platform, which includes critical components like Meshery Operator and MeshSync. Extensions, such as adapters for specific cloud native technologies, can be developed and maintained independently by community contributors or specialized teams. This modularity ensures that the core platform remains robust and focused.
Project scalability
With support for over 300 integrations and counting, managing everything under one GitHub organization has become impractical. A separate organization for extensions simplifies permission management, contribution processes, and release cycles, making the ecosystem more scalable.
- Community ownership and maintenance: Projects within meshery-extensions are generally initiated, developed, and maintained by members of the community, rather than the core maintainers. This allows the ecosystem to scale beyond what the core team can directly support.
- Clearer support expectations: Distinguishing between the core and extensions makes it clear that projects in meshery-extensions have different maintenance levels, release cadences, and support guarantees compared to the core components. This clarifies that users are relying on community support for these specific integrations.
Community engagement
By providing a dedicated space for extensions, Meshery encourages community contributions, as developers can create and maintain extensions without needing deep involvement in the core platform’s development. With this approach, meshery-extensions fosters a vibrant ecosystem around Meshery by providing a designated, community-centric space for extensions, integrations, and tooling, keeping the core project focused and manageable while enabling broad community participation.
- Incubation and experimentation: The separate organization acts as an incubator for new ideas, providers, or tooling related to Meshery. Projects can start here and, if they gain significant traction and stability, will be considered for migration or closer integration with the core project.
- Ecosystem growth: Part of Meshery’s power lies in its ability to manage any infrastructure via Providers, Models, Adapters, and its other extension points. Since there are countless APIs and services, meshery-extensions serves as the place where the community can build and share Providers for less common cloud services, specific SaaS platforms, or even internal company APIs, without needing official endorsement or maintenance from the core maintainers.
Governance Structure
The new structure allows for different governance models and maintainer structures for community projects compared to the core project. Meshery can adopt a governance model that balances control over the core platform with flexibility for extensions, drawing from its existing governance and the Kubernetes SIG model.
Core Platform (github.com/meshery)
- Governance: Governed by the core Meshery maintainers, as outlined in the project’s governance document. Roles include contributors, organization members, and maintainers, with clear processes for becoming a maintainer (e.g., nomination, voting by existing maintainers).
- Responsibilities: Maintainers review, approve, and merge pull requests, manage releases, and ensure the platform’s stability and alignment with CNCF standards.
- Decision-making: Decisions are made through consensus among maintainers, with regular meetings and transparent communication via Slack and community forums.
Extensions (github.com/meshery-extensions)
- Governance: Each extension may have its own maintainers and a lighter governance structure to encourage innovation. A review process by the core team ensures extensions meet quality and compatibility standards.
- Maintainer selection: Extension maintainers can be nominated by community members or self-nominated, with approval from the core team based on contribution history and technical expertise.
- Autonomy: Extension teams have autonomy over their development processes, provided they adhere to Meshery’s code of conduct and integration guidelines.
Oversight and Coordination
- Steering committee: A steering committee, composed of core maintainers and representatives from active extension teams, oversees cross-organization alignment, resolves conflicts, and approves new extensions.
- Transparency: Both organizations maintain open communication with public meeting minutes, discussion forums, and regular updates to the community.
Delineated support expectations
Support expectations differ between the core platform and extensions to reflect their distinct roles and maintenance models.
Core platform
- Full support: The core team provides regular updates, bug fixes, and feature enhancements, ensuring stability for critical components like Meshery Operator and MeshSync.
- Documentation: Comprehensive guides, such as installation instructions and CLI usage, are maintained (Meshery Documentation).
- Community support: Active engagement through Slack, forums, and weekly newcomer meetings to support users and contributors.
Extensions
- Variable support: Core team-maintained extensions receive robust support, while community-maintained ones may have limited support.
- Clear labeling: Documentation should indicate the support level (e.g., “Official” vs. “Community”) for each extension.
- Integration support: The core platform provides stable APIs and extension points, ensuring compatibility, with guidelines for developers (Meshery Extensions).
Project mechanics
Managing two organizations involves distinct development, testing, and integration processes to ensure a cohesive ecosystem.
Development process
- Platform: Follows a structured release cycle with stable and edge channels. Changes undergo rigorous review to sustain stability. Platform extenders and system integrators are notified of upcoming changes in the underlying framework, affording them time to maintain compatibility.
- Extensions: Operate on independent release cycles, allowing rapid iteration. Developers use Meshery’s extension points to integrate with the core platform, following contribution guidelines.
Integration testing
- Compatibility testing: Extensions are tested against multiple core platform versions to ensure compatibility, following the project’s guidance for verifying compatibility between the core platform and extensions.
- Automated pipelines: GitHub Actions automate testing and snapshot generation, as seen in extensions like Helm Kanvas Snapshot.
- Performance testing: Meshery’s performance management features can be used to benchmark extensions, ensuring they meet efficiency standards.
Documentation and resources
- Comprehensive guides: Documentation covers core platform usage, extension development, and integration (Meshery Docs). The Newcomers’ Guide and MeshMates program aid onboarding (Meshery Community).
- Catalog and templates: Meshery’s catalog of design templates includes extension configurations and promotes best practices (Meshery Catalog).
- Community resources: Weekly meetings, Slack channels, and the community handbook provide ongoing support.
Reflections on other projects
Meshery’s expansion strategy draws inspiration from successful models within the Cloud Native Computing Foundation (CNCF), like Argo, Crossplane, and Kubernetes. These projects demonstrate effective approaches to decentralized governance and focused development through the separation of core and community-contributed components.
Meshery aims to emulate Crossplane’s model of maintaining a clear distinction between its core platform (github.com/crossplane) and community contributions (github.com/crossplane-contrib). This separation allows third-party developers to extend Crossplane’s capabilities without affecting the core’s stability, a model that supports Meshery’s approach to fostering innovation while maintaining a reliable core.
Similarly, Meshery Extension teams operate with autonomy over their development processes, provided they adhere to Meshery’s core component frameworks and integration guidelines. This mirrors Argo’s model (github.com/argoproj-labs), where projects function independently but align with broader project goals.
Kubernetes provides a robust model for decentralized governance through its use of github.com/kubernetes for core components and github.com/kubernetes-sigs for Special Interest Groups (SIGs). Each SIG acts as a mini-community with its own charter, leadership, and processes, all while aligning with overarching project goals, as outlined in the Kubernetes Governance documentation. Meshery’s extension organization can adopt a similar structure, enabling extension teams to operate autonomously within defined guidelines.
Meshery umbrella expands
See the current list of repositories under each organization: meshery org repos and meshery-extensions org repos.
By partitioning repositories into github.com/meshery and github.com/meshery-extensions, Meshery is taking a strategic step towards the overarching goal of improved modularity, scalability, and community engagement.
By adopting a governance structure that balances control and flexibility, delineating clear support expectations, and implementing robust project mechanics, Meshery can effectively manage its growing ecosystem. Drawing inspiration from graduated projects, Meshery is poised to remain a leading CNCF project—empowering collaborative cloud native management.
Introducing the UX Research Working Group
Prometheus has always prioritized solving complex technical challenges to deliver a reliable, performant open-source monitoring system. Over time, however, users have expressed a variety of experience-related pain points. Those pain points range from onboarding and configuration to documentation, mental models, and interoperability across the ecosystem.
At PromCon 2025, a user research study was presented that highlighted several of these issues. Although the central area of investigation involved Prometheus and OpenTelemetry workflows, the broader takeaway was clear: Prometheus would benefit from a dedicated, ongoing effort to understand user needs and improve the overall user experience.
Recognizing this, the Prometheus team established a Working Group focused on improving user experience through design and user research. This group is meant to support all areas of Prometheus by bringing structured research, user insights, and usability perspectives into the community's development and decision-making processes.
How we can help Prometheus maintainers
Building something where the user needs are unclear? Maybe you're looking at two competing solutions and you'd like to understand the user tradeoffs alongside the technical ones.
That's where we can be of help.
The UX Working Group will partner with you to conduct user research or provide feedback on your plans for user outreach. That could include:
- User research reports and summaries
- User journeys, personas, wireframes, prototypes, and other UX artifacts
- Recommendations for improving usability, onboarding, interoperability, and documentation
- Prioritized lists of user pain points
- Suggestions for community discussions or decision-making topics
To get started, tell us what you're trying to do, and we'll work with you to determine what type and scope of research is most appropriate.
How we can help Prometheus end users
We want to hear from you! Let us know if you're interested in participating in a research study and we'll contact you when we're working on one that's a good fit. Having an issue with the Prometheus user experience? We can help you open an issue and direct it to the appropriate community members.
Interested in helping?
New contributors to the working group are always welcome! Get in touch and let us know what you'd like to work on.
Where to find us
Drop us a message in Slack, join a meeting, or raise an issue in GitHub.
- Slack: #prometheus-ux-wg
- Meetings: We meet biweekly, currently on Wednesdays at 14:00 UTC (subject to change depending on contributor availability).
- GitHub: prometheus-community/ux-research
On Moltbook
The MIT Technology Review has a good article on Moltbook, the supposed AI-only social network:
Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.
“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”...

