Feed aggregator
A Surprising Amount of Satellite Traffic Is Unencrypted
Here’s the summary:
We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens’ voice calls and SMS, and consumer Internet traffic from in-flight wifi and mobile networks. This data can be passively observed by anyone with a few hundred dollars of consumer-grade hardware. There are thousands of geostationary satellite transponders globally, and data from a single transponder may be visible from an area as large as 40% of the surface of the earth...
DDoS in September
Policy Server 1.29.2 Patch Release
Kyverno vs Kubernetes policies: How Kyverno complements and completes Kubernetes policy types
Originally posted on Nirmata.com on October 1, 2025
How Kyverno extends and integrates with Kubernetes policies
With the addition of ValidatingAdmissionPolicy and MutatingAdmissionPolicy in Kubernetes, do you still need Kyverno? This post answers the question by providing ten reasons why Kyverno is essential even when you are using Kubernetes policy types.
Introduction
Prior to Kyverno, policy management in Kubernetes was complex and cumbersome. While the need for Policy as Code was clear, initial implementations required learning complex languages and did not implement the full policy as code lifecycle.
Kyverno was created by Nirmata and donated to the CNCF in November 2020. It rapidly gained popularity due to its embrace of Kubernetes resources for policy declarations, its easy-to-use syntax, and a breadth of features that addressed all aspects of policy as code.
Recently, Kubernetes has also introduced native policy types which can be executed directly in the Kubernetes API server. This move validates that policies are a must-have for Kubernetes, and it allows critical policies to run in the API server without an external admission controller.
The Kubernetes API server is a critical resource that needs to be extremely efficient. To safely execute policies in the API server, the Kubernetes authors chose CEL (Common Expression Language) to embed logic in policy YAML declarations. In addition to a familiar syntax, CEL programs can be pre-compiled and execution costs can be pre-calculated.
With these changes in Kubernetes, Kyverno has also evolved to stay true to its mission of providing the best policy engine and tools for Kubernetes native policy as code.
Kyverno now supports five new policy types, two of which, ValidatingPolicy and MutatingPolicy, are extensions of Kubernetes policy types ValidatingAdmissionPolicy and MutatingAdmissionPolicy, respectively.
NOTE: I will use the term “Kubernetes Policies” to refer to ValidatingAdmissionPolicies and MutatingAdmissionPolicies.
Here is a summary of the Kyverno policy types:
- ValidatingPolicy: This policy type checks if a resource’s configuration adheres to predefined rules and can either enforce or audit compliance. This policy type is an extension of the Kubernetes ValidatingAdmissionPolicy.
- ImageValidatingPolicy: A specialized validating policy that verifies a container image’s signatures and attestations to ensure its integrity and trustworthiness.
- MutatingPolicy: This policy type modifies a resource’s configuration as it’s being created or updated, applying changes like adding labels, annotations, or sidecar containers. This policy is an extension of the Kubernetes MutatingAdmissionPolicy.
- GeneratingPolicy: This policy creates or clones new resources in response to a trigger event, such as automatically generating a NetworkPolicy when a new Namespace is created.
- DeletingPolicy: This policy automatically deletes existing resources that match specific criteria on a predefined schedule, often used for garbage collection or enforcing retention policies.
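To make the first of these concrete, here is a minimal sketch of a ValidatingPolicy that requires a team label on Deployments. The field names mirror the Kubernetes ValidatingAdmissionPolicy schema that this type extends, but the exact apiVersion, policy name, and field layout are illustrative assumptions to check against the current Kyverno documentation.

```yaml
apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: require-team-label      # hypothetical policy name
spec:
  validationActions: [Deny]     # enforce; use Audit to report only
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated against the admission payload
    - expression: "has(object.metadata.labels) && 'team' in object.metadata.labels"
      message: "Deployments must carry a 'team' label."
```

Because the match constraints and CEL expression follow the Kubernetes schema, a policy like this can be kept simple enough for the API server to execute natively, while Kyverno adds reporting, background scanning, and off-cluster execution around it.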
So, when should you choose to use Kyverno policies vs the Kubernetes policy types? The right answer is that if you believe that declarative Policy as Code is the right way to manage Kubernetes configuration complexity, you will need both!
As you will see below, Kyverno provides critical features that are missing in Kubernetes policies and also helps with policy management at scale.
1. Applying policies on existing resources
When new policies are created, they need to be applied to existing resources. Kubernetes Policies apply only when a resource changes, so policy violations in existing resources are never reported.
Kyverno applies policies, including Kubernetes policy types, to all resources, both new and existing.
2. Reapplying policies on changes
Like code, policies change over time. This can be to adapt to updated or new features, or to fix issues in the policy. When a policy changes, it must be re-applied to all resources. Kubernetes Policies are embedded in the API server and are not reapplied when the policy changes. Kyverno re-evaluates changed policies against all matching resources through its background scans.
3. Applying policies off-cluster (shift-left)
Providing feedback to developers as early as possible in a deployment pipeline is highly desirable and has tangible benefits of time and cost savings. The Kyverno CLI can apply Kyverno and Kubernetes Policy types in CI/CD and IaC pipelines.
4. Testing policy as code
Like all software, policies must be thoroughly tested prior to deployment. Kyverno provides tools for testing Kyverno and Kubernetes policy types. You can use the Kyverno CLI for unit tests, and Kyverno Chainsaw for e2e behavioral tests.
5. Reporting policy results
Kyverno provides integrated reporting: reports are namespaced Kubernetes resources and hence available to application owners via the Kubernetes API and other standard tools. Kyverno reports are generated for both Kyverno and Kubernetes policy types.
6. Managing fine-grained policy exceptions
Kyverno allows configuring policy exceptions to exclude some resources from policies. Kyverno exceptions are Kubernetes resources, making it possible to view and manage them via the Kubernetes API using standard tools.
Kyverno exceptions can specify an image, so you can exclude certain containers in a pod while still applying the policy to other containers. Exceptions can also declare specific values that are allowed, and they can be made time-bound by adding a TTL (time to live).
These powerful capabilities allow you to enforce policies and then use exceptions to exclude certain resources, or even parts of a resource declaration.
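As an illustration, here is a hypothetical PolicyException that exempts Deployments in one namespace from a policy named require-team-label. The policy name, namespace, and the TTL label (an assumption based on Kyverno's cleanup TTL mechanism) are illustrative; verify the exact schema against the Kyverno documentation.

```yaml
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: allow-legacy-app        # hypothetical exception name
  namespace: legacy
  labels:
    # assumed time-bound exception: Kyverno's cleanup controller
    # removes resources carrying this TTL label after the duration
    cleanup.kyverno.io/ttl: 30d
spec:
  exceptions:
    - policyName: require-team-label
      ruleNames:
        - require-team-label
  match:
    any:
      - resources:
          kinds: ["Deployment"]
          namespaces: ["legacy"]
```

Because the exception is itself a namespaced Kubernetes resource, it can be reviewed in a pull request, audited via the API, and garbage-collected when it expires.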
7. Complex policy logic
Kubernetes policies are designed for simple checks and can only operate on the admission payload. This is often insufficient, as policies may need to look up other resources or reference external data, and these types of checks are not possible with Kubernetes policies. Additionally, Kubernetes MutatingAdmissionPolicies cannot match sub-resources.
Kyverno supports features for complex policies, including API lookups and external data management. Kyverno also offers an extended CEL library with useful functions necessary for complex policies.
8. Image verification
Kyverno offers built-in verification of OCI (Open Container Initiative) image and artifact signatures, using Sigstore’s Cosign or CNCF’s Notary projects. This allows implementing software supply chain security use cases and achieving high levels of SLSA (Supply-chain Levels for Software Artifacts).
9. Policy-based automation
Besides validating and mutating resources, policies are an essential tool for automating several complex platform engineering tasks. For example, policies can be used to automatically generate secure defaults, or resources like network policies, on flexible triggers such as a namespace creation or when a label is added. This allows a tight control loop, and can be used to replace custom controllers with declarative and scalable policy as code.
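A sketch of this pattern, using Kyverno's classic ClusterPolicy generate-rule syntax (the newer GeneratingPolicy type expresses the same idea): whenever a Namespace is created, a default-deny NetworkPolicy is generated in it. The policy and resource names are illustrative.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy   # hypothetical policy name
spec:
  rules:
    - name: generate-default-deny
      match:
        any:
          - resources:
              kinds: ["Namespace"]   # trigger: a new Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny
        # place the generated resource in the namespace that triggered the rule
        namespace: "{{request.object.metadata.name}}"
        synchronize: true            # keep the generated resource in sync with the policy
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
```

With `synchronize: true`, Kyverno acts as the control loop the paragraph describes: if the generated NetworkPolicy is deleted or modified, it is restored, replacing what would otherwise require a custom controller.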
10. Kyverno everywhere
While Kubernetes Policy types can only be applied to Kubernetes resources, Kyverno policies can be applied to any JSON or YAML payload including Terraform or OpenTofu manifests, other IaC manifests such as CDK, and build artifacts such as Dockerfiles.
Kyverno enables a unified policy as code approach, which is essential for platform engineering teams that manage both Kubernetes clusters, and pipelines for CI/CD and IaC.
Conclusion
Kyverno is fully compatible with Kubernetes policies, and is designed to seamlessly support and extend Kubernetes policy types. It applies Kubernetes policies to existing resources and can also provide policy reporting and exception management for Kubernetes policies.
Like Kubernetes policies, Kyverno policies use the Common Expression Language (CEL), and they extend the Kubernetes policy declarations with additional fields and extended CEL libraries required for complex policies and advanced policy as code use cases.
This allows having a mix of Kubernetes and Kyverno policies managed by Kyverno. You can get started with Kubernetes policies and then upgrade to Kyverno policies for advanced use cases.
If you have existing Kubernetes policies, you can use Kyverno to apply them to existing resources, produce reports, apply the policies off-cluster, and perform unit and behavioral tests.
If you are starting out, you can use Kyverno policy types. Wherever possible Kyverno will automatically generate and manage Kubernetes policies for optimal performance. For complex policies, which cannot be handled in the API server, Kyverno will execute these during admission control and periodically as background scans.
Regardless of where you start, with Kyverno you get a powerful and complete policy as code solution for Kubernetes and all your policy-based authorization needs!
Cryptocurrency ATMs
CNN has a great piece about how cryptocurrency ATMs are used to scam people out of their money. The fees are usurious, and they’re a common place for scammers to send victims to buy cryptocurrency for them. The companies behind the ATMs, at best, do not care about the harm they cause; the profits are just too good.
Simplified patching with Red Hat Enterprise Linux and Red Hat Insights
Apple’s Bug Bounty Program
Apple is now offering a $2M bounty for a zero-click exploit. According to the Apple website:
Today we’re announcing the next major chapter for Apple Security Bounty, featuring the industry’s highest rewards, expanded research categories, and a flag system for researchers to objectively demonstrate vulnerabilities and obtain accelerated awards.
- We’re doubling our top award to $2 million for exploit chains that can achieve similar goals as sophisticated mercenary spyware attacks. This is an unprecedented amount in the industry and the largest payout offered by any bounty program we’re aware of. Our bonus system, providing additional rewards for Lockdown Mode bypasses and vulnerabilities discovered in beta software, can more than double this reward, with a maximum payout in excess of $5 million. We’re also doubling or significantly increasing rewards in many other categories to encourage more intensive research. This includes $100,000 for a complete Gatekeeper bypass, and $1 million for broad unauthorized iCloud access, as no successful exploit has been demonstrated to date in either category. ...
Patch Tuesday, October 2025 ‘End of 10’ Edition
Microsoft today released software updates to plug a whopping 172 security holes in its Windows operating systems, including at least two vulnerabilities that are already being actively exploited. October’s Patch Tuesday also marks the final month that Microsoft will ship security updates for Windows 10 systems. If you’re running a Windows 10 PC and you’re unable or unwilling to migrate to Windows 11, read on for other options.

The first zero-day bug addressed this month (CVE-2025-24990) involves a third-party modem driver called Agere Modem that’s been bundled with Windows for the past two decades. Microsoft responded to active attacks on this flaw by completely removing the vulnerable driver from Windows.
The other zero-day is CVE-2025-59230, an elevation of privilege vulnerability in Windows Remote Access Connection Manager (also known as RasMan), a service used to manage remote network connections through virtual private networks (VPNs) and dial-up networks.
“While RasMan is a frequent flyer on Patch Tuesday, appearing more than 20 times since January 2022, this is the first time we’ve seen it exploited in the wild as a zero day,” said Satnam Narang, senior staff research engineer at Tenable.
Narang notes that Microsoft Office users should also take note of CVE-2025-59227 and CVE-2025-59234, a pair of remote code execution bugs that take advantage of “Preview Pane,” meaning that the target doesn’t even need to open the file for exploitation to occur. To execute these flaws, an attacker would social engineer a target into previewing an email with a malicious Microsoft Office document.
Speaking of Office, Microsoft quietly announced this week that Microsoft Word will now automatically save documents to OneDrive, Microsoft’s cloud platform. Users who are uncomfortable saving all of their documents to Microsoft’s cloud can change this in Word’s settings; ZDNet has a useful how-to on disabling this feature.
Kev Breen, senior director of threat research at Immersive, called attention to CVE-2025-59287, a critical remote code execution bug in the Windows Server Update Service (WSUS) — the very same Windows service responsible for downloading security patches for Windows Server versions. Microsoft says there are no signs this weakness is being exploited yet. But with a threat score of 9.8 out of a possible 10 and marked “exploitation more likely,” CVE-2025-59287 can be exploited without authentication and is an easy “patch now” candidate.
“Microsoft provides limited information, stating that an unauthenticated attacker with network access can send untrusted data to the WSUS server, resulting in deserialization and code execution,” Breen wrote. “As WSUS is a trusted Windows service that is designed to update privileged files across the file system, an attacker would have free rein over the operating system and could potentially bypass some EDR detections that ignore or exclude the WSUS service.”
For more on other fixes from Redmond today, check out the SANS Internet Storm Center monthly roundup, which indexes all of the updates by severity and urgency.
Windows 10 isn’t the only Microsoft OS that is reaching end-of-life today; Exchange Server 2016, Exchange Server 2019, Skype for Business 2016, Windows 11 IoT Enterprise Version 22H2, and Outlook 2016 are some of the other products that Microsoft is sunsetting today.

If you’re running any Windows 10 systems, you’ve probably already determined whether your PC meets the technical hardware specs recommended for the Windows 11 OS. If you’re reluctant or unable to migrate a Windows 10 system to Windows 11, there are alternatives to simply continuing to use Windows 10 without ongoing security updates.
One option is to pay for another year’s worth of security updates through Microsoft’s Extended Security Updates (ESU) program. The cost is just $30 if you don’t have a Microsoft account, and apparently free if you register the PC to a Microsoft account. This video breakdown from Ask Your Computer Guy does a good job of walking Windows 10 users through this process. Microsoft emphasizes that ESU enrollment does not provide other types of fixes, feature improvements or product enhancements. It also does not come with technical support.

If your Windows 10 system is associated with a Microsoft account and signed in when you visit Windows Update, you should see an option to enroll in extended updates. Image: https://www.youtube.com/watch?v=SZH7MlvOoPM
Windows 10 users also have the option of installing some flavor of Linux instead. Anyone seriously considering this option should check out the website endof10.org, which includes a plethora of tips and a DIY installation guide.
Linux Mint is a great option for Linux newbies. Like most modern Linux versions, Mint will run on anything with a 64-bit CPU that has at least 2GB of memory, although 4GB is recommended. In other words, it will run on almost any computer produced in the last decade.
Linux Mint also is likely to be the most intuitive interface for regular Windows users, and it is largely configurable without any fuss at the text-only command-line prompt. Mint and other flavors of Linux come with LibreOffice, which is an open source suite of tools that includes applications similar to Microsoft Office, and it can open, edit and save documents as Microsoft Office files.
If you’d prefer to give Linux a test drive before installing it on a Windows PC, you can always just download it to a removable USB drive. From there, reboot the computer (with the removable drive plugged in) and select the option at startup to run the operating system from the external USB drive. If you don’t see an option for that after restarting, try restarting again and hitting the F8 button, which should open a list of bootable drives. Here’s a fairly thorough tutorial that walks through exactly how to do all this.
And if this is your first time trying out Linux, relax and have fun: The nice thing about a “live” version of Linux (as it’s called when the operating system is run from a removable drive such as a CD or a USB stick) is that none of your changes persist after a reboot. Even if you somehow manage to break something, a restart will return the system back to its original state.
As ever, if you experience any difficulties during or after applying this month’s batch of patches, please leave a note about it in the comments below.
Upcoming Speaking Engagements
This is a current list of where and when I am scheduled to speak:
- I and Nathan E. Sanders will be giving a book talk on Rewiring Democracy at the Harvard Kennedy School’s Ash Center in Cambridge, Massachusetts, USA, on October 22, 2025 at noon ET.
- I and Nathan E. Sanders will be speaking and signing books at the Cambridge Public Library in Cambridge, Massachusetts, USA, on October 22, 2025 at 6:00 PM ET. The event is sponsored by Harvard Bookstore.
- I and Nathan E. Sanders will give a virtual talk about our book Rewiring Democracy on October 23, 2025 at 1:00 PM ET. The event is hosted by Data & Society...
The Trump Administration’s Increased Use of Social Media Surveillance
This chilling paragraph is in a comprehensive Brookings report about the use of tech to deport people from the US:
The administration has also adapted its methods of social media surveillance. Though agencies like the State Department have gathered millions of handles and monitored political discussions online, the Trump administration has been more explicit in who it’s targeting. Secretary of State Marco Rubio announced a new, zero-tolerance “Catch and Revoke” strategy, which uses AI to monitor the public speech of foreign nationals and revoke visas...
Rewiring Democracy is Coming Soon
My latest book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, will be published in just over a week. No reviews yet, but you can read chapters 12 and 34 (of 43 chapters total): https://newpublic.substack.com/p/2ddffc17-a033-4f98-83fa-11376b30c6cd
You can order the book pretty much everywhere, and a copy signed by me here: https://www.schneier.com/product/rewiring-democracy-hardcover/
Please help spread the word. I want this book to make a splash when it’s published. Leave a review on whatever site you buy it from. Or make a TikTok video. Or do whatever you kids do these days. Is anyone a Slashdot contributor? I’d like the book to be announced there...
AI and the Future of American Politics
Two years ago, Americans anxious about the forthcoming 2024 presidential election were considering the malevolent force of an election influencer: artificial intelligence. Over the past several years, we have seen plenty of warning signs from elections worldwide demonstrating how AI can be used to propagate misinformation and alter the political landscape, whether by trolls on social media, foreign influencers, or even a street magician. AI is poised to play a more volatile role than ever before in America’s next federal election in 2026. We can already see how different groups of political actors are approaching AI. Professional campaigners are using AI to accelerate the traditional tactics of electioneering; organizers are using it to reinvent how movements are built; and citizens are using it both to express themselves and amplify their side’s messaging. Because there are so few rules, and so little prospect of regulatory action, around AI’s role in politics, there is no oversight of these activities, and no safeguards against the dramatic potential impacts for our democracy...
Kubewarden 1.29.1 Patch Release
Friday Squid Blogging: Sperm Whale Eating a Giant Squid
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
DDoS Botnet Aisuru Blankets US ISPs in Record DDoS
The world’s largest and most disruptive botnet is now drawing a majority of its firepower from compromised Internet-of-Things (IoT) devices hosted on U.S. Internet providers like AT&T, Comcast and Verizon, new evidence suggests. Experts say the heavy concentration of infected devices at U.S. providers is complicating efforts to limit collateral damage from the botnet’s attacks, which shattered previous records this week with a brief traffic flood that clocked in at nearly 30 trillion bits of data per second.
Since its debut more than a year ago, the Aisuru botnet has steadily outcompeted virtually all other IoT-based botnets in the wild, with recent attacks siphoning Internet bandwidth from an estimated 300,000 compromised hosts worldwide.
The hacked systems that get subsumed into the botnet are mostly consumer-grade routers, security cameras, digital video recorders and other devices operating with insecure and outdated firmware, and/or factory-default settings. Aisuru’s owners are continuously scanning the Internet for these vulnerable devices and enslaving them for use in distributed denial-of-service (DDoS) attacks that can overwhelm targeted servers with crippling amounts of junk traffic.
As Aisuru’s size has mushroomed, so has its punch. In May 2025, KrebsOnSecurity was hit with a near-record 6.35 terabits per second (Tbps) attack from Aisuru, which was then the largest assault that Google’s DDoS protection service Project Shield had ever mitigated. Days later, Aisuru shattered that record with a data blast in excess of 11 Tbps.
By late September, Aisuru was publicly flexing DDoS capabilities topping 22 Tbps. Then on October 6, its operators heaved a whopping 29.6 terabits of junk data packets each second at a targeted host. Hardly anyone noticed because it appears to have been a brief test or demonstration of Aisuru’s capabilities: The traffic flood lasted only a few seconds and was pointed at an Internet server that was specifically designed to measure large-scale DDoS attacks.

A measurement of an Oct. 6 DDoS believed to have been launched through multiple botnets operated by the owners of the Aisuru botnet. Image: DDoS Analyzer Community on Telegram.
Aisuru’s overlords aren’t just showing off. Their botnet is being blamed for a series of increasingly massive and disruptive attacks. Although recent assaults from Aisuru have targeted mostly ISPs that serve online gaming communities like Minecraft, those digital sieges often result in widespread collateral Internet disruption.
For the past several weeks, ISPs hosting some of the Internet’s top gaming destinations have been hit with a relentless volley of gargantuan attacks that experts say are well beyond the DDoS mitigation capabilities of most organizations connected to the Internet today.
Steven Ferguson is principal security engineer at Global Secure Layer (GSL), an ISP in Brisbane, Australia. GSL hosts TCPShield, which offers free or low-cost DDoS protection to more than 50,000 Minecraft servers worldwide. Ferguson told KrebsOnSecurity that on October 8, TCPShield was walloped with a blitz from Aisuru that flooded its network with more than 15 terabits of junk data per second.
Ferguson said that after the attack subsided, TCPShield was told by its upstream provider OVH that they were no longer welcome as a customer.
“This was causing serious congestion on their Miami external ports for several weeks, shown publicly via their weather map,” he said, explaining that TCPShield is now solely protected by GSL.
Traces from the recent spate of crippling Aisuru attacks on gaming servers can still be seen at the website blockgametracker.gg, which indexes the uptime and downtime of the top Minecraft hosts. In the following example from a series of data deluges on the evening of September 28, we can see an Aisuru botnet campaign briefly knocked TCPShield offline.

An Aisuru botnet attack on TCPShield (AS64199) on Sept. 28 can be seen in the giant downward spike in the middle of this uptime graphic. Image: grafana.blockgametracker.gg.
Paging through the same uptime graphs for other network operators listed shows almost all of them suffered brief but repeated outages around the same time. Here is the same uptime tracking for Minecraft servers on the network provider Cosmic (AS30456), and it shows multiple large dips that correspond to game server outages caused by Aisuru.

Multiple DDoS attacks from Aisuru can be seen against the Minecraft host Cosmic on Sept. 28. The sharp downward spikes correspond to brief but enormous attacks from Aisuru. Image: grafana.blockgametracker.gg.
BOTNETS R US
Ferguson said he’s been tracking Aisuru for about three months, and recently he noticed the botnet’s composition shifted heavily toward infected systems at ISPs in the United States. Ferguson shared logs from an attack on October 8 that indexed traffic by the total volume sent through each network provider, and the logs showed that 11 of the top 20 traffic sources were U.S. based ISPs.
AT&T customers were by far the biggest U.S. contributors to that attack, followed by botted systems on Charter Communications, Comcast, T-Mobile and Verizon, Ferguson found. He said the volume of data packets per second coming from infected IoT hosts on these ISPs is often so high that it has started to affect the quality of service that ISPs are able to provide to adjacent (non-botted) customers.
“The impact extends beyond victim networks,” Ferguson said. “For instance we have seen 500 gigabits of traffic via Comcast’s network alone. This amount of egress leaving their network, especially being so US-East concentrated, will result in congestion towards other services or content trying to be reached while an attack is ongoing.”
Roland Dobbins is principal engineer at Netscout. Dobbins said Ferguson is spot on, noting that while most ISPs have effective mitigations in place to handle large incoming DDoS attacks, many are far less prepared to manage the inevitable service degradation caused by large numbers of their customers suddenly using some or all available bandwidth to attack others.
“The outbound and cross-bound DDoS attacks can be just as disruptive as the inbound stuff,” Dobbins said. “We’re now in a situation where ISPs are routinely seeing terabit-per-second plus outbound attacks from their networks that can cause operational problems.”
“The crying need for effective and universal outbound DDoS attack suppression is something that is really being highlighted by these recent attacks,” Dobbins continued. “A lot of network operators are learning that lesson now, and there’s going to be a period ahead where there’s some scrambling and potential disruption going on.”
KrebsOnSecurity sought comment from the ISPs named in Ferguson’s report. Charter Communications pointed to a recent blog post on protecting its network, stating that Charter actively monitors for both inbound and outbound attacks, and that it takes proactive action wherever possible.
“In addition to our own extensive network security, we also aim to reduce the risk of customer connected devices contributing to attacks through our Advanced WiFi solution that includes Security Shield, and we make Security Suite available to our Internet customers,” Charter wrote in an emailed response to questions. “With the ever-growing number of devices connecting to networks, we encourage customers to purchase trusted devices with secure development and manufacturing practices, use anti-virus and security tools on their connected devices, and regularly download security patches.”
A spokesperson for Comcast responded, “Currently our network is not experiencing impacts and we are able to handle the traffic.”
9 YEARS OF MIRAI
Aisuru is built on the bones of malicious code that was leaked in 2016 by the original creators of the Mirai IoT botnet. Like Aisuru, Mirai quickly outcompeted all other DDoS botnets in its heyday, and obliterated previous DDoS attack records with a 620 gigabit-per-second siege that sidelined this website for nearly four days in 2016.
The Mirai botmasters likewise used their crime machine to attack mostly Minecraft servers, but with the goal of forcing Minecraft server owners to purchase a DDoS protection service that they controlled. In addition, they rented out slices of the Mirai botnet to paying customers, some of whom used it to mask the sources of other types of cybercrime, such as click fraud.

A depiction of the outages caused by the Mirai botnet attacks against the internet infrastructure firm Dyn on October 21, 2016. Source: Downdetector.com.
Dobbins said Aisuru’s owners also appear to be renting out their botnet as a distributed proxy network that cybercriminal customers anywhere in the world can use to anonymize their malicious traffic and make it appear to be coming from regular residential users in the U.S.
“The people who operate this botnet are also selling (it as) residential proxies,” he said. “And that’s being used to reflect application layer attacks through the proxies on the bots as well.”
The Aisuru botnet harkens back to its predecessor Mirai in another intriguing way. One of its owners is using the Telegram handle “9gigsofram,” which corresponds to the nickname used by the co-owner of a Minecraft server protection service called Proxypipe that was heavily targeted in 2016 by the original Mirai botmasters.
Robert Coelho co-ran Proxypipe back then along with his business partner Erik “9gigsofram” Buckingham, and has spent the past nine years fine-tuning various DDoS mitigation companies that cater to Minecraft server operators and other gaming enthusiasts. Coelho said he has no idea why one of Aisuru’s botmasters chose Buckingham’s nickname, but added that it might say something about how long this person has been involved in the DDoS-for-hire industry.
“The Aisuru attacks on the gaming networks these past seven days have been absolutely huge, and you can see tons of providers going down multiple times a day,” Coelho said.
Coelho said the 15 Tbps attack this week against TCPShield was likely only a portion of the total attack volume hurled by Aisuru at the time, because much of it would have been shoved through networks that simply couldn’t process that volume of traffic all at once. Such outsized attacks, he said, are becoming increasingly difficult and expensive to mitigate.
“It’s definitely at the point now where you need to be spending at least a million dollars a month just to have the network capacity to be able to deal with these attacks,” he said.
RAPID SPREAD
Aisuru has long been rumored to use multiple zero-day vulnerabilities in IoT devices to aid its rapid growth over the past year. XLab, the Chinese security company that was the first to profile Aisuru’s rise in 2024, warned last month that one of the Aisuru botmasters had compromised the firmware distribution website for Totolink, a maker of low-cost routers and other networking gear.
“Multiple sources indicate the group allegedly compromised a router firmware update server in April and distributed malicious scripts to expand the botnet,” XLab wrote on September 15. “The node count is currently reported to be around 300,000.”

A malicious script implanted into a Totolink update server in April 2025. Image: XLab.
Aisuru’s operators received an unexpected boost to their crime machine in August when the U.S. Department of Justice charged the alleged proprietor of Rapper Bot, a DDoS-for-hire botnet that competed directly with Aisuru for control over the global pool of vulnerable IoT systems.
Once Rapper Bot was dismantled, Aisuru’s curators moved quickly to commandeer vulnerable IoT devices that were suddenly set adrift by the government’s takedown, Dobbins said.
“Folks were arrested and Rapper Bot control servers were seized and that’s great, but unfortunately the botnet’s attack assets were then pieced out by the remaining botnets,” he said. “The problem is, even if those infected IoT devices are rebooted and cleaned up, they will still get re-compromised by something else generally within minutes of being plugged back in.”

A screenshot shared by XLab showing the Aisuru botmasters recently celebrating a record-breaking 7.7 Tbps DDoS. The user at the top has adopted the name “Ethan J. Foltz” in a mocking tribute to the alleged Rapper Bot operator who was arrested and charged in August 2025.
BOTMASTERS AT LARGE
XLab’s September blog post cited multiple unnamed sources saying Aisuru is operated by three cybercriminals: “Snow,” who’s responsible for botnet development; “Tom,” tasked with finding new vulnerabilities; and “Forky,” responsible for botnet sales.
KrebsOnSecurity interviewed Forky in our May 2025 story about the record 6.3 Tbps attack from Aisuru. That story identified Forky as a 21-year-old man from Sao Paulo, Brazil, who has been extremely active in the DDoS-for-hire scene since at least 2022. The FBI has seized Forky’s DDoS-for-hire domains several times over the years.

Like the original Mirai botmasters, Forky also operates a DDoS mitigation service called Botshield. Forky declined to discuss the makeup of his ISP’s clientele, or to clarify whether Botshield was more of a hosting provider or a DDoS mitigation firm. However, Forky has posted on Telegram about Botshield successfully mitigating large DDoS attacks launched against other DDoS-for-hire services.
In our previous interview, Forky acknowledged being involved in the development and marketing of Aisuru, but denied participating in attacks launched by the botnet.
Reached for comment earlier this month, Forky continued to maintain his innocence, claiming that he also is still trying to figure out who the current Aisuru botnet operators are in real life (Forky said the same thing in our May interview).
But after a week of promising juicy details, Forky came up empty-handed once again. Suspecting that Forky was merely being coy, I asked him how someone so connected to the DDoS-for-hire world could still be mystified on this point, and suggested that his inability or unwillingness to blame anyone else for Aisuru would not exactly help his case.
At this, Forky verbally bristled at being pressed for more details, and abruptly terminated our interview.
“I’m not here to be threatened with ignorance because you are stressed,” Forky replied. “They’re blaming me for those new attacks. Pretty much the whole world (is) due to your blog.”
Autonomous AI Hacking and the Future of Cybersecurity
AI agents are now hacking computers. They’re getting better at all phases of cyberattacks, faster than most of us expected. They can chain together different aspects of a cyber operation, and hack autonomously, at computer speeds and scale. This is going to change everything.
Over the summer, hackers proved the concept, industry institutionalized it, and criminals operationalized it. In June, AI company XBOW took the top spot on HackerOne’s US leaderboard after submitting over 1,000 new vulnerabilities in just a few months. In August, the seven teams competing in DARPA’s AI Cyber Challenge ...
Flock License Plate Surveillance
The company Flock is surveilling us as we drive:
A retired veteran named Lee Schmidt wanted to know how often Norfolk, Virginia’s 176 Flock Safety automated license-plate-reader cameras were tracking him. The answer, according to a U.S. District Court lawsuit filed in September, was more than four times a day, or 526 times from mid-February to early July. No, there’s no warrant out for Schmidt’s arrest, nor is there a warrant for Schmidt’s co-plaintiff, Crystal Arrington, whom the system tagged 849 times in roughly the same period.
You might think this sounds like it violates the Fourth Amendment, which protects American citizens from unreasonable searches and seizures without probable cause. Well, so does the American Civil Liberties Union. Norfolk, Virginia Judge Jamilah LeCruise also agrees, and in 2024 she ruled that plate-reader data obtained without a search warrant couldn’t be used against a defendant in a robbery case...
Karmada v1.15 Released! Enhanced Resource Awareness for Multi-Template Workloads
Karmada is an open multi-cloud and multi-cluster container orchestration engine designed to help users deploy and operate business applications in a multi-cloud environment. With its compatibility with the native Kubernetes API, Karmada can smoothly migrate single-cluster workloads while still maintaining coordination with the surrounding Kubernetes ecosystem tools.
Karmada v1.15 has been released. This version includes the following new features:
- Precise resource awareness for multi-template workloads
- Enhanced cluster-level failover functionality
- Structured logging
- Significant performance improvements for Karmada controllers and schedulers
Overview of New Features
Precise Resource Awareness for Multi-Template Workloads
Karmada utilizes a resource interpreter to retrieve the replica count and resource requests of workloads. Based on this data, it calculates the total resource requirements of the workloads, thereby enabling advanced capabilities such as resource-aware scheduling and federated quota management. This mechanism works well for traditional single-template workloads. However, many AI and big data application workloads (e.g., FlinkDeployments, PyTorchJobs, and RayJobs) consist of multiple Pod templates or components, each with unique resource demands. Since the resource interpreter can only process resource requests from a single template and fails to accurately reflect differences between multiple templates, the resource calculation for multi-template workloads is not precise enough.
In this version, Karmada has strengthened its resource awareness for multi-template workloads. By extending the resource interpreter, Karmada can now obtain the replica count and resource requests of different templates within the same workload, ensuring data accuracy. This improvement also provides more reliable and granular data support for federated quota management of multi-template workloads.
Suppose you deploy a FlinkDeployment with the following resource-related configuration:
spec:
  jobManager:
    replicas: 1
    resource:
      cpu: 1
      memory: 1024m
  taskManager:
    replicas: 1
    resource:
      cpu: 2
      memory: 2048m
Through ResourceBinding, you can view the replica count and resource requests of each template in the FlinkDeployment as parsed by the resource interpreter:
spec:
  components:
  - name: jobmanager
    replicaRequirements:
      resourceRequest:
        cpu: "1"
        memory: "1.024"
    replicas: 1
  - name: taskmanager
    replicaRequirements:
      resourceRequest:
        cpu: "2"
        memory: "2.048"
    replicas: 1
At this point, the resource usage of the FlinkDeployment calculated by FederatedResourceQuota is as follows:
status:
  overallUsed:
    cpu: "3"
    memory: 3072m
Note: This feature is currently in the Alpha stage and requires enabling the MultiplePodTemplatesScheduling feature gate.
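For context, a FederatedResourceQuota declares the overall resource limits that usage such as the above is counted against. The following is a minimal sketch, assuming the v1alpha1 API shape; the quota name, namespace, and limits here are invented for illustration:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: FederatedResourceQuota
metadata:
  name: example-quota
  namespace: default
spec:
  # Overall limits across all member clusters; the multi-template
  # usage (cpu: 3, memory: 3072m above) is charged against these.
  overall:
    cpu: "10"
    memory: 10Gi
```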
As multi-template workloads are widely adopted in cloud-native environments, Karmada is committed to providing stronger support for them. In upcoming versions, we will further enhance scheduling support for multi-template workloads based on this feature and offer more granular resource-aware scheduling—stay tuned for more updates!
For more information about this feature, please refer to: Multi-Pod Template Support.
Enhanced Cluster-Level Failover Functionality
In previous versions, Karmada provided basic cluster-level failover capabilities, allowing cluster-level application migration to be triggered through custom failure conditions. To meet the requirement of preserving the running state of stateful applications during cluster failover, Karmada v1.15 supports an application state preservation policy for cluster failover. For big data processing applications (e.g., Flink), this capability enables restarting from the pre-failure checkpoint and seamlessly resuming data processing to the state before the restart, thus avoiding duplicate data processing.
The community has introduced a new StatePreservation field under .spec.failover.cluster in the PropagationPolicy/ClusterPropagationPolicy API. This field is used to define policies for preserving and restoring state data of stateful applications during failover. Combined with this policy, when an application is migrated from a failed cluster to another cluster, key data can be extracted from the original resource configuration.
The state preservation policy StatePreservation includes a series of StatePreservationRule configurations. It uses JSONPath to specify the segments of state data that need to be preserved and leverages the associated AliasLabelName to pass the data to the migrated cluster.
Taking a Flink application as an example: in a Flink application, jobID is a unique identifier used to distinguish and manage different Flink jobs. When a cluster fails, the Flink application can use jobID to restore the state of the job before the failure and continue execution from the failure point. The specific configuration and steps are as follows:
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: foo
spec:
  #...
  failover:
    cluster:
      purgeMode: Directly
      statePreservation:
        rules:
        - aliasLabelName: application.karmada.io/cluster-failover-jobid
          jsonPath: "{ .jobStatus.jobID }"
- Before migration, the Karmada controller extracts the job ID according to the path configured by the user.
- During migration, the Karmada controller injects the extracted job ID into the Flink application configuration in the form of a label, such as application.karmada.io/cluster-failover-jobid: <jobID>.
- Kyverno running in the member cluster intercepts the Flink application creation request, obtains the checkpoint data storage path of the job based on the jobID (e.g., /<shared-path>/<job-namespace>/<jobId>/checkpoints/xxx), and then configures initialSavepointPath to indicate starting from the savepoint.
- The Flink application starts based on the checkpoint data under initialSavepointPath, thereby inheriting the final state saved before migration.
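The Kyverno step in this flow could look roughly like the mutate policy below. This is a sketch only: the policy name is invented, and resolving the exact checkpoint subdirectory from the jobID would require additional lookup logic not shown here:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-initial-savepoint-path  # hypothetical name
spec:
  rules:
  - name: set-initial-savepoint-path
    match:
      any:
      - resources:
          kinds:
          - FlinkDeployment
    preconditions:
      all:
      # Only mutate applications that carry the label injected by Karmada.
      - key: '{{ request.object.metadata.labels."application.karmada.io/cluster-failover-jobid" || '''' }}'
        operator: NotEquals
        value: ""
    mutate:
      patchStrategicMerge:
        spec:
          job:
            # Point the restarted job at the checkpoint data written before
            # failover; the trailing path segment must be resolved separately.
            initialSavepointPath: '/<shared-path>/{{ request.object.metadata.namespace }}/{{ request.object.metadata.labels."application.karmada.io/cluster-failover-jobid" }}/checkpoints/'
```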
This capability is widely applicable to stateful applications that can start from a specific savepoint. These applications can follow the above process to implement state persistence and restoration for cluster-level failover.
Note: This feature is currently in the Alpha stage and requires enabling the StatefulFailoverInjection feature gate.
Function Constraints:
- The application must be restricted to run in a single cluster.
- The migration cleanup policy (PurgeMode) is limited to Directly, which ensures the failed application is deleted from the old cluster before being restored in the new cluster, guaranteeing data consistency.
Structured Logging
Logs are critical tools for recording events, states, and behaviors during system operation, and are widely used for troubleshooting, performance monitoring, and security auditing. Karmada components provide rich runtime logs to help users quickly locate issues and trace execution scenarios. In previous versions, Karmada only supported unstructured text logs, which were difficult to parse and query efficiently, limiting its integration capabilities in modern observability systems.
Karmada v1.15 introduces support for structured logging, which can be configured to output in JSON format using the --logging-format=json startup flag. An example of a structured log entry is as follows:
{"ts": "<log timestamp>", "logger": "cluster_status_controller", "level": "info", "msg": "Syncing cluster status", "clusterName": "member1"}
The introduction of structured logging significantly improves the usability and observability of logs:
- Efficient Integration: Integration with mainstream logging systems such as Elastic, Loki, and Splunk, without relying on complex regular expressions or log parsers.
- Efficient Query: Structured fields support fast retrieval and analysis, significantly improving troubleshooting efficiency.
- Enhanced Observability: Key context information (e.g., cluster name, log level) is presented as structured fields, facilitating cross-component and cross-time event correlation for accurate issue localization.
- Maintainability: Structured logging makes it easier for developers and operators to maintain, parse, and evolve log formats as the system changes.
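To illustrate the "efficient query" point: once each log line is valid JSON, a few lines of Python can filter by structured fields with no regular expressions. The log lines below are fabricated samples in the format shown above:

```python
import json

# Two sample structured log lines (contents invented for illustration).
log_lines = [
    '{"ts": "2025-10-01T12:00:00Z", "logger": "cluster_status_controller", '
    '"level": "info", "msg": "Syncing cluster status", "clusterName": "member1"}',
    '{"ts": "2025-10-01T12:00:01Z", "logger": "cluster_status_controller", '
    '"level": "info", "msg": "Syncing cluster status", "clusterName": "member2"}',
]

# Parse each line and select only the entries for a single cluster.
entries = [json.loads(line) for line in log_lines]
member1 = [e for e in entries if e["clusterName"] == "member1"]
print(len(member1))  # 1
```

The same kind of field-based selection is what logging backends like Loki or Splunk perform at scale when the logs are structured.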
Significant Performance Improvements for Karmada Controllers and Schedulers
In this version, the Karmada performance optimization team has continued to focus on improving the performance of Karmada’s key components, achieving significant progress in both controllers and schedulers.
On the controller side, the introduction of the controller-runtime priority queue allows controllers to respond first to user-triggered resource changes after a restart or leader transition, significantly reducing downtime during service restarts and failovers.
The test environment included 5,000 Deployments, 2,500 Policies, and 5,000 ResourceBindings. The Deployment and Policy were updated when the controller restarted with a large number of pending events still in the work queue. Test results showed that the controller could immediately respond to and prioritize processing these update events, verifying the effectiveness of this optimization.
Note: This feature is currently in the Alpha stage and requires enabling the ControllerPriorityQueue feature gate.
On the scheduler side, reducing redundant computation in the scheduling process and decreasing the number of remote call requests has significantly improved the scheduling efficiency of the Karmada scheduler.
Tests were conducted to record the time taken to schedule 5,000 ResourceBindings with the precise scheduling component karmada-scheduler-estimator enabled. The results are as follows:
- The scheduler throughput QPS increased from approximately 15 to about 22, representing a 46% performance improvement.
- The number of gRPC requests decreased from approximately 10,000 to around 5,000, a reduction of 50%.
These tests confirm that the performance of Karmada controllers and schedulers has been greatly improved in version 1.15. In the future, we will continue to conduct systematic performance optimizations for controllers and schedulers.
For the detailed test report, please refer to [Performance] Overview of performance improvements for v1.15.
Acknowledging Our Contributors
The Karmada v1.15 release includes 269 code commits from 39 contributors. We would like to extend our sincere gratitude to all the contributors:
@abhi0324, @abhinav-1305, @Arhell, @Bhaumik10, @CaesarTY, @cbaenziger, @deefreak, @dekaihu, @devarsh10, @greenmoon55, @iawia002, @jabellard, @jennryaz, @liaolecheng, @linyao22, @LivingCcj, @liwang0513, @mohamedawnallah, @mohit-nagaraj, @mszacillo, @RainbowMango, @ritzdevp, @ryanwuer, @samzong, @seanlaii, @SunsetB612, @tessapham, @wangbowen1401, @warjiang, @wenhuwang, @whitewindmills, @whosefriendA, @XiShanYongYe-Chang, @zach593, @zclyne, @zhangsquared, @zhuyulicfc49, @zhzhuang-zju, @zzklachlan
References:
[1] Karmada: https://karmada.io/
[2] Karmada v1.15: https://github.com/karmada-io/karmada/releases/tag/v1.15.0
[3] Multi-Pod Template Support: https://github.com/karmada-io/karmada/tree/master/docs/proposals/scheduling/multi-podtemplate-support
[4] [Performance] Overview of performance improvements for v1.15: https://github.com/karmada-io/karmada/issues/6516
[5] Karmada GitHub: https://github.com/karmada-io/karmada