Feed aggregator

Indirect Prompt Injection Attacks Against LLM Assistants

Schneier on Security - Wed, 09/03/2025 - 07:00

Really good research on practical attacks against LLM agents.

Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous

Abstract: The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware—maliciously engineered prompts designed to manipulate LLMs to compromise the CIA triad of these applications. While prior research warned about a potential shift in the threat landscape for LLM-powered applications, the risk posed by Promptware is frequently perceived as low. In this paper, we investigate the risk Promptware poses to users of Gemini-powered assistants (web application, mobile application, and Google Assistant). We propose a novel Threat Analysis and Risk Assessment (TARA) framework to assess Promptware risks for end users. Our analysis focuses on a new variant of Promptware called Targeted Promptware Attacks, which leverage indirect prompt injection via common user interactions such as emails, calendar invitations, and shared documents. We demonstrate 14 attack scenarios applied against Gemini-powered assistants across five identified threat classes: Short-term Context Poisoning, Permanent Memory Poisoning, Tool Misuse, Automatic Agent Invocation, and Automatic App Invocation. These attacks highlight both digital and physical consequences, including spamming, phishing, disinformation campaigns, data exfiltration, unapproved user video streaming, and control of home automation devices. We reveal Promptware’s potential for on-device lateral movement, escaping the boundaries of the LLM-powered application, to trigger malicious actions using a device’s applications. Our TARA reveals that 73% of the analyzed threats pose High-Critical risk to end users. We discuss mitigations and reassess the risk (in response to deployed mitigations) and show that the risk could be reduced significantly to Very Low-Medium. We disclosed our findings to Google, which deployed dedicated mitigations...

Categories: Software Security

Security beyond the model: Introducing AI system cards

Red Hat Security - Tue, 09/02/2025 - 20:00
AI is one of the most significant innovations to emerge in the last 5 years. Generative AI (gen AI) models are now smaller, faster, and cheaper to run. They can solve mathematical problems, analyze situations, and even reason about cause‑and‑effect relationships to generate insights that once required human expertise. On its own, an AI model is merely a set of trained weights and mathematical operations, an impressive engine, but one sitting idle on a test bench. Business value only emerges when that model is embedded within a complete AI system: data pipelines feed it clean, context‑
Categories: Software Security

Learn about Confidential Clusters

Red Hat Security - Tue, 09/02/2025 - 20:00
The Confidential Clusters project integrates confidential computing technology into Kubernetes clusters. It's an end-to-end solution that provides data confidentiality on cloud platforms by isolating a cluster from its underlying infrastructure. In a confidential cluster, all nodes run on top of confidential virtual machines (cVM). Before a node can join the cluster and access secrets, the platform and environment's authenticity are verified through remote attestation. This process involves communication with a trusted remote server. Confidential Clusters enables you to use Red Hat OpenShift,
Categories: Software Security

Kubernetes v1.34: Introducing CPU Manager Static Policy Option for Uncore Cache Alignment

Kubernetes Blog - Tue, 09/02/2025 - 14:30

A new CPU Manager Static Policy Option called prefer-align-cpus-by-uncorecache was introduced in Kubernetes v1.32 as an alpha feature, and has graduated to beta in Kubernetes v1.34. This CPU Manager Policy Option is designed to optimize performance for specific workloads running on processors with a split uncore cache architecture. In this article, I'll explain what that means and why it's useful.

Understanding the feature

What is uncore cache?

Until relatively recently, nearly all mainstream computer processors had a monolithic last-level cache shared across every core in a multi-core CPU package. This monolithic cache is also referred to as uncore cache (because it is not linked to a specific core), or as Level 3 cache. Alongside the Level 3 cache, there is other cache, commonly called Level 1 and Level 2 cache, that is associated with a specific CPU core.

To reduce access latency between CPU cores and their cache, recent AMD64 and ARM architecture based processors have introduced a split uncore cache architecture, where the last-level cache is divided into multiple physical caches aligned to specific CPU groupings within the physical package. The shorter distances within the CPU package help to reduce latency.

[Diagram: monolithic cache on the left, split uncore cache on the right]

Kubernetes is able to place workloads in a way that accounts for the cache topology within the CPU package(s).

Cache-aware workload placement

The matrix below shows CPU-to-CPU latency, measured in nanoseconds (lower is better), when passing a packet between CPUs via the cache coherence protocol on a processor that uses a split uncore cache. In this example, the processor package consists of 2 uncore caches, each serving 8 CPU cores.

[Table: CPU-to-CPU latency figures]

Blue entries in the matrix represent latency between CPUs sharing the same uncore cache, while grey entries indicate latency between CPUs on different uncore caches. Latency between CPUs on different caches is higher than latency between CPUs that share the same cache.

With prefer-align-cpus-by-uncorecache enabled, the static CPU Manager attempts to allocate CPU resources for a container such that all CPUs assigned to the container share the same uncore cache. This policy operates on a best-effort basis, aiming to minimize the distribution of a container's CPU resources across uncore caches, based on the container's requirements and the allocatable resources on the node.
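In practice, the static policy only grants exclusive (and therefore alignable) CPUs to containers in Guaranteed QoS Pods that request whole CPUs, so a candidate workload looks roughly like the sketch below (names, image, and sizes are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: uncore-aligned-workload                  # illustrative name
spec:
  containers:
  - name: worker
    image: registry.example.com/worker:latest    # placeholder image
    resources:
      # Equal integer CPU requests and limits give the container Guaranteed QoS
      # and exclusive CPUs from the static CPU Manager, which the policy option
      # then tries to place on a single uncore cache.
      requests:
        cpu: "8"
        memory: 16Gi
      limits:
        cpu: "8"
        memory: 16Gi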

By running a workload, where possible, on a set of CPUs that spans the smallest feasible number of uncore caches, applications benefit from reduced cache latency (as seen in the matrix above) and from reduced contention with other workloads, which can result in higher overall throughput. The benefit only appears if your nodes' processors use a split uncore cache topology.

The diagram below illustrates uncore cache alignment when the feature is enabled.

[Diagram: example workload CPU assignment with the default static policy vs. with prefer-align-cpus-by-uncorecache]

By default, Kubernetes does not account for uncore cache topology; containers are assigned CPU resources using a packed methodology. As a result, Container 1 and Container 2 can experience a noisy-neighbor effect due to cache access contention on Uncore Cache 0. Additionally, Container 2 has CPUs distributed across both caches, which can introduce cross-cache latency.

With prefer-align-cpus-by-uncorecache enabled, each container is isolated on an individual cache. This resolves the cache contention between the containers and minimizes the cache latency for the CPUs being utilized.

Use cases

Common use cases can include telco applications like vRAN, Mobile Packet Core, and Firewalls. It's important to note that the optimization provided by prefer-align-cpus-by-uncorecache can be dependent on the workload. For example, applications that are memory bandwidth bound may not benefit from uncore cache alignment, as utilizing more uncore caches can increase memory bandwidth access.

Enabling the feature

To enable this feature, set the CPU Manager Policy to static and enable the CPU Manager Policy Options with prefer-align-cpus-by-uncorecache.

For Kubernetes 1.34, the feature is in the beta stage and requires the CPUManagerPolicyBetaOptions feature gate to also be enabled.

Append the following to the kubelet configuration file:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
featureGates:
  ...
  CPUManagerPolicyBetaOptions: true
cpuManagerPolicy: "static"
cpuManagerPolicyOptions:
  prefer-align-cpus-by-uncorecache: "true"
reservedSystemCPUs: "0"
...

If you're making this change to an existing node, remove the cpu_manager_state file and then restart kubelet.
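On a typical systemd-based node, that usually means deleting /var/lib/kubelet/cpu_manager_state (the default location; the path may differ if the kubelet uses a custom root directory) and then running systemctl restart kubelet, ideally after draining the node.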

prefer-align-cpus-by-uncorecache can be enabled on nodes with a monolithic uncore cache processor. The feature will mimic a best-effort socket alignment effect and will pack CPU resources on the socket similar to the default static CPU Manager policy.

Further reading

See Node Resource Managers to learn more about the CPU Manager and the available policies.

Reference the documentation for prefer-align-cpus-by-uncorecache for more details.

Please see the Kubernetes Enhancement Proposal for more information on how prefer-align-cpus-by-uncorecache is implemented.

Getting involved

This feature is driven by SIG Node. If you are interested in helping develop this feature, sharing feedback, or participating in any other ongoing SIG Node projects, please attend the SIG Node meeting for more details.

Categories: CNCF Projects, Kubernetes

1965 Cryptanalysis Training Workbook Released by the NSA

Schneier on Security - Tue, 09/02/2025 - 07:08

In the early 1960s, National Security Agency cryptanalyst and cryptanalysis instructor Lambros D. Callimahos coined the term “Stethoscope” to describe a diagnostic computer program used to unravel the internal structure of pre-computer ciphertexts. The term appears in the newly declassified September 1965 document Cryptanalytic Diagnosis with the Aid of a Computer, which compiled 147 listings from this tool for Callimahos’s course, CA-400: NSA Intensive Study Program in General Cryptanalysis.

The listings in the report are printouts from the Stethoscope program, run on the NSA’s Bogart computer, showing statistical and structural data extracted from encrypted messages, but the encrypted messages themselves are not included. They were used in NSA training programs to teach analysts how to interpret ciphertext behavior without seeing the original message...

Categories: Software Security

The Ongoing Fallout from a Breach at AI Chatbot Maker Salesloft

Krebs on Security - Mon, 09/01/2025 - 17:55

The recent mass-theft of authentication tokens from Salesloft, whose AI chatbot is used by a broad swath of corporate America to convert customer interaction into Salesforce leads, has left many companies racing to invalidate the stolen credentials before hackers can exploit them. Now Google warns the breach goes far beyond access to Salesforce data, noting the hackers responsible also stole valid authentication tokens for hundreds of online services that customers can integrate with Salesloft, including Slack, Google Workspace, Amazon S3, Microsoft Azure, and OpenAI.

Salesloft says its products are trusted by 5,000+ customers. Some of the bigger names are visible on the company’s homepage.

Salesloft disclosed on August 20 that, “Today, we detected a security issue in the Drift application,” referring to the technology that powers an AI chatbot used by so many corporate websites. The alert urged customers to re-authenticate the connection between the Drift and Salesforce apps to invalidate their existing authentication tokens, but it said nothing then to indicate those tokens had already been stolen.

On August 26, the Google Threat Intelligence Group (GTIG) warned that unidentified hackers tracked as UNC6395 used the access tokens stolen from Salesloft to siphon large amounts of data from numerous corporate Salesforce instances. Google said the data theft began as early as Aug. 8, 2025 and lasted through at least Aug. 18, 2025, and that the incident did not involve any vulnerability in the Salesforce platform.

Google said the attackers have been sifting through the massive data haul for credential materials such as AWS keys, VPN credentials, and credentials to the cloud storage provider Snowflake.

“If successful, the right credentials could allow them to further compromise victim and client environments, as well as pivot to the victim’s clients or partner environments,” the GTIG report stated.

The GTIG updated its advisory on August 28 to acknowledge the attackers used the stolen tokens to access email from “a very small number of Google Workspace accounts” that were specially configured to integrate with Salesloft. More importantly, it warned organizations to immediately invalidate all tokens stored in or connected to their Salesloft integrations — regardless of the third-party service in question.

“Given GTIG’s observations of data exfiltration associated with the campaign, organizations using Salesloft Drift to integrate with third-party platforms (including but not limited to Salesforce) should consider their data compromised and are urged to take immediate remediation steps,” Google advised.

On August 28, Salesforce blocked Drift from integrating with its platform, and with its productivity platforms Slack and Pardot.

The Salesloft incident comes on the heels of a broad social engineering campaign that used voice phishing to trick targets into connecting a malicious app to their organization’s Salesforce portal. That campaign led to data breaches and extortion attacks affecting a number of companies including Adidas, Allianz Life and Qantas.

On August 5, Google disclosed that one of its corporate Salesforce instances was compromised by the attackers, which the GTIG has dubbed UNC6040 (“UNC” stands for “uncategorized threat group”). Google said the extortionists consistently claimed to be the threat group ShinyHunters, and that the group appeared to be preparing to escalate its extortion attacks by launching a data leak site.

ShinyHunters is an amorphous threat group known for using social engineering to break into cloud platforms and third-party IT providers, and for posting dozens of stolen databases to cybercrime communities like the now-defunct Breachforums.

The ShinyHunters brand dates back to 2020, and the group has been credited with or taken responsibility for dozens of data leaks that exposed hundreds of millions of breached records. The group’s member roster is thought to be somewhat fluid, drawing mainly from active denizens of the Com, a mostly English-language cybercrime community scattered across an ocean of Telegram and Discord servers.

Recorded Future’s Alan Liska told Bleeping Computer that the overlap in the “tools, techniques and procedures” used by ShinyHunters and the Scattered Spider extortion group likely indicate some crossover between the two groups.

To muddy the waters even further, on August 28 a Telegram channel that now has nearly 40,000 subscribers was launched under the intentionally confusing banner “Scattered LAPSUS$ Hunters 4.0,” wherein participants have repeatedly claimed responsibility for the Salesloft hack without actually sharing any details to prove their claims.

The Telegram group has been trying to attract media attention by threatening security researchers at Google and other firms. It also is using the channel’s sudden popularity to promote a new cybercrime forum called “Breachstars,” which they claim will soon host data stolen from victim companies who refuse to negotiate a ransom payment.

The “Scattered Lapsus$ Hunters 4.0” channel on Telegram now has roughly 40,000 subscribers.

But Austin Larsen, a principal threat analyst at Google’s threat intelligence group, said there is no compelling evidence to attribute the Salesloft activity to ShinyHunters or to other known groups at this time.

“Their understanding of the incident seems to come from public reporting alone,” Larsen told KrebsOnSecurity, referring to the most active participants in the Scattered LAPSUS$ Hunters 4.0 Telegram channel.

Joshua Wright, a senior technical director at Counter Hack, is credited with coining the term “authorization sprawl” to describe one key reason that social engineering attacks from groups like Scattered Spider and ShinyHunters so often succeed: They abuse legitimate user access tokens to move seamlessly between on-premises and cloud systems.

Wright said this type of attack chain often goes undetected because the attacker sticks to the resources and access already allocated to the user.

“Instead of the conventional chain of initial access, privilege escalation and endpoint bypass, these threat actors are using centralized identity platforms that offer single sign-on (SSO) and integrated authentication and authorization schemes,” Wright wrote in a June 2025 column. “Rather than creating custom malware, attackers use the resources already available to them as authorized users.”

It remains unclear exactly how the attackers gained access to all Salesloft Drift authentication tokens. Salesloft announced on August 27 that it hired Mandiant, Google Cloud’s incident response division, to investigate the root cause(s).

“We are working with Salesloft Drift to investigate the root cause of what occurred and then it’ll be up to them to publish that,” Mandiant Consulting CTO Charles Carmakal told Cyberscoop. “There will be a lot more tomorrow, and the next day, and the next day.”

Categories: Software Security

Kubernetes v1.34: DRA has graduated to GA

Kubernetes Blog - Mon, 09/01/2025 - 14:30

Kubernetes 1.34 is here, and it has brought a huge wave of enhancements for Dynamic Resource Allocation (DRA)! This release marks a major milestone with many APIs in the resource.k8s.io group graduating to General Availability (GA), unlocking the full potential of how you manage devices on Kubernetes. On top of that, several key features have moved to beta, and a fresh batch of new alpha features promise even more expressiveness and flexibility.

Let's dive into what's new for DRA in Kubernetes 1.34!

The core of DRA is now GA

The headline feature of the v1.34 release is that the core of DRA has graduated to General Availability.

Kubernetes Dynamic Resource Allocation (DRA) provides a flexible framework for managing specialized hardware and infrastructure resources, such as GPUs or FPGAs. DRA provides APIs that let each workload specify the properties of the devices it needs, while leaving it to the scheduler to allocate actual devices, which increases reliability and improves utilization of expensive hardware.

With the graduation to GA, DRA is stable and will be part of Kubernetes for the long run. The community can still expect a steady stream of new DRA features over the next several Kubernetes releases, but those additions will not break existing DRA APIs. So users and developers of DRA drivers can start adopting DRA with confidence.

Starting with Kubernetes 1.34, DRA is enabled by default; the DRA features that have reached beta are also enabled by default. That's because the default API version for DRA is now the stable v1 version, rather than the earlier versions (e.g. v1beta1 or v1beta2) that required explicit opt-in.
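As a rough sketch of what the stable API looks like in practice, the manifests below request a device from a hypothetical DeviceClass named example.com-gpu through a ResourceClaimTemplate and reference it from a Pod. The class name and image are placeholders, and the exact request layout is worth double-checking against the resource.k8s.io/v1 API reference:

apiVersion: resource.k8s.io/v1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        exactly:
          deviceClassName: example.com-gpu       # hypothetical DeviceClass installed by a DRA driver
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-consumer
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
  containers:
  - name: app
    image: registry.example.com/cuda-app:latest  # placeholder image
    resources:
      claims:
      - name: gpu                                # consume the claim declared above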

Features promoted to beta

Several powerful features have been promoted to beta, adding more control, flexibility, and observability to resource management with DRA.

Admin access labelling has been updated. In v1.34, administrative access to devices can be restricted to people (or software) authorized to use it. This is meant to avoid privilege escalation if a DRA driver grants additional privileges when admin access is requested, and to avoid access to devices that are in use by normal applications, potentially in another namespace. The restriction works by ensuring that only users with access to a namespace with the resource.k8s.io/admin-access: "true" label are authorized to create ResourceClaim or ResourceClaimTemplate objects with the adminAccess field set to true. This ensures that non-admin users cannot misuse the feature.
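A minimal sketch of the namespace side of that restriction, using the label named above (the namespace name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: dra-admins                     # hypothetical namespace reserved for cluster operators
  labels:
    # Only ResourceClaims and ResourceClaimTemplates created in a namespace carrying
    # this label may set the adminAccess field to true.
    resource.k8s.io/admin-access: "true"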

Prioritized list lets users specify a list of acceptable devices for their workloads, rather than just a single type of device. So while the workload might run best on a single high-performance GPU, it might also be able to run on 2 mid-level GPUs. The scheduler will attempt to satisfy the alternatives in the list in order, so the workload will be allocated the best set of devices available on the node.
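A hedged sketch of such a prioritized request in a ResourceClaim, assuming hypothetical DeviceClasses example.com-highend-gpu and example.com-midrange-gpu (the firstAvailable layout should be verified against the API reference for your release):

apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: gpu-with-fallback
spec:
  devices:
    requests:
    - name: gpu
      # Alternatives are tried in order: one high-end GPU is preferred,
      # otherwise two mid-range GPUs are acceptable.
      firstAvailable:
      - name: high-end
        deviceClassName: example.com-highend-gpu
        count: 1
      - name: mid-range
        deviceClassName: example.com-midrange-gpu
        count: 2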

The kubelet's API has been updated to report on Pod resources allocated through DRA. This allows node monitoring agents to know the allocated DRA resources for Pods on a node and makes it possible to use the DRA information in the PodResources API to develop new features and integrations.

New alpha features

Kubernetes 1.34 also introduces several new alpha features that give us a glimpse into the future of resource management with DRA.

Extended resource mapping support in DRA allows cluster administrators to advertise DRA-managed resources as extended resources, allowing developers to consume them using the familiar, simpler request syntax while still benefiting from dynamic allocation. This makes it possible for existing workloads to start using DRA without modifications, simplifying the transition to DRA for both application developers and cluster administrators.

Consumable capacity introduces a flexible device sharing model where multiple, independent resource claims from unrelated pods can each be allocated a share of the same underlying physical device. This new capability is managed through optional, administrator-defined sharing policies that govern how a device's total capacity is divided and enforced by the platform for each request. This allows for sharing of devices in scenarios where pre-defined partitions are not viable. A blog about this feature is coming soon.

Binding conditions improve scheduling reliability for certain classes of devices by allowing the Kubernetes scheduler to delay binding a pod to a node until its required external resources, such as attachable devices or FPGAs, are confirmed to be fully prepared. This prevents premature pod assignments that could lead to failures and ensures more robust, predictable scheduling by explicitly modeling resource readiness before the pod is committed to a node.

Resource health status for DRA improves observability by exposing the health status of devices allocated to a Pod via Pod Status. This works whether the device is allocated through DRA or Device Plugin. This makes it easier to understand the cause of an unhealthy device and respond properly. A blog about this feature is coming soon.

What’s next?

While DRA got promoted to GA this cycle, the hard work on DRA doesn't stop. There are several features in alpha and beta that we plan to bring to GA in the next couple of releases and we are looking to continue to improve performance, scalability and reliability of DRA. So expect an equally ambitious set of features in DRA for the 1.35 release.

Getting involved

A good starting point is joining the WG Device Management Slack channel and meetings, which happen at US/EU and EU/APAC friendly time slots.

Not all enhancement ideas are tracked as issues yet, so come talk to us if you want to help or have some ideas yourself! We have work to do at all levels, from difficult core changes to usability enhancements in kubectl, which could be picked up by newcomers.

Acknowledgments

A huge thanks to the new contributors to DRA this cycle:

Categories: CNCF Projects, Kubernetes

Friday Squid Blogging: Catching Humboldt Squid

Schneier on Security - Fri, 08/29/2025 - 17:04

First-person account of someone accidentally catching several Humboldt squid on a fishing line. No photos, though.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Categories: Software Security

Kubernetes v1.34: Finer-Grained Control Over Container Restarts

Kubernetes Blog - Fri, 08/29/2025 - 14:30

With the release of Kubernetes 1.34, a new alpha feature is introduced that gives you more granular control over container restarts within a Pod. This feature, named Container Restart Policy and Rules, allows you to specify a restart policy for each container individually, overriding the Pod's global restart policy. In addition, it also allows you to conditionally restart individual containers based on their exit codes. This feature is available behind the alpha feature gate ContainerRestartRules.

This has been a long-requested feature. Let's dive into how it works and how you can use it.

The problem with a single restart policy

Before this feature, the restartPolicy was set at the Pod level. This meant that all containers in a Pod shared the same restart policy (Always, OnFailure, or Never). While this works for many use cases, it can be limiting in others.

For example, consider a Pod with a main application container and an init container that performs some initial setup. You might want the main container to always restart on failure, but the init container should only run once and never restart. With a single Pod-level restart policy, this wasn't possible.

Introducing per-container restart policies

With the new ContainerRestartRules feature gate, you can now specify a restartPolicy for each container in your Pod's spec. You can also define restartPolicyRules to control restarts based on exit codes. This gives you the fine-grained control you need to handle complex scenarios.

Use cases

Let's look at some real-life use cases where per-container restart policies can be beneficial.

In-place restarts for training jobs

In ML research, it's common to orchestrate a large number of long-running AI/ML training workloads. In these scenarios, workload failures are unavoidable. When a workload fails with a retriable exit code, you want the container to restart quickly without rescheduling the entire Pod, which consumes a significant amount of time and resources. Restarting the failed container "in-place" is critical for better utilization of compute resources. The container should only restart "in-place" if it failed due to a retriable error; otherwise, the container and Pod should terminate and possibly be rescheduled.

This can now be achieved with container-level restartPolicyRules. The workload can exit with different codes to represent retriable and non-retriable errors. With restartPolicyRules, the workload can be restarted in-place quickly, but only when the error is retriable.

Try-once init containers

Init containers are often used to perform initialization work for the main container, such as setting up environments and credentials. Sometimes, you want the main container to always be restarted, but you don't want to retry initialization if it fails.

With a container-level restartPolicy, this is now possible. The init container can be executed only once, and its failure would be considered a Pod failure. If the initialization succeeds, the main container can be always restarted.

Pods with multiple containers

For Pods that run multiple containers, you might have different restart requirements for each container. Some containers might have a clear definition of success and should only be restarted on failure. Others might need to be always restarted.

This is now possible with a container-level restartPolicy, allowing individual containers to have different restart policies.

How to use it

To use this new feature, you need to enable the ContainerRestartRules feature gate on your Kubernetes cluster control-plane and worker nodes running Kubernetes 1.34+. Once enabled, you can specify the restartPolicy and restartPolicyRules fields in your container definitions.
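On the kubelet, enabling the gate could look like the following sketch, mirroring the KubeletConfiguration style shown elsewhere on this page; the API server and other control-plane components need the equivalent --feature-gates=ContainerRestartRules=true flag:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
featureGates:
  ContainerRestartRules: true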

Here are some examples:

Example 1: Restarting on specific exit codes

In this example, the container should restart if and only if it fails with a retriable error, represented by exit code 42.

To achieve this, the container has restartPolicy: Never, and a restart policy rule that tells Kubernetes to restart the container in-place if it exits with code 42.

apiVersion: v1
kind: Pod
metadata:
  name: restart-on-exit-codes
  annotations:
    kubernetes.io/description: "This Pod restarts the container only when it exits with code 42."
spec:
  restartPolicy: Never
  containers:
  - name: restart-on-exit-codes
    image: docker.io/library/busybox:1.28
    command: ['sh', '-c', 'sleep 60 && exit 0']
    restartPolicy: Never  # Container restart policy must be specified if rules are specified
    restartPolicyRules: # Only restart the container if it exits with code 42
    - action: Restart
      exitCodes:
        operator: In
        values: [42]

Example 2: A try-once init container

In this example, a Pod should always be restarted once the initialization succeeds. However, the initialization should only be tried once.

To achieve this, the Pod has an Always restart policy. The init-once init container will only try once; if it fails, the Pod fails. This allows the Pod to fail if initialization fails, but to keep running once initialization succeeds.

apiVersion: v1
kind: Pod
metadata:
  name: fail-pod-if-init-fails
  annotations:
    kubernetes.io/description: "This Pod has an init container that runs only once. After initialization succeeds, the main container will always be restarted."
spec:
  restartPolicy: Always
  initContainers:
  - name: init-once  # This init container will only try once. If it fails, the Pod will fail.
    image: docker.io/library/busybox:1.28
    command: ['sh', '-c', 'echo "Failing initialization" && sleep 10 && exit 1']
    restartPolicy: Never
  containers:
  - name: main-container # This container will always be restarted once initialization succeeds.
    image: docker.io/library/busybox:1.28
    command: ['sh', '-c', 'sleep 1800 && exit 0']

Example 3: Containers with different restart policies

In this example, there are two containers with different restart requirements. One should always be restarted, while the other should only be restarted on failure.

This is achieved by using a different container-level restartPolicy on each of the two containers.

apiVersion: v1
kind: Pod
metadata:
  name: on-failure-pod
  annotations:
    kubernetes.io/description: "This Pod has two containers with different restart policies."
spec:
  containers:
  - name: restart-on-failure
    image: docker.io/library/busybox:1.28
    command: ['sh', '-c', 'echo "Not restarting after success" && sleep 10 && exit 0']
    restartPolicy: OnFailure
  - name: restart-always
    image: docker.io/library/busybox:1.28
    command: ['sh', '-c', 'echo "Always restarting" && sleep 1800 && exit 0']
    restartPolicy: Always

Learn more

Roadmap

More actions and signals to restart Pods and containers are coming! Notably, there are plans to add support for restarting the entire Pod. Planning and discussions on these features are in progress. Feel free to share feedback or requests with the SIG Node community!

Your feedback is welcome!

This is an alpha feature, and the Kubernetes project would love to hear your feedback. Please try it out. This feature is driven by the SIG Node. If you are interested in helping develop this feature, sharing feedback, or participating in any other ongoing SIG Node projects, please reach out to the SIG Node community!

You can reach SIG Node by several means:

Categories: CNCF Projects, Kubernetes

Baggage Tag Scam

Schneier on Security - Fri, 08/29/2025 - 07:01

I just heard about this:

There’s a travel scam warning going around the internet right now: You should keep your baggage tags on your bags until you get home, then shred them, because scammers are using luggage tags to file fraudulent claims for missing baggage with the airline.

First, the scam is possible. I had a bag destroyed by baggage handlers on a recent flight, and all the information I needed to file a claim was on my luggage tag. I have no idea if I will successfully get any money from the airline, or what form it will be in, or how it will be tied to my name, but at least the first step is possible...

Categories: Software Security

Red Hat Trusted Artifact Signer can now be hosted on RHEL

Red Hat Security - Thu, 08/28/2025 - 20:00
Organizations looking to better understand the lineage of their software artifacts have begun to adopt signing as a way to improve their security posture. By applying digital signatures to software artifacts, trust can be established to verify that assets have not been substituted or tampered with through the software development and delivery process.Red Hat Trusted Artifact Signer, a key component of Red Hat’s Trusted Software Supply Chain portfolio, provides a suite of tools that supports signing and verifying assets from first commit to deployment. Since Trusted Artifact Signer was first
Categories: Software Security

Kubernetes v1.34: User preferences (kuberc) are available for testing in kubectl 1.34

Kubernetes Blog - Thu, 08/28/2025 - 14:30

Have you ever wished you could enable interactive delete, by default, in kubectl? Or maybe, you'd like to have custom aliases defined, but not necessarily generate hundreds of them manually? Look no further. SIG-CLI has been working hard to add user preferences to kubectl, and we are happy to announce that this functionality is reaching beta as part of the Kubernetes v1.34 release.

How it works

A full description of this functionality is available in our official documentation, but this blog post will answer both of the questions from the beginning of this article.

Before we dive into details, let's quickly cover what the user preferences file looks like and where to place it. By default, kubectl will look for a kuberc file in your default kubeconfig directory, which is $HOME/.kube. Alternatively, you can specify a different location using the --kuberc option or the KUBERC environment variable.

Just like every Kubernetes manifest, a kuberc file starts with an apiVersion and kind:

apiVersion: kubectl.config.k8s.io/v1beta1
kind: Preference
# the user preferences will follow here

Defaults

Let's start by setting default values for kubectl command options. Our goal is to always use interactive delete, which means we want the --interactive option for kubectl delete to always be set to true. This can be achieved with the following addition to our kuberc file:

defaults:
- command: delete
  options:
  - name: interactive
    default: "true"

In the above example, I'm introducing the defaults section, which allows users to define default values for kubectl options. In this case, we're setting the interactive option for kubectl delete to true by default. This default can be overridden if a user explicitly provides a different value, such as kubectl delete --interactive=false, in which case the explicit option takes precedence.

Another highly encouraged default from SIG-CLI is using Server-Side Apply. To do so, you can add the following snippet to your preferences:

# continuing defaults section
- command: apply
  options:
  - name: server-side
    default: "true"

Aliases

The ability to define aliases allows us to save precious seconds when typing commands. I bet that you most likely have one defined for kubectl, because typing seven letters is definitely longer than just pressing k.

For this reason, the ability to define aliases was a must-have when we decided to implement user preferences, alongside defaulting. To define an alias for any of the built-in commands, expand your kuberc file with the following addition:

aliases:
- name: gns
  command: get
  prependArgs:
  - namespace
  options:
  - name: output
    default: json

There's a lot going on above, so let me break this down. First, we're introducing a new section: aliases. Here, we're defining a new alias gns, which is mapped to the get command. Next, we're defining arguments (the namespace resource) that will be inserted right after the command name. Additionally, we're setting the --output=json option for this alias. The structure of the options block is identical to the one in the defaults section.

You probably noticed that we've introduced a mechanism for prepending arguments, and you might wonder if there is a complementary setting for appending them (in other words, adding to the end of the command, after user-provided arguments). This can be achieved through appendArgs block, which is presented below:

# continuing aliases section
- name: runx
  command: run
  options:
  - name: image
    default: busybox
  - name: namespace
    default: test-ns
  appendArgs:
  - --
  - custom-arg

Here, we're introducing another alias: runx, which invokes the kubectl run command, passing the --image and --namespace options with predefined values, and appending -- and custom-arg at the end of the invocation.
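Put together, a hedged reading of how these preferences expand: kubectl gns kube-system behaves like kubectl get namespace kube-system --output=json, while kubectl runx my-pod behaves like kubectl run my-pod --image=busybox --namespace=test-ns -- custom-arg, with any options you pass explicitly still taking precedence over the defaults defined in the alias.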

Debugging

We hope that kubectl user preferences will open up new possibilities for our users. Whenever you're in doubt, feel free to run kubectl with increased verbosity. At -v=5, you should get all the possible debugging information from this feature, which will be crucial when reporting issues.

To learn more, I encourage you to read through our official documentation and the actual proposal.

Get involved

The kubectl user preferences feature has reached beta, and we are very interested in your feedback. We'd love to hear what you like about it and what problems you'd like to see it solve. Feel free to join the SIG-CLI Slack channel, or open an issue against the kubectl repository. You can also join us at our community meetings, which happen every other Wednesday, and share your stories with us.

Categories: CNCF Projects, Kubernetes

Affiliates Flock to ‘Soulless’ Scam Gambling Machine

Krebs on Security - Thu, 08/28/2025 - 13:21

Last month, KrebsOnSecurity tracked the sudden emergence of hundreds of polished online gaming and wagering websites that lure people with free credits and eventually abscond with any cryptocurrency funds deposited by players. We’ve since learned that these scam gambling sites have proliferated thanks to a new Russian affiliate program called “Gambler Panel” that bills itself as a “soulless project that is made for profit.”

A machine-translated version of Gambler Panel’s affiliate website.

The scam begins with deceptive ads posted on social media that claim the wagering sites are working in partnership with popular athletes or social media personalities. The ads invariably state that by using a supplied “promo code,” interested players can claim a $2,500 credit on the advertised gaming website.

The gaming sites ask visitors to create a free account to claim their $2,500 credit, which they can use to play any number of extremely polished video games that ask users to bet on each action. However, when users try to cash out any “winnings” the gaming site will reject the request and prompt the user to make a “verification deposit” of cryptocurrency — typically around $100 — before any money can be distributed.

Those who deposit cryptocurrency funds are soon pressed into more wagering and making additional deposits. And — shocker alert — all players eventually lose everything they’ve invested in the platform.

The number of scam gambling or “scambling” sites has skyrocketed in the past month, and now we know why: The sites all pull their gaming content and detailed strategies for fleecing players straight from the playbook created by Gambler Panel, a Russian-language affiliate program that promises affiliates up to 70 percent of the profits.

Gambler Panel’s website gambler-panel[.]com links to a helpful wiki that explains the scam from cradle to grave, offering affiliates advice on how best to entice visitors, keep them gambling, and extract maximum profits from each victim.

“We have a completely self-written from scratch FAKE CASINO engine that has no competitors,” Gambler Panel’s wiki enthuses. “Carefully thought-out casino design in every pixel, a lot of audits, surveys of real people and test traffic floods were conducted, which allowed us to create something that has no doubts about the legitimacy and trustworthiness even for an inveterate gambling addict with many years of experience.”

Gambler Panel explains that the one and only goal of affiliates is to drive traffic to these scambling sites by any and all means possible.

A machine-translated portion of Gambler Panel’s singular instruction for affiliates: Drive traffic to these scambling sites by any means available.

“Unlike white gambling affiliates, we accept absolutely any type of traffic, regardless of origin, the only limitation is the CIS countries,” the wiki continued, referring to a common prohibition against scamming people in Russia and former Soviet republics in the Commonwealth of Independent States.

The program’s website claims it has more than 20,000 affiliates, who earn a minimum of $10 for each verification deposit. Interested new affiliates must first get approval from the group’s Telegram channel, which currently has around 2,500 active users.

The Gambler Panel channel is replete with images of affiliate panels showing the daily revenue of top affiliates, scantily-clad young women promoting the Gambler logo, and fast cars that top affiliates claimed they bought with their earnings.

A machine-translated version of the wiki for the affiliate program Gambler Panel.

The apparent popularity of this scambling niche is a consequence of the program’s ease of use and detailed instructions for successfully reproducing virtually every facet of the scam. Indeed, much of the tutorial focuses on advice and ready-made templates to help even novice affiliates drive traffic via social media websites, particularly on Instagram and TikTok.

Gambler Panel also walks affiliates through a range of possible responses to questions from users who are trying to withdraw funds from the platform. This section, titled “Rules for working in Live chat,” urges scammers to respond quickly to user requests (1-7 minutes), and includes numerous strategies for keeping the conversation professional and the user on the platform as long as possible.

A machine-translated version of the Gambler Panel’s instructions on managing chat support conversations with users.

The connection between Gambler Panel and the explosion in the number of scambling websites was made by a 17-year-old developer who operates multiple Discord servers that have been flooded lately with misleading ads for these sites.

The researcher, who asked to be identified only by the nickname “Thereallo,” said Gambler Panel has built a scalable business product for other criminals.

“The wiki is kinda like a ‘how to scam 101’ for criminals written with the clarity you would expect from a legitimate company,” Thereallo said. “It’s clean, has step by step guides, and treats their scam platform like a real product. You could swap out the content, and it could be any documentation for startups.”

“They’ve minimized their own risk — spreading the links on Discord / Facebook / YT Shorts, etc. — and outsourced it to a hungry affiliate network, just like a franchise,” Thereallo wrote in response to questions.

“A centralized platform that can serve over 1,200 domains with a shared user base, IP tracking, and a custom API is not at all a trivial thing to build,” Thereallo said. “It’s a scalable system designed to be a resilient foundation for thousands of disposable scam sites.”

The security firm Silent Push has compiled a list of the latest domains associated with the Gambler Panel, available here (.csv).

Categories: Software Security

The UK May Be Dropping Its Backdoor Mandate

Schneier on Security - Thu, 08/28/2025 - 07:00

The US Director of National Intelligence is reporting that the UK government is dropping its backdoor mandate against the Apple iPhone. For now, at least, assuming that Tulsi Gabbard is reporting this accurately.

Categories: Software Security

Metal3.io becomes a CNCF incubating project

CNCF Blog Projects Category - Wed, 08/27/2025 - 15:00

The CNCF Technical Oversight Committee (TOC) has voted to accept Metal3.io as a CNCF incubating project. Metal3.io joins a growing ecosystem of technologies tackling real-world challenges at the edge of cloud native infrastructure.

What is Metal3.io?

The Metal3.io project (pronounced: “Metal Kubed”) provides components for bare metal host management with Kubernetes. You can enroll your bare metal machines, provision operating system images, and then, if you like, deploy Kubernetes clusters to them. From there, operating and upgrading your Kubernetes clusters can be handled by Metal3.io. Moreover, Metal3.io is itself a Kubernetes application, so it runs on Kubernetes and uses Kubernetes resources and APIs as its interface.

Metal3.io is also one of the providers for the Kubernetes subproject Cluster API. Cluster API provides infrastructure-agnostic Kubernetes lifecycle management, and Metal3.io brings the bare metal implementation.
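To give a flavor of the API, enrolling a machine is typically done with a BareMetalHost resource along the lines of the sketch below; the BMC address, credentials Secret, and MAC address are placeholders, and the field details are worth checking against the Metal3.io documentation:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0                             # illustrative host name
  namespace: metal3
spec:
  online: true
  bootMACAddress: "00:11:22:33:44:55"        # placeholder MAC of the provisioning NIC
  bmc:
    address: ipmi://192.168.111.1            # placeholder BMC endpoint
    credentialsName: worker-0-bmc-secret     # Secret holding the BMC username/password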

Key Milestones and Ecosystem Growth

The project was started in 2019 by Red Hat and was quickly joined by Ericsson. Metal3.io then joined the CNCF sandbox in September 2020. 

Metal3.io has steadily matured and grown during the sandbox phase, with:

  • 57 active contributing organizations, led by Ericsson and Red Hat.
  • An active community organizing weekly online meetings with working group updates, issue triaging, design discussions, etc.
  • Organizations such as Fujitsu, Ikea, SUSE, Ericsson, and Red Hat among the growing list of adopters.
  • New features and API iterations, including IP address management, node reuse, firmware settings and updates management both at provisioning time and on day 2, as well as remediation for the bare metal hosts.
  • A new operator, called the Ironic Standalone Operator, has been introduced to replace the shell-based deployment method for Ironic.
  • Added robust security processes, regular scans of dependencies, a vulnerability disclosure process, and automated dependency updates. 

Integrations Across the Cloud Native Landscape

Metal3.io connects seamlessly with many CNCF projects, including:

  • Kubernetes: Metal3.io builds on the success of Kubernetes and makes use of CustomResourceDefinitions
  • Cluster API: Turn the bare metal servers into Kubernetes clusters
  • Cert-manager: Certificates for webhooks, etc.
  • Ironic: Handles the hardware for Metal3.io by interacting with baseboard management controllers
  • Prometheus: Metal3.io exposes metrics in a format that Prometheus can scrape

Technical Components

  • Baremetal Operator (BMO): Exposes parts of the Ironic API as a Kubernetes native API
  • Cluster API Provider Metal³ (CAPM3): Provides integration with Cluster API
  • IP Address Manager (IPAM): Handles IP addresses and pools
  • Ironic Standalone Operator (IrSO): Makes it easy to deploy Ironic on Kubernetes
  • Ironic-Image: Container image for Ironic

Community Highlights

  • 1523 GitHub Stars 
  • 8368 merged pull requests 
  • 1434 issues
  • 186 contributors
  • 187 Releases

Maintainer Perspective

“As a maintainer of the Metal3.io project, I’m proud of its growth towards becoming one of the leading solutions for running Kubernetes on bare metal. I take pride in how it has evolved beyond provisioning bare metal only to support broader lifecycle needs, ensuring users can sustain and operate their bare metal deployments effectively. Equally rewarding has been seeing the community come together to establish strong processes and governance, positioning Metal3.io for CNCF incubation.”

Kashif Khan, Maintainer, Metal3.io

“Metal3.io is a testament to the power of collaboration across open source communities. It marries the battle-tested hardware support of the Ironic project with the Kubernetes API paradigm, using a lightweight Kubernetes-native deployment model. I am delighted to see it begin incubation with CNCF. I have no doubt that the forum the Metal3.io project provides will continue to drive progress in integration between Kubernetes and bare metal.”

Zane Bitter,  Maintainer, Metal3.io

From the TOC

“Metal3.io addresses a critical need for cloud native infrastructure by making bare metal as manageable and Kubernetes-native as any other platform. The project’s steady growth, technical maturity, and strong integration with the Kubernetes ecosystem made it a clear choice for incubation. We’re excited to support Metal3.io as it continues to empower organizations deploying Kubernetes at the edge and beyond.”

Ricardo Rocha, TOC Sponsor

Looking Ahead

Metal3.io’s roadmap for 2025 includes:

  • New API revisions for CAPM3, BMO, and IPAM
  • Maturing IPAM as a Cluster API IPAM provider
  • Multi-tenancy support
  • Support for architectures other than x86_64, i.e., ARM
  • Improve DHCP-less provisioning
  • Simplifying Ironic deployment with IrSO 

As a CNCF-hosted project, Metal3.io is part of a neutral foundation aligned with its technical interests, as well as the larger Linux Foundation, which provides governance, marketing support, and community outreach. Metal3.io joins incubating technologies ArtifactHUB, Backstage, Buildpacks, Chaos Mesh, Cloud Custodian, Container Network Interface (CNI), Contour, Cortex, Crossplane, Dragonfly, Emissary-Ingress, Flatcar, gRPC, Karmada, Keptn, Keycloak, Knative, Kubeflow, Kubescape, KubeVela, KubeVirt, Kyverno, Litmus, Longhorn, NATS, Notary, OpenCost, OpenFeature, OpenKruise, OpenTelemetry, OpenYurt, Operator Framework, Strimzi, Thanos, Volcano, and wasmCloud. For more information on maturity requirements for each level, please visit the CNCF Graduation Criteria.

We look forward to seeing how Metal3.io continues to evolve with the backing of the CNCF community.

Learn more: https://www.cncf.io/projects/metal%C2%B3/ 

Categories: CNCF Projects

Kubernetes v1.34: Of Wind & Will (O' WaW)

Kubernetes Blog - Wed, 08/27/2025 - 14:30

Editors: Agustina Barbetta, Alejandro Josue Leon Bellido, Graziano Casto, Melony Qin, Dipesh Rawat

Similar to previous releases, the release of Kubernetes v1.34 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.

This release consists of 58 enhancements. Of those enhancements, 23 have graduated to Stable, 22 have entered Beta, and 13 have entered Alpha.

There are also some deprecations and removals in this release; make sure to read about those.

[Illustration: Three bears sail a wooden ship with a flag featuring a paw and a helm symbol on the sail, as wind blows across the ocean]

A release powered by the wind around us — and the will within us.

Every release cycle, we inherit winds that we don't really control — the state of our tooling, documentation, and the historical quirks of our project. Sometimes these winds fill our sails, sometimes they push us sideways or die down.

What keeps Kubernetes moving isn't the perfect winds, but the will of our sailors who adjust the sails, man the helm, chart the courses and keep the ship steady. The release happens not because conditions are always ideal, but because of the people who build it, the people who release it, and the bears ^, cats, dogs, wizards, and curious minds who keep Kubernetes sailing strong — no matter which way the wind blows.

This release, Of Wind & Will (O' WaW), honors the winds that have shaped us, and the will that propels us forward.

^ Oh, and you wonder why bears? Keep wondering!

Spotlight on key updates

Kubernetes v1.34 is packed with new features and improvements. Here are a few select updates the Release Team would like to highlight!

Stable: The core of DRA is GA

Dynamic Resource Allocation (DRA) enables more powerful ways to select, allocate, share, and configure GPUs, TPUs, NICs and other devices.

Since the v1.30 release, DRA has been based around claiming devices using structured parameters that are opaque to the core of Kubernetes. This enhancement took inspiration from dynamic provisioning for storage volumes. DRA with structured parameters relies on a set of supporting API kinds: ResourceClaim, DeviceClass, ResourceClaimTemplate, and ResourceSlice API types under resource.k8s.io, while extending the .spec for Pods with a new resourceClaims field.
The resource.k8s.io/v1 APIs have graduated to stable and are now available by default.

This work was done as part of KEP #4381 led by WG Device Management.

Beta: Projected ServiceAccount tokens for kubelet image credential providers

The kubelet credential providers, used for pulling private container images, traditionally relied on long-lived Secrets stored on the node or in the cluster. This approach increased security risks and management overhead, as these credentials were not tied to the specific workload and did not rotate automatically.
To solve this, the kubelet can now request short-lived, audience-bound ServiceAccount tokens for authenticating to container registries. This allows image pulls to be authorized based on the Pod's own identity rather than a node-level credential.
The primary benefit is a significant security improvement. It eliminates the need for long-lived Secrets for image pulls, reducing the attack surface and simplifying credential management for both administrators and developers.

This work was done as part of KEP #4412 led by SIG Auth and SIG Node.

Alpha: Support for KYAML, a Kubernetes dialect of YAML

KYAML aims to be a safer and less ambiguous YAML subset, and was designed specifically for Kubernetes. Whatever version of Kubernetes you use, starting from Kubernetes v1.34 you are able to use KYAML as a new output format for kubectl.

KYAML addresses specific challenges with both YAML and JSON. YAML's significant whitespace requires careful attention to indentation and nesting, while its optional string-quoting can lead to unexpected type coercion (for example: "The Norway Bug"). Meanwhile, JSON lacks comment support and has strict requirements for trailing commas and quoted keys.

You can write KYAML and pass it as an input to any version of kubectl, because all KYAML files are also valid as YAML. With kubectl v1.34, you are also able to request KYAML output (as in kubectl get -o kyaml …) by setting environment variable KUBECTL_KYAML=true. If you prefer, you can still request the output in JSON or YAML format.

This work was done as part of KEP #5295 led by SIG CLI.

Features graduating to Stable

This is a selection of some of the improvements that are now stable following the v1.34 release.

Delayed creation of Job’s replacement Pods

By default, Job controllers create replacement Pods immediately when a Pod starts terminating, causing both Pods to run simultaneously. This can cause resource contention in constrained clusters, where the replacement Pod may struggle to find available nodes until the original Pod fully terminates. The situation can also trigger unwanted cluster autoscaler scale-ups. Additionally, some machine learning frameworks like TensorFlow and JAX require only one Pod per index to run at a time, making simultaneous Pod execution problematic. This feature introduces .spec.podReplacementPolicy in Jobs. You may choose to create replacement Pods only when the Pod is fully terminated (has .status.phase: Failed). To do this, set .spec.podReplacementPolicy: Failed.
Introduced as alpha in v1.28, this feature has graduated to stable in v1.34.
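A minimal sketch of a Job that opts into this behavior (name, image, and command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: training-job                  # illustrative name
spec:
  podReplacementPolicy: Failed        # only create a replacement once the old Pod reaches phase Failed
  completions: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: registry.example.com/trainer:latest   # placeholder image
        command: ["python", "train.py"]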

This work was done as part of KEP #3939 led by SIG Apps.

Recovery from volume expansion failure

This feature allows users to cancel volume expansions that are unsupported by the underlying storage provider, and retry volume expansion with smaller values that may succeed.
Introduced as alpha in v1.23, this feature has graduated to stable in v1.34.

This work was done as part of KEP #1790 led by SIG Storage.

VolumeAttributesClass for volume modification

VolumeAttributesClass has graduated to stable in v1.34. VolumeAttributesClass is a generic, Kubernetes-native API for modifying volume parameters like provisioned IO. It allows workloads to vertically scale their volumes on-line to balance cost and performance, if supported by their provider.
Like all new volume features in Kubernetes, this API is implemented via the container storage interface (CSI). Your provisioner-specific CSI driver must support the new ModifyVolume API which is the CSI side of this feature.
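A hedged sketch of such a class, assuming a CSI driver named csi.example.com whose parameters expose provisioned IOPS and throughput (the parameter keys are driver-specific):

apiVersion: storage.k8s.io/v1        # may be storage.k8s.io/v1beta1 on older clusters
kind: VolumeAttributesClass
metadata:
  name: gold-tier                    # illustrative name
driverName: csi.example.com          # placeholder CSI driver
parameters:
  iops: "16000"                      # driver-specific keys; consult your CSI driver's documentation
  throughput: "600"

A PersistentVolumeClaim then selects (or is later modified to select) this class via spec.volumeAttributesClassName.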

This work was done as part of KEP #3751 led by SIG Storage.

Structured authentication configuration

Kubernetes v1.29 introduced a configuration file format to manage API server client authentication, moving away from the previous reliance on a large set of command-line options. The AuthenticationConfiguration kind allows administrators to support multiple JWT authenticators, CEL expression validation, and dynamic reloading. This change significantly improves the manageability and auditability of the cluster's authentication settings - and has graduated to stable in v1.34.

This work was done as part of KEP #3331 led by SIG Auth.

Finer-grained authorization based on selectors

Kubernetes authorizers, including webhook authorizers and the built-in node authorizer, can now make authorization decisions based on field and label selectors in incoming requests. When you send list, watch or deletecollection requests with selectors, the authorization layer can now evaluate access with that additional context.

For example, you can write an authorization policy that only allows listing Pods bound to a specific .spec.nodeName. The client (perhaps the kubelet on a particular node) must specify the field selector that the policy requires, otherwise the request is forbidden. This change makes it feasible to set up least privilege rules, provided that the client knows how to conform to the restrictions you set. Kubernetes v1.34 now supports more granular control in environments like per-node isolation or custom multi-tenant setups.
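For illustration, this is roughly what a webhook authorizer receives when the kubelet on a node lists only its own Pods; the field layout below follows the authorization.k8s.io/v1 SubjectAccessReview as I understand it, so treat it as an assumption and check the API reference:

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:node:node-a
  resourceAttributes:
    verb: list
    resource: pods
    fieldSelector:
      requirements:
      - key: spec.nodeName
        operator: In
        values: ["node-a"]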

This work was done as part of KEP #4601 led by SIG Auth.

Restrict anonymous requests with fine-grained controls

Instead of fully enabling or disabling anonymous access, you can now configure a strict list of endpoints where unauthenticated requests are allowed. This provides a safer alternative for clusters that rely on anonymous access to health or bootstrap endpoints like /healthz, /readyz, or /livez.

With this feature, accidental RBAC misconfigurations that grant broad access to anonymous users can be avoided without requiring changes to external probes or bootstrapping tools.
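A minimal sketch of the corresponding AuthenticationConfiguration stanza, assuming the stable config API version; only the listed paths accept unauthenticated requests:

apiVersion: apiserver.config.k8s.io/v1
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  conditions:      # anonymous requests are allowed only for these endpoints
  - path: /healthz
  - path: /readyz
  - path: /livez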

This work was done as part of KEP #4633 led by SIG Auth.

More efficient requeueing through plugin-specific callbacks

The kube-scheduler can now make more accurate decisions about when to retry scheduling Pods that were previously unschedulable. Each scheduling plugin can now register callback functions that tell the scheduler whether an incoming cluster event is likely to make a rejected Pod schedulable again.

This reduces unnecessary retries and improves overall scheduling throughput - especially in clusters using dynamic resource allocation. The feature also lets certain plugins skip the usual backoff delay when it is safe to do so, making scheduling faster in specific cases.

This work was done as part of KEP #4247 led by SIG Scheduling.

Ordered Namespace deletion

Semi-random resource deletion order can create security gaps or unintended behavior, such as Pods persisting after their associated NetworkPolicies are deleted.
This improvement introduces a more structured deletion process for Kubernetes namespaces to ensure secure and deterministic resource removal. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources.
This feature was introduced in Kubernetes v1.33 and graduated to stable in v1.34. The graduation improves security and reliability by mitigating risks from non-deterministic deletions, including the vulnerability described in CVE-2024-7598.

This work was done as part of KEP #5080 led by SIG API Machinery.

Streaming list responses

Handling large list responses in Kubernetes previously posed a significant scalability challenge. When clients requested extensive resource lists, such as thousands of Pods or Custom Resources, the API server was required to serialize the entire collection of objects into a single, large memory buffer before sending it. This process created substantial memory pressure and could lead to performance degradation, impacting the overall stability of the cluster.
To address this limitation, a streaming encoding mechanism for collections (list responses) has been introduced. For the JSON and Kubernetes Protobuf response formats, that streaming mechanism is automatically active and the associated feature gate is stable. The primary benefit of this approach is the avoidance of large memory allocations on the API server, resulting in a much smaller and more predictable memory footprint. Consequently, the cluster becomes more resilient and performant, especially in large-scale environments where frequent requests for extensive resource lists are common.

This work was done as part of KEP #5116 led by SIG API Machinery.

Resilient watch cache initialization

The watch cache is a caching layer inside the kube-apiserver that maintains an eventually consistent view of the cluster state stored in etcd. In the past, issues could occur when the watch cache was not yet initialized during kube-apiserver startup or when it required re-initialization.

To address these issues, the watch cache initialization process has been made more resilient to failures, improving control plane robustness and ensuring controllers and clients can reliably establish watches. This improvement was introduced as beta in v1.31 and is now stable.

This work was done as part of KEP #4568 led by SIG API Machinery and SIG Scalability.

Relaxing DNS search path validation

Previously, the strict validation of a Pod's DNS search path in Kubernetes often created integration challenges in complex or legacy network environments. This restrictiveness could block configurations that were necessary for an organization's infrastructure, forcing administrators to implement difficult workarounds.
To address this, relaxed DNS validation was introduced as alpha in v1.32 and has now graduated to stable in v1.34. A common use case involves Pods that need to communicate with both internal Kubernetes services and external domains. By setting a single dot (.) as the first entry in the searches list of the Pod's .spec.dnsConfig, administrators can prevent the system's resolver from appending the cluster's internal search domains to external queries. This avoids generating unnecessary DNS requests to the internal DNS server for external hostnames, improving efficiency and preventing potential resolution errors.
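A minimal sketch of that pattern (the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: hybrid-client
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
  dnsConfig:
    searches:
    - "."          # now accepted: stops cluster search domains from being appended to external lookups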

This work was done as part of KEP #4427 led by SIG Network.

Support for Direct Server Return (DSR) in Windows kube-proxy

DSR provides performance optimizations by allowing return traffic routed through load balancers to bypass the load balancer and respond directly to the client, reducing load on the load balancer and improving overall latency. For information on DSR on Windows, read Direct Server Return (DSR) in a nutshell.
Initially introduced in v1.14, this feature has graduated to stable in v1.34.

This work was done as part of KEP #5100 led by SIG Windows.

Sleep action for Container lifecycle hooks

A Sleep action for containers’ PreStop and PostStart lifecycle hooks was introduced to provide a straightforward way to manage graceful shutdowns and improve overall container lifecycle management.
The Sleep action allows containers to pause for a specified duration after starting or before termination. Using a negative or zero sleep duration returns immediately, resulting in a no-op.
The Sleep action was introduced in Kubernetes v1.29, with zero value support added in v1.32. Both features graduated to stable in v1.34.
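A minimal sketch using a preStop sleep to give load balancers time to drain connections before the container is terminated (the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: server
    image: registry.example.com/web:latest   # placeholder image
    lifecycle:
      preStop:
        sleep:
          seconds: 10   # pause before termination proceeds; 0 is now a valid no-op value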

This work was done as part of KEP #3960 and KEP #4818 led by SIG Node.

Linux node swap support

Historically, the lack of swap support in Kubernetes could lead to workload instability, as nodes under memory pressure often had to terminate processes abruptly. This particularly affected applications with large but infrequently accessed memory footprints and prevented more graceful resource management.

To address this, configurable per-node swap support was introduced in v1.22. It has progressed through alpha and beta stages and has graduated to stable in v1.34. The primary mode, LimitedSwap, allows Pods to use swap within their existing memory limits, providing a direct solution to the problem. By default, the kubelet is configured with NoSwap mode, which means Kubernetes workloads cannot use swap.

This feature improves workload stability and allows for more efficient resource utilization. It enables clusters to support a wider variety of applications, especially in resource-constrained environments, though administrators must consider the potential performance impact of swapping.
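A minimal sketch of the relevant kubelet configuration for opting a Linux node into swap (this assumes swap is already provisioned on the node):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false            # let the kubelet start on a node that has swap enabled
memorySwap:
  swapBehavior: LimitedSwap  # Pods may use swap only within their existing memory limits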

This work was done as part of KEP #2400 led by SIG Node.

Allow special characters in environment variables

The environment variable validation rules in Kubernetes have been relaxed to allow nearly all printable ASCII characters in variable names, excluding =. This change supports scenarios where workloads require nonstandard characters in variable names - for example, frameworks like .NET Core that use : to represent nested configuration keys.

The relaxed validation applies to environment variables defined directly in Pod spec, as well as those injected using envFrom references to ConfigMaps and Secrets.
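A minimal sketch using a .NET-style nested configuration key (the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: dotnet-app
spec:
  containers:
  - name: api
    image: registry.example.com/dotnet-api:latest   # placeholder image
    env:
    - name: "Logging:LogLevel:Default"   # the ':' in the name is now accepted
      value: "Information"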

This work was done as part of KEP #4369 led by SIG Node.

Taint management is separated from Node lifecycle

Historically, the TaintManager's logic for applying NoSchedule and NoExecute taints to nodes based on their condition (NotReady, Unreachable, etc.) was tightly coupled with the node lifecycle controller. This tight coupling made the code harder to maintain and test, and it also limited the flexibility of the taint-based eviction mechanism. This KEP refactors the TaintManager into its own separate controller within the Kubernetes controller manager. It is an internal architectural improvement designed to increase code modularity and maintainability. This change allows the logic for taint-based evictions to be tested and evolved independently, but it has no direct user-facing impact on how taints are used.

This work was done as part of KEP #3902 led by SIG Scheduling and SIG Node.

New features in Beta

This is a selection of some of the improvements that are now beta following the v1.34 release.

Pod-level resource requests and limits

Defining resource needs for Pods with multiple containers has been challenging, as requests and limits could only be set on a per-container basis. This forced developers to either over-provision resources for each container or meticulously divide the total desired resources, making configuration complex and often leading to inefficient resource allocation. To simplify this, the ability to specify resource requests and limits at the Pod level was introduced. This allows developers to define an overall resource budget for a Pod, which is then shared among its constituent containers. This feature was introduced as alpha in v1.32 and has graduated to beta in v1.34, with HPA now supporting pod-level resource specifications.

The primary benefit is a more intuitive and straightforward way to manage resources for multi-container Pods. It ensures that the total resources used by all containers do not exceed the Pod's defined limits, leading to better resource planning, more accurate scheduling, and more efficient utilization of cluster resources.
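A minimal sketch of a Pod-level budget shared by an application container and its sidecar (images are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  resources:            # Pod-level budget shared by all containers
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
  containers:
  - name: app
    image: registry.example.com/app:latest          # placeholder image
  - name: log-shipper
    image: registry.example.com/log-shipper:latest  # placeholder image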

This work was done as part of KEP #2837 led by SIG Scheduling and SIG Autoscaling.

.kuberc file for kubectl user preferences

A .kuberc configuration file allows you to define preferences for kubectl, such as default options and command aliases. Unlike the kubeconfig file, the .kuberc configuration file does not contain cluster details, usernames or passwords.
This feature was introduced as alpha in v1.33, gated behind the environment variable KUBECTL_KUBERC. It has graduated to beta in v1.34 and is enabled by default.

This work was done as part of KEP #3104 led by SIG CLI.

External ServiceAccount token signing

Traditionally, Kubernetes manages ServiceAccount tokens using static signing keys that are loaded from disk at kube-apiserver startup. This feature introduces an ExternalJWTSigner gRPC service for out-of-process signing, enabling Kubernetes distributions to integrate with external key management solutions (for example, HSMs, cloud KMSes) for ServiceAccount token signing instead of static disk-based keys.

Introduced as alpha in v1.32, this external JWT signing capability advances to beta and is enabled by default in v1.34.

This work was done as part of KEP #740 led by SIG Auth.

DRA features in beta

Admin access for secure resource monitoring

DRA supports controlled administrative access via the adminAccess field in ResourceClaims or ResourceClaimTemplates, allowing cluster operators to access devices already in use by others for monitoring or diagnostics. This privileged mode is limited to users authorized to create such objects in namespaces labeled resource.k8s.io/admin-access: "true", ensuring regular workloads remain unaffected. Graduating to beta in v1.34, this feature provides secure introspection capabilities while preserving workload isolation through namespace-based authorization checks.

This work was done as part of KEP #5018 led by WG Device Management and SIG Auth.

Prioritized alternatives in ResourceClaims and ResourceClaimTemplates

While a workload might run best on a single high-performance GPU, it might also be able to run on two mid-level GPUs.
With the feature gate DRAPrioritizedList (now enabled by default), ResourceClaims and ResourceClaimTemplates get a new field named firstAvailable. This field is an ordered list that allows users to specify that a request may be satisfied in different ways, including allocating nothing at all if specific hardware is not available. The scheduler will attempt to satisfy the alternatives in the list in order, so the workload will be allocated the best set of devices available in the cluster.

This work was done as part of KEP #4816 led by WG Device Management.

The kubelet reports allocated DRA resources

The kubelet's API has been updated to report on Pod resources allocated through DRA. This allows node monitoring agents to discover the allocated DRA resources for Pods on a node. Additionally, it enables node components to use the PodResourcesAPI and leverage this DRA information when developing new features and integrations.
Starting from Kubernetes v1.34, this feature is enabled by default.

This work was done as part of KEP #3695 led by WG Device Management.

kube-scheduler non-blocking API calls

Previously, the kube-scheduler made blocking API calls during scheduling cycles, creating performance bottlenecks. This feature introduces asynchronous API handling through a prioritized queue system with request deduplication, allowing the scheduler to continue processing Pods while API operations complete in the background. Key benefits include reduced scheduling latency, prevention of scheduler thread starvation during API delays, and immediate retry capability for unschedulable Pods. The implementation maintains backward compatibility and adds metrics for monitoring pending API operations.

This work was done as part of KEP #5229 led by SIG Scheduling.

Mutating admission policies

MutatingAdmissionPolicies offer a declarative, in-process alternative to mutating admission webhooks. This feature leverages CEL's object instantiation and JSON Patch strategies, combined with Server Side Apply’s merge algorithms.
This significantly simplifies admission control by allowing administrators to define mutation rules directly in the API server.
Introduced as alpha in v1.32, mutating admission policies have graduated to beta in v1.34.
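A hedged sketch of a policy that adds a label to new Pods; the admissionregistration.k8s.io/v1beta1 version and the exact field layout are assumptions based on the beta API, and a MutatingAdmissionPolicyBinding is also required to put the policy into effect:

apiVersion: admissionregistration.k8s.io/v1beta1   # assumed beta API version
kind: MutatingAdmissionPolicy
metadata:
  name: add-team-label
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  failurePolicy: Fail
  reinvocationPolicy: Never
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      expression: >
        Object{
          metadata: Object.metadata{
            labels: {"team": "platform"}
          }
        }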

This work was done as part of KEP #3962 led by SIG API Machinery.

Snapshottable API server cache

The kube-apiserver's caching mechanism (watch cache) efficiently serves requests for the latest observed state. However, list requests for previous states (for example, via pagination or by specifying a resourceVersion) often bypass this cache and are served directly from etcd. This direct etcd access significantly increases performance costs and can lead to stability issues, particularly with large resources, due to memory pressure from transferring large data blobs.
With the ListFromCacheSnapshot feature gate enabled by default, the kube-apiserver attempts to serve the response from a snapshot if one is available whose resourceVersion is no newer than the one requested. The kube-apiserver starts with no snapshots, creates a new snapshot on every watch event, and keeps snapshots until it detects that etcd has been compacted or the cache fills up with events older than 75 seconds. If the requested resourceVersion is unavailable, the server falls back to etcd.

This work was done as part of KEP #4988 led by SIG API Machinery.

Tooling for declarative validation of Kubernetes-native types

Prior to this release, validation rules for the APIs built into Kubernetes were written entirely by hand, which made them difficult for maintainers to discover, understand, improve, or test. There was no single way to find all the validation rules that might apply to an API. Declarative validation benefits Kubernetes maintainers by making API development, maintenance, and review easier while enabling programmatic inspection for better tooling and documentation. For people using Kubernetes libraries to write their own code (for example: a controller), the new approach streamlines adding new fields through IDL tags, rather than complex validation functions. This change helps speed up API creation by automating validation boilerplate, and provides more relevant error messages by performing validation on versioned types.
This enhancement (which graduated to beta in v1.33 and continues as beta in v1.34) brings CEL-based validation rules to native Kubernetes types. It allows for more granular and declarative validation to be defined directly in the type definitions, improving API consistency and developer experience.

This work was done as part of KEP #5073 led by SIG API Machinery.

Streaming informers for list requests

The streaming informers feature, which has been in beta since v1.32, gains further beta refinements in v1.34. This capability allows list requests to return data as a continuous stream of objects from the API server’s watch cache, rather than assembling paged results directly from etcd. By reusing the same mechanics used for watch operations, the API server can serve large datasets while keeping memory usage steady and avoiding allocation spikes that can affect stability.

In this release, the kube-apiserver and kube-controller-manager both take advantage of the new WatchList mechanism by default. For the kube-apiserver, this means list requests are streamed more efficiently, while the kube-controller-manager benefits from a more memory-efficient and predictable way to work with informers. Together, these improvements reduce memory pressure during large list operations, and improve reliability under sustained load, making list streaming more predictable and efficient.

This work was done as part of KEP #3157 led by SIG API Machinery and SIG Scalability.

Graceful node shutdown handling for Windows nodes

The kubelet on Windows nodes can now detect system shutdown events and begin graceful termination of running Pods. This mirrors existing behavior on Linux and helps ensure workloads exit cleanly during planned shutdowns or restarts.
When the system begins shutting down, the kubelet reacts by using standard termination logic. It respects the configured lifecycle hooks and grace periods, giving Pods time to stop before the node powers off. The feature relies on Windows pre-shutdown notifications to coordinate this process. This enhancement improves workload reliability during maintenance, restarts, or system updates. It is now in beta and enabled by default.

This work was done as part of KEP #4802 led by SIG Windows.

In-place Pod resize improvements

Graduated to beta and enabled by default in v1.33, in-place Pod resizing receives further improvements in v1.34. These include support for decreasing memory usage and integration with Pod-level resources.

This feature remains in beta in v1.34. For detailed usage instructions and examples, refer to the documentation: Resize CPU and Memory Resources assigned to Containers.
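As a brief sketch of the related Pod spec, each container can declare how it tolerates a resize; the actual resize is then performed by updating the container's resources through the Pod's resize subresource (the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: resizable
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired       # CPU can change in place without a container restart
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: "1"
        memory: 1Gi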

This work was done as part of KEP #1287 led by SIG Node and SIG Autoscaling.

New features in Alpha

This is a selection of some of the improvements that are now alpha following the v1.34 release.

Pod certificates for mTLS authentication

Authenticating workloads within a cluster, especially for communication with the API server, has primarily relied on ServiceAccount tokens. While effective, these tokens aren't always ideal for establishing a strong, verifiable identity for mutual TLS (mTLS) and can present challenges when integrating with external systems that expect certificate-based authentication.
Kubernetes v1.34 introduces a built-in mechanism for Pods to obtain X.509 certificates via PodCertificateRequests. The kubelet can request and manage certificates for Pods, which can then be used to authenticate to the Kubernetes API server and other services using mTLS. The primary benefit is a more robust and flexible identity mechanism for Pods. It provides a native way to implement strong mTLS authentication without relying solely on bearer tokens, aligning Kubernetes with standard security practices and simplifying integrations with certificate-aware observability and security tooling.

This work was done as part of KEP #4317 led by SIG Auth.

"Restricted" Pod security standard now forbids remote probes

The host field within probes and lifecycle handlers allows users to specify an entity other than the podIP for the kubelet to probe. However, this opens up a route for misuse and for attacks that bypass security controls, since the host field could be set to any value, including security-sensitive external hosts, or localhost on the node. In Kubernetes v1.34, Pods only meet the Restricted Pod security standard if they either leave the host field unset or do not use this kind of probe at all. You can use Pod security admission, or a third-party solution, to enforce that Pods meet this standard. Because these are security controls, check the documentation to understand the limitations and behavior of the enforcement mechanism you choose.
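For illustration, a probe that satisfies the Restricted profile simply leaves host unset so the kubelet probes the Pod's own IP (the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-ok
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        # host: 203.0.113.10   <- setting host now violates the Restricted profile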

This work was done as part of KEP #4940 led by SIG Auth.

Use .status.nominatedNodeName to express Pod placement

When the kube-scheduler takes time to bind Pods to Nodes, cluster autoscalers may not understand that a Pod will be bound to a specific Node. Consequently, they may mistakenly consider the Node as underutilized and delete it.
To address this issue, the kube-scheduler can use .status.nominatedNodeName not only to indicate ongoing preemption but also to express Pod placement intentions. By enabling the NominatedNodeNameForExpectation feature gate, the scheduler uses this field to indicate where a Pod will be bound. This exposes internal reservations to help external components make informed decisions.

This work was done as part of KEP #5278 led by SIG Scheduling.

DRA features in alpha

Resource health status for DRA

It can be difficult to know when a Pod is using a device that has failed or is temporarily unhealthy, which makes troubleshooting Pod crashes challenging or impossible.
Resource Health Status for DRA improves observability by exposing the health status of devices allocated to a Pod in the Pod’s status. This makes it easier to identify the cause of Pod issues related to unhealthy devices and respond appropriately.
To enable this functionality, the ResourceHealthStatus feature gate must be enabled, and the DRA driver must implement the DRAResourceHealth gRPC service.

This work was done as part of KEP #4680 led by WG Device Management.

Extended resource mapping

Extended resource mapping provides a simpler alternative to DRA's expressive and flexible approach by offering a straightforward way to describe resource capacity and consumption. This feature enables cluster administrators to advertise DRA-managed resources as extended resources, allowing application developers and operators to continue using the familiar container’s .spec.resources syntax to consume them.
This enables existing workloads to adopt DRA without modifications, simplifying the transition to DRA for both application developers and cluster administrators.

This work was done as part of KEP #5004 led by WG Device Management.

DRA consumable capacity

Kubernetes v1.33 added support for resource drivers to advertise slices of a device that are available, rather than exposing the entire device as an all-or-nothing resource. However, this approach couldn't handle scenarios where device drivers manage fine-grained, dynamic portions of a device resource based on user demand, or share those resources independently of ResourceClaims, which are restricted by their spec and namespace.
Enabling the DRAConsumableCapacity feature gate (introduced as alpha in v1.34) allows resource drivers to share the same device, or even a slice of a device, across multiple ResourceClaims or across multiple DeviceRequests. The feature also extends the scheduler to support allocating portions of device resources, as defined in the capacity field. This DRA feature improves device sharing across namespaces and claims, tailoring it to Pod needs. It enables drivers to enforce capacity limits, enhances scheduling, and supports new use cases like bandwidth-aware networking and multi-tenant sharing.

This work was done as part of KEP #5075 led by WG Device Management.

Device binding conditions

The Kubernetes scheduler becomes more reliable by delaying the binding of a Pod to a Node until its required external resources, such as attachable devices or FPGAs, are confirmed to be ready.
This delay mechanism is implemented in the PreBind phase of the scheduling framework. During this phase, the scheduler checks whether all required device conditions are satisfied before proceeding with binding. This enables coordination with external device controllers, ensuring more robust, predictable scheduling.

This work was done as part of KEP #5007 led by WG Device Management.

Container restart rules

Currently, all containers within a Pod follow the same .spec.restartPolicy when they exit or crash. However, Pods that run multiple containers might have different restart requirements for each container. For example, for an init container that performs one-time initialization, you may not want to retry it if it fails. Similarly, in ML research environments with long-running training workloads, containers that fail with retriable exit codes should restart quickly in place, rather than triggering Pod recreation and losing progress.
Kubernetes v1.34 introduces the ContainerRestartRules feature gate. When enabled, a restartPolicy can be specified for each container within a Pod. A restartPolicyRules list can also be defined to override that restartPolicy based on the container's last exit code. This provides the fine-grained control needed to handle complex scenarios and enables better utilization of compute resources.
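A rough sketch of how this might look; the restartPolicyRules field names are taken from the KEP and should be treated as assumptions while the feature is alpha (the image and exit code are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  restartPolicy: Never
  containers:
  - name: worker
    image: registry.example.com/trainer:latest   # placeholder image
    restartPolicy: Never                         # per-container policy (ContainerRestartRules gate)
    restartPolicyRules:                          # override the policy for specific exit codes
    - action: Restart
      exitCodes:
        operator: In
        values: [42]                             # retriable exit code used by the training framework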

This work was done as part of KEP #5307 led by SIG Node.

Load environment variables from files created in runtime

Application developers have long requested greater flexibility in declaring environment variables. Traditionally, environment variables are declared on the API server side via static values, ConfigMaps, or Secrets.

Behind the EnvFiles feature gate, Kubernetes v1.34 introduces the ability to declare environment variables at runtime. One container (typically an init container) can generate the variable and store it in a file, and a subsequent container can start with the environment variable loaded from that file. This approach eliminates the need to "wrap" the target container's entry point, enabling more flexible in-Pod container orchestration.

This feature particularly benefits AI/ML training workloads, where each Pod in a training Job requires initialization with runtime-defined values.

This work was done as part of KEP #5307 led by SIG Node.

Graduations, deprecations, and removals in v1.34

Graduations to stable

This release includes a total of 23 enhancements promoted to stable (also known as general availability). For the full list of updates, including new features and graduations from alpha to beta, see the release notes.

Deprecations and removals

As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones to improve the project's overall health. See the Kubernetes deprecation and removal policy for more details on this process. Kubernetes v1.34 includes a couple of deprecations.

Manual cgroup driver configuration is deprecated

Historically, configuring the correct cgroup driver has been a pain point for users running Kubernetes clusters. Kubernetes v1.28 added a way for the kubelet to query the CRI implementation and find out which cgroup driver to use. That automated detection is now strongly recommended, and support for it has graduated to stable in v1.34. If your CRI container runtime cannot report the cgroup driver it needs, you should upgrade or change your container runtime. The cgroupDriver configuration setting in the kubelet configuration file is now deprecated. The corresponding command-line option --cgroup-driver was previously deprecated, as Kubernetes recommends using the configuration file instead. Both the configuration setting and the command-line option will be removed in a future release; that removal will not happen before the v1.36 minor release.

This work was done as part of KEP #4033 led by SIG Node.

Kubernetes to end containerd 1.x support in v1.36

While Kubernetes v1.34 still supports containerd 1.7 and other LTS releases of containerd, the Kubernetes SIG Node community has formally agreed on a final support timeline for containerd 1.x as a consequence of automated cgroup driver detection. The last Kubernetes release to offer this support will be v1.35 (aligned with the containerd 1.7 EOL). This is an early warning: if you are using containerd 1.x, consider switching to 2.0+ soon. You can monitor the kubelet_cri_losing_support metric to determine whether any nodes in your cluster are using a containerd version that will soon be unsupported.

This work was done as part of KEP #4033 led by SIG Node.

PreferClose traffic distribution is deprecated

The spec.trafficDistribution field within a Kubernetes Service allows users to express preferences for how traffic should be routed to Service endpoints.

KEP-3015 deprecates PreferClose and introduces two additional values: PreferSameZone and PreferSameNode. PreferSameZone is an alias for the existing PreferClose to clarify its semantics. PreferSameNode allows connections to be delivered to a local endpoint when possible, falling back to a remote endpoint when not possible.

This feature was introduced in v1.33 behind the PreferSameTrafficDistribution feature gate. It has graduated to beta in v1.34 and is enabled by default.
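A minimal sketch of a Service that prefers node-local endpoints:

apiVersion: v1
kind: Service
metadata:
  name: node-local-cache
spec:
  selector:
    app: cache
  ports:
  - port: 6379
  trafficDistribution: PreferSameNode   # falls back to other endpoints only when no local endpoint exists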

This work was done as part of KEP #3015 led by SIG Network.

Release notes

Check out the full details of the Kubernetes v1.34 release in our release notes.

Availability

Kubernetes v1.34 is available for download on GitHub or on the Kubernetes download page.

To get started with Kubernetes, check out these interactive tutorials or run local Kubernetes clusters using minikube. You can also easily install v1.34 using kubeadm.

Release Team

Kubernetes is only possible with the support, commitment, and hard work of its community. Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on. This requires the specialized skills of people from all corners of our community, from the code itself to its documentation and project management.

We honor the memory of Rodolfo "Rodo" Martínez Vega, a dedicated contributor whose passion for technology and community building left a mark on the Kubernetes community. Rodo served as a member of the Kubernetes Release Team across multiple releases, including v1.22-v1.23 and v1.25-v1.30, demonstrating unwavering commitment to the project's success and stability.
Beyond his Release Team contributions, Rodo was deeply involved in fostering the Cloud Native LATAM community, helping to bridge language and cultural barriers in the space. His work on the Spanish version of Kubernetes documentation and the CNCF Glossary exemplified his dedication to making knowledge accessible to Spanish-speaking developers worldwide. Rodo's legacy lives on through the countless community members he mentored, the releases he helped deliver, and the vibrant LATAM Kubernetes community he helped cultivate.

We would like to thank the entire Release Team for the hours spent hard at work to deliver the Kubernetes v1.34 release to our community. The Release Team's membership ranges from first-time shadows to returning team leads with experience forged over several release cycles. A very special thanks goes out to our release lead, Vyom Yadav, for guiding us through a successful release cycle, for his hands-on approach to solving challenges, and for bringing the energy and care that drives our community forward.

Project Velocity

The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.

During the v1.34 release cycle, which spanned 15 weeks from 19th May 2025 to 27th August 2025, Kubernetes received contributions from as many as 106 different companies and 491 individuals. In the wider cloud native ecosystem, the figure goes up to 370 companies, counting 2235 total contributors.

Note that a "contribution" is counted when someone makes a commit, performs a code review, creates an issue or PR, reviews a PR (including blogs and documentation), or comments on issues and PRs.
If you are interested in contributing, visit Getting Started on our contributor website.

Event Update

Explore upcoming Kubernetes and cloud native events, including KubeCon + CloudNativeCon, KCD, and other notable conferences worldwide. Stay informed and get involved with the Kubernetes community!

You can find the latest event details here.

Upcoming Release Webinar

Join members of the Kubernetes v1.34 Release Team on Wednesday, September 24th 2025 at 4:00 PM (UTC), to learn about the highlights of this release. For more information and registration, visit the event page on the CNCF Online Programs site.

Get Involved

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below. Thank you for your continued feedback and support.

Categories: CNCF Projects, Kubernetes

Kubernetes 1.34 Of Wind & Will, With Vyom Yadav

Kubernetes Podcast - Wed, 08/27/2025 - 08:04

Vyom Yadav is a software engineer in the security team at Canonical and a member of the Kubernetes Security Response Committee. We talked about the new Release theme and what major updates, deprecations and removals to expect in this version.

 

Do you have something cool to share? Some questions? Let us know:

- web: kubernetespodcast.com

- mail: [email protected]

- twitter: @kubernetespod

- bluesky: @kubernetespodcast.com

Categories: Kubernetes

We Are Still Unable to Secure LLMs from Malicious Inputs

Schneier on Security - Wed, 08/27/2025 - 07:07

Nice indirect prompt injection attack:

Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) It looks like an official document on company meeting policies. But inside the document, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read.

In a proof of concept video of the attack...

Categories: Software Security

Kubewarden 1.28 Release

Kubewarden Blog - Tue, 08/26/2025 - 20:00
Kubewarden 1.28 has emerged refreshed from a bath in the lake (just like my dog on the morning walk before writing this post!). This release cycle comes mainly with improvements to policies, though some stack features plus kwctl bugfixes also bubbled up.

Supporting Hauler for air-gap installs

With 1.28, our Helm chart releases now include a Hauler YAML manifest. Hauler is an Open Source project that provides a declarative way of saving all artifacts needed for air-gap installs, along with a tool (the hauler cli) that works with it without requiring operators to adopt a specific workflow.
Categories: Web Assembly

DSLRoot, Proxies, and the Threat of ‘Legal Botnets’

Krebs on Security - Tue, 08/26/2025 - 10:05

The cybersecurity community on Reddit responded in disbelief this month when a self-described Air National Guard member with top secret security clearance began questioning the arrangement they’d made with a company called DSLRoot, which was paying $250 a month to plug a pair of laptops into the Redditor’s high-speed Internet connection in the United States. This post examines the history and provenance of DSLRoot, one of the oldest “residential proxy” networks with origins in Russia and Eastern Europe.

The query about DSLRoot came from a Reddit user “Sacapoopie,” who did not respond to questions. This user has since deleted the original question from their post, although some of their replies to other Reddit cybersecurity enthusiasts remain in the thread. The original post was indexed here by archive.is, and it began with a question:

“I have been getting paid 250$ a month by a residential IP network provider named DSL root to host devices in my home,” Sacapoopie wrote. “They are on a separate network than what we use for personal use. They have dedicated DSL connections (one per host) to the ISP that provides the DSL coverage. My family used Starlink. Is this stupid for me to do? They just sit there and I get paid for it. The company pays the internet bill too.”

Many Redditors said they assumed Sacapoopie’s post was a joke, and that nobody with a cybersecurity background and top-secret (TS/SCI) clearance would agree to let some shady residential proxy company introduce hardware into their network. Other readers pointed to a slew of posts from Sacapoopie in the Cybersecurity subreddit over the past two years about their work on cybersecurity for the Air National Guard.

When pressed for more details by fellow Redditors, Sacapoopie described the equipment supplied by DSLRoot as “just two laptops hardwired into a modem, which then goes to a dsl port in the wall.”

“When I open the computer, it looks like [they] have some sort of custom application that runs and spawns several cmd prompts,” the Redditor explained. “All I can infer from what I see in them is they are making connections.”

When asked how they became acquainted with DSLRoot, Sacapoopie told another user they discovered the company and reached out after viewing an advertisement on a social media platform.

“This was probably 5-6 years ago,” Sacapoopie wrote. “Since then I just communicate with a technician from that company and I help trouble shoot connectivity issues when they arise.”

Reached for comment, DSLRoot said its brand has been unfairly maligned thanks to that Reddit discussion. The unsigned email said DSLRoot is fully transparent about its goals and operations, adding that it operates under full consent from its “regional agents,” the company’s term for U.S. residents like Sacapoopie.

“As although we support honest journalism, we’re against of all kinds of ‘low rank/misleading Yellow Journalism’ done for the sake of cheap hype,” DSLRoot wrote in reply. “It’s obvious to us that whoever is doing this, is either lacking a proper understanding of the subject or doing it intentionally to gain exposure by misleading those who lack proper understanding,” DSLRoot wrote in answer to questions about the company’s intentions.

“We monitor our clients and prohibit any illegal activity associated with our residential proxies,” DSLRoot continued. “We honestly didn’t know that the guy who made the Reddit post was a military guy. Be it an African-American granny trying to pay her rent or a white kid trying to get through college, as long as they can provide an Internet line or host phones for us — we’re good.”

WHAT IS DSLROOT?

DSLRoot is sold as a residential proxy service on the forum BlackHatWorld under the names DSLRoot and GlobalSolutions. The company is based in the Bahamas and was formed in 2012. The service is advertised to people who are not in the United States but who want to seem like they are. DSLRoot pays people in the United States to run the company’s hardware and software — including 5G mobile devices — and in return it rents those IP addresses as dedicated proxies to customers anywhere in the world — priced at $190 per month for unrestricted access to all locations.

The DSLRoot website.

The GlobalSolutions account on BlackHatWorld lists a Telegram account and a WhatsApp number in Mexico. DSLRoot’s profile on the marketing agency digitalpoint.com from 2010 shows their previous username on the forum was “Incorptoday.” GlobalSolutions user accounts at bitcointalk[.]org and roclub[.]com include the email clickdesk@instantvirtualcreditcards[.]com.

Passive DNS records from DomainTools.com show instantvirtualcreditcards[.]com shared a host back then — 208.85.1.164 — with just a handful of domains, including dslroot[.]com, regacard[.]com, 4groot[.]com, residential-ip[.]com, 4gemperor[.]com, ip-teleport[.]com, proxysource[.]net and proxyrental[.]net.

Cyber intelligence firm Intel 471 finds GlobalSolutions registered on BlackHatWorld in 2016 using the email address [email protected]. This user shared that their birthday is March 7, 1984.

Several negative reviews about DSLRoot on the forums noted that the service was operated by a BlackHatWorld user calling himself “USProxyKing.” Indeed, Intel 471 shows this user told fellow forum members in 2013 to contact him at the Skype username “dslroot.”

USProxyKing on BlackHatWorld, soliciting installations of his adware via torrents and file-sharing sites.

USProxyKing had a reputation for spamming the forums with ads for his residential proxy service, and he ran a “pay-per-install” program where he paid affiliates a small commission each time one of their websites resulted in the installation of his unspecified “adware” programs — presumably a program that turned host PCs into proxies. On the other end of the business, USProxyKing sold that pay-per-install access to others wishing to distribute questionable software — at $1 per installation.

Private messages indexed by Intel 471 show USProxyKing also raised money from nearly 20 different BlackHatWorld members who were promised shareholder positions in a new business that would offer robocalling services capable of placing 2,000 calls per minute.

Constella Intelligence, a platform that tracks data exposed in breaches, finds that same IP address GlobalSolutions used to register at BlackHatWorld was also used to create accounts at a handful of sites, including a GlobalSolutions user account at WebHostingTalk that supplied the email address [email protected]. Also registered to [email protected] are the domains dslbay[.]com, dslhub[.]net, localsim[.]com, rdslpro[.]com, virtualcards[.]biz/cc, and virtualvisa[.]cc.

Recall that DSLRoot’s profile on digitalpoint.com was previously named Incorptoday. DomainTools says [email protected] is associated with almost two dozen domains going back to 2008, including incorptoday[.]com, a website that offers to incorporate businesses in several states, including Delaware, Florida and Nevada, for prices ranging from $450 to $550.

As we can see in this archived copy of the site from 2013, IncorpToday also offered a premiere service for $750 that would allow the customer’s new company to have a retail checking account, with no questions asked.

Global Solutions provided access to the U.S. banking system by offering customers prepaid cards that could be loaded with a variety of virtual payment instruments popular in Russian-speaking countries at the time, including WebMoney. The cards were limited to $500 balances, but non-Westerners could use them to anonymously pay for goods and services at a variety of Western companies. Cardnow[.]ru, another domain registered to [email protected], demonstrated this in action.

A copy of Incorptoday’s website from 2013 offers non-US residents a service to incorporate a business in Florida, Delaware or Nevada, along with a no-questions-asked checking account, for $750.

WHO IS ANDREI HOLAS?

The oldest domain (2008) registered to [email protected] is andrei[.]me; another is called andreigolos[.]com. DomainTools says these and other domains registered to that email address include the registrant name Andrei Holas, from Huntsville, Ala.

Public records indicate Andrei Holas has lived with his brother — Aliaksandr Holas — at two different addresses in Alabama. Those records state that Andrei Holas’ birthday is in March 1984, and that his brother is slightly younger. The younger brother did not respond to a request for comment.

Andrei Holas maintained an account on the Russian social network Vkontakte under the email address [email protected], an address that shows up in numerous records hacked and leaked from Russian government entities over the past few years.

Those records indicate Andrei Holas and his brother are from Belarus and have maintained an address in Moscow for some time (that address is roughly three blocks away from the main headquarters of the Russian FSB, the successor intelligence agency to the KGB). Hacked Russian banking records show Andrei Holas’ birthday is March 7, 1984 — the same birth date listed by GlobalSolutions on BlackHatWorld.

A 2010 post by [email protected] at the Russian-language forum Ulitka explains that the poster was having trouble getting his B1/B2 visa to visit his brother in the United States, even though he’d previously been approved for two separate guest visas and a student visa. It remains unclear if one, both, or neither of the Holas brothers still lives in the United States. Andrei explained in 2010 that his brother was an American citizen.

LEGAL BOTNETS

We can all wag our fingers at military personnel who should undoubtedly know better than to install Internet hardware from strangers, but in truth there is an endless supply of U.S. residents who will resell their Internet connection if it means they can make a few bucks out of it. And these days, there are plenty of residential proxy providers who will make it worth your while.

Traditionally, residential proxy networks have been constructed using malicious software that quietly turns infected systems into traffic relays that are then sold in shadowy online forums. Most often, this malware gets bundled with popular cracked software and video files that are uploaded to file-sharing networks and that secretly turn the host device into a traffic relay. In fact, USProxyKing bragged that he routinely achieved thousands of installs per week via this method alone.

These days, there are a number of residential proxy networks that entice users to monetize their unused bandwidth (inviting you to violate the terms of service of your ISP in the process); others, like DSLRoot, act as a communal VPN, and by using the service you gain access to the connections of other proxies (users) by default, but you also agree to share your connection with others.

Indeed, Intel 471’s archives show the GlobalSolutions and DSLRoot accounts routinely received private messages from forum users who were college students or young people trying to make ends meet. Those messages show that many of DSLRoot’s “regional agents” often sought commissions to refer friends interested in reselling their home Internet connections (DSLRoot would offer to cover the monthly cost of the agent’s home Internet connection).

But in an era when North Korean hackers are relentlessly posing as Western IT workers by paying people to host laptop farms in the United States, letting strangers run laptops, mobile devices or any other hardware on your network seems like an awfully risky move regardless of your station in life. As several Redditors pointed out in Sacapoopie’s thread, an Arizona woman was sentenced in July 2025 to 102 months in prison for hosting a laptop farm that helped North Korean hackers secure jobs at more than 300 U.S. companies, including Fortune 500 firms.

Lloyd Davies is the founder of Infrawatch, a London-based security startup that tracks residential proxy networks. Davies said he reverse engineered the software that powers DSLRoot’s proxy service, and found it phones home to the aforementioned domain proxysource[.]net, which sells a service that promises to “get your ads live in multiple cities without getting banned, flagged or ghosted” (presumably a reference to CraigsList ads).

Davies said he found the DSLRoot installer had capabilities to remotely control residential networking equipment across multiple vendor brands.

Image: Infrawatch.app.

“The software employs vendor-specific exploits and hardcoded administrative credentials, suggesting DSLRoot pre-configures equipment before deployment,” Davies wrote in an analysis published today. He said the software performs WiFi network enumeration to identify nearby wireless networks, thereby “potentially expanding targeting capabilities beyond the primary internet connection.”

It’s unclear exactly when the USProxyKing was usurped from his throne, but DSLRoot and its proxy offerings are not what they used to be. Davies said the entire DSLRoot network now has fewer than 300 nodes nationwide, mostly systems on DSL providers like CenturyLink and Frontier.

On Aug. 17, GlobalSolutions posted to BlackHatWorld saying, “We’re restructuring our business model by downgrading to ‘DSL only’ lines (no mobile or cable).” Asked via email about the changes, DSLRoot blamed the decline in its customer base on the proliferation of residential proxy services.

“These days it has become almost impossible to compete in this niche as everyone is selling residential proxies and many companies want you to install a piece of software on your phone or desktop so they can resell your residential IPs on a much larger scale,” DSLRoot explained. “So-called ‘legal botnets’ as we see them.”

Categories: Software Security
