Kubernetes v1.34: Pods Report DRA Resource Health
The rise of AI/ML and other high-performance workloads has made specialized hardware like GPUs, TPUs, and FPGAs a critical component of many Kubernetes clusters. However, as discussed in a previous blog post about navigating failures in Pods with devices, when this hardware fails, it can be difficult to diagnose, leading to significant downtime. With the release of Kubernetes v1.34, we are excited to announce a new alpha feature that brings much-needed visibility into the health of these devices.
This work extends the functionality of KEP-4680, which first introduced a mechanism for reporting the health of devices managed by Device Plugins. Now, this capability is being extended to Dynamic Resource Allocation (DRA). Controlled by the ResourceHealthStatus feature gate, this enhancement allows DRA drivers to report device health directly into a Pod's .status field, providing crucial insights for operators and developers.
Why expose device health in Pod status?
For stateful applications or long-running jobs, a device failure can be disruptive and costly. By exposing device health in the .status field for a Pod, Kubernetes provides a standardized way for users and automation tools to quickly diagnose issues. If a Pod is failing, you can now check its status to see if an unhealthy device is the root cause, saving valuable time that might otherwise be spent debugging application code.
How it works
This feature introduces a new, optional communication channel between the Kubelet and DRA drivers, built on three core components.
A new gRPC health service
A new gRPC service, DRAResourceHealth, is defined in the dra-health/v1alpha1 API group. DRA drivers can implement this service to stream device health updates to the Kubelet. The service includes a NodeWatchResources server-streaming RPC, through which a driver sends the health status (Healthy, Unhealthy, or Unknown) of the devices it manages.
Kubelet integration
The Kubelet’s DRAPluginManager discovers which drivers implement the health service. For each compatible driver, it starts a long-lived NodeWatchResources stream to receive health updates. The DRA Manager then consumes these updates and stores them in a persistent healthInfoCache that can survive Kubelet restarts.
Populating the Pod status
When a device's health changes, the DRA manager identifies all Pods affected by the change and triggers a Pod status update. A new field, allocatedResourcesStatus, is now part of the v1.ContainerStatus API object. The Kubelet populates this field with the current health of each device allocated to the container.
A practical example
If a Pod is in a CrashLoopBackOff state, you can use kubectl describe pod <pod-name> to inspect its status. If an allocated device has failed, the output will now include the allocatedResourcesStatus field, clearly indicating the problem:
status:
  containerStatuses:
  - name: my-gpu-intensive-container
    # ... other container statuses
    allocatedResourcesStatus:
    - name: "claim:my-gpu-claim"
      resources:
      - resourceID: "example.com/gpu-a1b2-c3d4"
        health: "Unhealthy"
This explicit status makes it clear that the issue is with the underlying hardware, not the application.
You can now also build failure detection logic that reacts to unhealthy devices associated with a Pod, for example by de-scheduling the Pod.
How to use this feature
As this is an alpha feature in Kubernetes v1.34, you must take the following steps to use it:
- Enable the ResourceHealthStatus feature gate on your kube-apiserver and kubelets (see the sketch below).
- Ensure you are using a DRA driver that implements the v1alpha1 DRAResourceHealth gRPC service.
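For the kubelet, a minimal sketch of enabling the gate via the kubelet configuration file might look like this (on the kube-apiserver, the equivalent is the --feature-gates=ResourceHealthStatus=true command-line flag):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Alpha gate in v1.34; disabled by default
featureGates:
  ResourceHealthStatus: true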
DRA drivers
If you are developing a DRA driver, think about your device failure detection strategy and make sure your driver integrates with this feature. Doing so will improve the user experience and simplify the debugging of hardware issues.
What's next?
This is the first step in a broader effort to improve how Kubernetes handles device failures. As we gather feedback on this alpha feature, the community is planning several key enhancements before graduating to Beta:
- Detailed health messages: To improve the troubleshooting experience, we plan to add a human-readable message field to the gRPC API. This will allow DRA drivers to provide specific context for a health status, such as "GPU temperature exceeds threshold" or "NVLink connection lost".
- Configurable health timeouts: The timeout for marking a device's health as "Unknown" is currently hardcoded. We plan to make this configurable, likely on a per-driver basis, to better accommodate the different health-reporting characteristics of various hardware.
- Improved post-mortem troubleshooting: We will address a known limitation where health updates may not be applied to pods that have already terminated. This fix will ensure that the health status of a device at the time of failure is preserved, which is crucial for troubleshooting batch jobs and other "run-to-completion" workloads.
This feature was developed as part of KEP-4680, and community feedback is crucial as we work toward graduating it to Beta. More improvements to device failure handling in Kubernetes are planned; we encourage you to try this feature out and share your experiences with the SIG Node community!
Hacking Electronic Safes
Vulnerabilities in electronic safes that use Securam Prologic locks:
While both their techniques represent glaring security vulnerabilities, Omo says it’s the one that exploits a feature intended as a legitimate unlock method for locksmiths that’s the more widespread and dangerous. “This attack is something where, if you had a safe with this kind of lock, I could literally pull up the code right now with no specialized hardware, nothing,” Omo says. “All of a sudden, based on our testing, it seems like people can get into almost any Securam Prologic lock in the world.”...
Kubernetes v1.34: Moving Volume Group Snapshots to v1beta2
Volume group snapshots were introduced as an Alpha feature with the Kubernetes 1.27 release and moved to Beta in the Kubernetes 1.32 release. The recent release of Kubernetes v1.34 moved that support to a second beta. The support for volume group snapshots relies on a set of extension APIs for group snapshots. These APIs allow users to take crash-consistent snapshots for a set of volumes. Behind the scenes, Kubernetes uses a label selector to group multiple PersistentVolumeClaims for snapshotting. A key aim is to allow you to restore that set of snapshots to new volumes and recover your workload based on a crash-consistent recovery point.
This new feature is only supported for CSI volume drivers.
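For illustration, here is a minimal sketch of a VolumeGroupSnapshot that selects its member PersistentVolumeClaims by label (the names and class here are placeholders, and the schema is assumed to follow the shape documented for the group snapshot extension APIs):
apiVersion: groupsnapshot.storage.k8s.io/v1beta2
kind: VolumeGroupSnapshot
metadata:
  name: my-group-snapshot
  namespace: demo
spec:
  volumeGroupSnapshotClassName: my-group-snapshot-class  # placeholder class name
  source:
    selector:
      matchLabels:
        app: my-app  # PVCs in this namespace carrying this label are snapshotted together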
What's new in Beta 2?
While testing the beta version, we encountered an issue where the restoreSize field is not set for individual VolumeSnapshotContents and VolumeSnapshots if the CSI driver does not implement the ListSnapshots RPC call.
We evaluated various options and decided to address this by releasing a new beta of the API.
Specifically, v1beta2 adds a VolumeSnapshotInfo struct, which contains information for an individual volume snapshot that is a member of a volume group snapshot. VolumeSnapshotInfoList, a list of VolumeSnapshotInfo, is added to VolumeGroupSnapshotContentStatus, replacing VolumeSnapshotHandlePairList. The list identifies the snapshots that the CSI driver created on the storage system; it is populated by the csi-snapshotter sidecar based on the CSI CreateVolumeGroupSnapshotResponse returned by the driver's CreateVolumeGroupSnapshot call.
The existing v1beta1 API objects will be converted to the new v1beta2 API objects by a conversion webhook.
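As a rough, hypothetical sketch of the resulting status on a VolumeGroupSnapshotContent (field names are illustrative rather than authoritative; see the API definitions linked below for the real schema):
status:
  volumeSnapshotInfoList:
  - volumeHandle: vol-1      # illustrative handle on the storage system
    snapshotHandle: snap-1   # illustrative
    restoreSize: 1Gi         # now reported even when ListSnapshots is unimplemented
    readyToUse: true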
What’s next?
Depending on feedback and adoption, the Kubernetes project plans to push the volume group snapshot implementation to general availability (GA) in a future release.
How can I learn more?
- The design spec for the volume group snapshot feature.
- The code repository for volume group snapshot APIs and controller.
- CSI documentation on the group snapshot feature.
How do I get involved?
This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together. On behalf of SIG Storage, I would like to offer a huge thank you to the contributors who stepped up these last few quarters to help the project reach beta:
- Ben Swartzlander (bswartz)
- Hemant Kumar (gnufied)
- Jan Šafránek (jsafrane)
- Madhu Rajanna (Madhu-1)
- Michelle Au (msau42)
- Niels de Vos (nixpanic)
- Leonardo Cecchi (leonardoce)
- Saad Ali (saad-ali)
- Xing Yang (xing-yang)
- Yati Padia (yati1998)
For those interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system, join the Kubernetes Storage Special Interest Group (SIG). We always welcome new contributors.
We also hold regular Data Protection Working Group meetings. New attendees are welcome to join our discussions.
Self-Replicating Worm Hits 180+ Software Packages
At least 187 code packages made available through the JavaScript repository NPM have been infected with a self-replicating worm that steals credentials from developers and publishes those secrets on GitHub, experts warn. The malware, which briefly infected multiple code packages from the security vendor CrowdStrike, steals and publishes even more credentials every time an infected package is installed.

Image: https://en.wikipedia.org/wiki/Sandworm_(Dune)
The novel malware strain is being dubbed Shai-Hulud — after the name for the giant sandworms in Frank Herbert’s Dune novel series — because it publishes any stolen credentials in a new public GitHub repository that includes the name “Shai-Hulud.”
“When a developer installs a compromised package, the malware will look for a npm token in the environment,” said Charlie Eriksen, a researcher for the Belgian security firm Aikido. “If it finds it, it will modify the 20 most popular packages that the npm token has access to, copying itself into the package, and publishing a new version.”
At the center of this developing maelstrom are code libraries available on NPM (short for “Node Package Manager”), which acts as a central hub for JavaScript development and provides the latest updates to widely-used JavaScript components.
The Shai-Hulud worm emerged just days after unknown attackers launched a broad phishing campaign that spoofed NPM and asked developers to “update” their multi-factor authentication login options. That attack led to malware being inserted into at least two-dozen NPM code packages, but the outbreak was quickly contained and was narrowly focused on siphoning cryptocurrency payments.

Image: aikido.dev
In late August, another compromise of an NPM developer resulted in malware being added to “nx,” an open-source code development toolkit with as many as six million weekly downloads. In the nx compromise, the attackers introduced code that scoured the user’s device for authentication tokens from programmer destinations like GitHub and NPM, as well as SSH and API keys. But instead of sending those stolen credentials to a central server controlled by the attackers, the malicious nx code created a new public repository in the victim’s GitHub account, and published the stolen data there for all the world to see and download.
Last month’s attack on nx did not self-propagate like a worm, but this Shai-Hulud malware does and bundles reconnaissance tools to assist in its spread. Namely, it uses the open-source tool TruffleHog to search for exposed credentials and access tokens on the developer’s machine. It then attempts to create new GitHub actions and publish any stolen secrets.
“Once the first person got compromised, there was no stopping it,” Aikido’s Eriksen told KrebsOnSecurity. He said the first NPM package compromised by this worm appears to have been altered on Sept. 14, around 17:58 UTC.
The security-focused code development platform socket.dev reports the Shai-Hulud attack briefly compromised at least 25 NPM code packages managed by CrowdStrike. Socket.dev said the affected packages were quickly removed by the NPM registry.
In a written statement shared with KrebsOnSecurity, CrowdStrike said that after detecting several malicious packages in the public NPM registry, the company swiftly removed them and rotated its keys in public registries.
“These packages are not used in the Falcon sensor, the platform is not impacted and customers remain protected,” the statement reads, referring to the company’s widely-used endpoint threat detection service. “We are working with NPM and conducting a thorough investigation.”
A writeup on the attack from StepSecurity found that for cloud-specific operations, the malware enumerates AWS, Azure and Google Cloud Platform secrets. It also found the entire attack design assumes the victim is working in a Linux or macOS environment, and that it deliberately skips Windows systems.
StepSecurity said Shai-Hulud spreads by using stolen NPM authentication tokens, adding its code to the top 20 packages in the victim’s account.
“This creates a cascading effect where an infected package leads to compromised maintainer credentials, which in turn infects all other packages maintained by that user,” StepSecurity’s Ashish Kurmi wrote.
Eriksen said Shai-Hulud is still propagating, although its spread seems to have waned in recent hours.
“I still see package versions popping up once in a while, but no new packages have been compromised in the last ~6 hours,” Eriksen said. “But that could change now as the east coast starts working. I would think of this attack as a ‘living’ thing almost, like a virus. Because it can lay dormant for a while, and if just one person is suddenly infected by accident, they could restart the spread. Especially if there’s a super-spreader attack.”
For now, it appears that the web address the attackers were using to exfiltrate collected data was disabled due to rate limits, Eriksen said.
Nicholas Weaver is a researcher with the International Computer Science Institute, a nonprofit in Berkeley, Calif. Weaver called the Shai-Hulud worm “a supply chain attack that conducts a supply chain attack.” Weaver said NPM (and all other similar package repositories) need to immediately switch to a publication model that requires explicit human consent for every publication request using a phish-proof 2FA method.
“Anything less means attacks like this are going to continue and become far more common, but switching to a 2FA method would effectively throttle these attacks before they can spread,” Weaver said. “Allowing purely automated processes to update the published packages is now a proven recipe for disaster.”
Microsoft Still Uses RC4
Senator Ron Wyden has asked the Federal Trade Commission to investigate Microsoft over its continued use of the RC4 encryption algorithm. The letter talks about a hacker technique called Kerberoasting that exploits the Kerberos authentication system.
Teach Your robots.txt a New Trick (for AI)
Kubernetes v1.34: Decoupled Taint Manager Is Now Stable
This enhancement separates the responsibility of managing node lifecycle and pod eviction into two distinct components. Previously, the node lifecycle controller handled both marking nodes as unhealthy with NoExecute taints and evicting pods from them. Now, a dedicated taint eviction controller manages the eviction process, while the node lifecycle controller focuses solely on applying taints. This separation not only improves code organization but also makes it easier to improve the taint eviction controller or build custom implementations of taint-based eviction.
What's new?
The feature gate SeparateTaintEvictionController has been promoted to GA in this release.
Users can optionally disable taint-based eviction by setting --controllers=-taint-eviction-controller in kube-controller-manager.
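On a kubeadm-style control plane, one way that might look is in the kube-controller-manager static Pod manifest (a hedged sketch; the manifest path and surrounding fields are illustrative, and the leading * keeps all other default controllers enabled):
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt, illustrative)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --controllers=*,-taint-eviction-controller  # disable only taint-based eviction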
How can I learn more?
For more details, refer to the KEP and to the beta announcement article: Kubernetes 1.29: Decoupling taint manager from node lifecycle controller.
How to get involved?
We offer a huge thank you to all the contributors who helped with design, implementation, and review of this feature and helped move it from beta to stable:
- Ed Bartosh (@bart0sh)
- Yuan Chen (@yuanchen8911)
- Aldo Culquicondor (@alculquicondor)
- Baofa Fan (@carlory)
- Sergey Kanzhelev (@SergeyKanzhelev)
- Tim Bannister (@lmktfy)
- Maciej Skoczeń (@macsko)
- Maciej Szulik (@soltysh)
- Wojciech Tyczynski (@wojtek-t)
Lawsuit About WhatsApp Security
Attaullah Baig, WhatsApp’s former head of security, has filed a whistleblower lawsuit alleging that Facebook deliberately failed to fix a bunch of security flaws, in violation of its 2019 settlement agreement with the Federal Trade Commission.
The lawsuit, alleging violations of the whistleblower protection provision of the Sarbanes-Oxley Act passed in 2002, said that in 2022, roughly 100,000 WhatsApp users had their accounts hacked every day. By last year, the complaint alleged, as many as 400,000 WhatsApp users were getting locked out of their accounts each day as a result of such account takeovers...
Preparing your organization for the quantum future
Upcoming Speaking Engagements
This is a current list of where and when I am scheduled to speak:
- I’m speaking and signing books at the Cambridge Public Library on October 22, 2025 at 6 PM ET. The event is sponsored by Harvard Bookstore.
- I’m giving a virtual talk about my book Rewiring Democracy at 1 PM ET on October 23, 2025. The event is hosted by Data & Society. More details to come.
- I’m speaking at the World Forum for Democracy in Strasbourg, France, November 5-7, 2025.
- I’m speaking and signing books at the University of Toronto Bookstore in Toronto, Ontario, Canada on November 14, 2025. Details to come...
Assessing the Quality of Dried Squid
Nondestructive detection of multiple dried squid qualities by hyperspectral imaging combined with 1D-KAN-CNN
Abstract: Given that dried squid is a highly regarded marine product in Oriental countries, the global food industry requires a swift and noninvasive quality assessment of this product. The current study therefore uses visible/near-infrared (VIS-NIR) hyperspectral imaging and deep learning (DL) methodologies. We acquired and preprocessed VIS-NIR (400–1000 nm) hyperspectral reflectance images of 93 dried squid samples. Important wavelengths were selected using competitive adaptive reweighted sampling, principal component analysis, and the successive projections algorithm. Based on a Kolmogorov-Arnold network (KAN), we introduce a one-dimensional KAN convolutional neural network (1D-KAN-CNN) for nondestructive measurements of fat, protein, and total volatile basic nitrogen…
A Cyberattack Victim Notification Framework
Interesting analysis:
When cyber incidents occur, victims should be notified in a timely manner so they have the opportunity to assess and remediate any harm. However, providing notifications has proven a challenge across industry.
When making notifications, companies often do not know the true identity of victims and may only have a single email address through which to provide the notification. Victims often do not trust these notifications, as cyber criminals often use the pretext of an account compromise as a phishing lure.
[…]
This report explores the challenges associated with developing the native-notification concept and lays out a roadmap for overcoming them. It also examines other opportunities for more narrow changes that could increase the likelihood that victims will both receive and trust notifications and be able to access support resources...
Kubernetes v1.34: Autoconfiguration for Node Cgroup Driver Goes GA
Historically, configuring the correct cgroup driver has been a pain point for users running new
Kubernetes clusters. On Linux systems, there are two different cgroup drivers:
cgroupfs and systemd. In the past, both the kubelet
and the CRI implementation (such as CRI-O or containerd) needed to be configured to use
the same cgroup driver, or else the kubelet would misbehave without any explicit
error message. This was a source of headaches for many cluster admins. Now, we've
(almost) arrived at the end of that headache.
Automated cgroup driver detection
In v1.28.0, the SIG Node community introduced the feature gate
KubeletCgroupDriverFromCRI, which instructs the kubelet to ask the CRI
implementation which cgroup driver to use. You can read more here.
After many releases of waiting for each CRI implementation to have major versions released
and packaged in major operating systems, this feature has gone GA as of Kubernetes 1.34.0.
In addition to setting the feature gate, a cluster admin needs to ensure their CRI implementation is new enough:
- containerd: Support was added in v2.0.0
- CRI-O: Support was added in v1.28.0
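For reference, this is the kind of manual kubelet configuration that the automatic detection replaces; previously, the kubelet's cgroupDriver setting had to match the runtime's configured driver exactly (a minimal sketch):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# With KubeletCgroupDriverFromCRI in effect and a new-enough runtime,
# the driver reported by the CRI wins and this field no longer needs hand-tuning.
cgroupDriver: systemd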
Announcement: Kubernetes is deprecating containerd v1.y support
While CRI-O releases versions that match Kubernetes versions, and thus CRI-O versions without this behavior are no longer supported, containerd maintains its own release cycle. containerd support for this feature is only in v2.0 and later, but Kubernetes 1.34 still supports containerd 1.7 and other LTS releases of containerd.
The Kubernetes SIG Node community has formally agreed upon a final support
timeline for containerd v1.y. The last Kubernetes release series to offer this
support will be v1.35, and support will be dropped in v1.36.0. To assist
administrators in managing this future transition,
a new detection mechanism is available. You are able to monitor
the kubelet_cri_losing_support metric to determine if any nodes in your cluster
are using a containerd version that will soon be outdated. The presence of
this metric with a version label of 1.36.0 will indicate that the node's containerd
runtime is not new enough for the upcoming requirements. Consequently, an
administrator will need to upgrade containerd to v2.0 or a later version before,
or at the same time as, upgrading the kubelet to v1.36.0.
The EU Cyber Resilience Act's impact on open source security
Kubernetes v1.34: Mutable CSI Node Allocatable Graduates to Beta
The functionality for CSI drivers to update information about attachable volume count on the nodes, first introduced as Alpha in Kubernetes v1.33, has graduated to Beta in the Kubernetes v1.34 release! This marks a significant milestone in enhancing the accuracy of stateful pod scheduling by reducing failures due to outdated attachable volume capacity information.
Background
Traditionally, Kubernetes CSI drivers report a static maximum volume attachment limit when initializing. However, actual attachment capacities can change during a node's lifecycle for various reasons, such as:
- Manual or external operations attaching/detaching volumes outside of Kubernetes control.
- Dynamically attached network interfaces or specialized hardware (GPUs, NICs, etc.) consuming available slots.
- Multi-driver scenarios, where one CSI driver’s operations affect available capacity reported by another.
Static reporting can cause Kubernetes to schedule pods onto nodes that appear to have capacity but don't, leading to pods stuck in a ContainerCreating state.
Dynamically adapting CSI volume limits
With this new feature, Kubernetes enables CSI drivers to dynamically adjust and report node attachment capacities at runtime. This ensures that the scheduler, as well as other components relying on this information, have the most accurate, up-to-date view of node capacity.
How it works
Kubernetes supports two mechanisms for updating the reported node volume limits:
- Periodic Updates: CSI drivers specify an interval to periodically refresh the node's allocatable capacity.
- Reactive Updates: An immediate update triggered when a volume attachment fails due to exhausted resources (a ResourceExhausted error).
Enabling the feature
To use this beta feature, the MutableCSINodeAllocatableCount feature gate must be enabled in these components (an illustrative snippet follows the list):
- kube-apiserver
- kubelet
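A minimal sketch of the kubelet side, using the kubelet configuration file (on the kube-apiserver, the equivalent is the --feature-gates=MutableCSINodeAllocatableCount=true command-line flag):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Beta gate in v1.34; as noted above, it must be enabled explicitly
featureGates:
  MutableCSINodeAllocatableCount: true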
Example CSI driver configuration
Below is an example of configuring a CSI driver to enable periodic updates every 60 seconds:
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.k8s.io
spec:
  nodeAllocatableUpdatePeriodSeconds: 60
This configuration directs the kubelet to call the CSI driver's NodeGetInfo method every 60 seconds, refreshing the node's allocatable volume count. Kubernetes enforces a minimum update interval of 10 seconds to balance accuracy and resource usage.
Immediate updates on attachment failures
When a volume attachment operation fails due to a ResourceExhausted error (gRPC code 8), Kubernetes immediately updates the allocatable count instead of waiting for the next periodic update. The Kubelet then marks the affected pods as Failed, enabling their controllers to recreate them. This prevents pods from getting permanently stuck in the ContainerCreating state.
Getting started
To enable this feature in your Kubernetes v1.34 cluster:
- Enable the MutableCSINodeAllocatableCount feature gate on the kube-apiserver and kubelet components.
- Update your CSI driver configuration by setting nodeAllocatableUpdatePeriodSeconds.
- Monitor and observe improvements in scheduling accuracy and pod placement reliability.
Next steps
This feature is currently in beta and the Kubernetes community welcomes your feedback. Test it, share your experiences, and help guide its evolution to GA stability.
Join discussions in the Kubernetes Storage Special Interest Group (SIG-Storage) to shape the future of Kubernetes storage capabilities.
Bulletproof Host Stark Industries Evades EU Sanctions
In May 2025, the European Union levied financial sanctions on the owners of Stark Industries Solutions Ltd., a bulletproof hosting provider that materialized two weeks before Russia invaded Ukraine and quickly became a top source of Kremlin-linked cyberattacks and disinformation campaigns. But new findings show those sanctions have done little to stop Stark from simply rebranding and transferring their assets to other corporate entities controlled by its original hosting providers.

Image: Shutterstock.
Materializing just two weeks before Russia invaded Ukraine in 2022, Stark Industries Solutions became a frequent source of massive DDoS attacks, Russian-language proxy and VPN services, malware tied to Russia-backed hacking groups, and fake news. ISPs like Stark are called “bulletproof” providers when they cultivate a reputation for ignoring any abuse complaints or police inquiries about activity on their networks.
In May 2025, the European Union sanctioned one of Stark’s two main conduits to the larger Internet — Moldova-based PQ Hosting — as well as the company’s Moldovan owners Yuri and Ivan Neculiti. The EU Commission said the Neculiti brothers and PQ Hosting were linked to Russia’s hybrid warfare efforts.
But a new report from Recorded Future finds that just prior to the sanctions being announced, Stark rebranded to the[.]hosting, under control of the Dutch entity WorkTitans BV (AS209847) on June 24, 2025. The Neculiti brothers reportedly got a heads up roughly 12 days before the sanctions were announced, when Moldovan and EU media reported on the forthcoming inclusion of the Neculiti brothers in the sanctions package.
In response, the Neculiti brothers moved much of Stark’s considerable address space and other resources over to a new company in Moldova called PQ Hosting Plus S.R.L., an entity reportedly connected to the Neculiti brothers thanks to the re-use of a phone number from the original PQ Hosting.
“Although the majority of associated infrastructure remains attributable to Stark Industries, these changes likely reflect an attempt to obfuscate ownership and sustain hosting services under new legal and network entities,” Recorded Future observed.
Neither the Recorded Future report nor the May 2025 sanctions from the EU mentioned a second critical pillar of Stark’s network that KrebsOnSecurity identified in a May 2024 profile on the notorious bulletproof hoster: The Netherlands-based hosting provider MIRhosting.
MIRhosting is operated by 38-year old Andrey Nesterenko, whose personal website says he is an accomplished concert pianist who began performing publicly at a young age. DomainTools says mirhosting[.]com is registered to Mr. Nesterenko and to Innovation IT Solutions Corp, which lists addresses in London and in Nesterenko’s stated hometown of Nizhny Novgorod, Russia.

Image credit: correctiv.org.
According to the book Inside Cyber Warfare by Jeffrey Carr, Innovation IT Solutions Corp. was responsible for hosting StopGeorgia[.]ru, a hacktivist website for organizing cyberattacks against Georgia that appeared at the same time Russian forces invaded the former Soviet nation in 2008. That conflict was thought to be the first war ever fought in which a notable cyberattack and an actual military engagement happened simultaneously.
Mr. Nesterenko did not respond to requests for comment. In May 2024, Mr. Nesterenko said he couldn’t verify whether StopGeorgia was ever a customer because they didn’t keep records going back that far. But he maintained that Stark Industries Solutions was merely one client of many, and claimed MIRhosting had not received any actionable complaints about abuse on Stark.
However, it appears that MIRhosting is once again the new home of Stark Industries, and that MIRhosting employees are managing both the[.]hosting and WorkTitans — the primary beneficiaries of Stark’s assets.
A copy of the incorporation documents for WorkTitans BV obtained from the Dutch Chamber of Commerce shows WorkTitans also does business under the names Misfits Media and WT Hosting (considering Stark’s historical connection to Russian disinformation websites, “Misfits Media” is a bit on the nose).

An incorporation document for WorkTitans B.V. from the Netherlands Chamber of Commerce.
The incorporation document says the company was formed in 2019 by a [email protected]. That email address corresponds to a LinkedIn account for a Youssef Zinad, who says their personal websites are worktitans[.]nl and custom-solution[.]nl. The profile also links to a website (etripleasims dot nl) that LinkedIn currently blocks as malicious. All of these websites are or were hosted at MIRhosting.
Although Mr. Zinad’s LinkedIn profile does not mention any employment at MIRhosting, virtually all of his LinkedIn posts over the past year have been reposts of advertisements for MIRhosting’s services.

Mr. Zinad’s LinkedIn profile is full of posts for MIRhosting’s services.
A Google search for Youssef Zinad reveals multiple startup-tracking websites that list him as the founder of the[.]hosting, which censys.io finds is hosted by PQ Hosting Plus S.R.L.
The Dutch Chamber of Commerce document says WorkTitans’ sole shareholder is a company in Almere, Netherlands called Fezzy B.V. Who runs Fezzy? The phone number listed in a Google search for Fezzy B.V. — 31651079755 — also was used to register a Facebook profile for a Youssef Zinad from the same town, according to the breach tracking service Constella Intelligence.
In a series of email exchanges leading up to KrebsOnSecurity’s May 2024 deep dive on Stark, Mr. Nesterenko included Mr. Zinad in the message thread ([email protected]), referring to him as part of the company’s legal team. The Dutch website stagemarkt[.]nl lists Youssef Zinad as an official contact for MIRhosting’s offices in Almere. Mr. Zinad did not respond to requests for comment.

Given the above, it is difficult to argue with the Recorded Future report on Stark’s rebranding, which concluded that “the EU’s sanctioning of Stark Industries was largely ineffective, as affiliated infrastructure remained operational and services were rapidly re-established under new branding, with no significant or lasting disruption.”
Navigating AI risk: Building a trusted foundation with Red Hat
Kubernetes v1.34: Use An Init Container To Define App Environment Variables
Kubernetes typically uses ConfigMaps and Secrets to set environment variables, which introduces additional API calls and complexity. For example, you need to separately manage the Pods of your workloads and their configurations, while ensuring orderly updates for both the configurations and the workload Pods.
Alternatively, you might be using a vendor-supplied container that requires environment variables (such as a license key or a one-time token), but you don’t want to hard-code them or mount volumes just to get the job done.
If that's the situation you are in, you now have a new (alpha) way to
achieve that. Provided you have the EnvFiles
feature gate
enabled across your cluster, you can tell the kubelet to load a container's
environment variables from a volume (the volume must be part of the Pod that
the container belongs to).
The kubelet reads environment variables directly from a file in an emptyDir volume,
without actually mounting that file into the container.
It’s a simple yet elegant solution to some surprisingly common problems.
What’s this all about?
At its core, this feature allows you to point your container to a file,
one generated by an initContainer,
and have Kubernetes parse that file to set your environment variables.
The file lives in an emptyDir volume (temporary storage that lasts as long as the Pod does).
Your main container doesn’t need to mount the volume;
the kubelet reads the file and injects the variables when the container starts.
How it works
Here's a simple example:
apiVersion: v1
kind: Pod
spec:
  initContainers:
  - name: generate-config
    image: busybox
    command: ['sh', '-c', 'echo "CONFIG_VAR=HELLO" > /config/config.env']
    volumeMounts:
    - name: config-volume
      mountPath: /config
  containers:
  - name: app-container
    image: gcr.io/distroless/static
    env:
    - name: CONFIG_VAR
      valueFrom:
        fileKeyRef:
          path: config.env
          volumeName: config-volume
          key: CONFIG_VAR
  volumes:
  - name: config-volume
    emptyDir: {}
Using this approach is a breeze.
You define your environment variables in the pod spec using the fileKeyRef field,
which tells Kubernetes where to find the file and which key to pull.
The file itself follows the familiar .env syntax (think KEY=VALUE),
and (for this alpha stage at least) you must ensure that it is written into
an emptyDir volume. Other volume types aren't supported for this feature.
At least one init container must mount that emptyDir volume (to write the file),
but the main container doesn’t need to—it just gets the variables handed to it at startup.
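For reference, the file the init container writes is plain KEY=VALUE text; something like the following (the second key is purely hypothetical) would expose two variables:
# contents of /config/config.env (illustrative)
CONFIG_VAR=HELLO
LICENSE_KEY=abc-123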
A word on security
While this feature supports handling sensitive data such as keys or tokens,
note that its implementation relies on emptyDir volumes mounted into the Pod.
Operators with node filesystem access could therefore
easily retrieve this sensitive data through pod directory paths.
If you store sensitive data like keys or tokens using this feature, ensure your cluster security policies effectively protect nodes against unauthorized access, to prevent exposure of confidential information.
Summary
This feature will eliminate a number of complex workarounds used today, simplifying application authoring and opening the door to more use cases. Kubernetes stays flexible and open to feedback; tell us how you use this feature or what is missing.
DDoS in August
Microsoft Patch Tuesday, September 2025 Edition
Microsoft Corp. today issued security updates to fix more than 80 vulnerabilities in its Windows operating systems and software. There are no known “zero-day” or actively exploited vulnerabilities in this month’s bundle from Redmond, which nevertheless includes patches for 13 flaws that earned Microsoft’s most-dire “critical” label. Meanwhile, both Apple and Google recently released updates to fix zero-day bugs in their devices.

Microsoft assigns security flaws a “critical” rating when malware or miscreants can exploit them to gain remote access to a Windows system with little or no help from users. Among the more concerning critical bugs quashed this month is CVE-2025-54918. The problem here resides with Windows NTLM, or NT LAN Manager, a suite of code for managing authentication in a Windows network environment.
Redmond rates this flaw as “Exploitation More Likely,” and although it is listed as a privilege escalation vulnerability, Kev Breen at Immersive says this one is actually exploitable over the network or the Internet.
“From Microsoft’s limited description, it appears that if an attacker is able to send specially crafted packets over the network to the target device, they would have the ability to gain SYSTEM-level privileges on the target machine,” Breen said. “The patch notes for this vulnerability state that ‘Improper authentication in Windows NTLM allows an authorized attacker to elevate privileges over a network,’ suggesting an attacker may already need to have access to the NTLM hash or the user’s credentials.”
Breen said another patch — CVE-2025-55234, an 8.8 CVSS-scored flaw affecting the Windows SMB client for sharing files across a network — also is listed as a privilege escalation bug but is likewise remotely exploitable. This vulnerability was publicly disclosed prior to this month's patch release.
“Microsoft says that an attacker with network access would be able to perform a replay attack against a target host, which could result in the attacker gaining additional privileges, which could lead to code execution,” Breen noted.
CVE-2025-54916 is an “important” vulnerability in Windows NTFS — the default filesystem for all modern versions of Windows — that can lead to remote code execution. Microsoft likewise thinks we are more than likely to see exploitation of this bug soon: The last time Microsoft patched an NTFS bug was in March 2025 and it was already being exploited in the wild as a zero-day.
“While the title of the CVE says ‘Remote Code Execution,’ this exploit is not remotely exploitable over the network, but instead needs an attacker to either have the ability to run code on the host or to convince a user to run a file that would trigger the exploit,” Breen said. “This is commonly seen in social engineering attacks, where they send the user a file to open as an attachment or a link to a file to download and run.”
Critical and remote code execution bugs tend to steal all the limelight, but Tenable Senior Staff Research Engineer Satnam Narang notes that nearly half of all vulnerabilities fixed by Microsoft this month are privilege escalation flaws that require an attacker to have gained access to a target system first before attempting to elevate privileges.
“For the third time this year, Microsoft patched more elevation of privilege vulnerabilities than remote code execution flaws,” Narang observed.
On Sept. 3, Google fixed two flaws that were detected as exploited in zero-day attacks, including CVE-2025-38352, an elevation of privilege in the Android kernel, and CVE-2025-48543, also an elevation of privilege problem in the Android Runtime component.
Also, Apple recently patched its seventh zero-day (CVE-2025-43300) of this year. It was part of an exploit chain used along with a vulnerability (CVE-2025-55177) in the WhatsApp instant messenger to hack Apple devices. Amnesty International reports that the two zero-days have been used in “an advanced spyware campaign” over the past 90 days. The issue is fixed in iOS 18.6.2, iPadOS 18.6.2, iPadOS 17.7.10, macOS Sequoia 15.6.1, macOS Sonoma 14.7.8, and macOS Ventura 13.7.8.
The SANS Internet Storm Center has a clickable breakdown of each individual fix from Microsoft, indexed by severity and CVSS score. Enterprise Windows admins involved in testing patches before rolling them out should keep an eye on askwoody.com, which often has the skinny on wonky updates.
AskWoody also reminds us that we’re now just two months out from Microsoft discontinuing free security updates for Windows 10 computers. For those interested in safely extending the lifespan and usefulness of these older machines, check out last month’s Patch Tuesday coverage for a few pointers.
As ever, please don’t neglect to back up your data (if not your entire system) at regular intervals, and feel free to sound off in the comments if you experience problems installing any of these fixes.