Rook v1.19 Storage Enhancements

The Rook v1.19 release is out! v1.19 is another feature-filled release to improve storage for Kubernetes. Thanks again to the community for all the great support in this journey to deploy storage in production.
The statistics continue to show that Rook is widely used in the community, with over 13.3K GitHub stars and steadily growing numbers of Slack members and X followers.
If your organization deploys Rook in production, we would love to hear about it. Please see the Adopters page to add your submission. As an upstream project, we don’t track our users, but we appreciate the transparency of those who are deploying Rook!
We have a lot of new features for the Ceph storage provider that we hope you’ll be excited about with the v1.19 release!
NVMe-oF Gateway
NVMe over Fabrics allows RBD volumes to be exposed and accessed via the NVMe/TCP protocol. This enables both Kubernetes pods within the cluster and external clients outside the cluster to connect to Ceph block storage using standard NVMe-oF initiators, providing high-performance block storage access over the network.
NVMe-oF is supported by Ceph starting with the recent Ceph Tentacle release. The initial integration with Rook is now complete and is available in experimental mode, which means it is not production ready and is intended for testing only. As a large new feature, it will take some time before we declare it stable. Please test out the feature and let us know your feedback!
See the NVMe-oF Configuration Guide to get started.
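To give a sense of the configuration shape, here is a minimal sketch of exposing an RBD pool over NVMe/TCP. The resource kind and field names here are assumptions for illustration only; the actual schema is defined in the NVMe-oF Configuration Guide.

```yaml
# Hypothetical sketch only: the resource kind and fields are assumed,
# not confirmed by this post; see the NVMe-oF Configuration Guide
# for the real schema.
apiVersion: ceph.rook.io/v1
kind: CephNVMeOFGateway        # assumed kind name
metadata:
  name: my-gateway
  namespace: rook-ceph
spec:
  pool: replicapool            # RBD pool to expose over NVMe/TCP
  instances: 1                 # number of gateway daemons
```

Pods inside the cluster and external clients would then connect with standard NVMe-oF initiators over TCP.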
Ceph CSI 3.16
The v3.16 release of Ceph CSI has a range of features and improvements for the RBD, CephFS, and NFS drivers. As in v1.18, this release is supported both by the Ceph CSI operator and by Rook's direct mode of configuration. The Ceph CSI operator is still configured automatically by Rook. We are targeting v1.20 to fully document the Ceph CSI operator configuration.
In this release, new Ceph CSI features include:
- NVMe-oF CSI driver for provisioning and mounting volumes over the NVMe over Fabrics protocol
- Improved fencing for RBD and CephFS volumes during node failure
- Block volume usage statistics
- Configurable block encryption cipher (see the sketch after this list)
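For the cipher setting, here is a minimal sketch of an encrypted RBD StorageClass. The encrypted and encryptionKMSID parameters are existing Ceph CSI settings; the encryptionCipher parameter name is an assumption based on the feature description, so check the Ceph CSI v3.16 documentation for the actual name and accepted values.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block-encrypted
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  csi.storage.k8s.io/fstype: ext4
  encrypted: "true"
  encryptionKMSID: my-kms              # a KMS connection configured for Ceph CSI
  encryptionCipher: aes-xts-plain64    # assumed parameter name for the new cipher setting
  # (CSI provisioner/node secret references omitted for brevity)
reclaimPolicy: Delete
```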
Concurrent Cluster Reconciles
Prior to this release, when multiple Ceph clusters were configured in the same Kubernetes cluster, Rook reconciled them serially. If one cluster had health issues, it would block all subsequent clusters from being reconciled.
To improve the reconciliation of multiple clusters, Rook now enables clusters to be reconciled concurrently. Concurrency is enabled by increasing the operator setting ROOK_RECONCILE_CONCURRENT_CLUSTERS (in operator.yaml, or the Helm setting reconcileConcurrentClusters) to a value greater than 1. If resource requests and limits are set on the operator, they may need to be increased to accommodate the concurrent reconciles.
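Enabling the concurrency is a one-line settings change. A minimal sketch, assuming the setting is placed alongside the other operator settings in the rook-ceph-operator-config ConfigMap from operator.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  # Reconcile up to 3 CephClusters at the same time;
  # any value greater than 1 enables concurrency
  ROOK_RECONCILE_CONCURRENT_CLUSTERS: "3"
```

With Helm, the equivalent is setting reconcileConcurrentClusters to the same value in the operator chart's values.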
While this is a relatively small change, concurrency is difficult to test, so to be conservative we have marked this feature experimental. Please let us know if the concurrency works smoothly for you, or report any issues!
When clusters are reconciled concurrently, the Rook operator log will intermingle entries from all the clusters in progress. To improve troubleshooting, we have updated many of the log entries to include the namespace and/or cluster name.
Breaking changes
There are a few minor changes to be aware of during upgrades.
CephFS
- The behavior of the activeStandby property in the CephFilesystem CRD has changed. When set to false, the standby MDS daemon deployment will be scaled down and removed, rather than only disabling the standby cache while the daemon remains running. See the example below.
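For example, with the following CephFilesystem (fields from the standard Rook CRD), the standby MDS deployment is now removed entirely instead of being left running with its standby cache disabled:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: false   # in v1.19 this now scales down and removes the standby MDS deployment
```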
Helm
- The rook-ceph-cluster chart has changed where the Ceph image is defined, to allow separate settings for the repository and tag. See the example values.yaml for the new repository and tag settings. If you were previously specifying the Ceph image in the cephClusterSpec, remove it at upgrade time and specify the new properties instead, as sketched after this list.
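A rough sketch of the change in the rook-ceph-cluster chart's values: the old cephClusterSpec image location is the chart's existing convention, while the exact new key names are an assumption, so confirm them against the example values.yaml.

```yaml
# Old style: remove this from your values at upgrade time
# cephClusterSpec:
#   cephVersion:
#     image: quay.io/ceph/ceph:v19.2.2

# New style (key names assumed; see the example values.yaml):
cephImage:
  repository: quay.io/ceph/ceph
  tag: v19.2.2
```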
External Clusters
- In external mode, if you specify a Ceph admin keyring (not the default recommendation), Rook will no longer create CSI Ceph clients automatically. The CSI client keyrings will only be created by the external Python script. This removes the duplication of the Python script and the operator both creating the same users.
Versions
Supported Ceph Versions
Rook v1.19 has removed support for Ceph Reef v18 since it has reached end of life. If you are still running Reef, upgrade to at least Ceph Squid v19 before upgrading to Rook v1.19.
Ceph Squid and Ceph Tentacle are the supported versions with Rook v1.19.
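If you are still on Reef, point the CephCluster at a Squid image before upgrading Rook. A minimal sketch (the tag shown is illustrative; use a current Squid release):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v19.2.2  # Ceph Squid; Squid v19 and Tentacle are supported in v1.19
```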
Kubernetes v1.30 — v1.35
Kubernetes v1.30 is now the minimum version supported by Rook, through the latest K8s release v1.35. Rook CI runs tests against these versions to ensure there are no issues as Kubernetes is updated. If you still require an older K8s version, we haven't done anything to prevent Rook from running there; we simply do not have test validation on older versions.
What’s Next?
As we continue the journey to develop reliable storage operators for Kubernetes, we look forward to your ongoing feedback. Only with the community is it possible to continue this fantastic momentum.
There are many different ways to get involved in the Rook project, whether as a user or developer. Please join us in helping the project continue to grow on its way beyond the v1.19 milestone!