Feed aggregator
LLM-Assisted Deanonymization
Turns out that LLMs are good at de-anonymization:
We show that LLM agents can figure out who you are from your anonymous online posts. Across Hacker News, Reddit, LinkedIn, and anonymized interview transcripts, our method identifies users with high precision and scales to tens of thousands of candidates.
While it has been known that individuals can be uniquely identified by surprisingly few attributes, exploiting this was often impractical: data is often available only in unstructured form, and deanonymization used to require human investigators to search and reason based on clues. We show that from a handful of comments, LLMs can infer where you live, what you do, and your interests—then search for you on the web. In our new research, we show that this is not only possible but increasingly practical...
AI trust through open collaboration: A new chapter for responsible innovation
Who is the Kimwolf Botmaster “Dort”?
In early January 2026, KrebsOnSecurity revealed how a security researcher disclosed a vulnerability that was used to build Kimwolf, the world’s largest and most disruptive botnet. Since then, the person in control of Kimwolf — who goes by the handle “Dort” — has coordinated a barrage of distributed denial-of-service (DDoS), doxing and email flooding attacks against the researcher and this author, and more recently caused a SWAT team to be sent to the researcher’s home. This post examines what is knowable about Dort based on public information.
A public “dox” created in 2020 asserted Dort was a teenager from Canada (DOB August 2003) who used the aliases “CPacket” and “M1ce.” A search on the username CPacket at the open source intelligence platform OSINT Industries finds a GitHub account under the names Dort and CPacket that was created in 2017 using the email address [email protected].

Image: osint.industries.
The cyber intelligence firm Intel 471 says [email protected] was used between 2015 and 2019 to create accounts at multiple cybercrime forums, including Nulled (username “Uubuntuu”) and Cracked (user “Dorted”); Intel 471 reports that both of these accounts were created from the same Internet address at Rogers Canada (99.241.112.24).
Dort was an extremely active player in the Microsoft game Minecraft who gained notoriety for their “Dortware” software that helped players cheat. But somewhere along the way, Dort graduated from hacking Minecraft games to enabling far more serious crimes.
Dort also used the nickname DortDev, an identity that was active in March 2022 on the chat server for the prolific cybercrime group known as LAPSUS$. Dort peddled a service for registering temporary email addresses, as well as “Dortsolver,” code that could bypass various CAPTCHA services designed to prevent automated account abuse. Both of these offerings were advertised in 2022 on SIM Land, a Telegram channel dedicated to SIM-swapping and account takeover activity.
The cyber intelligence firm Flashpoint indexed 2022 posts on SIM Land by Dort that show this person developed the disposable email and CAPTCHA bypass services with the help of another hacker who went by the handle “Qoft.”
“I legit just work with Jacob,” Qoft said in 2022 in reply to another user, referring to their exclusive business partner Dort. In the same conversation, Qoft bragged that the two had stolen more than $250,000 worth of Microsoft Xbox Game Pass accounts by developing a program that mass-created Game Pass identities using stolen payment card data.
Who is the Jacob that Qoft referred to as their business partner? The breach tracking service Constella Intelligence finds the password used by [email protected] was reused by just one other email address: [email protected]. Recall that the 2020 dox of Dort said their date of birth was August 2003 (8/03).
Searching this email address at DomainTools.com reveals it was used in 2015 to register several Minecraft-themed domains, all assigned to a Jacob Butler in Ottawa, Canada and to the Ottawa phone number 613-909-9727.
Constella Intelligence finds [email protected] was used to register an account on the hacker forum Nulled in 2016, as well as the account name “M1CE” on Minecraft. Pivoting off the password used by their Nulled account shows it was shared by the email addresses [email protected] and [email protected], the latter being an address at a domain for the Ottawa-Carleton District School Board.
Data indexed by the breach tracking service Spycloud suggests that at one point Jacob Butler shared a computer with his mother and a sibling, which might explain why their email accounts were connected to the password “jacobsplugs.” Neither Jacob nor any of the other Butler household members responded to requests for comment.
The open source intelligence service Epieos finds [email protected] created the GitHub account “MemeClient.” Meanwhile, Flashpoint indexed a deleted anonymous Pastebin.com post from 2017 declaring that MemeClient was the creation of a user named CPacket — one of Dort’s early monikers.
Why is Dort so mad? On January 2, KrebsOnSecurity published The Kimwolf Botnet is Stalking Your Local Network, which explored research into the botnet by Benjamin Brundage, founder of the proxy tracking service Synthient. Brundage figured out that the Kimwolf botmasters were exploiting a little-known weakness in residential proxy services to infect poorly-defended devices — like TV boxes and digital photo frames — plugged into the internal, private networks of proxy endpoints.
By the time that story went live, most of the vulnerable proxy providers had been notified by Brundage and had fixed the weaknesses in their systems. That vulnerability remediation process massively slowed Kimwolf’s ability to spread, and within hours of the story’s publication Dort created a Discord server in my name that began publishing personal information about and violent threats against Brundage, Yours Truly, and others.

Dort and friends incriminating themselves by planning swatting attacks in a public Discord server.
Last week, Dort and friends used that same Discord server (then named “Krebs’s Koinbase Kallers”) to threaten a swatting attack against Brundage, again posting his home address and personal information. Brundage told KrebsOnSecurity that local police officers subsequently visited his home in response to a swatting hoax which occurred around the same time that another member of the server posted a door emoji and taunted Brundage further.

Dort, using the alias “Meow,” taunts Synthient founder Ben Brundage with a picture of a door.
Someone on the server then linked to a cringeworthy (and NSFW) new Soundcloud diss track recorded by the user DortDev that included a stickied message from Dort saying, “Ur dead nigga. u better watch ur fucking back. sleep with one eye open. bitch.”
“It’s a pretty hefty penny for a new front door,” the diss track intoned. “If his head doesn’t get blown off by SWAT officers. What’s it like not having a front door?”
With any luck, Dort will soon be able to tell us all exactly what it’s like.
Update, 10:29 a.m.: Jacob Butler responded to requests for comment, speaking with KrebsOnSecurity briefly via telephone. Butler said he didn’t notice earlier requests for comment because he hasn’t really been online since 2021, after his home was swatted multiple times. He acknowledged making and distributing a Minecraft cheat long ago, but said he hasn’t played the game in years and was not involved in Dortsolver or any other activity attributed to the Dort nickname after 2021.
“It was a really old cheat and I don’t remember the name of it,” Butler said of his Minecraft modification. “I’m very stressed, man. I don’t know if people are going to swat me again or what. After that, I pretty much walked away from everything, logged off and said fuck that. I don’t go online anymore. I don’t know why people would still be going after me, to be completely honest.”
When asked what he does for a living, Butler said he mostly stays home and helps his mom around the house because he struggles with autism and social interaction. He maintains that someone must have compromised one or more of his old accounts and is impersonating him online as Dort.
“Someone is actually probably impersonating me, and now I’m really worried,” Butler said. “This is making me relive everything.”
But there are issues with Butler’s timeline. For example, Jacob’s voice in our phone conversation was remarkably similar to the Jacob/Dort whose voice can be heard in this Sept. 2022 Clash of Code competition between Dort and another coder (Dort lost). At around 6 minutes and 10 seconds into the recording, Dort launches into a cursing tirade that mirrors the stream of profanity in the diss rap that Dortdev posted threatening Brundage. Dort can be heard again at around 16 minutes; at around 26:00, Dort threatens to swat his opponent.
Butler said the voice of Dort is not his, exactly, but rather that of an impersonator who had likely cloned his voice.
“I would like to clarify that was absolutely not me,” Butler said. “There must be someone using a voice changer. Or something of the sorts. Because people were cloning my voice before and sending audio clips of ‘me’ saying outrageous stuff.”
Friday Squid Blogging: Squid Fishing in Peru
Peru has increased its squid catch limit. The article says “giant squid,” but they can’t possibly mean that.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Before You Migrate: Five Surprising Ingress-NGINX Behaviors You Need to Know
As announced in November 2025, Kubernetes will retire Ingress-NGINX in March 2026. Despite its widespread usage, Ingress-NGINX is full of surprising defaults and side effects that are probably present in your cluster today. This blog highlights these behaviors so that you can migrate away safely and make a conscious decision about which behaviors to keep. This post also compares Ingress-NGINX with Gateway API and shows you how to preserve Ingress-NGINX behavior in Gateway API. The recurring risk pattern in every section is the same: a seemingly correct translation can still cause outages if it does not consider Ingress-NGINX's quirks.
I'm going to assume that you, the reader, have some familiarity with Ingress-NGINX and the Ingress API.
Most examples use httpbin as the backend.
Also, note that Ingress-NGINX and NGINX Ingress are two separate Ingress controllers. Ingress-NGINX is an Ingress controller maintained and governed by the Kubernetes community that is retiring March 2026. NGINX Ingress is an Ingress controller by F5. Both use NGINX as the dataplane, but are otherwise unrelated. From now on, this blog post only discusses Ingress-NGINX.
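If you want to reproduce the examples and don't already have httpbin running, the following is a minimal sketch of a suitable Deployment and Service. The image (kennethreitz/httpbin, which listens on port 80) is my assumption and not something the original examples specify; the only thing the examples rely on is a Service named httpbin exposing port 8000.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
      - name: httpbin
        image: kennethreitz/httpbin # assumed image; serves HTTP on port 80
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
  - name: http
    port: 8000 # the backend port referenced throughout this post
    targetPort: 80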
1. Regex matches are prefix-based and case insensitive
Suppose that you wanted to route all requests with a path consisting of only three uppercase letters to the httpbin service.
You might create the following Ingress with the nginx.ingress.kubernetes.io/use-regex: "true" annotation and the regex pattern of /[A-Z]{3}.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: regex-match-ingress
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
ingressClassName: nginx
rules:
- host: regex-match.example.com
http:
paths:
- path: "/[A-Z]{3}"
pathType: ImplementationSpecific
backend:
service:
name: httpbin
port:
number: 8000
However, because regex matches are prefix-based and case-insensitive, Ingress-NGINX routes any request whose path starts with any three letters to httpbin:
curl -sS -H "Host: regex-match.example.com" http://<your-ingress-ip>/uuid
The output is similar to:
{
"uuid": "e55ef929-25a0-49e9-9175-1b6e87f40af7"
}
Note: The /uuid endpoint of httpbin returns a random UUID.
A UUID in the response body means that the request was successfully routed to httpbin.
With Gateway API, you can use an HTTP path match with a type of RegularExpression for regular expression path matching.
RegularExpression matches are implementation specific, so check with your Gateway API implementation to verify the semantics of RegularExpression matching.
Popular Envoy-based Gateway API implementations such as Istio¹, Envoy Gateway, and Kgateway do a full, case-sensitive match.
Thus, if you are unaware that Ingress-NGINX patterns are prefix-based and case-insensitive, and, unbeknownst to you, clients or applications send traffic to /uuid (or /uuid/some/other/path), you might create the following HTTP route.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: regex-match-route
spec:
hostnames:
- regex-match.example.com
parentRefs:
- name: <your gateway> # Change this depending on your use case
rules:
- matches:
- path:
type: RegularExpression
value: "/[A-Z]{3}"
backendRefs:
- name: httpbin
port: 8000
However, if your Gateway API implementation does full case-sensitive matches,
the above HTTP route would not match a request with a path of /uuid.
The above HTTP route would thus cause an outage because requests
that Ingress-NGINX routed to httpbin would fail with a 404 Not Found at the gateway.
To preserve the case-insensitive regex matching, you can use the following HTTP route.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: regex-match-route
spec:
hostnames:
- regex-match.example.com
parentRefs:
- name: <your gateway> # Change this depending on your use case
rules:
- matches:
- path:
type: RegularExpression
value: "/[a-zA-Z]{3}.*"
backendRefs:
- name: httpbin
port: 8000
Alternatively, the aforementioned proxies support the (?i) flag to indicate case-insensitive matches.
Using the flag, the pattern could be (?i)/[a-z]{3}.*.
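For example, keeping the rest of the regex-match-route shown above unchanged, the rule could look like this (a minimal sketch; only the pattern differs):
- matches:
  - path:
      type: RegularExpression
      value: "(?i)/[a-z]{3}.*" # (?i) makes the match case-insensitive; .* preserves the prefix behavior
  backendRefs:
  - name: httpbin
    port: 8000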
2. The nginx.ingress.kubernetes.io/use-regex annotation applies to all paths of a host across all (Ingress-NGINX) Ingresses
Now, suppose that you have an Ingress with the nginx.ingress.kubernetes.io/use-regex: "true" annotation, but you want to route
requests with a path of exactly /headers to httpbin.
Unfortunately, you made a typo and set the path to /Header instead of /headers.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: regex-match-ingress
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
ingressClassName: nginx
rules:
- host: regex-match.example.com
http:
paths:
- path: "<some regex pattern>"
pathType: ImplementationSpecific
backend:
service:
name: <your backend>
port:
number: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: regex-match-ingress-other
spec:
ingressClassName: nginx
rules:
- host: regex-match.example.com
http:
paths:
- path: "/Header" # typo here, should be /headers
pathType: Exact
backend:
service:
name: httpbin
port:
number: 8000
Most would expect a request to /headers to respond with a 404 Not Found, since /headers does not match the Exact path of /Header.
However, because the regex-match-ingress Ingress has the nginx.ingress.kubernetes.io/use-regex: "true" annotation and the regex-match.example.com host,
all paths with the regex-match.example.com host are treated as regular expressions across all (Ingress-NGINX) Ingresses.
Since regex patterns are case-insensitive prefix matches, /headers matches the /Header pattern and Ingress-NGINX routes such requests to httpbin.
Running the command
curl -sS -H "Host: regex-match.example.com" http://<your-ingress-ip>/headers
the output looks like:
{
"headers": {
...
}
}
Note: The /headers endpoint of httpbin returns the request headers.
The fact that the response contains the request headers in the body means that the request was successfully routed to httpbin.
Gateway API does not silently convert or interpret Exact and Prefix matches as regex patterns.
So if you converted the above Ingresses into the following HTTP route and
preserved the typo and match types, requests to /headers will respond with a 404 Not Found instead of a 200 OK.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: regex-match-route
spec:
hostnames:
- regex-match.example.com
rules:
...
- matches:
- path:
type: Exact
value: "/Header"
backendRefs:
- name: httpbin
port: 8000
To keep the case-insensitive prefix matching, you can change
- matches:
- path:
type: Exact
value: "/Header"
to
- matches:
- path:
type: RegularExpression
value: "(?i)/Header"
Or even better, you could fix the typo and change the match to
- matches:
- path:
type: Exact
value: "/headers"
3. Rewrite target implies regex
In this case, suppose you want to rewrite the path of requests with a path of /ip to /uuid before routing them to httpbin, and
as in Section 2, you want to route requests with the path of exactly /headers to httpbin.
However, you accidentally make typos and set the paths to /IP instead of /ip and /Header instead of /headers.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rewrite-target-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: "/uuid"
spec:
ingressClassName: nginx
rules:
- host: rewrite-target.example.com
http:
paths:
- path: "/IP"
pathType: Exact
backend:
service:
name: httpbin
port:
number: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rewrite-target-ingress-other
spec:
ingressClassName: nginx
rules:
- host: rewrite-target.example.com
http:
paths:
- path: "/Header"
pathType: Exact
backend:
service:
name: httpbin
port:
number: 8000
The nginx.ingress.kubernetes.io/rewrite-target: "/uuid" annotation
causes requests that match paths in the rewrite-target-ingress Ingress to have their paths rewritten to /uuid before being routed to the backend.
Even though no Ingress has the nginx.ingress.kubernetes.io/use-regex: "true" annotation,
the presence of the nginx.ingress.kubernetes.io/rewrite-target annotation in the rewrite-target-ingress Ingress causes all paths with the rewrite-target.example.com host to be treated as regex patterns.
In other words, the nginx.ingress.kubernetes.io/rewrite-target annotation silently adds the nginx.ingress.kubernetes.io/use-regex: "true" annotation, along with all the side effects discussed above.
For example, a request to /ip has its path rewritten to /uuid because /ip matches the case-insensitive prefix pattern of /IP in the rewrite-target-ingress Ingress.
After running the command
curl -sS -H "Host: rewrite-target.example.com" http://<your-ingress-ip>/ip
the output is similar to:
{
"uuid": "12a0def9-1adg-2943-adcd-1234aadfgc67"
}
Like in the nginx.ingress.kubernetes.io/use-regex example, Ingress-NGINX treats paths of other ingresses with the rewrite-target.example.com host as case-insensitive prefix patterns.
Running the command
curl -sS -H "Host: rewrite-target.example.com" http://<your-ingress-ip>/headers
gives an output that looks like
{
"headers": {
...
}
}
You can configure path rewrites in Gateway API with the HTTP URL rewrite filter which does not silently convert your Exact and Prefix matches into regex patterns.
However, if you are unaware of the side effects of the nginx.ingress.kubernetes.io/rewrite-target annotation
and do not realize that /Header and /IP are both typos, you might create the following
HTTP route.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: rewrite-target-route
spec:
hostnames:
- rewrite-target.example.com
parentRefs:
- name: <your-gateway>
rules:
- matches:
- path:
type: Exact
value: "/IP"
filters:
- type: URLRewrite
urlRewrite:
path:
type: ReplaceFullPath
replaceFullPath: /uuid
backendRefs:
- name: httpbin
port: 8000
- matches:
- path:
# This is an exact match, irrespective of other rules
type: Exact
value: "/Header"
backendRefs:
- name: httpbin
port: 8000
As with Section 2, because /IP is now an Exact match type in your HTTP route, requests to /ip will respond with a 404 Not Found instead of a 200 OK.
Similarly, requests to /headers will also respond with a 404 Not Found instead of a 200 OK.
Thus, this HTTP route will break applications and clients that rely on the /ip and /headers routes.
To fix this, you can change the matches in the HTTP route to be regex matches, and change the path patterns to be case-insensitive prefix matches, as follows.
- matches:
- path:
type: RegularExpression
value: "(?i)/IP.*"
...
- matches:
- path:
type: RegularExpression
value: "(?i)/Header.*"
Or, you can keep the Exact match type and fix the typos.
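For instance, if you also want to keep the /ip-to-/uuid rewrite, the rules of the rewrite-target-route above would become (a sketch with both typos corrected):
- matches:
  - path:
      type: Exact
      value: "/ip"
  filters:
  - type: URLRewrite
    urlRewrite:
      path:
        type: ReplaceFullPath
        replaceFullPath: /uuid
  backendRefs:
  - name: httpbin
    port: 8000
- matches:
  - path:
      type: Exact
      value: "/headers"
  backendRefs:
  - name: httpbin
    port: 8000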
4. Requests missing a trailing slash are redirected to the same path with a trailing slash
Consider the following Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: trailing-slash-ingress
spec:
ingressClassName: nginx
rules:
- host: trailing-slash.example.com
http:
paths:
- path: "/my-path/"
pathType: Exact
backend:
service:
name: <your-backend>
port:
number: 8000
You might expect Ingress-NGINX to respond to /my-path with a 404 Not Found since /my-path does not exactly match the Exact path of /my-path/.
However, Ingress-NGINX redirects the request to /my-path/ with a 301 Moved Permanently because the only difference between /my-path and /my-path/ is a trailing slash.
curl -isS -H "Host: trailing-slash.example.com" http://<your-ingress-ip>/my-path
The output looks like:
HTTP/1.1 301 Moved Permanently
...
Location: http://trailing-slash.example.com/my-path/
...
The same applies if you change the pathType to Prefix.
However, the redirect does not happen if the path is a regex pattern.
Conformant Gateway API implementations do not silently configure any kind of redirects.
If clients or downstream services depend on this redirect, a migration to Gateway API that
does not explicitly configure request redirects will cause an outage because
requests to /my-path will now respond with a 404 Not Found instead of a 301 Moved Permanently.
You can explicitly configure redirects using the HTTP request redirect filter as follows:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: trailing-slash-route
spec:
hostnames:
- trailing-slash.example.com
parentRefs:
- name: <your-gateway>
rules:
- matches:
- path:
type: Exact
value: "/my-path"
filters:
- type: RequestRedirect
  requestRedirect:
    statusCode: 301
    path:
      type: ReplaceFullPath
      replaceFullPath: /my-path/
- matches:
- path:
type: Exact # or Prefix
value: "/my-path/"
backendRefs:
- name: <your-backend>
port: 8000
5. Ingress-NGINX normalizes URLs
URL normalization is the process of converting a URL into a canonical form before matching it against Ingress rules and routing it. The specifics of URL normalization are defined in RFC 3986 Section 6.2, but some examples are
- removing path segments that are just a ".": my/./path -> my/path
- having a ".." path segment remove the previous segment: my/../path -> /path
- deduplicating consecutive slashes in a path: my//path -> my/path
Ingress-NGINX normalizes URLs before matching them against Ingress rules. For example, consider the following Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: path-normalization-ingress
spec:
ingressClassName: nginx
rules:
- host: path-normalization.example.com
http:
paths:
- path: "/uuid"
pathType: Exact
backend:
service:
name: httpbin
port:
number: 8000
Ingress-NGINX normalizes the path of the following requests to /uuid.
Now that the request matches the Exact path of /uuid, Ingress-NGINX responds with either a 200 OK response or a 301 Moved Permanently to /uuid.
For the following commands
curl -sS -H "Host: path-normalization.example.com" http://<your-ingress-ip>/uuid
curl -sS -H "Host: path-normalization.example.com" http://<your-ingress-ip>/ip/abc/../../uuid
curl -sSi -H "Host: path-normalization.example.com" http://<your-ingress-ip>////uuid
the outputs are similar to
{
"uuid": "29c77dfe-73ec-4449-b70a-ef328ea9dbce"
}
{
"uuid": "d20d92e8-af57-4014-80ba-cf21c0c4ffae"
}
HTTP/1.1 301 Moved Permanently
...
Location: /uuid
...
Your backends might rely on the Ingress/Gateway API implementation to normalize URLs.
That said, most Gateway API implementations will have some path normalization enabled by default.
For example, Istio, Envoy Gateway, and Kgateway all normalize . and .. segments out of the box.
For more details, check the documentation for each Gateway API implementation that you use.
Conclusion
As we all race to respond to the Ingress-NGINX retirement, I hope this blog post instills some confidence that you can migrate safely and effectively despite all the intricacies of Ingress-NGINX.
SIG Network has also been working on supporting the most common Ingress-NGINX annotations (and some of these unexpected behaviors) in Ingress2Gateway to help you translate Ingress-NGINX configuration into Gateway API, and offer alternatives to unsupported behavior.
SIG Network released Gateway API 1.5 earlier today (27th February 2026), which graduates features such as ListenerSet (which allows app developers to better manage TLS certificates) and the HTTPRoute CORS filter, which allows CORS configuration.
1. You can use Istio purely as a Gateway API controller with no other service mesh features.
Why Tehran’s Two-Tiered Internet Is So Dangerous
Iran is slowly emerging from the most severe communications blackout in its history and one of the longest in the world. As part of January’s government crackdown against nationwide citizen protests, the regime implemented an internet shutdown that transcends the standard definition of internet censorship. This was not merely blocking social media or foreign websites; it was a total communications shutdown.
Unlike previous Iranian internet shutdowns where Iran’s domestic intranet—the National Information Network (NIN)—remained functional to keep the banking and administrative sectors running, the 2026 blackout ...
Phishing Attacks Against People Seeking Programming Jobs
This is new. North Korean hackers are posing as company recruiters, enticing job candidates to participate in coding challenges. When they run the code they are supposed to work on, it installs malware on their system.
News article.
Kubernetes WG Serving concludes following successful advancement of AI inference support
The Kubernetes Working Group (WG) Serving was created to support development of the AI inference stack on Kubernetes. The goal of this working group was to ensure that Kubernetes is an orchestration platform of choice for inference workloads. This goal has been accomplished, and the working group is now being disbanded.
WG Serving formed workstreams to collect requirements from various model servers, hardware providers, and inference vendors. This work resulted in a common understanding of inference workload specifics and trends and laid the foundation for improvements across many SIGs in Kubernetes.
The working group oversaw several key evolutions related to load balancing and workloads. The inference gateway was adopted as a request scheduler. Multiple groups have worked to standardize AI gateway functionality, and early inference gateway participants went on to seed agent networking work in SIG Network.
The use cases and problem statements gathered by the working group informed the design of AIBrix.
Many of the unresolved problems in distributed inference — especially benchmarking and recommended best practices — have been picked up by the llm-d project, which hybridizes the infrastructure and ML ecosystems and is better able to steer model server co-evolution.
In particular, llm-d and AIBrix represent more appropriate forums for driving requirements to Kubernetes SIGs than this working group. llm-d’s goal is to provide well-lit paths for achieving state-of-the-art inference and aims to provide recommendations that can compose into existing inference user platforms. AIBrix provides a complete platform solution for cost-efficient LLM inference.
WG Serving helped with Kubernetes AI Conformance requirements. The llm-d project is leveraging multiple components from the profile and making recommendations to end users consistent with Kubernetes direction (including Kueue, inference gateway, LWS, DRA, and related efforts). Widely adopted patterns and solutions are expected to go into the conformance program.
All efforts currently running inside WG Serving can be migrated to other working groups or directly to SIGs. Requirements will be discussed in SIGs and in the llm-d community. Specifically:
- Autoscaling-related questions — mostly related to fast bootstrap — will be discussed in SIG Node or SIG Scheduling.
- Multi-host, multi-node work can continue as part of SIG Apps (for example, for the LWS project), and DRA requirements will be discussed in WG Device Management.
- Orchestration topics will be covered by SIG Scheduling and SIG Node.
- Requirements for DRA will be discussed in WG Device Management.
The Gateway API Inference Extension project is already sponsored by SIG Network and will remain there. The Serving Catalog work can be moved to the Inference Perf project. The Serving Catalog was originally designed for a larger scope, but it has been used mostly for inference performance.
The Inference Perf project is sponsored by SIG Scalability, and no change of ownership is needed.
CNCF thanks all contributors who participated in WG Serving and helped advance Kubernetes as a platform for AI inference workloads.
LLMs Generate Predictable Passwords
LLMs are bad at generating passwords:
There are strong noticeable patterns among these 50 passwords that can be seen easily:
- All of the passwords start with a letter, usually uppercase G, almost always followed by the digit 7.
- Character choices are highly uneven. For example, L, 9, m, 2, $ and # appeared in all 50 passwords, but 5 and @ only appeared in one password each, and most of the letters in the alphabet never appeared at all.
- There are no repeating characters within any password. Probabilistically, this would be very unlikely if the passwords were truly random, but Claude preferred to avoid repeating characters, possibly because it “looks like it’s less random”. ...
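A quick back-of-the-envelope check of that last point, using assumed numbers the excerpt doesn't give (roughly a 70-symbol character set and 14-character passwords): the probability that a truly random password of length k over an alphabet of N symbols has no repeated character is (N/N) × ((N−1)/N) × … × ((N−k+1)/N), which works out to about 0.25 for N = 70 and k = 14. The chance that all 50 independently generated passwords avoid repeats would then be on the order of 0.25^50, roughly 10^-30, which supports the conclusion that the model is not sampling characters uniformly at random.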
Exposing Spin apps on SpinKube with GatewayAPI
The Gateway API isn’t just an “Ingress v2”; it’s an entirely revamped approach to exposing services from within Kubernetes that eliminates the need to encode routing capabilities into vendor-specific, unstructured annotations. In this post, we will explore how to expose WebAssembly applications built using the CNCF Spin framework and served by SpinKube using the Gateway API.
What is SpinKube
SpinKube, a CNCF sandbox project, is an open-source stack for running serverless WebAssembly applications (Spin apps) on top of Kubernetes. Although SpinKube leverages Kubernetes primitives like Deployments, Services and Pods, there are no containers involved in running your serverless Spin apps at all. Instead, it uses a containerd-shim implementation and spawns processes on the underlying Kubernetes worker nodes to run Spin apps.
You can learn more about SpinKube and find detailed instructions on how to deploy SpinKube to your Kubernetes cluster at https://spinkube.dev.
What is Gateway API
The Gateway API is the modern, role-oriented successor to the legacy Ingress resource, designed to provide a more expressive and extensible networking interface for Kubernetes. Unlike Ingress, which often relies on a messy sprawl of vendor-specific annotations to handle complex logic, the Gateway API breaks traffic management into atomic resources: GatewayClass, Gateway, and routes (like HTTPRoute or GRPCRoute).
This separation allows infrastructure admins to manage the entry points while giving developers control over how their specific services are exposed, enabling native support for advanced traffic patterns like canary rollouts, header-based routing, and traffic mirroring without the need for bespoke configurations.
To dive deeper into the technical specifications and resource hierarchy, head over to the official Gateway API documentation.
Provisioning a Kubernetes cluster, installing SpinKube and implementing Spin apps are considered beyond the scope of this article. However, you can head over to https://github.com/akamai-developers/exposing-spin-apps-with-gatway-api – a repository containing all source code, along with the necessary instructions for setting up an LKE cluster with SpinKube.
To follow the article’s demo, you’ll deploy the required artifacts to your Kubernetes cluster. Make sure you have the following tools installed:
- The spin CLI (see https://spinframework.dev for installation instructions)
- Rust along with the wasm32-wasip1 target (see https://rust-lang.github.io/rustup/ for detailed instructions)
- Both the kubectl and helm CLIs for interacting with your Kubernetes cluster (see https://kubernetes.io/docs/tasks/tools/ and https://helm.sh/docs/intro/install respectively)
  - Once you've installed kubectl, download the necessary credentials for accessing your Kubernetes cluster
Build and deploy the Spin apps to Kubernetes
Let’s start by compiling the source code of our sample Spin apps down to WebAssembly. Doing so is as easy as executing the spin build command from within each application folder:
# Build the greeter application
pushd apps/greeter
spin build
Building component greeter with `cargo build --target wasm32-wasip1 --release`
Finished `release` profile [optimized] target(s) in 0.21s
Finished building all Spin components
popd
# Build the prime_numbers application
pushd apps/prime-numbers
spin build
Building component prime-numbers with `cargo build --target wasm32-wasip1 --release`
Finished `release` profile [optimized] target(s) in 0.18s
Finished building all Spin components
popd
Once the applications have been compiled, we use the spin registry push command to distribute them as OCI artifacts. (If your OCI-compliant registry requires authentication, you must log in first. Use the spin registry login command to authenticate before trying to push.)
Tip: For testing purposes, we’ll use ttl.sh, an anonymous and ephemeral OCI-compliant registry, which allows us to store our applications for 24 hours by simply specifying the TTL as a tag.
# specify variables
greeter_app_artifact=ttl.sh/spin-greeter:24h
primenumbers_app_artifact=ttl.sh/spin-prime-numbers:24h
# optional: Authenticate against registry
oci_reg_server=
oci_reg_user=
oci_reg_password=
spin registry login $oci_reg_server -u $oci_reg_user -p $oci_reg_password
# distribute the Spin applications
pushd apps/greeter
spin registry push $greeter_app_artifact --build
popd
pushd apps/prime-numbers
spin registry push $primenumbers_app_artifact --build
popd
Finally, we use the spin kube scaffold command for generating the necessary Kubernetes manifests.
Tip: Spin does not have any opinions on how you deploy resources to your Kubernetes cluster. You can either use kubectl, create a Helm chart and deploy it using the helm CLI, or describe the desired state and deploy it with GitOps.
For the sake of this article, we’ll simply pipe the generated manifest to kubectl apply. The actual manifests are shown here for illustration purposes:
# Deploy the Spin applications to Kubernetes
spin kube scaffold --from $greeter_app_artifact | kubectl apply -f -
spin kube scaffold --from $primenumbers_app_artifact | kubectl apply -f -
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
name: spin-greeter
spec:
image: "ttl.sh/spin-greeter:24h"
executor: containerd-shim-spin
replicas: 2
---
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
name: spin-prime-numbers
spec:
image: "ttl.sh/spin-prime-numbers:24h"
executor: containerd-shim-spin
replicas: 2
Obviously, there are additional knobs you can turn when executing spin kube scaffold; I highly encourage you to check out the documentation for the command by providing the --help flag.
Testing the Spin app
We use traditional port-forwarding provided by kubectl to verify that both Spin applications run as expected:
kubectl port-forward svc/spin-greeter 8080:80
Send a GET request to the application using curl:
curl -i localhost:8080/hello/Akamai%20Developers
HTTP/1.1 200 OK
content-type: text/plain
transfer-encoding: chunked
date: Mon, 19 Jan 2026 13:55:34 GMT
Hello, Akamai Developers!
Next, let’s test the second Spin application:
kubectl port-forward svc/spin-prime-numbers 8080:80
Again, use curl to invoke one of the endpoints exposed by the Spin app:
curl -i localhost:8080/above/42
HTTP/1.1 200 OK
transfer-encoding: chunked
date: Mon, 19 Jan 2026 17:05:02 GMT
Next prime number above 42 is 43
Now that both apps are working, you can terminate port-forwarding again (CTRL+C) and dive into exposing both Spin apps.
Installing Gateway API CRDs and Controller
To use the Gateway API, we must install the corresponding Gateway API resources (CRDs) on our cluster along with a Gateway API Controller.
There are several controllers available that implement the Gateway API. You can find a list of available Gateway API controllers at https://gateway-api.sigs.k8s.io/implementations/. We’ll use NGINX Gateway Fabric for now.
To install Gateway API resources run:
kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v2.3.0" | kubectl apply -f -
To install NGINX Gateway Fabric run:
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway
Creating cluster-specific Gateway API resources
With the Gateway API controller installed, we will first deploy a Gateway to our cluster. Think of the Gateway as an entry point into your Kubernetes cluster, which could be shared across multiple applications. We’ll now create the spinkube Gateway, which will front our two Spin applications that are already running in the default namespace.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: spinkube
namespace: default
spec:
gatewayClassName: nginx
listeners:
- protocol: HTTP
port: 8080
name: http
allowedRoutes:
namespaces:
from: Same
Once you’ve deployed the Gateway, you should find a new service of type LoadBalancer called spinkube-nginx provisioned in the default namespace. Once the cloud controller has acquired a public IP address, you should find it as part of the output as well.
kubectl get services
NAME TYPE EXTERNAL-IP
spinkube-nginx LoadBalancer 172.238.61.25
Note down the external IP address of the spinkube-nginx service; we’ll use it in a few minutes to send requests to our Spin applications from outside of the cluster!
Creating application-specific Gateway API Resources
As we have deployed two different Spin applications to our Kubernetes cluster, we’ll also create two instances of HTTPRoute and link them to the Gateway we created in the previous section.
Tip: As managing external DNS is beyond the scope of this article, we’ll use simple PathPrefix based routing in combination with a Rewrite filter to route inbound requests to the desired Spin applications.
Create the following HTTPRoute resources in the default namespace:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: greeter
namespace: default
spec:
parentRefs:
- name: spinkube
rules:
- backendRefs:
- name: spin-greeter
port: 80
filters:
- type: URLRewrite
urlRewrite:
path:
replacePrefixMatch: /
type: ReplacePrefixMatch
matches:
- path:
type: PathPrefix
value: /greeter
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: prime-numbers
namespace: default
spec:
parentRefs:
- name: spinkube
rules:
- backendRefs:
- name: spin-prime-numbers
port: 80
filters:
- type: URLRewrite
urlRewrite:
path:
replacePrefixMatch: /
type: ReplacePrefixMatch
matches:
- path:
type: PathPrefix
value: /prime-numbers
Accessing the Spin apps
With all Kubernetes resources in place, it’s time for a final test. We discovered the public IP address associated with our Gateway earlier in this post. Let’s use curl again to send requests to both Spin applications:
# Send request to the greeter app
curl -i http://<your_gateway_ip>:8080/greeter/hello/Akamai%20Developers
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 19 Jan 2026 16:37:22 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Hello, Akamai Developers!
# Send request to the prime-numbers app
curl -i http://<your_gateway_ip>:8080/prime-numbers/above/999
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 19 Jan 2026 16:37:50 GMT
Transfer-Encoding: chunked
Connection: keep-alive
Next prime number above 999 is 1009
As you can see, our requests get routed to the desired Spin application because of the path prefix (either greeter or prime-numbers).
Conclusion
The Kubernetes Gateway API streamlines how we expose services from within a Kubernetes cluster and allows precise separation of concerns. Cloud infrastructure and cluster operators create and manage resources that could be shared across multiple applications like the Gateway, while application developers provide application (or service) specific resources such as an HTTPRoute.
Especially when running tens or hundreds of different serverless applications on top of SpinKube, it’s crucial to have robust and reliable routing in place to ensure applications are accessible from outside of the cluster. The Gateway API for Kubernetes makes managing these routes a breeze.
Contributors from Akamai collaborate on SpinKube development to deliver this runtime across its global cloud and edge. Additional information is available at akamai.com.
The nervous system gets a soul: why sovereign cloud is telco’s real second act
Deep Dive: How linkerd-destination works in the Linkerd Service Mesh
This blog post was originally published on Bezaleel Silva’s Medium blog.
Recently, in our daily operations, we took a deep dive into the inner workings of linkerd-destination, one of the most critical components of the Linkerd control plane.
The motivation was simple: as our cluster grew and traffic increased, the question shifted from “Does Linkerd work?” to “How exactly does it react when everything changes at once?”. Frequent deployments, production scaling, security policies being applied — and at the center of all this, the destination service.
Poisoning AI Training Data
All it takes to poison AI training data is to create a website:
I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….
Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...
MCP security: The current situation
The AI Speed Tax: Why Moving Fast is Breaking Things in Cybersecurity
Is AI Good for Democracy?
Politicians fixate on the global race for technological supremacy between the US and China. They debate the geopolitical implications of chip exports, the latest model releases from each country, and military applications of AI. Someday, they believe, we might see advancements in AI tip the scales in a superpower conflict.
But the most important arms race of the 21st century is already happening elsewhere and, while AI is definitely the weapon of choice, combatants are distributed across dozens of domains.
Academic journals are flooded with AI-generated papers, and are turning to AI to help review submissions. Brazil’s ...
Making Harbor production-ready: Essential considerations for deployment
Harbor is an open-source container registry that secures artifacts with policies and role-based access control, ensuring images are scanned for vulnerabilities and signed as trusted. To learn more about Harbor and how to deploy it on a Virtual Machine (VM) and in Kubernetes (K8s), refer to parts 1 and 2 of the series.
While deploying Harbor is straightforward, making it production-ready requires careful consideration of several key aspects. This blog outlines critical factors to ensure your Harbor instance is robust, secure, and scalable for production environments.
For this blog, we will focus on Harbor deployed on Kubernetes via Helm as our base and provide suggestions for this specific deployment.
1. High Availability (HA) and scalability
For a production environment, single points of failure are unacceptable, especially for an image registry that will act as a central repository for storing and pulling images and artifacts for development and production applications. Thus, implementing high availability for Harbor is crucial and involves several key considerations:
- Deploy with an Ingress: Configure a Kubernetes Ingress controller (e.g., Traefik) in front of your Harbor instances to distribute incoming traffic efficiently and provide a unified entry point, along with cert-manager for certificate management. You can specify this in your values.yaml file under:
expose:
type: ingress
tls:
enabled: true
certSource: secret
ingress:
hosts:
core: harbor.yourdomain.com
annotations:
# Specify your ingress class
kubernetes.io/ingress.class: traefik
# Reference your ClusterIssuer (e.g., self-signed or internal CA)
cert-manager.io/cluster-issuer: "harbor-cluster-issuer"
To locate your values.yaml file, refer to the previous blog.
- Utilize multiple Harbor instances: Increase the replica count for critical Harbor components (e.g., core, jobservice, portal, registry, trivy) in your values.yaml to ensure redundancy.
core:
replicas: 3
jobservice:
replicas: 3
portal:
replicas: 3
registry:
replicas: 3
trivy:
replicas: 3
# While not strictly for the HA of the registry itself, consider increasing exporter replicas for robust monitoring availability
exporter:
replicas: 3
# Optionally, if using Ingress, consider increasing the Nginx replicas for improving Ingress availability
nginx:
replicas: 3
- Configure shared storage: For persistent data, configure Kubernetes StorageClasses and PersistentVolumes to use shared storage solutions like vSAN or a distributed file system. Specify these in your values.yaml under:
persistence:
enabled: true
resourcePolicy: "keep"
persistentVolumeClaim:
registry:
#If left empty, the kubernetes cluster default storage class will be used
storageClass: "your-storage-class"
jobservice:
storageClass: "your-storage-class"
database:
storageClass: "your-storage-class"
redis:
storageClass: "your-storage-class"
trivy:
storageClass: "your-storage-class"
- Enable database HA (PostgreSQL): While Harbor comes with a built-in PostgreSQL database, it is not recommended for production use because of the following:
- Lack of high availability (HA): The default internal PostgreSQL setup within the Harbor Helm chart is typically a single instance. This creates a single point of failure. If that database pod goes down, your entire Harbor instance will be unavailable.
- Limited scalability: An embedded database is not designed for independent scaling. If your Harbor usage grows, you might hit database performance bottlenecks that are difficult to address without disrupting Harbor itself.
- Complex lifecycle management: Managing backups, point-in-time recovery, patching, and upgrades for a stateful database directly within an application’s Helm chart can be significantly more complex and error-prone than with dedicated database solutions.
Thus, it is recommended to deploy a highly available PostgreSQL cluster within Kubernetes (e.g., using a Helm chart for Patroni or CloudNativePG) or leverage a managed database service outside the cluster. Configure Harbor to connect to this HA database by updating the values.yaml:
database:
type: "external"
external:
host: "192.168.0.1"
port: "5432"
username: "user"
password: "password"
coreDatabase: "registry"
# If using an existing secret, the key must be "password"
existingSecret: ""
# "disable" - No SSL
# "require" - Always SSL (skip verification)
# "verify-ca" - Always SSL (verify that the certificate presented by the
# server was signed by a trusted CA)
# "verify-full" - Always SSL (verify that the certification presented by the
# server was signed by a trusted CA and the server host name matches the one
# in the certificate)
sslmode: "verify-full"
- Implement Redis HA: Deploy a highly available Redis cluster in Kubernetes (e.g., using a Helm chart for Redis Sentinel or Redis Cluster) or utilize a managed Redis service. Configure Harbor to connect to this HA Redis instance by updating redis.type and connection details in values.yaml.
redis:
type: external
external:
addr: "192.168.0.2:6397"
sentinelMasterSet: ""
tlsOptions:
enable: true
username: ""
password: ""
2. Security best practices
Security is paramount for any production system, especially a container registry.
- Enable TLS/SSL: Always enable TLS/SSL for all Harbor components.
expose:
tls:
enabled: true
certSource: auto # change to manual if using cert-manager
auto:
commonName: ""
internalTLS:
enabled: true
strong_ssl_ciphers: true
certSource: "auto"
core:
secretName: ""
jobService:
secretName: ""
registry:
secretName: ""
portal:
secretName: ""
trivy:
secretName: ""
- Configure authentication and authorization: Leverage Harbor’s supported authentication and authorization mechanisms for managing access to Harbor resources. After Harbor deployment, integrate Harbor with enterprise identity providers like LDAP or OIDC by following the Harbor configuration guides: Configure LDAP/Active Directory Authentication or Configure OIDC Provider Authentication.
- Implement vulnerability scanning: Ensure vulnerability scanning is enabled in values.yaml. Harbor uses Trivy by default. Verify its activation and configuration within the Helm chart.
trivy:
enabled: true
- Activate content trust: Harbor supports multiple content trust mechanisms to ensure the integrity of your artifacts. For modern OCI artifact signing, Cosign and Notation are recommended. Enforce deployment security at the project level within the Harbor UI or via the Harbor API to allow only verified images to be deployed. This ensures that only trusted and cryptographically signed images can be deployed.
- Maintain regular updates: Regularly update your Harbor Helm chart and underlying Kubernetes components to benefit from the latest security patches and bug fixes. Use helm upgrade for this purpose.
- Use robot accounts for automation: Use robot accounts (service accounts) in automation such as CI/CD pipelines to avoid using user credentials. This ensures the robot account with the least required privileges is used to perform the specific task it has been created for, ensuring limited scope.
- Fine-grained audit log: In Harbor v2.13.0, Harbor supports redirecting specific events in the audit log. For example, an “authentication failure” event can be configured in the audit log and forwarded to a third-party syslog endpoint.
3. Storage considerations
Efficient and reliable storage is critical for Harbor’s performance and stability.
- Choose appropriate storage type: Define Kubernetes StorageClasses that align with your underlying infrastructure (e.g., nfs-client, aws-ebs, azure-disk, gcp-pd). Specify these settings in your values.yaml:
persistence:
enabled: true
resourcePolicy: "keep"
imageChartStorage:
#Specify storage type: "filesystem", "azure", "gcs", "s3", "swift", "oss"
type: ""
#Configure specific storage type section based on the selected option
- Estimate storage sizing: Carefully calculate your storage needs based on the anticipated number and size of container images, as well as your defined retention policies. Configure the size for your PersistentVolumeClaims in values.yaml (see the sketch after this list).
- Implement robust backup and recovery: Establish a comprehensive backup strategy for all Harbor data. For Kubernetes-native backups, consider using tools like Velero to back up PersistentVolumes and Kubernetes resources. For object storage, leverage the cloud provider’s backup mechanisms or external backup solutions. Regularly test your recovery procedures.
- Configure and run garbage collection: Set up and routinely execute Harbor’s garbage collection. This can be configured through the Harbor UI by defining a schedule for automated runs to remove unused blobs and efficiently reclaim storage space.
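As a rough sketch of such sizing (the size values below are illustrative assumptions, not recommendations, and the exact persistentVolumeClaim keys should be verified against your chart version), the storage class snippet from earlier could be extended like this:
persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      storageClass: "your-storage-class"
      size: 200Gi # the bulk of image and artifact layer data lives here
    jobservice:
      storageClass: "your-storage-class"
      size: 5Gi
    database:
      storageClass: "your-storage-class"
      size: 10Gi
    redis:
      storageClass: "your-storage-class"
      size: 5Gi
    trivy:
      storageClass: "your-storage-class"
      size: 10Gi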
4. Monitoring and alerting
Proactive monitoring and alerting are essential for identifying and addressing issues before they impact users.
- Collect comprehensive metrics: Deploy Prometheus and configure it to scrape metrics from Harbor components. The Harbor Helm chart can expose Prometheus-compatible metrics endpoints, which you enable in the values.yaml file. Visualize these metrics using Grafana.
metrics:
enabled: true
core:
path: /metrics
port: 8001
registry:
path: /metrics
port: 8001
jobservice:
path: /metrics
port: 8001
exporter:
path: /metrics
port: 8001
serviceMonitor:
enabled: true
# This label ensures the prometheus operator picks up these monitors
additionalLabels:
release: kube-prometheus-stack
# Example Service Monitor objects:
# Harbor Core (API and Auth Performance)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: harbor-core
labels:
app: harbor
release: kube-prometheus-stack
spec:
selector:
matchLabels:
app: harbor
component: core
endpoints:
- port: metrics # Defaults to 8001
path: /metrics
interval: 30s
# Harbor Exporter (Business Metrics)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: harbor-exporter
labels:
app: harbor
release: kube-prometheus-stack
spec:
selector:
matchLabels:
app: harbor
component: exporter
endpoints:
- port: metrics
path: /metrics
interval: 60s # Scraped less frequently as these are high-level stats
- Centralized logging: Implement a centralized logging solution within Kubernetes, such as the ELK stack (Elasticsearch, Logstash, Kibana) or Grafana with Fluentd/Fluent Bit.
- Configure critical alerts: Set up alerting rules in Prometheus (Alertmanager) or Grafana for critical events, such as component failures, high resource utilization (CPU/memory limits), storage nearing capacity, failed vulnerability scans, or unauthorized access attempts. Define these thresholds based on your production requirements.
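As one illustrative sketch (it assumes a Prometheus Operator setup such as kube-prometheus-stack and that Harbor runs in a namespace named harbor; both are assumptions, so adjust names and thresholds to your environment), a basic availability alert could look like this:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: harbor-critical-alerts
  labels:
    release: kube-prometheus-stack
spec:
  groups:
  - name: harbor.availability
    rules:
    - alert: HarborComponentDown
      # "up" is the standard per-target Prometheus metric; this fires when a
      # scraped Harbor metrics endpoint has been unreachable for 5 minutes.
      expr: up{namespace="harbor"} == 0
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "Harbor target {{ $labels.pod }} in {{ $labels.namespace }} is down"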
5. Network configuration
Proper network configuration ensures smooth communication between Harbor components and external clients.
- Configure ingress or load balancer and DNS resolution: As already mentioned, deploy a Kubernetes Ingress controller or Load Balancer to expose Harbor externally. Ensure proper DNS records are configured to point to your Load Balancer’s IP address.
- Set Up proxy settings (if applicable): If Harbor components need to access external resources through a corporate proxy, configure proxy settings within values.yaml. It’s crucial to note that the proxy.components field explicitly defines which Harbor components (e.g., core, jobservice, trivy) will utilize these proxy settings for their external communications.
proxy:
httpProxy:
httpsProxy:
noProxy: 127.0.0.1,localhost,.local,.internal
components:
- core
- jobservice
- trivy
- Allocate sufficient bandwidth: Ensure your Kubernetes cluster’s underlying network infrastructure and nodes have sufficient bandwidth to handle peak image pushes and pulls. Monitor network I/O on nodes running Harbor pods.
Conclusion
By diligently addressing these considerations, you can transform your basic Harbor deployment into a robust, secure, and highly available production-ready container registry. This approach ensures that Harbor serves as a cornerstone of your cloud-native infrastructure, capable of supporting demanding development and production workflows. From implementing High Availability and stringent security measures to optimizing storage and establishing proactive monitoring, each step contributes to a resilient and efficient artifact management system.
Continue reading the Harbor Blog Series on cncf.io:
Blog 1 – Harbor: Enterprise-grade container registry for modern private cloud
Enhancing Security and Transparency: Introducing Private Notifications for Fastly Maintenance and Incidents
Why etcd breaks at scale in Kubernetes
On the Security of Password Managers
Good article on password managers that secretly have a backdoor.
New research shows that these claims aren’t true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden, Dashlane, and LastPass and identified ways that someone with control over the server—either administrative or the result of a compromise—can, in fact, steal data and, in some cases, entire vaults. The researchers also devised other attacks that can weaken the encryption to the point that ciphertext can be converted to plaintext...